CN115905046B - Network card driving data packet processing method and device, electronic equipment and storage medium - Google Patents

Network card driving data packet processing method and device, electronic equipment and storage medium

Info

Publication number
CN115905046B
CN115905046B (application CN202211667383.9A)
Authority
CN
China
Prior art keywords
cache
network card
data
target
descriptor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211667383.9A
Other languages
Chinese (zh)
Other versions
CN115905046A
Inventor
彭元志
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Kedong Guangzhou Software Technology Co Ltd
Original Assignee
Kedong Guangzhou Software Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Kedong Guangzhou Software Technology Co Ltd
Priority to CN202211667383.9A
Publication of CN115905046A
Application granted
Publication of CN115905046B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The embodiment of the invention discloses a network card driver data packet processing method and apparatus, an electronic device, and a storage medium. The method comprises: determining the current cached number of target descriptors in the cache memory; when the current cached number of target descriptors equals a set remaining-cache number, instructing the cache memory to prefetch a set cache number of target descriptors from memory; instructing the cache memory to prefetch the cache address of target data according to the data pointer of the target descriptor; and, in response to a cache-address read instruction sent by the network card, sending the cache address prefetched by the cache memory to the network card, so that the network card transmits and receives the target data in real time based on that cache address. The technical scheme of the embodiment of the invention can optimize the data prefetching capability and caching performance of the cache memory, thereby improving the network card's real-time data processing capability.

Description

Network card driving data packet processing method and device, electronic equipment and storage medium
Technical Field
Embodiments of the invention relate to the field of computer and data processing technology, and in particular to a network card driver data packet processing method and apparatus, an electronic device, and a storage medium.
Background
In recent years, CPU (Central Processing Unit) performance has improved significantly as clock frequencies and core counts have risen. However, the overall performance of devices that use the CPU as their processor has not improved to the same degree. The main limiting factor is the stall latency the CPU incurs when fetching instructions and data from memory. At present, to bridge the speed gap between the CPU and main memory, a cache memory (Cache) can be placed between them and a Cache prefetch mechanism introduced. Fig. 1 is a schematic diagram of a CPU cache structure in the prior art. As shown in Fig. 1, according to the order in which data is read and the closeness to the CPU, the CPU cache can be divided into a first-level cache and a second-level cache, and some high-end CPUs are further equipped with a third-level cache. The Cache is transparent and invisible to most programmers: when programming, a programmer does not need to consider how the Cache works or related details, such as whether a Cache exists, how many Cache levels there are, the size of each level, the policy by which the Cache loads instructions and/or data from memory, or when the Cache writes processed data back to memory.
Because program execution exhibits temporal and spatial locality, the Cache prefetch mechanism can exploit these locality phenomena to improve system performance. Cache prefetching predicts which data/instructions will be needed and stores them in the Cache in advance. The prefetch mechanism combines spatial locality, temporal locality, and related information such as the current execution state, the execution history, and software hints, and uses a reasonable strategy to fetch data/instructions into the Cache before they are used. Thus, when the processor needs the data/instructions, it can quickly load them from the Cache for execution. Temporal locality means that the instructions/data a program is about to use are likely to be the instructions/data currently in use, so currently used instructions/data can be kept in the Cache for reuse by the processor. For example, consider the instructions of a loop statement: until the loop's termination condition is satisfied, the processor must repeatedly execute the instructions in the loop body, so those instructions can be prefetched into the Cache for the processor's use. Spatial locality means that the instructions/data a program is about to use are likely to be adjacent or close in memory to the instructions/data currently in use. Therefore, while the processor handles the current instructions/data, the instructions/data of the adjacent memory region can be prefetched from memory into the Cache; when the processor later needs them, it can read them directly from the Cache, saving memory access time.
For example, with an array that must be processed sequentially, the array elements adjacent to the element currently being processed can be prefetched into the Cache in order, ready for the processor.
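The sequential-array example above can be sketched in C. This is a hypothetical illustration, not code from the patent: GCC's `__builtin_prefetch` built-in stands in for the software prefetch instruction, and the prefetch distance is an assumed tuning value.

```c
#include <stddef.h>

/* Illustrative sketch: while summing an array sequentially, prefetch the
 * element PREFETCH_DISTANCE iterations ahead so it is already in the Cache
 * when the loop reaches it. The distance value is an assumption. */
#define PREFETCH_DISTANCE 8

long sum_with_prefetch(const long *data, size_t n)
{
    long sum = 0;
    for (size_t i = 0; i < n; i++) {
        if (i + PREFETCH_DISTANCE < n)
            /* read access (0), high temporal locality (3) */
            __builtin_prefetch(&data[i + PREFETCH_DISTANCE], 0, 3);
        sum += data[i];
    }
    return sum;
}
```

The prefetch is purely a performance hint; the result is identical with or without it.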
While implementing the present invention, the inventors found the following drawbacks in the prior art: the network card is a hardware module that must exchange data with memory to transmit and receive packets, and processing a single data message requires multiple memory reads. The network card driver must therefore ensure that all data to be read has been prefetched into the Cache; otherwise, a Cache miss severely degrades network card performance. However, the Cache prefetch mechanism is essentially implemented automatically by the CPU's hardware prefetch unit, and whether the hardware prefetcher improves program execution efficiency depends on how the program runs. Although processors of some system architectures provide software instructions for prefetching into the Cache, current operating systems do not expose a unified software prefetch interface for network card drivers to use, so current network card drivers do not employ software prefetching to optimize driver performance. That is, the data prefetching capability and caching performance of current Caches still cannot meet the requirement of efficient network card program execution.
Disclosure of Invention
The embodiments of the invention provide a network card driver data packet processing method and apparatus, an electronic device, and a storage medium, which can optimize the data prefetching capability and caching performance of a cache memory and thereby improve the network card's real-time data processing capability.
According to one aspect of the present invention, there is provided a network card driver data packet processing method, comprising:
determining a current cached number of target descriptors in the cache memory;
in the case that the current cached number of target descriptors is determined to equal a set remaining-cache number, instructing the cache memory to prefetch a set cache number of target descriptors from memory;
instructing the cache memory to prefetch the cache address of target data according to the data pointer of the target descriptor; and
in response to a cache-address read instruction sent by the network card, sending the cache address prefetched by the cache memory to the network card, so that the network card transmits and receives the target data in real time based on the cache address.
According to another aspect of the present invention, there is provided a network card driver data packet processing apparatus, comprising:
a current cache number determining module, configured to determine a current cached number of target descriptors in the cache memory;
a target descriptor prefetching module, configured to instruct the cache memory to prefetch a set cache number of target descriptors from memory if it is determined that the current cached number of target descriptors equals a set remaining-cache number;
a cache address prefetching module, configured to instruct the cache memory to prefetch the cache address of target data according to the data pointer of the target descriptor; and
a cache address sending module, configured to send, in response to a cache-address read instruction sent by the network card, the cache address prefetched by the cache memory to the network card, so that the network card transmits and receives the target data in real time based on the cache address.
According to another aspect of the present invention, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the network card driver packet processing method according to any one of the embodiments of the present invention.
According to another aspect of the present invention, there is provided a computer-readable storage medium storing computer instructions which, when executed, cause a processor to implement the network card driver data packet processing method according to any one of the embodiments of the present invention.
In the embodiment of the invention, the processor determines the current cached number of target descriptors in the cache memory and, when that number equals the set remaining-cache number, prefetches a set cache number of target descriptors from memory, then prefetches the cache addresses of the target data according to the data pointers of the target descriptors. When the network card sends a cache-address read instruction, the cache address prefetched by the cache memory can be sent to the network card, so that the network card can transmit and receive the target data in real time based on that cache address. This solves the prior-art problem that the weak data prefetching capability and caching performance of the cache memory degrade the network card's data processing capability; it optimizes the data prefetching capability and caching performance of the cache memory and thereby improves the network card's real-time data processing capability.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the invention or to delineate the scope of the invention. Other features of the present invention will become apparent from the description that follows.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a schematic diagram of a CPU cache architecture in the prior art;
FIG. 2 is a schematic diagram of a data message processing flow in a network card packet receiving and transmitting process in the prior art;
fig. 3 is a flowchart of a method for processing a network card driver packet according to a first embodiment of the present invention;
fig. 4 is a flowchart of a network card driver packet processing method according to a second embodiment of the present invention;
fig. 5 is a schematic diagram of a network card driver packet processing device according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of an electronic device according to a fourth embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the present invention, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings. It is apparent that the described embodiments are only some embodiments of the present invention, not all of them. All other embodiments that can be obtained by those skilled in the art based on the embodiments of the present invention without inventive effort shall fall within the scope of the present invention.
It should be noted that the term "object" and the like in the description of the present invention and the claims and the above drawings are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the invention described herein may be implemented in sequences other than those illustrated or otherwise described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
Fig. 2 is a schematic diagram of a data packet processing flow in a network card packet receiving and transmitting process in the prior art. In a specific example, as shown in fig. 2, the basic procedure of processing a data packet of a network card receiving and transmitting packet is as follows:
First, the CPU configures the mapping between the data buffer pointer of each receive descriptor (Rx Desc) and a data buffer (Mbuf, in CPU memory), and likewise configures the mapping between the data buffer pointer of each transmit descriptor (Tx Desc) and a data buffer.
When the network card receives a data message, it reads a receive descriptor from memory and stores the received message into the corresponding data buffer according to the data buffer pointer in that descriptor. The network card then updates the member and field information of the control structure in the receive descriptor; once updated, the control structure confirms reception of the message. The purpose of updating the receive descriptor's control structure is to let the CPU confirm information about the received data message, such as its length and type. Meanwhile, the network card must also update the receive queue register to count received messages: for example, if the register's current value is 6, then after one more message is received its value becomes 7, indicating that a new data message has arrived.
When the network card needs to send a data message, it reads the message header from the data buffer and determines the forwarding port from the header information. Specifically, the network card reads the message information from the control structure of the transmit descriptor, fills it into a transmit descriptor of the transmit queue (a queue composed of multiple transmit descriptors), and updates the current value of the transmit queue register. Further, the network card reads the transmit descriptor back from memory and checks whether the hardware has finished sending the data message. If the message is confirmed sent, the control structure of the corresponding transmit descriptor is read from memory and the data buffer is released.
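The receive-side flow above can be condensed into a small C sketch. The structure layout and names are hypothetical simplifications, not the layout of any real NIC.

```c
#include <stdint.h>

/* Simplified receive descriptor: a buffer pointer set by the CPU, plus
 * length/status fields the network card fills in after storing a message. */
struct rx_desc {
    uint64_t buf_addr;  /* data buffer pointer (maps to an Mbuf) */
    uint16_t length;    /* message length, written by the NIC    */
    uint16_t status;    /* bit 0: "descriptor done", set by NIC  */
};

struct rx_queue {
    struct rx_desc *ring;  /* queue of receive descriptors          */
    uint16_t count;        /* shadow of the receive queue register  */
};

/* Model of one reception: record the message info in the descriptor and
 * advance the receive count, e.g. from 6 to 7 after one new message. */
uint16_t rx_receive_one(struct rx_queue *q, uint16_t idx, uint16_t len)
{
    q->ring[idx].length = len;
    q->ring[idx].status |= 1;
    q->count++;
    return q->count;
}
```

In a real driver the NIC hardware, not the CPU, performs these descriptor updates via DMA; the sketch only mirrors the bookkeeping described in the text.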
Example 1
Fig. 3 is a flowchart of a network card driver data packet processing method according to a first embodiment of the present invention. The method may be performed by a network card driver data packet processing device, which may be implemented in software and/or hardware and may generally be integrated in a processor. The processor may belong to any type of electronic device, such as a terminal device or a server, and is used together with a cache memory that provides a data prefetching function. As shown in Fig. 3, the method includes the following operations:
s110, determining the current cache quantity of the target descriptor in the cache memory.
The target descriptor may be a receive descriptor or a transmit descriptor; the embodiment of the present invention does not limit the type of the target descriptor. The current cached number may be the number of target descriptors currently stored in the Cache.
From the way current network cards process data messages, it can be seen that the network card must read memory multiple times when transmitting and receiving a data message. The processor needs about 3-5 clock cycles to read data from the first-level Cache, on the order of ten clock cycles for the second-level Cache, several tens of clock cycles for the third-level Cache, and hundreds of clock cycles to read main memory. It is therefore necessary to ensure that all data the network card driver will read is already in the Cache; otherwise, Cache misses will degrade the network card's data processing performance.
In the embodiment of the invention, to improve the data processing performance of the network card driver, the processor can introduce, into the network card driver, software instructions capable of prefetching the Cache, so as to control the Cache through these instructions and accelerate efficient execution of the network card driver.
Specifically, the processor may send a software prefetch instruction to the Cache to first prefetch a certain number of target descriptors, such as transmit descriptors and/or receive descriptors, into the Cache. After the target descriptors are prefetched, the processor can continue to send software prefetch instructions so that the Cache continues to prefetch the corresponding cache addresses according to the data buffer pointers in the prefetched target descriptors. Accordingly, the network card can access the Cache to read target data at those cache addresses, or write target data to them, achieving fast transceiving of the target data.
While the processor sends software prefetch instructions to instruct the Cache to prefetch target descriptors and cache addresses, it can determine in real time the current cached number of target descriptors in the Cache, so as to issue further software prefetch instructions according to that number.
Optionally, the Cache software prefetch instruction may be an assembly instruction, which can first be wrapped to provide the following general interface: (1) HW_PREFETCH0: place data in every cache level; (2) HW_PREFETCH1: place data in every cache level except L1 (the first-level cache); (3) HW_PREFETCH2: place data in every cache level except L1 and L2 (the second-level cache).
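One plausible wrapping of this general interface, assuming GCC's `__builtin_prefetch` as the underlying instruction. The mapping from the macro names to the builtin's temporal-locality argument (3 = keep in all levels, 2 = skip L1, 1 = skip L1/L2) is an assumption, not taken from the patent.

```c
/* Hedged sketch: generic prefetch interface wrapped over a compiler
 * built-in. Second argument 0 = prefetch for reading; third argument is
 * the temporal-locality hint selecting which cache levels keep the line. */
#define HW_PREFETCH0(addr) __builtin_prefetch((addr), 0, 3) /* all cache levels     */
#define HW_PREFETCH1(addr) __builtin_prefetch((addr), 0, 2) /* all levels except L1 */
#define HW_PREFETCH2(addr) __builtin_prefetch((addr), 0, 1) /* except L1 and L2     */
```

On architectures with dedicated prefetch opcodes (e.g. x86 `prefetcht0/t1/t2`), the compiler lowers these hints to the corresponding instruction.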
S120, instructing the cache memory to prefetch the set cache number of target descriptors from memory when the current cached number of target descriptors equals the set remaining-cache number.
S130, indicating the cache memory to prefetch the cache address of the target data according to the data pointer of the target descriptor.
The set remaining-cache number may be a value chosen according to actual requirements, such as 1 or 2. The set cache number may be a preset number of target descriptors to prefetch at one time; for example, the Cache may be instructed to prefetch 4 or 8 target descriptors at a time. The embodiment of the invention does not limit the specific values of the set remaining-cache number and the set cache number. The target data may be data already cached at the cache address or data to be stored at the cache address; the data type of the target data depends on the type of the target descriptor.
It should be noted that software prefetch instructions can explicitly load data into the Cache in hot spots or performance-critical regions, improving program execution efficiency. However, incorrect use of software prefetch instructions may overload the Cache or increase the proportion of useless data in it, reducing program performance and possibly affecting the execution efficiency of other programs. For example, a program that loads a large amount of data into the third-level Cache can disturb the normal execution of other programs. Therefore, when software prefetch instructions are used to fetch data into the Cache, the rationality of the prefetching must be ensured so that program performance is genuinely optimized.
In the embodiment of the invention, to ensure that software prefetching is used reasonably, the processor may instruct the Cache to prefetch the set cache number of target descriptors at one time, and then determine the current cached number of target descriptors in the Cache in real time while the program executes. If the current cached number of target descriptors in the Cache is determined to equal the set remaining-cache number, i.e. the cached descriptors have reached the cache boundary, the processor can issue another software prefetch instruction, instructing the Cache to prefetch the next set cache number of target descriptors from memory (for example, fetching them sequentially in descriptor order) and to continue prefetching the cache addresses of the target data according to the data pointers of those descriptors.
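The boundary check of S120/S130 might look like the following sketch. The names, batch size, and threshold are illustrative assumptions, and `__builtin_prefetch` stands in for the software prefetch instruction.

```c
#include <stddef.h>

#define SET_CACHE_NUM  8  /* target descriptors prefetched per batch        */
#define SET_REMAIN_NUM 2  /* set remaining-cache number (refill threshold)  */

struct desc { void *data_ptr; };

/* When the cached descriptor count drops to the threshold, prefetch the
 * next batch of descriptors (S120) and, via each data pointer, the cache
 * address of its target data (S130). Returns the updated cached count. */
size_t maybe_prefetch(struct desc *ring, size_t next, size_t cached)
{
    if (cached == SET_REMAIN_NUM) {
        for (size_t i = 0; i < SET_CACHE_NUM; i++) {
            struct desc *d = &ring[next + i];
            __builtin_prefetch(d, 0, 3);           /* the descriptor itself */
            __builtin_prefetch(d->data_ptr, 0, 3); /* its data buffer       */
        }
        cached += SET_CACHE_NUM;
    }
    return cached;
}
```

Prefetching only when the count hits the boundary keeps the Cache from being flooded, which is the rationality concern raised above.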
S140, in response to a cache-address read instruction sent by the network card, sending the cache address prefetched by the cache memory to the network card, so that the network card transmits and receives the target data in real time based on the cache address.
The cache address reading instruction may be an instruction for reading a cache address.
Accordingly, because the Cache has prefetched the cache addresses the network card needs for transceiving data, when the network card sends a cache-address read instruction, the processor can send the prefetched cache address to the network card directly. The network card can then rapidly transmit and receive the target data based on the cache address prefetched by the Cache, without accessing memory again, achieving real-time processing of the target data and improving the real-time performance of the network card's message processing.
At present, various industrial Ethernet networks place high demands on network real-time performance. For example, in the EtherCAT (Ethernet for Control Automation Technology) industrial protocol, under the master-slave DC (Distributed Clocks) synchronization mode, the master station's requirement on the accuracy of process data transmission time is extremely high, and the key factors affecting the real-time transmission of process data are the operating system and the network card driver. The operating system is responsible for sending the process data out on time, and the network card driver must be optimized to shorten the transmission time and jitter delay of the process data as much as possible.
In the network card driver data packet processing method provided by the embodiment of the invention, data that is about to be used during packet transceiving, such as receive/transmit descriptors and data buffers, is preloaded from memory into the Cache via Cache software prefetch instructions, so that the data the network card is about to use is already in the Cache. This realizes a real-time network card driver optimization method based on Cache software prefetching: it greatly reduces the cost of the network card reading data directly from memory, reduces processor wait time, and improves data prefetching capability and caching performance. It thereby raises the Cache hit rate and the real-time performance of the network card's packet processing, reduces packet transceiving delay and communication jitter, and gives the network card good real-time behavior and stability. The network card's real-time data processing capability is improved, and the method can be applied effectively in fields with high requirements on network card real-time performance.
In the embodiment of the invention, the processor determines the current cached number of target descriptors in the cache memory and, when that number equals the set remaining-cache number, prefetches a set cache number of target descriptors from memory, then prefetches the cache addresses of the target data according to the data pointers of the target descriptors. When the network card sends a cache-address read instruction, the cache address prefetched by the cache memory can be sent to the network card, so that the network card can transmit and receive the target data in real time based on that cache address. This solves the prior-art problem that the weak data prefetching capability and caching performance of the cache memory degrade the network card's data processing capability; it optimizes the data prefetching capability and caching performance of the cache memory and thereby improves the network card's real-time data processing capability.
Example 2
Fig. 4 is a flowchart of a network card driver data packet processing method according to a second embodiment of the present invention. This embodiment builds on the foregoing embodiment and provides specific optional implementations of the data prefetching initialization process and of the network card's data transceiving based on the cache address. As shown in Fig. 4, the method of this embodiment may include:
s210, initializing the network card and the target descriptor.
It will be appreciated that, before sending software prefetch instructions to the Cache to prefetch data, the processor first needs to perform initialization operations related to data prefetching. Since the software prefetch instructions are applied to prefetching network card data, the network card and the target descriptors can be initialized.
In an optional embodiment of the present invention, initializing the network card and the target descriptor may include: allocating space for the network card control block structure and initializing its values; allocating space for a transmit queue buffer and/or a receive queue buffer; allocating space for the target descriptors and configuring the mapping between the data buffer pointer in each target descriptor and its data buffer; and configuring the associated registers of the transmitting unit and/or receiving unit of the network card, and enabling the transmitting unit and/or receiving unit.
The network card control block structure may be a structure for managing data related to the network card itself, such as the MAC (Media Access Control) address, the start address of the descriptors, and the data buffer pointers. The start address of the descriptors may be used to prefetch target descriptors. The transmit queue buffer and receive queue buffer refer to the memory buffers (Mbuf) in memory.
Specifically, when the processor initializes the network card and the target descriptors, it can allocate space for the network card control block structure and initialize its values, completing the initialization of the control block. At the same time, space is allocated for the transmit queue buffer and/or receive queue buffer, and for the target descriptors. After the target descriptor space is allocated, the mapping between the data buffer pointer in each target descriptor and its data buffer must also be configured, completing the initialization of the target descriptors. Furthermore, the transmitting unit and/or receiving unit of the network card must be configured, their associated registers set, and the transmitting unit and/or receiving unit enabled.
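A condensed C sketch of these initialization steps. All structure and function names are hypothetical, and sizes are placeholders; a real driver would additionally program the NIC's RX/TX registers and enable the units, as the text describes.

```c
#include <stdlib.h>

#define RING_SIZE 16   /* illustrative descriptor count */
#define BUF_SIZE  2048 /* illustrative Mbuf size        */

struct desc { void *buf; };   /* descriptor with data buffer pointer */
struct nic_cb {               /* network card control block          */
    struct desc *rx_ring;     /* start address of the descriptors    */
    unsigned ring_size;
};

struct nic_cb *nic_init(void)
{
    /* allocate and zero-initialize the control block structure */
    struct nic_cb *cb = calloc(1, sizeof *cb);
    if (!cb) return NULL;
    /* allocate descriptor space and the per-descriptor data buffers,
     * configuring the buffer-pointer -> buffer mapping */
    cb->rx_ring = calloc(RING_SIZE, sizeof(struct desc));
    cb->ring_size = RING_SIZE;
    for (unsigned i = 0; i < RING_SIZE; i++)
        cb->rx_ring[i].buf = malloc(BUF_SIZE);
    /* here: configure RX/TX unit registers and enable the units */
    return cb;
}
```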
In an optional embodiment of the invention, before initializing the network card and the target descriptor, the method may further include: determining a network card driver, a data structure of the target descriptor and a data buffer area; and according to the line structure of the cache memory, performing cache line alignment on the network card driver, the data structure of the target descriptor and the data buffer area.
It will be appreciated that if the data structures of the network card driver are not aligned to Cache lines, a data structure can easily straddle two Cache lines. Therefore, the Cache line alignment operation needs to be completed before data prefetching is performed. Specifically, the objects that need Cache line alignment, such as the data structures of the network card driver and of the target descriptors (including the transmit descriptors and the receive descriptors) and the data buffers, can be determined and then aligned through the related compiler directives. For example, the alignment can be declared with "__attribute__((aligned(CACHE_LINE_SIZE)))" to complete the Cache line alignment operation.
S220, prefetching the network card control block structure body and the target descriptors of the set cache quantity through a prefetching function.
The network card control block structure is used for storing the starting address of the target descriptor.
The prefetch function may be used to prefetch the required data into the Cache.
In an initial stage, the processor may send a software prefetch instruction to the Cache. In response to the software prefetch instruction, the Cache can prefetch the network card control block structure and the set cache quantity of target descriptors through the prefetch function. By prefetching the network card control block structure, the Cache obtains the start address of the target descriptors stored in that structure, and then prefetches the set cache quantity of target descriptors according to this start address.
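The prefetch sequence above can be sketched with the GCC/Clang software-prefetch intrinsic `__builtin_prefetch`. The structure names and the batch size of 4 are assumptions; the intrinsic is only a hint to the hardware and never changes program results.

```c
#include <assert.h>
#include <stddef.h>

#define PREFETCH_BATCH 4  /* assumed "set cache quantity" */

struct desc { unsigned long buf_addr; };
struct nic_cb { struct desc *ring_base; };  /* start address of descriptors */

/* Prefetch the control block, then a batch of descriptors starting from
 * the ring's start address held in the control block. */
static void prefetch_descs(const struct nic_cb *cb, size_t first)
{
    __builtin_prefetch(cb, 0, 3);  /* control block: read access, high locality */
    for (size_t i = 0; i < PREFETCH_BATCH; i++)
        __builtin_prefetch(&cb->ring_base[first + i], 0, 3);
}
```

Because a prefetch is purely a hint, the function can be called speculatively without affecting correctness.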
S230, determining the current cache quantity of the target descriptor in the cache memory.
S240, judging whether the current cache number of the target descriptor is equal to the set remaining cache number; if yes, executing S250, otherwise executing S260.
S250, the cache memory is instructed to prefetch target descriptors with set cache quantity from the memory.
S260, indicating the cache memory to prefetch the cache address of the target data according to the data pointer of the target descriptor.
Correspondingly, if the processor determines that the current Cache number of the target descriptor is equal to the set remaining Cache number, which indicates that the current Cache number of the target descriptor is at the Cache boundary, a software prefetch instruction may be continuously sent to the Cache to instruct the Cache to continue prefetching the target descriptor with the set Cache number from the memory. Otherwise, if the current Cache number of the target descriptor is not at the Cache boundary, a software prefetch instruction may be sent to the Cache to instruct the Cache to prefetch the Cache address of the target data according to the data pointer of the target descriptor.
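Steps S230 to S260 can be sketched as the following decision routine. The threshold of 1 remaining descriptor and the batch size of 4 are illustrative assumptions taken from the example values later in this description, not fixed by the method.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define BATCH         4  /* assumed "set cache quantity" */
#define REMAIN_THRESH 1  /* assumed "set remaining cache quantity" */

struct desc { void *data_ptr; };

/* S240: the descriptor cache is at its boundary when only the set
 * remaining quantity of descriptors is left. */
static bool at_cache_boundary(size_t cached_descs)
{
    return cached_descs == REMAIN_THRESH;
}

/* Returns the new count of cached descriptors after one step. */
static size_t process_step(size_t cached_descs, const struct desc *d)
{
    if (at_cache_boundary(cached_descs)) {
        /* S250: refill - prefetch the next batch of descriptors */
        return cached_descs + BATCH;
    }
    /* S260: not at the boundary - prefetch the data buffer address
     * behind the descriptor's data pointer instead. */
    __builtin_prefetch(d->data_ptr, 0, 3);
    return cached_descs;
}
```

In a real driver the refill branch would issue descriptor prefetches as in the earlier sketch; here it only tracks the count so the control flow is testable.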
And S270, responding to a cache address reading instruction sent by the network card, and sending the cache address prefetched by the cache memory to the network card so that the network card carries out real-time transceiving processing on the target data based on the cache address.
In an optional embodiment of the present invention, the target descriptor includes a receive descriptor, and the target data is a target data packet received by the network card; the network card is used for: and storing the received target data message in a cache address prefetched by the cache memory, and updating the structure body data of the receiving descriptor and a receiving queue register after determining that the target data message is stored.
Because the data required by the network card has been prefetched into the Cache, when receiving the target data message the network card can send a Cache address reading instruction to the Cache, obtain the Cache address that the Cache further prefetched through the prefetched receive descriptor, and store the received target data message at that Cache address. After the storage of the target data message is completed, the network card can update the structure data of the receive descriptor and the receive queue register to confirm that reception of the target data message is complete.
It should be noted that, since the received target data message is written into the buffer address by the network card, the network card may set the structure data of the receive descriptor, such as the data packet length and the descriptor type.
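The receive-side completion described above can be sketched as follows. The descriptor fields, the "received" type value, and the register being a simple counter are all hypothetical simplifications of real NIC hardware.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

struct rx_desc {
    void    *buf;      /* prefetched cache address of the data buffer */
    uint16_t pkt_len;  /* filled in when the frame is stored */
    uint16_t type;
};

/* Simulated receive-queue register. */
static volatile uint32_t rx_queue_reg;

/* Store a received frame at the prefetched buffer address, then update
 * the descriptor structure data and the receive queue register. */
static void rx_complete(struct rx_desc *d, const uint8_t *frame, uint16_t len)
{
    memcpy(d->buf, frame, len);  /* store the message at the cache address */
    d->pkt_len = len;
    d->type    = 1;              /* hypothetical "received" descriptor type */
    rx_queue_reg++;              /* advance the queue index */
}
```

The key point the sketch shows is the ordering: the message is stored first, and the descriptor and register updates that signal completion happen only afterwards.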
In an optional embodiment of the present invention, the target descriptor includes a transmission descriptor, and the target data is a target data packet sent by the network card; the network card is used for: and reading the target data message from the cache address prefetched by the cache memory, performing a packet sending operation on the target data message, and updating the structure body data and the sending queue register of the sending descriptor after determining that the sending of the target data message is completed.
Because the data required by the network card are all prefetched into the Cache, when the network card performs transmission processing on the target data message to be transmitted, a Cache address reading instruction can be transmitted to the Cache, the Cache address further prefetched by the Cache through the prefetched transmission descriptor is obtained, and the target data message to be transmitted is read from the Cache address, so that the target data message is transmitted. After the transmission of the target data message is completed, the network card can update the structure body data of the transmission descriptor and the transmission queue register to confirm that the transmission of the target data message is completed.
Specifically, performing the packet sending operation on the target data message may include reading the packet header of the target data message and determining the forwarding port according to the header information. The network card reads the message information of the target data message from the control structure of the transmit descriptor, fills the message information into the transmit descriptor of the transmit queue, and updates the current value of the transmit queue register. Further, the network card reads the transmit descriptor prefetched by the Cache and checks whether the data packet has been transmitted by the hardware. If it determines that the data packet has been sent, it reads the control structure of the corresponding transmit descriptor from the Cache and releases the data buffer.
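A minimal sketch of the transmit-side flow just described: queue a frame, check the hardware completion flag, then release the data buffer. The `done` flag standing in for the hardware's completion signal, and the register being a plain counter, are simulation assumptions.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

struct tx_desc {
    void    *buf;      /* prefetched cache address of the frame to send */
    uint16_t pkt_len;
    uint16_t done;     /* set by "hardware" once the frame has left */
};

/* Simulated transmit-queue register. */
static volatile uint32_t tx_queue_reg;

/* Fill the transmit descriptor and bump the queue register. */
static void tx_send(struct tx_desc *d, void *pkt, uint16_t len)
{
    d->buf = pkt;
    d->pkt_len = len;
    d->done = 0;
    tx_queue_reg++;  /* tell the NIC a new frame is queued */
    d->done = 1;     /* simulated hardware completion */
}

/* Check whether the hardware sent the packet; if so, release the buffer. */
static void tx_reap(struct tx_desc *d)
{
    if (d->done) {
        free(d->buf);
        d->buf = NULL;
    }
}
```

In a real driver `tx_reap` would run on a later pass over the prefetched descriptors, mirroring the "check, then release the data buffer" step in the text.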
In an optional embodiment of the invention, before the instructing the cache to prefetch the cache address of the target data according to the data pointer of the target descriptor, the method may further include: setting the structural body data of the transmission descriptor; wherein, the structure body data of the sending descriptor comprises a data packet length and a descriptor type.
Since the target data message is written into the buffer address by the processor, the processor may set the structure data of the transmit descriptor, such as the data packet length and the descriptor type.
In summary, the network card driving data packet processing method provided by the embodiment of the present invention realizes Cache prefetching at different points in time of the receive function and the send function. Specifically, in the data receiving stage, the processor instructs the Cache to prefetch the network card control block structure and the receive descriptors into the Cache through the prefetch function. The processor determines whether the receive descriptors already prefetched in the Cache are at a Cache boundary. If it determines that they are at a Cache boundary, e.g., only one receive descriptor remains, the next set cache quantity (e.g., 4) of receive descriptors is prefetched. Furthermore, the Cache is instructed to prefetch the Cache address for storing the message according to the data buffer pointer of the prefetched receive descriptor, which facilitates the reading and parsing of subsequent data packets and allows a data packet to be quickly stored at the corresponding Cache address. In the data transmission stage, the processor instructs the Cache to prefetch the network card control block structure and the transmit descriptors into the Cache through the prefetch function. The processor determines whether the transmit descriptors already prefetched in the Cache are at a Cache boundary. If it determines that they are at a Cache boundary, e.g., only one transmit descriptor remains, the next set cache quantity (e.g., 4) of transmit descriptors is prefetched. Furthermore, the Cache is instructed to prefetch the Cache address of the stored message according to the data buffer pointer of the prefetched transmit descriptor, so that the network card can read the data packet from the Cache address prefetched by the Cache and the data can be sent quickly.
According to the network card driving data packet processing method provided by the embodiment of the invention, the reasonable Cache software prefetching instruction is used for preloading a certain amount of data such as the receiving descriptor/sending descriptor and the data Cache which are about to be used in the process of receiving and sending the packet, so that partial data about to be used by the network card is stored in the Cache, the real-time network card driving optimization method based on the Cache software prefetching is realized, the cost of directly reading the data from the memory by the network card is greatly reduced, the waiting time of a processor is also reduced, the data prefetching capability and the buffering performance are improved, the hit rate of the Cache and the real-time performance of processing the data packet by the network card are further improved, the delay time of receiving and sending the packet by the network card is reduced, and the jitter of the communication is reduced, so that the network card has good real-time performance and stability, the real-time processing capability of the network card on the data is improved, and the network card can be effectively applied to various fields with higher requirements on real-time performance.
It should be noted that any permutation and combination of the technical features in the above embodiments also belong to the protection scope of the present invention.
Example III
Fig. 5 is a schematic diagram of a network card driver packet processing device according to a third embodiment of the present invention, as shown in fig. 5, where the device includes: the current buffer amount determining module 310, the target descriptor prefetch module 320, the buffer address prefetch module 330, and the buffer address transmitting module 340, wherein:
A current buffer number determining module 310, configured to determine a current buffer number of the target descriptor in the cache;
a target descriptor prefetch module 320, configured to instruct the cache to prefetch, from the memory, the target descriptor with the set number of caches, if it is determined that the current number of caches of the target descriptor is equal to the set number of remaining caches;
a cache address prefetching module 330, configured to instruct the cache memory to prefetch a cache address of target data according to the data pointer of the target descriptor;
and the cache address sending module 340 is configured to send a cache address prefetched by the cache memory to the network card in response to a cache address reading instruction sent by the network card, so that the network card performs real-time transceiving processing on the target data based on the cache address.
The embodiment of the invention determines the current cache quantity of the target descriptors in the cache memory through the processor, and prefetches the target descriptors with the set cache quantity from the memory when the current cache quantity of the target descriptors is equal to the set residual cache quantity, so as to prefetch the cache addresses of the target data according to the data pointers of the target descriptors. When the network card sends a cache address reading instruction, the cache address prefetched by the cache memory can be sent to the network card, so that the network card can carry out real-time receiving and transmitting processing on target data based on the cache address, the problem that the data processing capacity of the network card is reduced due to lower data prefetching capacity and cache performance of the cache memory in the prior art is solved, and the data prefetching capacity and the cache performance of the cache memory can be optimized, thereby improving the real-time processing capacity of the network card on data.
Optionally, the target descriptor includes a receiving descriptor, and the target data is a target data packet received by the network card; the network card is used for: and storing the received target data message in a cache address prefetched by the cache memory, and updating the structure body data of the receiving descriptor and a receiving queue register after determining that the target data message is stored.
Optionally, the target descriptor includes a transmission descriptor, and the target data is a target data packet sent by the network card; the network card is used for: and reading the target data message from the cache address prefetched by the cache memory, performing a packet sending operation on the target data message, and updating the structure body data and the sending queue register of the sending descriptor after determining that the sending of the target data message is completed.
Optionally, the network card driving data packet processing device may further include a structure body data setting module, configured to: setting the structural body data of the transmission descriptor; wherein, the structure body data of the sending descriptor comprises a data packet length and a descriptor type.
Optionally, the network card driving data packet processing device may further include an initialization module, configured to: initializing the network card and the target descriptor; pre-fetching the network card control block structure body and the target descriptors of the set cache quantity through a pre-fetching function; the network card control block structure is used for storing the starting address of the target descriptor.
Optionally, the initialization module is specifically configured to: allocate the space of the network card control block structure and initialize its values; allocate the space of a transmit queue buffer and/or a receive queue buffer; allocate the space of the target descriptor, and configure the mapping relation between the data buffer pointer and the data buffer in the target descriptor; and configure the associated registers of the transmitting unit and/or the receiving unit of the network card, and enable the transmitting unit and/or the receiving unit.
Optionally, the network card driving data packet processing device may further include a cache line alignment module, configured to: determining a network card driver, a data structure of the target descriptor and a data buffer area; and according to the line structure of the cache memory, performing cache line alignment on the network card driver, the data structure of the target descriptor and the data buffer area.
The network card driving data packet processing device can execute the network card driving data packet processing method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method. Technical details not described in detail in this embodiment may refer to the network card driving data packet processing method provided in any embodiment of the present invention.
Since the above-described network card driving data packet processing device is a device capable of executing the network card driving data packet processing method of the embodiment of the present invention, those skilled in the art can understand, based on that method, the specific implementation of the device and its various modifications; therefore, how the device implements the method will not be described in detail herein. Any device used by a person skilled in the art to implement the network card driving data packet processing method in the embodiments of the present invention falls within the intended scope of protection of the present application.
Example IV
Fig. 6 shows a schematic diagram of the structure of an electronic device 10 that may be used to implement an embodiment of the invention. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. Electronic equipment may also represent various forms of mobile devices, such as personal digital processing, cellular telephones, smartphones, wearable devices (e.g., helmets, glasses, watches, etc.), and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the inventions described and/or claimed herein.
As shown in fig. 6, the electronic device 10 includes at least one processor 11, and a memory, such as a Read Only Memory (ROM) 12, a Random Access Memory (RAM) 13, etc., communicatively connected to the at least one processor 11, in which the memory stores a computer program executable by the at least one processor, and the processor 11 may perform various appropriate actions and processes according to the computer program stored in the Read Only Memory (ROM) 12 or the computer program loaded from the storage unit 18 into the Random Access Memory (RAM) 13. In the RAM 13, various programs and data required for the operation of the electronic device 10 may also be stored. The processor 11, the ROM 12 and the RAM 13 are connected to each other via a bus 14. An input/output (I/O) interface 15 is also connected to bus 14.
Various components in the electronic device 10 are connected to the I/O interface 15, including: an input unit 16 such as a keyboard, a mouse, etc.; an output unit 17 such as various types of displays, speakers, and the like; a storage unit 18 such as a magnetic disk, an optical disk, or the like; and a communication unit 19 such as a network card, modem, wireless communication transceiver, etc. The communication unit 19 allows the electronic device 10 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The processor 11 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of processor 11 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various processors running machine learning model algorithms, digital Signal Processors (DSPs), and any suitable processor, controller, microcontroller, etc. The processor 11 performs the various methods and processes described above, such as the network card driven packet processing method.
In some embodiments, the network card drive packet processing method may be implemented as a computer program tangibly embodied on a computer-readable storage medium, such as storage unit 18. In some embodiments, part or all of the computer program may be loaded and/or installed onto the electronic device 10 via the ROM 12 and/or the communication unit 19. When the computer program is loaded into RAM 13 and executed by processor 11, one or more steps of the network card drive packet processing method described above may be performed. Alternatively, in other embodiments, the processor 11 may be configured to perform the network card driven data packet processing method in any other suitable manner (e.g., by means of firmware).
Various implementations of the systems and techniques described here above may be implemented in digital electronic circuitry, integrated circuit systems, field Programmable Gate Arrays (FPGAs), application Specific Integrated Circuits (ASICs), application Specific Standard Products (ASSPs), systems On Chip (SOCs), load programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implemented in one or more computer programs, the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, that may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
A computer program for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer programs may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the computer programs, when executed by the processor, cause the functions/acts specified in the flowchart and/or block diagram block or blocks to be implemented. The computer program may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of the present invention, a computer-readable storage medium may be a tangible medium that can contain, or store a computer program for use by or in connection with an instruction execution system, apparatus, or device. The computer readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. Alternatively, the computer readable storage medium may be a machine readable signal medium. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on an electronic device having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) through which a user can provide input to the electronic device. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a background component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such background, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local Area Networks (LANs), wide Area Networks (WANs), blockchain networks, and the internet.
The computing system may include clients and servers. A client and a server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host, which is a host product in a cloud computing service system and overcomes the defects of high management difficulty and weak service expansibility of traditional physical hosts and VPS (Virtual Private Server) services.

Claims (10)

1. The network card driving data packet processing method is characterized by comprising the following steps:
determining a current number of caches of the target descriptor in the cache;
under the condition that the current cache quantity of the target descriptors is equal to the set residual cache quantity, the cache memory is instructed to prefetch the target descriptors with the set cache quantity from the memory;
instructing the cache to prefetch a cache address of target data according to a data pointer of the target descriptor;
and responding to a cache address reading instruction sent by the network card, and sending the cache address prefetched by the cache memory to the network card so that the network card carries out real-time transceiving processing on the target data based on the cache address.
2. The method of claim 1, wherein the destination descriptor includes a receive descriptor, and the destination data is a destination data packet received by the network card; the network card is used for:
and storing the received target data message in a cache address prefetched by the cache memory, and updating the structure body data of the receiving descriptor and a receiving queue register after determining that the target data message is stored.
3. The method of claim 1, wherein the destination descriptor includes a send descriptor, and the destination data is a destination data packet sent by the network card; the network card is used for:
and reading the target data message from the cache address prefetched by the cache memory, performing a packet sending operation on the target data message, and updating the structure body data and the sending queue register of the sending descriptor after determining that the sending of the target data message is completed.
4. A method according to claim 3, further comprising, prior to said instructing said cache to prefetch a cache address of target data according to a data pointer of said target descriptor:
setting the structural body data of the transmission descriptor;
wherein, the structure body data of the sending descriptor comprises a data packet length and a descriptor type.
5. The method of claim 1, further comprising, prior to said determining the current number of caches of the target descriptor in the cache:
initializing the network card and the target descriptor;
pre-fetching the network card control block structure body and the target descriptors of the set cache quantity through a pre-fetching function;
The network card control block structure is used for storing the starting address of the target descriptor.
6. The method of claim 5, wherein initializing the network card and the destination descriptor comprises:
allocating the space of the network card control block structure body and initializing the value of the network card control block structure body;
allocating a space of a transmission queue buffer and/or a receive queue buffer;
allocating the space of the target descriptor, and configuring the mapping relation between the pointer of the data buffer area and the data buffer area in the target descriptor;
configuring an associated register of a transmitting unit and/or a receiving unit of the network card, and enabling the transmitting unit and/or the receiving unit.
7. The method as recited in claim 1, further comprising:
determining a network card driver, a data structure of the target descriptor and a data buffer area;
and according to the line structure of the cache memory, performing cache line alignment on the network card driver, the data structure of the target descriptor and the data buffer area.
8. A network card driven packet processing apparatus, comprising:
A current buffer number determining module, configured to determine a current buffer number of the target descriptor in the cache;
a target descriptor prefetching module, configured to instruct the cache memory to prefetch, from a memory, a target descriptor of a set number of caches, if it is determined that the current number of caches of the target descriptor is equal to the set number of remaining caches;
a cache address prefetching module, configured to instruct the cache memory to prefetch a cache address of target data according to a data pointer of the target descriptor;
and the cache address sending module is used for responding to a cache address reading instruction sent by the network card and sending the cache address prefetched by the cache memory to the network card so that the network card can send and receive the target data in real time based on the cache address.
9. An electronic device, the electronic device comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores a computer program executable by the at least one processor to enable the at least one processor to perform the network card driven packet processing method of any one of claims 1-7.
10. A computer storage medium storing computer instructions for causing a processor to implement the network card driven data packet processing method of any one of claims 1-7 when executed.
CN202211667383.9A 2022-12-23 2022-12-23 Network card driving data packet processing method and device, electronic equipment and storage medium Active CN115905046B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211667383.9A CN115905046B (en) 2022-12-23 2022-12-23 Network card driving data packet processing method and device, electronic equipment and storage medium


Publications (2)

Publication Number Publication Date
CN115905046A CN115905046A (en) 2023-04-04
CN115905046B true CN115905046B (en) 2023-07-07

Family

ID=86493743

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211667383.9A Active CN115905046B (en) 2022-12-23 2022-12-23 Network card driving data packet processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115905046B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116827880B (en) * 2023-08-29 2023-11-17 珠海星云智联科技有限公司 Cache space management method and device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6330616B1 (en) * 1998-09-14 2001-12-11 International Business Machines Corporation System for communications of multiple partitions employing host-network interface, and address resolution protocol for constructing data frame format according to client format
US7647436B1 (en) * 2005-04-29 2010-01-12 Sun Microsystems, Inc. Method and apparatus to interface an offload engine network interface with a host machine
CN102184151A (en) * 2011-04-29 2011-09-14 杭州华三通信技术有限公司 PCI-E (peripheral component interconnect express) to PCI bridge device and method for actively prefetching data thereof
CN113225307A (en) * 2021-03-18 2021-08-06 西安电子科技大学 Optimization method, system and terminal for pre-reading descriptors in offload engine network card
CN113535395A (en) * 2021-07-14 2021-10-22 西安电子科技大学 Descriptor queue and memory optimization method, system and application of network storage service

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6425130B2 (en) * 1996-06-25 2002-07-23 Matsushita Electric Industrial Co., Ltd. Video network server for distributing sound and video image information to a plurality of terminals
US7631106B2 (en) * 2002-08-15 2009-12-08 Mellanox Technologies Ltd. Prefetching of receive queue descriptors


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ren Minhua; Liu Yu; Luo Yunbao; Zhao Yongjian; Zhang Ji. Application of a two-level linked list in descriptor management of a switching control chip. Computer Engineering. 2013, 82-84+89. *
Su Wen; Zhang Longbing; Gao Xiang; Su Menghao. A network processing optimization method based on cache locking and direct cache access. Journal of Computer Research and Development. 2014, 681-690. *

Also Published As

Publication number Publication date
CN115905046A (en) 2023-04-04

Similar Documents

Publication Publication Date Title
CN110275841B (en) Access request processing method and device, computer equipment and storage medium
US11809321B2 (en) Memory management in a multiple processor system
US20210097019A1 (en) Time sensitive networking device
WO2018076793A1 (en) Nvme device, and methods for reading and writing nvme data
US7124207B1 (en) I2O command and status batching
CN106155960B (en) It is shaken hands the UART serial port communication method with EDMA based on GPIO
WO2015078219A1 (en) Information caching method and apparatus, and communication device
US20120072674A1 (en) Double-buffered data storage to reduce prefetch generation stalls
CN107728936B (en) Method and apparatus for transmitting data processing requests
US9390036B2 (en) Processing data packets from a receive queue in a remote direct memory access device
CN103645994A (en) Data processing method and device
US20190294543A1 (en) Selective downstream cache processing for data access
CN115905046B (en) Network card driving data packet processing method and device, electronic equipment and storage medium
US7299341B2 (en) Embedded system with instruction prefetching device, and method for fetching instructions in embedded systems
CN107870780B (en) Data processing apparatus and method
KR102579097B1 (en) Apparatus and method for writing back instruction execution result and processing apparatus
CN112612728B (en) Cache management method, device and equipment
US20230179546A1 (en) Processor and implementation method, electronic device, and storage medium
US10169272B2 (en) Data processing apparatus and method
US7698505B2 (en) Method, system and computer program product for data caching in a distributed coherent cache system
CN112799723A (en) Data reading method and device and electronic equipment
CN110557341A (en) Method and device for limiting data current
CN115934625B (en) Doorbell knocking method, equipment and medium for remote direct memory access
CN117132446A (en) GPU data access processing method, device and storage medium
CN112883041B (en) Data updating method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant