CN115934323B - Cloud computing resource calling method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN115934323B
Application number: CN202211541332.1A
Authority: CN (China)
Prior art keywords: network card, computing device, driver module, data, cloud computing
Legal status: Active (granted)
Inventor: 赵二城
Original and current assignee: Capitalonline Data Service Co., Ltd.
Other versions: CN115934323A (Chinese)
Filed by Capitalonline Data Service Co., Ltd.; published as application CN115934323A, then granted and published as CN115934323B


Classifications

    • Y — General tagging of new technological developments; general tagging of cross-sectional technologies spanning over several sections of the IPC; technical subjects covered by former USPC cross-reference art collections [XRACs] and digests
    • Y02 — Technologies or applications for mitigation or adaptation against climate change
    • Y02D — Climate change mitigation technologies in information and communication technologies [ICT], i.e. information and communication technologies aiming at the reduction of their own energy use
    • Y02D 10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The application discloses a cloud computing resource invocation method and device, an electronic device, and a storage medium, relating to the field of computer technology. According to an embodiment of the application, the first CPU of a local computing device copies data to be processed to a first network card communication driver module, which transmits the data through the network card to the second network card communication driver module of a cloud computing device. The cloud computing device copies the data to the corresponding processing unit through the second network card communication driver module, returns the processing result to that module once it is obtained, and sends the result to the local computing device through the same module. The local computing device obtains the result through the first network card communication driver module and returns it to the first CPU. In this way, the computing power of different computing resources can be dynamically adjusted and allocated on demand, reducing cost for local users and improving data processing efficiency.

Description

Cloud computing resource calling method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and apparatus for invoking cloud computing resources, an electronic device, and a storage medium.
Background
When conventional computing resources such as graphics processors (GPUs) are used, data must be transferred between the CPU and the hardware on the local computing device. This usage mode cannot dynamically adjust and allocate the computing power of different computing resources on demand, so a mismatch between the workload and the local device's computing resources may cause program stalls, wasted resources, and similar problems. Society is now in the big-data era: the total volume of data is growing explosively, and fields such as artificial intelligence increasingly depend on sufficiently large computing resources. The existing approach is for the local computing device to call the computing resources of an external device, which imposes high demands on storage space, maintenance cost, and management cost for that device. Because the external device is constrained by site, hardware, and other factors, the computing power of its resources cannot be dynamically adjusted during data processing, which can leave computing power insufficient or idle in complex situations. In addition, chips are in relatively short supply on the market, and the delivery lead time when purchasing external devices with larger computing power is generally long, so an urgent need to expand computing power can lead to a short-term computing-power shortfall and significant losses.
Disclosure of Invention
Embodiments of the present application provide a method, a device, an electronic device, and a storage medium for invoking cloud computing resources, so as to achieve dynamic adjustment and on-demand allocation of CPU and cloud computing resources.
In a first aspect, an embodiment of the present application provides a method for invoking cloud computing resources, applied to a local computing device configured with a first network card for communicating with the cloud, the method including:
copying data to be processed from a first CPU of the local computing device to a first network card communication driver module, so as to send the data to the cloud computing device through that module;
and obtaining, through the first network card communication driver module, the processing result produced after the cloud computing device invokes the processing unit corresponding to the data to be processed, and returning the result to the first CPU, where the processing unit includes at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field-programmable gate array.
In a second aspect, an embodiment of the present application provides a method for invoking cloud computing resources, applied to a cloud computing device configured with a second network card for communicating with a local computing device, the method including:
copying the data to be processed to the corresponding processing unit through a second network card communication driver module, where the processing unit includes at least one of a graphics processor (GPU), an artificial intelligence processor (NPU), a deep learning processor (DPU), a general-purpose graphics processor (GPGPU), an AI accelerator, and a field-programmable gate array (FPGA);
and obtaining the processing result produced after the processing unit is invoked to process the data, and returning the result to the second network card communication driver module, so as to send it to the local computing device through that module.
In a third aspect, an embodiment of the present application provides a cloud computing resource invocation device, deployed on a local computing device configured with a first network card for communicating with the cloud, including:
a data copying module, configured to copy data to be processed from the first CPU (central processing unit) of the local computing device to the first network card communication driver module, so as to send the data to the cloud computing device through that module;
a result obtaining module, configured to obtain, through the first network card communication driver module, the processing result produced after the cloud computing device invokes the processing unit corresponding to the data to be processed, where the processing unit includes at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field-programmable gate array;
and a result returning module, configured to return the processing result to the first CPU of the local computing device.
In a fourth aspect, an embodiment of the present application provides a cloud computing resource invocation device, deployed on a cloud computing device configured with a second network card for communicating with a local computing device, including:
a data copying module, configured to copy the data to be processed to the corresponding processing unit through the second network card communication driver module, where the processing unit includes at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field-programmable gate array;
a result obtaining module, configured to obtain the processing result produced after the processing unit is invoked to process the data;
and a result returning module, configured to return the processing result to the second network card communication driver module, so as to send it to the local computing device through that module.
In a fifth aspect, embodiments of the present application provide a local computing device comprising a memory, a processor, and a computer program stored in the memory, where the processor implements the method of any embodiment of the present application when executing the computer program.
In a sixth aspect, embodiments of the present application provide a cloud computing device comprising a memory, a processor, and a computer program stored in the memory, where the processor implements the method of any embodiment of the present application when executing the computer program.
In a seventh aspect, embodiments of the present application provide a computing-device-readable storage medium storing a computer program that, when executed by a processor, implements the cloud computing resource invocation method of any embodiment of the present application.
The application has the following advantages:
according to the present method and device, cloud computing resources are invoked through driver software. Because a communication driver module is added to the cloud computing device, the CPU in the cloud computing device need not participate in the resource invocation process; meanwhile, because a communication driver module is also added to the local computing device, the CPU of the local computing device, when invoking external computing resources, is no longer limited by their geographic location and can invoke cloud computing resources. This achieves dynamic adjustment and on-demand allocation of the computing power of different computing resources, saving cost for the user and improving the utilization of computing resources.
According to an embodiment of the application, the first CPU of the local computing device copies the data to be processed to the first network card communication driver module, which transmits the data through the network card to the second network card communication driver module of the cloud computing device. The cloud computing device copies the data to the corresponding processing unit through the second network card communication driver module; after the processing unit processes the data, the result is returned to the second network card communication driver module and sent to the local computing device through it. The local computing device obtains the result through the first network card communication driver module and returns it to the first CPU, completing the invocation of cloud computing resources. In this scheme, the second CPU in the cloud computing device need not participate in the resource invocation process.
During resource invocation, because a kernel communication driver module and a network card communication driver module are added to the cloud computing device, the second CPU in the cloud computing device can complete data exchange with the local computing device by means of these two driver modules alone, without participating in the invocation process; meanwhile, because the same two driver modules are added to the local computing device, its CPU, when invoking external computing resources, is no longer limited by their geographic location and can invoke cloud computing resources, effectively converting a finite local resource into one that can be expanded without limit. The computing resources thus overcome the limits of the local computing device, and the time needed to invoke them is reduced, achieving dynamic adjustment and on-demand allocation of the computing power of different resources, lowering user cost, improving resource utilization, and making emergency response more flexible.
The foregoing is merely an overview of the technical solutions of the present application. To make its technical means clearer, so that it can be implemented according to this specification, and to make the above and other objects, features, and advantages of the application easier to understand, a detailed description follows.
Drawings
In the drawings, the same reference numerals refer to the same or similar parts or elements throughout the several views unless otherwise specified. The figures are not necessarily drawn to scale. It is appreciated that these drawings depict only some embodiments according to the application and are not to be considered limiting of its scope.
FIG. 1 is a schematic diagram of one approach to implementing computing resource invocation in accordance with the related art;
FIG. 2 is a schematic diagram of one solution for implementing computing resource invocation provided herein;
FIG. 3 is a flowchart of a method for invoking cloud computing resources applied to a local computing device according to an embodiment of the present application;
FIG. 4 is a flowchart of a method for invoking cloud computing resources applied to a cloud computing device according to an embodiment of the present disclosure;
FIG. 5 is a block diagram illustrating a cloud computing resource calling device deployed on a local computing device according to an embodiment of the present application;
FIG. 6 is a block diagram illustrating a configuration of a cloud computing resource calling device deployed at a cloud computing device according to an embodiment of the present application;
FIG. 7 is a block diagram of an electronic device serving as a local computing device, used to implement embodiments of the present application;
fig. 8 is a block diagram of an electronic device serving as a cloud computing device, used to implement an embodiment of the present application.
Detailed Description
Hereinafter, only certain exemplary embodiments are briefly described. As will be recognized by those of skill in the pertinent art, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present application. Accordingly, the drawings and description are to be regarded as illustrative in nature, and not as restrictive.
To facilitate understanding of the technical solutions of the embodiments of the present application, the related technologies of those embodiments are described below. The following related technologies may optionally be combined with the technical solutions of the embodiments of the present application, and all such combinations belong to the protection scope of the embodiments of the present application.
In a related art prior to this application, the invocation of computing resources is implemented by the local computing device calling the computing resources of one of its external devices, as shown in fig. 1, a schematic diagram of a related-art scheme for invoking computing resources. The CPU (central processing unit) of the local computing device copies the data to be processed to system memory; system memory sends the data to GPU memory through the PCI (Peripheral Component Interconnect) bus; the GPU (graphics processing unit) reads the data from GPU memory and processes it; and the processing result is then fed back to system memory through the PCI bus, completing the data exchange and thereby allowing the local computing device to call the GPU computing resources of the external device. However, transferring data between the GPU and the CPU over the PCI bus is limited by the hardware in several ways. For example, the parallel PCI bus cannot connect too many devices, so extensibility is poor, and crosstalk between lines when multiple local computing devices run simultaneously can prevent the system from working normally. For another example, to use a GPU the user must purchase a physical GPU server, which imposes high demands on storage space, maintenance cost, and management cost. Moreover, because GPU computing power is limited by memory size, the computing power of the CPU and GPU cannot be dynamically adjusted during data processing, which can leave computing power insufficient or idle in scenarios such as AI applications requiring large amounts of computing power, or day/night multiplexing with alternating idle and busy periods. Finally, chips are in short supply on the market, and the delivery lead time for a GPU with larger computing power is generally long; a user who urgently needs to purchase GPUs to expand computing power may therefore suffer a short-term computing-power shortfall and significant losses.
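The conventional call path described above (CPU → system memory → PCI bus → GPU memory, and back) can be sketched as a sequence of explicit copies. The following Python model is purely illustrative; the function name and the use of lists to stand in for memory regions are assumptions made for exposition, not part of the patent:

```python
# Illustrative model of the conventional local GPU call path described above.
# Lists stand in for memory regions; the names are hypothetical.

def conventional_gpu_call(cpu_data, gpu_kernel):
    """CPU -> system memory -> PCI bus -> GPU memory -> GPU -> back."""
    system_memory = list(cpu_data)                 # CPU copies data into system memory
    gpu_memory = list(system_memory)               # system memory -> GPU memory over PCI(e)
    result = [gpu_kernel(x) for x in gpu_memory]   # GPU reads its memory and computes
    system_memory = list(result)                   # result travels back over the PCI bus
    return system_memory                           # CPU reads the result from system memory


print(conventional_gpu_call([1, 2, 3], lambda x: x * x))
```

Every hop in this chain is bounded by the local machine's hardware, which is exactly the scalability limitation the patent sets out to remove.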
In view of this, the embodiments of the present application provide a new scheme for calling computing resources to solve the above technical problems in whole or in part.
The embodiment of the application relates to a method of invoking cloud computing resources, applied to scenarios in which computing resources are called between a local computing device and a cloud computing device. The cloud computing resources are deployed in the cloud computing device, so computing power can conveniently be expanded on demand; a network card communicating with the local computing device's client is deployed on the user-facing side of the cloud computing device; and the local computing device is connected to the cloud through the public network or a dedicated line. The client of the local computing device may take the form of a thick client (rich/thick client), a thin client, or a smart client. After logging in to a client account, the user invokes the computing resources of the cloud computing device by accessing the portals provided by the client.
The embodiments of the application can be applied to various scenarios that require invoking computing resources, including but not limited to AI applications, blockchain, and cloud computing, so as to dynamically adjust and allocate different computing resources on demand, thereby saving cost and improving efficiency.
To show the computing resource invocation method of the embodiments more clearly, a concrete application example is given below; fig. 2 is a schematic diagram of one solution for invoking computing resources provided by the present application. The figure involves a GPU server and client servers, of which there may be one or several; both the GPU server and the client servers run the Linux operating system. A client server communicates with the GPU server via a network card, which may be an RDMA (Remote Direct Memory Access) network card. First, the CPU of the client server copies the data to be processed to system memory; system memory copies it to the kernel communication driver module nv_peer_memory; the kernel communication driver module then copies it to the network card communication driver module ofa-kernel; and the client server's ofa-kernel transmits the data through the network card to the ofa-kernel of the GPU server, completing the transfer of the data from client server to GPU server. Next, the GPU server's network card communication driver module ofa-kernel copies the data into GPU memory through the GPU server's kernel communication driver module nv_peer_memory; the GPU reads the data from GPU memory and processes it; and after processing, the result is fed back to the client server along the original path through the GPU server's nv_peer_memory and ofa-kernel, completing the data exchange.
In this scheme the CPU in the GPU server is not needed, which saves time when invoking computing resources; at the same time, the computing power of the GPU and the CPU can be dynamically adjusted and the GPU's computing resources allocated on demand, improving the utilization of GPU computing resources.
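The staged copy chain of fig. 2 can be modeled as a pipeline of hops. In the sketch below, the comments name the real modules from the description (nv_peer_memory, ofa-kernel), but the Python classes and methods themselves are hypothetical illustrations, not an implementation of the patented drivers:

```python
# Illustrative simulation of the fig. 2 data path. The hop comments mirror the
# driver modules named in the description; the classes are hypothetical.

class GpuServer:
    """Cloud side: ofa-kernel -> nv_peer_memory -> GPU memory -> GPU."""

    def receive_and_process(self, data, kernel):
        gpu_memory = list(data)                    # nv_peer_memory copies into GPU memory
        result = [kernel(x) for x in gpu_memory]   # GPU computes; the server CPU stays out of the path
        return result                              # fed back along the same driver chain


class ClientServer:
    """Local side: CPU -> system memory -> nv_peer_memory -> ofa-kernel -> RDMA NIC."""

    def __init__(self, gpu_server):
        self.gpu_server = gpu_server

    def call_cloud_gpu(self, cpu_data, kernel):
        system_memory = list(cpu_data)             # CPU copies data into system memory
        nic_buffer = list(system_memory)           # kernel comm module -> NIC comm module
        result = self.gpu_server.receive_and_process(nic_buffer, kernel)  # over the RDMA NIC
        return list(result)                        # result retraces the path back to the first CPU


client = ClientServer(GpuServer())
print(client.call_cloud_gpu([1, 2, 3, 4], lambda x: x + 10))
```

Note that `GpuServer` exposes no CPU-side step at all: in this model, as in the scheme above, the data moves between driver modules and GPU memory without the remote CPU touching it.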
The embodiment of the application provides a method for invoking cloud computing resources, applied to a local computing device configured with a first network card for communicating with the cloud; fig. 3 is a flowchart of this method in an embodiment of the application. As shown in fig. 3, the method includes steps S301 to S302:
In step S301, the data to be processed of the first CPU (central processing unit) of the local computing device is copied to the first network card communication driver module, and sent to the cloud computing device through that module.
The network card involved in the embodiments of the application can connect different computing devices for network communication and may be a network card for remote direct memory access, such as an RDMA (Remote Direct Memory Access) network card. The network card on the local computing device is denoted the first network card, and the one on the cloud computing device the second network card. The first network card on the local computing device communicates with the second network card on the cloud computing device; it may be an RDMA network card or any other type of network card, as long as it can communicate with the second network card, and the application places no limitation on this.
The local computing device may be a physical machine, a virtual machine, or a computer system of multiple servers, client devices, or other suitable devices interconnected in a local area network with multiple network devices. For example, the local computing device may be placed in a company, government agency, or school, and its client may be a thick client (rich/thick client), a thin client, or a smart client. The operating system of the local computing device's client may be Linux, Windows, or another system; the application places no limitation on this.
The data to be processed comes from the local computing device: it may be user input received by the local computing device or data input from an external computing device, or it may be data generated within the local computing device, such as transaction amounts produced by a financial transaction application. It may include various data types, such as pictures, videos, and documents, or a mixture of several types; the application places no limitation on this.
The network card communication driver module is a module able to drive the network card to communicate. The one deployed on the local computing device, denoted the first network card communication driver module, is a performance-tuned network card communication driver module, so that it can both drive the first network card to communicate and exchange data with other driver modules in the local computing device.
In one possible implementation, to copy the data to be processed from the first CPU to the first network card communication driver module, the first CPU of the local computing device copies the data into operating system memory; operating system memory copies it to the operating system kernel communication driver module; and the kernel communication driver module copies it to the first network card communication driver module. The processing result, produced after the data is processed by the computing resources of the cloud computing device, can be returned by traversing the same path in the opposite direction.
In this embodiment, because the first CPU cannot copy the data to be processed directly to the first network card communication driver module, the data is first copied, via operating system memory, to the local computing device's kernel communication driver module, which can communicate with the first network card communication driver module; the kernel communication driver module then copies the data onward, so that the first network card communication driver module can drive the first network card to communicate with the cloud computing device.
The kernel communication driver module resides inside the local computing device's system and is used to exchange data with system memory in the local computing device or to communicate with other driver modules. It is a performance-tuned kernel communication driver module, so that it can exchange data both with the network card communication driver module and with system memory in the local computing device.
The cloud computing device in the embodiments of the application is configured with a network card, denoted the second network card, and also has a communication driver module, denoted the second network card communication driver module; the first network card communication driver module of the local computing device sends the data to be processed to the second network card communication driver module of the cloud computing device. The second network card can bypass the second CPU (central processing unit) of the cloud computing device and access the corresponding processing unit in the cloud.
The second network card is a network card capable of remote direct memory access, configured on the cloud computing device to communicate with the first network card on the local computing device; it may be an RDMA network card or another type of network card capable of remote direct memory access. The second network card communication driver module is the performance-tuned network card communication driver module deployed on the cloud computing device; it can drive the second network card to communicate and can exchange data with other driver modules in the cloud computing device.
The corresponding processing units include, but are not limited to, one or more of a graphics processor (GPU, Graphics Processing Unit), an artificial intelligence processor (NPU, Neural network Processing Unit), a deep learning processor (DPU, Deep learning Processing Unit), a general-purpose graphics processor (GPGPU, general-purpose computing on graphics processing units), an AI accelerator, and a field-programmable gate array (FPGA). These processing units may jointly process different parts of the data sent by one local computing device, or each may process data sent by a different local computing device; a unit may handle a single data type (for example, a GPU processing only graphics data) or multiple types (for example, an NPU processing both graphics data and document data). The application places no limitation on this.
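Since a processing unit may handle one data type or several, a cloud-side scheduler needs some routing rule from data type to unit. One hypothetical way to express such a rule is a dispatch table; the mapping below is an assumption chosen for illustration (including the NPU taking document data, echoing the example above), not something specified by the patent:

```python
# Hypothetical dispatch table routing data types to processing units.
# The mapping itself is an illustrative assumption, not part of the patent.

UNIT_FOR_TYPE = {
    "graphics": "GPU",      # a GPU that processes only graphics data
    "video": "GPU",
    "ai_model": "NPU",      # an NPU handling AI workloads...
    "document": "NPU",      # ...and, in this sketch, document data as well
    "bitstream": "FPGA",
}

def dispatch(items):
    """Group pending (name, data_type) items by the unit that should handle them."""
    plan = {}
    for name, dtype in items:
        unit = UNIT_FOR_TYPE.get(dtype, "CPU-fallback")
        plan.setdefault(unit, []).append(name)
    return plan

plan = dispatch([("frame.png", "graphics"), ("model.onnx", "ai_model"),
                 ("report.doc", "document")])
print(plan)
```

A table like this is trivially editable at run time, which is one way the "dynamic adjustment and on-demand allocation" described above could be realized on the cloud side.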
In step S302, a processing result obtained after the cloud computing device invokes a processing unit corresponding to the data to be processed to perform data processing is obtained through the first network card communication driving module, and the processing result is returned to the first CPU of the local computing device. The corresponding processing unit may comprise at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general purpose graphics processor, an AI accelerator, a field programmable gate array.
In one possible implementation manner, when the processing result is returned to the first CPU of the local computing device, the first network card communication driving module may first copy the processing result to the operating system kernel communication driving module of the local computing device; the operating system kernel communication driving module then copies the processing result to the operating system memory; finally, the processing result is copied from the operating system memory to the first CPU of the local computing device.
In the embodiment of the present application, since the first CPU cannot directly read the processing result from the first network card communication driving module, the first CPU communicates with the first network card communication driving module via the operating system kernel communication driving module: the processing result is copied from the first network card communication driving module to the kernel communication driving module, and then from the kernel communication driving module to the operating system memory, where the first CPU can read it. This process is the reverse of the path by which the first CPU copies the data to be processed to the first network card communication driving module.
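The forward and reverse copy paths on the local computing device can be illustrated with a toy simulation in which plain dictionaries stand in for the operating system memory, the kernel communication driving module, and the first network card communication driving module. This shows only the copy order under assumed names; it is not real driver code.

```python
# Conceptual stand-ins for the three staging areas on the local device.
os_memory = {}   # operating system memory
kernel_drv = {}  # operating system kernel communication driving module
nic_drv = {}     # first network card communication driving module

def send_to_cloud(key, data):
    """Forward path: first CPU -> OS memory -> kernel driver -> NIC driver."""
    os_memory[key] = data
    kernel_drv[key] = os_memory[key]
    nic_drv[key] = kernel_drv[key]

def receive_result(key):
    """Reverse path: NIC driver -> kernel driver -> OS memory -> first CPU."""
    kernel_drv[key] = nic_drv[key]
    os_memory[key] = kernel_drv[key]
    return os_memory[key]  # the first CPU reads from OS memory

send_to_cloud("job-1", b"pending")
nic_drv["job-1"] = b"processed"  # stand-in for the cloud's reply arriving
assert receive_result("job-1") == b"processed"
```

The point of the sketch is the symmetry: `receive_result` retraces `send_to_cloud` hop by hop in the opposite direction, exactly as the passage describes.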
Fig. 4 is a flowchart of a method for invoking cloud computing resources applied to a cloud computing device, where a second network card for communicating with a local computing device is configured, according to an embodiment of the present application, and the method includes steps S401 to S402:
In step S401, the data to be processed is copied to the corresponding processing unit through the second network card communication driving module, where the corresponding processing unit includes at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field programmable gate array.
The types of local computing devices involved in embodiments of the present application may include physical machines and virtual machines, or a computer system of multiple servers, client devices, or other suitable types of devices interconnected in a local area network with multiple network devices. For example, the local computing device may be deployed in a company, government agency, or school; the client of the local computing device may be a thick client (Rich or Thick Client), a thin client (Thin Client), or a smart client (Smart Client); and the operating system of the local computing device client may be a Linux operating system, a Windows operating system, or the like, which is not limited in this application.
The operating system of the cloud computing device client may be a Linux operating system, a Windows operating system, or the like, and may be the same as or different from the operating system of the local computing device, which is not limited in this application.
The data to be processed comes from the local computing device and may be user input data received by the local computing device or data input by an external computing device; it may also be various data generated in the local computing device, such as transaction amounts generated by a financial transaction application. The data may include various types, such as pictures, videos, and documents, or a mixture of multiple types, which is not limited in this application. The second network card involved in the embodiment of the application is a network card capable of remote direct data access; it is configured on the cloud computing device, is used for communicating with the first network card on the local computing device, and may be an RDMA (Remote Direct Memory Access) network card or another type of network card capable of remote direct data access.
The network card communication driving module involved here is a module capable of driving the network card to communicate. The network card communication driving module applied to the cloud computing device is denoted the second network card communication driving module; it can drive the second network card to communicate and can exchange data with other driving modules in the cloud computing device.
In one possible implementation manner, when the data to be processed is copied to the corresponding processing unit through the second network card communication driving module, the second network card communication driving module may first copy the data to be processed to the cloud computing device kernel communication driving module; the cloud computing device kernel communication driving module then copies the data to be processed to the memory of the corresponding processing unit; the corresponding processing unit then processes the data to be processed in that memory.
Because the second network card communication driving module cannot directly transmit the data to be processed to the corresponding processing unit, the data to be processed may be copied to the kernel communication driving module of the cloud computing device and then copied from the kernel communication driving module to the memory of the corresponding processing unit, so that the computing resources of the corresponding processing unit are invoked to process the data to be processed in the memory. The processing result obtained after the corresponding processing unit processes the data to be processed can be returned to the second network card communication driving module along the reverse of this path.
The kernel communication driving module involved in the embodiment of the present application is located inside the cloud computing device system and is used for invoking computing resources inside the cloud computing device system or communicating with other driving modules. This module is a performance-optimized kernel communication driving module, so that it can exchange data with the network card communication driving module and with the memory of the corresponding processing unit in the cloud computing device.
The corresponding processing units involved include, but are not limited to, one or more of a graphics processor GPU (Graphics Processing Unit), an artificial intelligence processor NPU (Neural network Processing Unit), a deep learning processor DPU (Deep learning Processing Unit), a general-purpose graphics processor GPGPU (General-Purpose computing on Graphics Processing Units), an AI accelerator, and a field programmable gate array FPGA (Field Programmable Gate Array). These processing units may jointly process different parts of the data to be processed sent by one local computing device, or may separately process the data to be processed sent by multiple local computing devices. A processing unit may handle a single type of data to be processed, such as a GPU that processes only graphics data, or multiple types of data to be processed, such as an NPU that processes both graphics data and document data; the present application is not limited in this respect.
In step S402, the cloud computing device obtains a processing result obtained by calling the corresponding processing unit to perform data processing on the data to be processed, and returns the processing result to the second network card communication driving module, so as to send the processing result to the local computing device through the second network card communication driving module.
In one possible implementation manner, when the cloud computing device obtains a processing result obtained by calling the corresponding processing unit to perform data processing on the data to be processed, the corresponding processing unit may first copy the processing result to its memory; the processing result is then copied from the memory of the processing unit to the kernel communication driving module of the cloud computing device; finally, the kernel communication driving module of the cloud computing device copies the processing result to the second network card communication driving module.
Because the corresponding processing unit cannot directly communicate with the second network card communication driving module, the processing result may be copied from the memory of the corresponding processing unit to the kernel communication driving module of the cloud computing device and then from the kernel communication driving module to the second network card communication driving module. This path is the reverse of the process by which the second network card communication driving module copies the data to be processed to the memory of the corresponding processing unit.
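The cloud-side round trip — second network card communication driving module to kernel communication driving module to processing-unit memory, processing in place, then the same hops in reverse — can be illustrated with a toy simulation. All names below are hypothetical stand-ins for exposition; this is not the patent's driver code, and the "processing" is a placeholder operation.

```python
class ProcessingUnit:
    """Toy stand-in for a GPU/NPU/DPU with its own memory."""
    def __init__(self):
        self.memory = {}

    def process(self, key):
        # Process the data in place in the unit's memory
        # (uppercasing is a placeholder for real computation).
        self.memory[key] = self.memory[key].upper()

def cloud_round_trip(nic_buffer, kernel_buffer, unit, key):
    # Ingress: second NIC driver -> cloud kernel driver -> unit memory.
    kernel_buffer[key] = nic_buffer[key]
    unit.memory[key] = kernel_buffer[key]
    unit.process(key)
    # Egress (reverse path): unit memory -> cloud kernel driver -> NIC driver.
    kernel_buffer[key] = unit.memory[key]
    nic_buffer[key] = kernel_buffer[key]
    return nic_buffer[key]

nic, kern, gpu = {}, {}, ProcessingUnit()
nic["job-1"] = "pending frame"
print(cloud_round_trip(nic, kern, gpu, "job-1"))  # PENDING FRAME
```

The returned value sits back in the NIC-driver buffer, which is exactly where step S402 picks it up for transmission to the local computing device.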
The local computing device related to the embodiment of the application is configured with a first network card, and the second network card communication driving module of the cloud computing device sends the processing result to the first network card communication driving module of the local computing device.
The first network card is used for communicating with the network card (the second network card) on the cloud computing device; it may be an RDMA (Remote Direct Memory Access) network card or another type of network card, as long as it can communicate with the second network card, which is not limited in this application. The second network card is a network card capable of remote direct data access; it is configured on the cloud computing device, is used for communicating with the first network card on the local computing device, and may be an RDMA network card or another type of network card capable of remote direct data access.
The second network card communication driving module involved here is a performance-tuned network card communication driving module applied to the cloud computing device; it can drive the second network card to communicate and can exchange data with other driving modules in the cloud computing device.
Corresponding to the application scenario and method provided by the embodiments of the present application, the embodiments of the present application further provide a cloud computing resource calling device. Fig. 5 is a block diagram of a cloud computing resource calling device deployed in a local computing device according to an embodiment of the present application. The device may include:
The data copying module 501 is configured to copy data to be processed of a first CPU of a central processor of the local computing device to a first network card communication driving module, so that the data to be processed is sent to the cloud computing device through the first network card communication driving module;
the result obtaining module 502 is configured to obtain, through the first network card communication driving module, a processing result obtained after the cloud computing device invokes a processing unit corresponding to the data to be processed to perform data processing, where the corresponding processing unit includes at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field programmable gate array;
the result returning module 503 is configured to return the processing result to the first CPU of the local computing device.
In one possible implementation, the data copying module 501 may include:
the system memory copy submodule is used for copying the data to be processed into the operating system memory of the local computing device through the first CPU of the local computing device;
the first kernel drive copy submodule is used for copying the data to be processed to the kernel communication drive module of the local computing equipment operating system through the operating system memory;
And the first communication drive copy sub-module is used for copying the data to be processed to the first network card communication drive module through the operating system kernel communication drive module.
In a possible implementation manner, the cloud computing device is configured with a second network card, and the first network card communication driving module sends the data to be processed to the second network card communication driving module of the cloud computing device; the second network card can skip the second CPU of the cloud computing device to access the processing unit of the cloud.
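The "skip the second CPU" behavior — the second network card reaching the cloud-side processing unit without staging data in the second CPU's memory — can be contrasted with a conventional staged path in a small conceptual simulation. Real CPU bypass (for example, peer-to-peer DMA between a NIC and an accelerator) happens in hardware; the buffer names and copy functions below are hypothetical stand-ins used only to count copy hops.

```python
copies = []  # record every copy hop so the two paths can be compared

def copy(src, dst, key, buffers):
    buffers[dst][key] = buffers[src][key]
    copies.append((src, dst))

def staged_path(buffers, key):
    """Conventional path: NIC -> CPU-side memory -> processing unit."""
    copy("nic", "cpu", key, buffers)
    copy("cpu", "unit", key, buffers)

def bypass_path(buffers, key):
    """RDMA-style path: NIC writes unit memory directly, skipping the CPU."""
    copy("nic", "unit", key, buffers)

buffers = {"nic": {"k": b"data"}, "cpu": {}, "unit": {}}
bypass_path(buffers, "k")
print(copies)  # [('nic', 'unit')] - one hop, no CPU staging
```

The bypass path reaches the processing unit in a single hop, which is the efficiency the second network card's direct access is meant to provide.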
In one possible implementation manner, the result returning module 503 may include:
the first communication driving result return sub-module is used for copying the processing result to the operating system kernel communication driving module through the first network card communication driving module;
the first kernel driving result returning sub-module is used for copying the processing result to the memory of the operating system through the kernel communication driving module of the operating system;
and the system memory result return sub-module is used for copying the processing result to the first CPU of the local computing equipment through the operating system memory.
The processing result is the result produced after the corresponding processing unit of the cloud computing device processes the data to be processed, where the corresponding processing unit includes, but is not limited to, one or more of a graphics processor GPU (Graphics Processing Unit), an artificial intelligence processor NPU (Neural network Processing Unit), a deep learning processor DPU (Deep learning Processing Unit), a general-purpose graphics processor GPGPU (General-Purpose computing on Graphics Processing Units), an AI accelerator, and a field programmable gate array FPGA (Field Programmable Gate Array).
Corresponding to the application scenario and method provided by the embodiments of the present application, the embodiments of the present application further provide a cloud computing resource calling device. Fig. 6 is a block diagram of a cloud computing resource calling device deployed in a cloud computing device according to an embodiment of the present application. The device may include:
the data copying module 601 is configured to copy data to be processed sent by the local computing device to a corresponding processing unit through the second network card communication driving module, where the corresponding processing unit includes at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field programmable gate array;
the result obtaining module 602 is configured to obtain a processing result obtained after the corresponding processing unit is invoked to perform data processing on the data to be processed;
and the result returning module 603 is configured to return the processing result to the second network card communication driving module, so that the processing result is sent to the local computing device through the second network card communication driving module.
In a possible implementation manner, the data copy module 601 may include:
the second communication drive copy submodule is used for copying the data to be processed into the cloud computing equipment kernel communication drive module through the second network card communication drive module;
The second kernel driving copy submodule is used for copying the data to be processed into the memory of the corresponding processing unit through the kernel communication driving module of the cloud computing equipment;
and the processing unit memory copy submodule is used for carrying out data processing on the data to be processed in the memory of the corresponding processing unit.
In one possible implementation manner, the result obtaining module 602 may include:
the processing unit memory result return sub-module is used for copying the processing result into the memory of the processing unit through the corresponding processing unit;
the second kernel driving result returning sub-module is used for copying the processing result to the kernel communication driving module of the cloud computing device through the memory of the processing unit;
and the second communication driving result return sub-module is used for copying the processing result to the second network card communication driving module through the kernel communication driving module of the cloud computing device.
In a possible implementation manner, the local computing device is configured with a first network card, and the second network card communication driving module sends the processing result to the first network card communication driving module of the local computing device.
The functions of each module in each device of the embodiments of the present application may be referred to the corresponding descriptions in the above methods, and have corresponding beneficial effects, which are not described herein.
Corresponding to the application scenario and method provided in the embodiments of the present application, the embodiments of the present application further provide a local computing device. Fig. 7 is a block diagram of an electronic device used to implement the local computing device of the embodiments of the present application. As shown in fig. 7, the electronic device includes:
a memory 701 and a processor 702, the memory 701 storing a computer program executable on the processor 702. The processor 702, when executing the computer program, implements the methods of the embodiments described above. The number of memories 701 and processors 702 may be one or more.
The electronic device further includes:
and the communication interface 703 is used for communicating with external equipment and performing data interaction transmission.
If the memory 701, the processor 702, and the communication interface 703 are implemented independently, they may be connected to one another and communicate with one another through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, the bus is represented by only one thick line in fig. 7, but this does not mean that there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 701, the processor 702, and the communication interface 703 are integrated on a chip, the memory 701, the processor 702, and the communication interface 703 may communicate with each other through internal interfaces.
Corresponding to the application scenario and method provided by the embodiments of the present application, the embodiments of the present application further provide a cloud computing device. Fig. 8 is a block diagram of an electronic device used to implement the cloud computing device of the embodiments of the present application. As shown in fig. 8, the electronic device includes:
a memory 801 and a processor 802, the memory 801 storing a computer program executable on the processor 802. The processor 802 implements the methods of the above-described embodiments when executing the computer program. The number of memories 801 and processors 802 may be one or more.
The electronic device further includes:
and the communication interface 803 is used for communicating with external equipment and carrying out data interaction transmission.
If the memory 801, the processor 802, and the communication interface 803 are implemented independently, they may be connected to one another and communicate with one another through a bus. The bus may be an Industry Standard Architecture (ISA) bus, a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be classified as an address bus, a data bus, a control bus, etc. For ease of illustration, the bus is represented by only one thick line in fig. 8, but this does not mean that there is only one bus or only one type of bus.
Alternatively, in a specific implementation, if the memory 801, the processor 802, and the communication interface 803 are integrated on a chip, the memory 801, the processor 802, and the communication interface 803 may complete communication with each other through internal interfaces.
The embodiments of the present application provide a computer-readable storage medium storing a computer program. When the computer program is executed by a processor, it implements the method for invoking cloud computing resources provided in the embodiments of the present application.
It should be appreciated that the processor may be a central processing unit (Central Processing Unit, CPU), but may also be another general-purpose processor, a digital signal processor (Digital Signal Processing, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field programmable gate array (Field Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor or any conventional processor. It is noted that the processor may be a processor supporting the advanced RISC machine (Advanced RISC Machines, ARM) architecture.
Further, optionally, the memory may include a read-only memory and a random access memory, and may further include a nonvolatile random access memory. The memory may be volatile memory or nonvolatile memory, or may include both. The nonvolatile memory may include read-only memory (Read-Only Memory, ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory, among others. Volatile memory may include random access memory (Random Access Memory, RAM), which acts as an external cache. By way of example, and not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (Dynamic Random Access Memory, DRAM), synchronous DRAM (SDRAM), double data rate synchronous DRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synclink DRAM (SLDRAM), and direct Rambus RAM (DR RAM).
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions in accordance with the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. Computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another.
In the description of the present specification, a description referring to terms "one embodiment," "some embodiments," "examples," "specific examples," or "some examples," etc., means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the present application. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
Furthermore, the terms "first," "second," and the like, are used for descriptive purposes only and are not to be construed as indicating or implying a relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include at least one such feature. In the description of the present application, the meaning of "a plurality" is two or more, unless explicitly defined otherwise.
Any process or method description in a flowchart or otherwise described herein may be understood as representing modules, segments, or portions of code which include one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present application also includes implementations in which functions may be performed in a substantially simultaneous manner or in an order opposite to that shown or discussed, depending on the functions involved.
Logic and/or steps represented in the flowcharts or otherwise described herein, e.g., an ordered listing of executable instructions for implementing logical functions, can be embodied in any computer-readable medium for use by or in connection with an instruction execution system, apparatus, or device, such as a computer-based system, a processor-containing system, or another system that can fetch instructions from the instruction execution system, apparatus, or device and execute them.
It is to be understood that portions of the present application may be implemented in hardware, software, firmware, or a combination thereof. In the above-described embodiments, the various steps or methods may be implemented in software or firmware stored in a memory and executed by a suitable instruction execution system. All or part of the steps of the methods of the embodiments described above may be performed by a program that, when executed, comprises one or a combination of the steps of the method embodiments, instructs the associated hardware to perform the method.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing module, or each unit may exist alone physically, or two or more units may be integrated in one module. The integrated modules may be implemented in hardware or in software functional modules. The integrated modules described above, if implemented in the form of software functional modules and sold or used as a stand-alone product, may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic or optical disk, or the like.
The foregoing is merely specific embodiments of the present application, but the scope of the present application is not limited thereto, and any person skilled in the art can easily think of various changes or substitutions within the technical scope of the present application, and these should be covered in the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for invoking cloud computing resources, which is applied to a local computing device, wherein a first network card for communicating with the cloud is configured on the local computing device, and the method comprises:
copying data to be processed of a first CPU of the local computing device from an operating system kernel communication driving module to a first network card communication driving module so as to send the data to be processed to a cloud computing device through the first network card communication driving module;
acquiring, through the first network card communication driving module, a processing result obtained after the cloud computing device invokes a processing unit corresponding to the data to be processed to perform data processing, and returning the processing result to the first CPU, wherein the processing unit comprises at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field programmable gate array;
The cloud computing device is configured with a second network card, the first network card communication driving module sends the data to be processed to the second network card communication driving module of the cloud computing device, and the second network card can skip a second CPU of the cloud computing device to access the processing unit of the cloud.
2. The method of claim 1, wherein copying the data to be processed of the first CPU of the local computing device from the operating system kernel communication driver module to the first network card communication driver module comprises:
copying the data to be processed into an operating system memory by a first CPU of the local computing device;
copying the data to be processed to the operating system kernel communication driving module by the operating system memory;
and the operating system kernel communication driving module copies the data to be processed to the first network card communication driving module.
3. The method of claim 1, wherein the returning the processing result to the first CPU comprises:
the first network card communication driving module copies the processing result to the operating system kernel communication driving module;
The kernel communication driving module of the operating system copies the processing result to the memory of the operating system;
and copying the processing result to a first CPU of the local computing device by the memory of the operating system.
4. A method for invoking cloud computing resources, applied to a cloud computing device, wherein a second network card for communicating with a local computing device is configured on the cloud computing device, the method comprising:
copying data to be processed to a cloud computing device kernel communication driving module through a second network card communication driving module, copying the data to be processed to a memory of a corresponding processing unit through the cloud computing device kernel communication driving module, and performing data processing on the data to be processed in the memory of the processing unit through the processing unit, wherein the processing unit comprises at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field programmable gate array;
copying a processing result to a memory of the processing unit through the processing unit, copying the processing result to a kernel communication driving module of the cloud computing device through the memory of the processing unit, copying the processing result to a second network card communication driving module through the kernel communication driving module of the cloud computing device, and returning the processing result to the second network card so as to send the processing result to the local computing device through the second network card.
5. The method of claim 4, wherein the local computing device is configured with a first network card and the second network card communication driver module sends the processing result to the first network card communication driver module of the local computing device.
6. A cloud computing resource calling apparatus, deployed on a local computing device, wherein a first network card for communicating with the cloud is configured on the local computing device, the apparatus comprising:
a data copying module, configured to copy data to be processed of a first CPU of the local computing device from an operating system kernel communication driver module to a first network card communication driver module, so as to send the data to be processed to a cloud computing device through the first network card communication driver module;
a result acquisition module, configured to acquire, through the first network card communication driver module, a processing result obtained after the cloud computing device invokes a processing unit corresponding to the data to be processed to process the data, wherein the processing unit comprises at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field-programmable gate array; and
a result returning module, configured to return the processing result to the first CPU of the local computing device;
wherein a second network card is configured on the cloud computing device, the first network card communication driver module sends the data to be processed to a second network card communication driver module of the cloud computing device, and the second network card can bypass a second CPU of the cloud computing device to access the processing unit of the cloud.
7. A cloud computing resource calling apparatus, deployed on a cloud computing device, wherein a second network card for communicating with a local computing device is configured on the cloud computing device, the apparatus comprising:
a data copying module, configured to copy data to be processed to a cloud computing device kernel communication driver module through a second network card communication driver module, copy the data to be processed to a memory of a corresponding processing unit through the cloud computing device kernel communication driver module, and perform, by the processing unit, data processing on the data to be processed in the memory of the processing unit, wherein the processing unit comprises at least one of a graphics processor, an artificial intelligence processor, a deep learning processor, a general-purpose graphics processor, an AI accelerator, and a field-programmable gate array;
a result acquisition module, configured to copy, by the processing unit, a processing result to the memory of the processing unit, copy the processing result from the memory of the processing unit to the kernel communication driver module of the cloud computing device, and copy the processing result to the second network card communication driver module through the kernel communication driver module of the cloud computing device; and
a result returning module, configured to return the processing result to the second network card, so that the processing result is sent to the local computing device through the second network card.
8. A local computing device, comprising a memory, a processor, and a computer program stored on the memory, wherein the processor implements the method of any one of claims 1-3 when executing the computer program.
9. A cloud computing device, comprising a memory, a processor, and a computer program stored on the memory, wherein the processor implements the method of any one of claims 4-5 when executing the computer program.
10. A computing-device-readable storage medium having a computer program stored therein, wherein the computer program, when executed by a processor, implements the method for calling a cloud computing resource of any one of claims 1-3 or claims 4-5.
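The copy chain recited in the claims above can be sketched in plain Python. This is an illustrative model only, not the patented implementation: buffers are modeled as `bytes` objects and each driver module as a copy step, and all names (`CloudDevice`, `LocalDevice`, `process_on_unit`) are hypothetical stand-ins introduced here for clarity.

```python
# Models the claimed data path: first CPU -> OS kernel communication driver
# -> first network card driver -> second network card driver -> cloud kernel
# communication driver -> processing-unit memory, then the reverse path back.

def process_on_unit(unit_memory: bytes) -> bytes:
    """Stand-in for the processing unit (e.g. a GPU kernel): upper-cases the payload."""
    return unit_memory.upper()

class CloudDevice:
    """Cloud side: second network card driver -> kernel driver -> unit memory."""
    def receive(self, payload: bytes) -> bytes:
        nic_buffer = bytes(payload)          # second network card communication driver
        kernel_buffer = bytes(nic_buffer)    # cloud kernel communication driver
        unit_memory = bytes(kernel_buffer)   # memory of the processing unit
        result = process_on_unit(unit_memory)
        return bytes(result)                 # result retraces the path to the second NIC

class LocalDevice:
    """Local side: first CPU data -> OS kernel driver -> first network card driver."""
    def __init__(self, cloud: CloudDevice):
        self.cloud = cloud

    def call_cloud_resource(self, data: bytes) -> bytes:
        kernel_buffer = bytes(data)              # operating system kernel communication driver
        nic_buffer = bytes(kernel_buffer)        # first network card communication driver
        result = self.cloud.receive(nic_buffer)  # first network card -> second network card
        os_memory = bytes(result)                # copied back into operating system memory
        return os_memory                         # returned to the first CPU

local = LocalDevice(CloudDevice())
print(local.call_cloud_resource(b"tensor batch"))  # b'TENSOR BATCH'
```

Note the design point the claims emphasize: on the cloud side the data moves from the second network card into the processing unit's memory without the second CPU touching the payload, which in real systems corresponds to RDMA-style NIC-to-accelerator transfers; the sketch above only mirrors the sequence of copies, not that zero-CPU property.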
CN202211541332.1A 2022-12-02 2022-12-02 Cloud computing resource calling method and device, electronic equipment and storage medium Active CN115934323B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211541332.1A CN115934323B (en) 2022-12-02 2022-12-02 Cloud computing resource calling method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115934323A CN115934323A (en) 2023-04-07
CN115934323B true CN115934323B (en) 2024-01-19

Family

ID=86650181

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211541332.1A Active CN115934323B (en) 2022-12-02 2022-12-02 Cloud computing resource calling method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115934323B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101063963A (en) * 2006-04-26 2007-10-31 韩国电子通信研究院 File movement method supporting data zero-copy technique
CN107102957A (en) * 2016-02-22 2017-08-29 深圳市知穹科技有限公司 The method and system that a kind of internal memory based between GPU and NIC is directly exchanged at a high speed
CN113326228A (en) * 2021-07-30 2021-08-31 阿里云计算有限公司 Message forwarding method, device and equipment based on remote direct data storage
CN113472830A (en) * 2020-03-31 2021-10-01 华为技术有限公司 Communication method and device
CN113595807A (en) * 2021-09-28 2021-11-02 阿里云计算有限公司 Computer system, RDMA network card and data communication method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9311230B2 (en) * 2013-04-23 2016-04-12 Globalfoundries Inc. Local direct storage class memory access


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
GPUrdma: GPU-side library for high performance networking from GPU kernels; Daoud, Feras, et al.; Proceedings of the 6th International Workshop on Runtime and Operating Systems for Supercomputers (ROSS 2016); full text *
A zero-copy data transmission method for TCP/IP offloading; Wang Xiaofeng, Shi Xiangquan, Su Jinshu; Computer Engineering and Science (02); full text *
Design and implementation of a high-performance communication library based on a general-purpose Ethernet card; Hu Changjun et al.; Computer Engineering and Applications; full text *

Also Published As

Publication number Publication date
CN115934323A (en) 2023-04-07

Similar Documents

Publication Publication Date Title
EP0106213B1 (en) Decentralized information processing system and initial program loading method therefor
CN111679921B (en) Memory sharing method, memory sharing device and terminal equipment
CA2428481A1 (en) Identity-based distributed computing for device resources
CN111163130B (en) Network service system and data transmission method thereof
US9584628B2 (en) Zero-copy data transmission system
CN1530842A (en) Method and system for data transmission in multi-processor system
CN112906075A (en) Memory sharing method and device
CN113918101A (en) Method, system, equipment and storage medium for writing data cache
WO2017166997A1 (en) Inic-side exception handling method and device
CN105357271A (en) Information processing method and corresponding device
CN115934323B (en) Cloud computing resource calling method and device, electronic equipment and storage medium
CN110659143B (en) Communication method and device between containers and electronic equipment
CN103902472B (en) Internal storage access processing method, memory chip and system based on memory chip interconnection
CN111881104A (en) NFS server, data writing method and device thereof, and storage medium
CN108563492B (en) Data acquisition method, virtual machine and electronic equipment
WO2023030178A1 (en) Communication method based on user-mode protocol stack, and corresponding apparatus
CN115562887A (en) Inter-core data communication method, system, device and medium based on data package
CN102999393B (en) Method, device and electronic equipment that a kind of data are transmitted
CN114281516A (en) Resource allocation method and device based on NUMA attribute
CN104618121A (en) Switch and server system
CN115695454B (en) Data storage method, device and equipment of MEC host and storage medium
CN115174484A (en) RDMA (remote direct memory Access) -based data transmission method, device, equipment and storage medium
CN111754332B (en) Service request processing method and device, storage medium and electronic equipment
WO2024060228A1 (en) Data acquisition method, apparatus and system, and storage medium
CN108664323B (en) Data transmission method and device based on multiple processors

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant