CN116934572B - Image processing method and apparatus - Google Patents


Info

Publication number
CN116934572B
CN116934572B · CN202311195878.0A
Authority
CN
China
Prior art keywords
processing
thread
processing flow
hardware resource
target hardware
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311195878.0A
Other languages
Chinese (zh)
Other versions
CN116934572A (en)
Inventor
Li Shuai (李帅)
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Priority date
Filing date
Publication date
Application filed by Honor Device Co Ltd
Priority to CN202311195878.0A
Publication of CN116934572A
Application granted
Publication of CN116934572B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/20: Processor architectures; Processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5027: Allocation of resources to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/52: Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • G06F 9/526: Mutual exclusion algorithms
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00: Indexing scheme relating to G06F9/00
    • G06F 2209/50: Indexing scheme relating to G06F9/50
    • G06F 2209/5018: Thread allocation
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The present invention relates to the field of image processing technologies, and in particular to an image processing method and apparatus. The method comprises the following steps: running a first thread for executing the current processing flow of an image algorithm, where the image algorithm is divided into multiple processing flows and each processing flow comprises preprocessing and exclusive resource processing; when the preprocessing of the current processing flow is completed, the first thread creates a recursive sub-thread and, when a target hardware resource is available, executes the exclusive resource processing of the current processing flow through that resource; the recursive sub-thread executes the next processing flow after the current one, and creates a new recursive sub-thread when the preprocessing of that next flow is completed, until each of the multiple processing flows has a corresponding thread. Embodiments of the invention can thereby achieve reasonable scheduling of each processing flow of the image algorithm.

Description

Image processing method and apparatus
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus.
Background
An electronic device runs an image algorithm when photographing or editing an image. If the image algorithm calls a hardware resource with an exclusive characteristic multiple times, for example repeated calls to the GPU or the NPU, the algorithm can be divided into multiple processing flows according to those calls. The electronic device may then execute the processing flows serially or in asynchronous parallel. In the serial mode, the next processing flow starts only after the previous one finishes; in the asynchronous parallel mode, all the processing flows execute simultaneously. Neither mode fully accounts for the hardware and software capabilities of the electronic device, which easily leads to long processing times for the image algorithm and reduces the efficiency of image processing.
Disclosure of Invention
The embodiment of the invention aims to provide an image processing method and device for solving the technical problems in the prior art.
The embodiment of the invention provides an image processing method, which comprises the following steps: running a first thread for executing the current processing flow of an image algorithm, where the image algorithm is divided into multiple processing flows, each processing flow comprises preprocessing and exclusive resource processing, and the current processing flow is one of the multiple processing flows. In each processing flow, the preprocessing must be completed before the exclusive resource processing. The exclusive resource processing is processing that must be executed by calling an exclusive hardware resource, and the exclusive hardware resource can serve only one processing flow's call at a time.
When the preprocessing of the current processing flow is completed, the first thread creates a recursive sub-thread and, when a target hardware resource is available, executes the exclusive resource processing of the current processing flow through that resource. In some embodiments, after the preprocessing of the current processing flow completes, the first thread further determines whether the current processing flow is the last of the multiple processing flows; if it is the last one, the first thread creates no recursive sub-thread. If the current processing flow is not the last of the multiple processing flows, the first thread creates a recursive sub-thread.
In some embodiments, the recursive sub-thread created by the first thread runs in parallel with the first thread. The recursive sub-thread is used to execute the next processing flow after the current one, and to create a new recursive sub-thread when the preprocessing of that next flow is completed, until each of the multiple processing flows has a corresponding thread.
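The recursion described above can be sketched in a few lines. The following Python sketch is illustrative only: the names `run_flow`, `flows`, and `resource_lock` are assumptions, not the patent's implementation, and a `threading.Lock` stands in for the exclusive hardware resource.

```python
import threading

def run_flow(flows, i, resource_lock, results):
    """Illustrative sketch: flow i runs its preprocessing, spawns a recursive
    sub-thread for flow i+1, then runs its exclusive stage under the lock."""
    preprocess, exclusive = flows[i]
    preprocess()                      # preprocessing must finish first
    child = None
    if i + 1 < len(flows):            # the last flow creates no sub-thread
        child = threading.Thread(
            target=run_flow, args=(flows, i + 1, resource_lock, results))
        child.start()                 # the next flow's preprocessing starts now
    with resource_lock:               # wait until the resource is released
        results.append(exclusive())   # exclusive resource processing of flow i
    if child:
        child.join()
```

A main thread would launch the whole chain with `run_flow(flows, 0, threading.Lock(), [])`; each thread spawns exactly one successor, so flow i+1's preprocessing overlaps flow i's exclusive stage.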
An image processing method as described above, wherein preferably, running a first thread for executing the current processing flow comprises:
if the current processing flow is the first processing flow of the image algorithm, the first thread is the main thread;
if the current processing flow is not the first processing flow of the image algorithm, the first thread is a recursive sub-thread, created during the previous processing flow.
In the image processing method as described above, it is preferable that the exclusive resource processing of the current processing flow overlaps in execution time with the preprocessing of the next processing flow.
An image processing method as described above, wherein preferably, each of the processing flows further includes post-processing that is executed after the exclusive resource processing;
while the target hardware resource executes the exclusive resource processing of the current processing flow, the post-processing of the previous processing flow and the preprocessing of the next processing flow overlap with it in execution time.
An image processing method as described above, wherein preferably, before the exclusive resource processing of the current processing flow is executed by the target hardware resource when it becomes available, the method further includes:
the first thread determining whether the target hardware resource is available;
and if the target hardware resource is not available, the first thread waiting until the target hardware resource is released by the previous processing flow.
An image processing method as described above, wherein preferably, the method further comprises:
when the target hardware resource is available, the first thread adding a mutual exclusion lock to the target hardware resource so that the target hardware resource executes only the exclusive resource processing of the current processing flow;
the method further comprising: after the target hardware resource finishes executing the exclusive resource processing of the current processing flow, the first thread unlocking the target hardware resource so that it can be used by the next processing flow.
The image processing method as described above, wherein preferably, the first thread adding a mutual exclusion lock to the target hardware resource includes:
setting a mutual exclusion lock mark on the API interface of the target hardware resource, where the mark either prevents other threads from detecting the API interface of the target hardware resource, or places the API interface detected by other threads in a non-callable state.
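A minimal sketch of such a guarded interface, assuming hypothetical names (`ExclusiveResource`, `is_available`, `call`) rather than any real GPU/NPU API; a non-blocking `acquire` probe plays the role of the mutual exclusion lock mark:

```python
import threading

class ExclusiveResource:
    """Stand-in for a hardware resource whose API admits one caller at a time."""
    def __init__(self):
        self._mutex = threading.Lock()   # the "mutual exclusion lock mark"

    def is_available(self):
        # Non-blocking probe: a held mutex makes the API appear non-callable.
        if self._mutex.acquire(blocking=False):
            self._mutex.release()
            return True
        return False

    def call(self, work):
        # Block until the previous flow releases the resource, then execute.
        with self._mutex:
            return work()
```

A thread whose `is_available` probe fails simply blocks in `call` until the previous processing flow's `with self._mutex` block exits, matching the wait-then-invoke behavior described above.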
An image processing method as described above, wherein preferably, each of the processing flows further includes post-processing that is executed after the exclusive resource processing; the first of the multiple processing flows is executed by the main thread; the main thread is further used to:
determine whether the post-processing of all the processing flows has finished;
and if so, determine the image processing result of the image algorithm.
The embodiment of the invention also provides an electronic device, comprising a memory for storing program instructions and a processor for executing the program instructions, where the program instructions, when executed by the processor, trigger the electronic device to perform the image processing method according to any one of the above.
Embodiments of the present invention also provide a computer readable storage medium having a computer program stored therein, which when run on an electronic device, causes the electronic device to perform the image processing method as set forth in any one of the preceding claims.
In the embodiments of the present invention, the image algorithm is split into n processing flows, each including exclusive resource processing, and the n processing flows are packaged into a recursive function. A corresponding recursive sub-thread can be created for each processing flow through the recursive function. By controlling when each recursive sub-thread is created, the exclusive resource processing of the n processing flows can be completely staggered and the exclusive hardware resource kept continuously in a working state, so that each processing flow is scheduled reasonably and the chip capability of the exclusive hardware resource is fully utilized.
Drawings
Fig. 1 is a schematic structural diagram of an electronic device according to an embodiment of the present invention;
fig. 2 is a software structural block diagram of an electronic device according to an embodiment of the present invention;
FIG. 3 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 4 is a flowchart of an image processing method according to an embodiment of the present invention;
FIG. 5 is a flowchart of another image processing method according to an embodiment of the present invention;
fig. 6 is a timing chart of an image processing method according to an embodiment of the present invention.
Detailed Description
The embodiments described below by referring to the drawings are illustrative only and are not to be construed as limiting the invention.
Referring to fig. 1, a schematic structural diagram of an electronic device according to an embodiment of the present invention is provided. As shown in fig. 1, the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, a subscriber identification module (SIM) card interface 195, and the like.
It should be understood that the illustrated structure of the embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of the present application, electronic device 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. It may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs such an instruction or datum again, it can be fetched directly from this memory, which avoids repeated accesses, reduces the waiting time of the processor 110, and thereby improves the efficiency of the system.
In some embodiments, electronic device 100 implements display functionality through a GPU, a display screen 194, and an application processor, among others. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
In some embodiments, the display screen 194 is used to display images, videos, and the like. The display 194 includes a display panel. The display panel may employ a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini LED, a Micro LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), or the like. In some embodiments, the electronic device 100 may include 1 or N display screens 194, N being a positive integer greater than 1.
In some embodiments, electronic device 100 may implement capture functionality through an ISP, camera 193, video codec, GPU, display screen 194, and application processor, among others.
The ISP is used to process data fed back by the camera 193. For example, when photographing, the shutter is opened, light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electric signal, and the camera photosensitive element transmits the electric signal to the ISP for processing and is converted into an image visible to naked eyes. ISP can also optimize the noise, brightness and skin color of the image. The ISP can also optimize parameters such as exposure, color temperature and the like of a shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera 193 is used to capture still images or video. The object generates an optical image through the lens and projects the optical image onto the photosensitive element. The photosensitive element may be a charge coupled device (charge coupled device, CCD) or a Complementary Metal Oxide Semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal, which is then transferred to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard RGB, YUV, or the like format. In some embodiments, electronic device 100 may include 1 or N cameras 193, N being a positive integer greater than 1.
The digital signal processor is used to process digital signals, and can process other digital signals in addition to digital image signals. For example, when the electronic device 100 selects a frequency bin, the digital signal processor is used to perform a Fourier transform on the frequency bin energy, and so on.
Video codecs are used to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record video in a variety of encoding formats, such as moving picture experts group (MPEG)-1, MPEG-2, MPEG-3, MPEG-4, and so on.
The NPU is a neural-network (NN) computing processor, and can rapidly process input information by referencing a biological neural network structure, for example, referencing a transmission mode between human brain neurons, and can also continuously perform self-learning. Applications such as intelligent awareness of the electronic device 100 may be implemented through the NPU, for example: image recognition, face recognition, speech recognition, text understanding, etc.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to enable expansion of the memory capabilities of the electronic device 100. The external memory card communicates with the processor 110 through an external memory interface 120 to implement data storage functions. For example, files such as music, video, etc. are stored in an external memory card.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data created during use of the electronic device 100 (e.g., audio data, image data, phonebook, etc.), and so on. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications of the electronic device 100 and data processing by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
In some embodiments, the processor 110 may include a CPU and hardware resources with exclusive characteristics, such as a GPU or an NPU. The CPU runs an image algorithm while displaying, capturing, or processing an image or video. If the image algorithm calls a hardware resource with an exclusive characteristic multiple times, for example the GPU or the NPU, the CPU may divide the image algorithm into multiple processing flows according to those calls, where each processing flow includes a call to the exclusive hardware resource. In the related art, the multiple processing flows are generally executed serially or in asynchronous parallel. In the serial mode, the next processing flow starts only after the previous one finishes; this mode does not fully utilize the chip capability of the CPU and the exclusive hardware resource, so the image algorithm runs for a long time. In the asynchronous parallel mode, the CPU runs all the processing flows of the image algorithm simultaneously, which increases the load on the CPU and causes problems such as resource preemption and unreasonable scheduling because the multiple processing flows wait to call the hardware resource at the same time.
To solve the above technical problems, an embodiment of the present invention provides an image processing method. When executing this method, the CPU divides the image algorithm into n processing flows, each including exclusive resource processing, that is, processing that must be executed by calling an exclusive hardware resource; the exclusive hardware resource can serve only one processing flow's call at a time. In the embodiment of the invention, the n processing flows can be packaged into a recursive function and invoked recursively, so that the exclusive resource processing of the n processing flows is completely staggered and the exclusive hardware resource stays in continuous operation, fully utilizing its chip capability.
Referring to fig. 2, a software architecture block diagram of an electronic device according to an embodiment of the present invention is provided. As shown in fig. 2, the layered architecture divides the software into several layers, each with a clear role and division of labor, and the layers communicate with each other through software interfaces. In some embodiments, the electronic device is divided into an application layer, an image algorithm layer, and a hardware layer. The embodiment of the invention mainly concerns an image processing method, and an electronic device most commonly runs image algorithms in its camera system, so the embodiment is illustrated with a camera application as an example. Of course, other applications that run image algorithms can be adapted with reference to the layered design of fig. 2.
As shown in fig. 2, the application layer mainly includes various types of application programs, such as including a camera application. In some embodiments, the application layer may also contain applications for beautifying, editing images, and the like.
The image algorithm layer is located between the application layer and the hardware layer; it responds to image processing instructions issued by the camera application and calls the hardware modules of the hardware layer. In some embodiments, the image algorithm layer deploys an image algorithm scheduling module, which executes the image processing method of the embodiment of the invention and schedules the image algorithm while doing so.
The hardware layer includes the various types of hardware of the electronic device, including its exclusive hardware resources, e.g., the GPU and the NPU.
Referring to fig. 3, a flowchart of an image processing method according to an embodiment of the present invention is provided. The method shown in fig. 3 is applied to the electronic device shown in fig. 1 or fig. 2. As shown in fig. 3, the processing steps of the method include:
301, a camera application initiates an image processing request to an image algorithm scheduling module. Optionally, the image processing request may include an image processing instruction, such as an image preview instruction, a photographing instruction, a video recording instruction, and the like. Optionally, the image processing request may further include configuration information for performing image processing, such as image resolution information, pixel format information, photographing mode information, lens parameter information, and image beautification setting information.
302, an image algorithm scheduling module determines an image to be processed and an image algorithm to be executed on the image to be processed according to the image processing request.
303, the image algorithm scheduling module divides the image algorithm into n processing flows, each of which includes preprocessing and exclusive resource processing, where n is a positive integer greater than or equal to 2. Preprocessing refers to processing that must be completed before the exclusive resource processing: in some embodiments, the k-th preprocessing is performed before the k-th exclusive resource processing, where k denotes any one of the n processing flows. Optionally, the main thread packages the n processing flows into a recursive function whose entry is the completion of each processing flow's preprocessing.
304, the image algorithm scheduling module schedules n times of processing flows of the image algorithm in a recursion mode so as to completely stagger the exclusive resource processing of the n times of processing flows.
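Steps 302 and 303 amount to representing the algorithm as an ordered list of (preprocessing, exclusive resource processing) stage pairs. The sketch below uses placeholder stage functions and assumed names (`split_algorithm`, `run_serially`), not the patent's actual image algorithm:

```python
def split_algorithm(n):
    """Divide a (placeholder) image algorithm into n processing flows, each a
    (preprocessing, exclusive_resource_processing) pair of callables."""
    flows = []
    for k in range(n):
        pre = lambda image, k=k: image + [("pre", k)]       # CPU-side stage k
        exclusive = lambda image, k=k: image + [("hw", k)]  # stage needing GPU/NPU
        flows.append((pre, exclusive))
    return flows

def run_serially(flows, image):
    """Reference serial execution: the k-th preprocessing always precedes the
    k-th exclusive resource processing, as step 303 requires."""
    for pre, exclusive in flows:
        image = exclusive(pre(image))
    return image
```

The serial runner is the related-art baseline that step 304's recursive scheduling improves on by overlapping the `pre` stage of flow k+1 with the `hw` stage of flow k.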
Referring to fig. 4, a flowchart of an image processing method according to an embodiment of the present invention is provided; the method of fig. 4 elaborates step 304. In some embodiments, the image algorithm scheduling module may create the main thread after receiving an image processing request sent by the camera application. The main thread determines, from the image processing request, the image to be processed and the image algorithm to be executed on it. Further, the main thread may divide the image algorithm into n processing flows, each including preprocessing and exclusive resource processing. Optionally, the main thread packages the n processing flows into a recursive function whose entry is the completion of each processing flow's preprocessing. As shown in fig. 4, the main thread scheduling the n processing flows of the image algorithm by function recursion includes:
401, the main thread executes the 0th of the n processing flows. Optionally, the 0th processing flow is the first of the n processing flows, i.e. the n processing flows are numbered 0, 1, 2, ..., n-1. Of course, in some embodiments the first processing flow may instead be numbered 1, giving flows 1, 2, 3, ..., n. For convenience of description, in the embodiments of the present invention the first processing flow is called the 0th, and the n processing flows are the 0th, 1st, 2nd, ..., (n-1)th.
402, when the preprocessing of the 0th processing flow is completed, the main thread creates the 1st recursive sub-thread, and the main thread calls the target hardware resource to execute the exclusive resource processing of the 0th processing flow. The target hardware resource is, for example, a GPU; in some embodiments it may also be an NPU or another exclusive hardware resource. After the main thread creates the 1st recursive sub-thread, that sub-thread runs in parallel with the main thread and is used to execute the 1st of the n processing flows.
403, while the main thread calls the hardware resource to execute the exclusive resource processing of the 0th processing flow, the 1st recursive sub-thread executes the 1st of the n processing flows. The 1st processing flow is one of the n processing flows and differs from the 0th. In some embodiments, the 1st recursive sub-thread executing the 1st processing flow specifically means executing its preprocessing; at the same time, the target hardware resource is executing the exclusive resource processing of the 0th processing flow. That is, the preprocessing of the 1st processing flow and the exclusive resource processing of the 0th processing flow are executed in parallel.
404, when the preprocessing execution of the 1 st processing flow is completed, the 1 st recursive sub-thread creates a 2 nd recursive sub-thread and the 1 st recursive sub-thread determines whether the main thread releases the target hardware resource. Optionally, the 2 nd recursive sub-thread runs in parallel with the 1 st recursive sub-thread and the main thread. The 2 nd recursive sub-thread is used to execute the 2 nd process flow of the n process flows.
In some embodiments, when the pre-processing of the 1st processing flow is completed, if the main thread has not released the target hardware resource, the 1st recursive sub-thread waits until the target hardware resource is available and then calls the target hardware resource to execute the exclusive resource processing of the 1st processing flow. Of course, if it is determined in step 404 that the main thread has released the target hardware resource, the 1st recursive sub-thread directly calls the target hardware resource to execute the exclusive resource processing of the 1st processing flow.
405, while the main thread and the 1st recursive sub-thread are running, the 2nd recursive sub-thread executes the 2nd processing flow. The 2nd processing flow is one of the n processing flows, and the 2nd processing flow, the 0th processing flow, and the 1st processing flow are different processing flows. The 2nd recursive sub-thread executes the 2nd processing flow, specifically the pre-processing of the 2nd processing flow.
406, when the pre-processing of the 2nd processing flow is completed, the 2nd recursive sub-thread again creates a new sub-thread, and so on until each of the n processing flows corresponds to a thread. Of course, after the pre-processing of the 2nd processing flow is completed, the 2nd recursive sub-thread not only creates a new recursive sub-thread but also waits for the 1st recursive sub-thread to release the target hardware resource, so as to execute the exclusive resource processing of the 2nd processing flow through the target hardware resource when the target hardware resource is available. According to the embodiment of the invention, the 1st, 2nd, ..., (n-1)th recursive sub-threads can be created for the 1st, 2nd, ..., (n-1)th processing flows, respectively. The 1st, 2nd, ..., (n-1)th recursive sub-threads each call the exclusive hardware resource to execute the exclusive resource processing of their respective processing flows when the exclusive hardware resource is available. In some embodiments, each recursive sub-thread obtains the execution result of its exclusive resource processing and then sends the execution result to the main thread.
407, the main thread acquires the processing results of the n processing flows and reports the image processing result to the camera application for displaying the image.
In the embodiment of the invention, the image algorithm is split into n processing flows, and each processing flow includes exclusive resource processing. In the embodiment of the invention, the n processing flows are packaged into a recursive function, and a corresponding recursive sub-thread can be created for each processing flow by the recursive function. By controlling the timing of creating each recursive sub-thread, the exclusive resource processing of the n processing flows can be completely staggered and the exclusive hardware resource can be kept in a working state at all times, so that each processing flow is reasonably scheduled and the chip capability of the exclusive hardware resource is fully utilized.
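Steps 401-407 can be sketched in a few lines of threading code. This is a minimal illustration, not the patent's implementation: the stage names (`pre`, `exclusive`, `post`), `run_flow`, and `image_algorithm` are placeholders for the image algorithm's actual pre-processing, exclusive-resource processing, and post-processing, and a plain `threading.Lock` stands in for the global mutual exclusion lock on the GPU/NPU.

```python
import threading

resource_lock = threading.Lock()   # global mutex guarding the exclusive hardware resource
trace = []                         # records (stage, k) in the order stages run
trace_lock = threading.Lock()

def log(stage, k):
    with trace_lock:
        trace.append((stage, k))

def run_flow(k, n, results, threads):
    """Execute the k-th processing flow and recursively spawn the (k+1)-th."""
    log("pre", k)                                  # pre-processing on a non-exclusive resource
    if k + 1 < n:                                  # create the next recursive sub-thread
        t = threading.Thread(target=run_flow, args=(k + 1, n, results, threads))
        threads.append(t)
        t.start()
    with resource_lock:                            # wait until the resource is released
        log("exclusive", k)                        # exclusive resource processing (e.g. GPU)
    log("post", k)                                 # post-processing back on the CPU
    results[k] = f"result-{k}"                     # report back to the main thread

def image_algorithm(n):
    results, threads = {}, []
    run_flow(0, n, results, threads)               # the main thread runs the 0th flow itself
    for t in threads:                              # main thread gathers the n results
        t.join()
    return [results[k] for k in range(n)]
```

Note that a bare lock leaves the acquisition order of waiting flows to the scheduler; a production version that must guarantee flow k runs on the resource strictly before flow k+1 would add a fair queue or per-flow condition variable.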
Referring to fig. 5, a flowchart of another image processing method according to an embodiment of the present invention is provided. The method of fig. 5 illustrates step 304. In the embodiment of the invention, the main thread divides the image algorithm into n processing flows, and each processing flow includes pre-processing, exclusive resource processing, and post-processing. Here, pre-processing refers to processing that needs to be completed before the exclusive resource processing, and post-processing refers to processing that needs to be executed after the exclusive resource processing is completed. In some embodiments, the pre-processing and post-processing may be done using non-exclusive resources, for example in a CPU, without invoking the exclusive hardware resource. The method according to the embodiment of the present invention is described below with reference to fig. 4 and 5.
As shown in fig. 5, in step 402, when the main thread finishes executing the pre-processing of the 0th processing flow, the main thread creates the 1st recursive sub-thread, and the main thread calls the target hardware resource to execute the exclusive resource processing of the 0th processing flow. Before the main thread calls the target hardware resource to execute the exclusive resource processing of the 0th processing flow, the main thread adds a mutual exclusion lock to the target hardware resource, so that the target hardware resource only executes the exclusive resource processing of the current processing flow.
In some embodiments, the main thread adding a mutual exclusion lock to the target hardware resource includes: the main thread sets a mutual exclusion lock mark on the API interface of the target hardware resource, wherein the mutual exclusion lock mark is used for preventing other threads from detecting the API interface of the target hardware resource, or for making the API interface of the target hardware resource detected by other threads be in a non-callable state.
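One way to picture the mutual exclusion lock mark is a wrapper around the resource's API entry point that reports the API as non-callable while another flow holds the lock. The class and method names below are illustrative assumptions, not the patent's actual interface:

```python
import threading

class TargetHardwareResource:
    """Hypothetical wrapper around an exclusive resource's API entry point."""

    def __init__(self):
        self._mutex = threading.Lock()   # the mutual exclusion lock mark

    def try_invoke(self, work):
        """Run `work` on the resource, or return None if the API is non-callable."""
        if not self._mutex.acquire(blocking=False):
            return None                  # lock mark set by another flow: non-callable
        try:
            return work()                # exclusive resource processing
        finally:
            self._mutex.release()        # unlock for the next processing flow
```

A caller that receives `None` corresponds to a thread that "detects the API interface in a non-callable state" and must wait for the holding flow to unlock.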
As shown in fig. 5, while the main thread executes the exclusive resource processing of the 0th processing flow, the 1st recursive sub-thread executes the pre-processing of the 1st processing flow. When the pre-processing of the 1st processing flow is finished, the 1st recursive sub-thread creates the 2nd recursive sub-thread, and the 2nd recursive sub-thread is used for executing the 2nd processing flow. Optionally, while the 2nd recursive sub-thread executes the 2nd processing flow, the 1st recursive sub-thread synchronously waits for the main thread to unlock the target hardware resource.
As shown in fig. 5, when the main thread finishes calling the target hardware resource to execute the exclusive resource processing of the 0th processing flow, the main thread unlocks the target hardware resource, so that the target hardware resource can be used by the next processing flow. After that, the main thread executes the post-processing of the 0th processing flow.
As shown in fig. 5, after the main thread unlocks the target hardware resource, the 1st recursive sub-thread adds a mutual exclusion lock to the target hardware resource and calls the target hardware resource to execute the exclusive resource processing of the 1st processing flow.
The main thread, the 1st recursive sub-thread, and the 2nd recursive sub-thread are executed in parallel. Based on this, there is a period during which the post-processing of the 0th processing flow, the exclusive resource processing of the 1st processing flow, and the pre-processing of the 2nd processing flow are executed in parallel.
As shown in fig. 5, after the 1st recursive sub-thread finishes calling the target hardware resource to execute the exclusive resource processing of the 1st processing flow, the 1st recursive sub-thread releases the target hardware resource and executes the post-processing of the 1st processing flow.
As shown in fig. 5, after the 1st recursive sub-thread releases the target hardware resource, the 2nd recursive sub-thread locks the target hardware resource and calls it to execute the exclusive resource processing of the 2nd processing flow. It should be noted that, after the 2nd recursive sub-thread finishes the pre-processing of the 2nd processing flow, in addition to waiting for the 1st recursive sub-thread to release the target hardware resource, the 2nd recursive sub-thread also creates the 3rd recursive sub-thread, which is used for executing the 3rd processing flow; the 3rd recursive sub-thread is not shown in fig. 5.
The 1st, 2nd, and 3rd recursive sub-threads are executed in parallel. Based on this, there is a period during which the post-processing of the 1st processing flow, the exclusive resource processing of the 2nd processing flow, and the pre-processing of the 3rd processing flow are executed in parallel.
In some embodiments, after each recursive sub-thread executes the pre-processing of its corresponding processing flow and before it creates the next recursive sub-thread, it further determines whether the image algorithm has an unexecuted processing flow, and if so, creates the next recursive sub-thread, until all n processing flows of the image algorithm correspond to processing threads.
In some embodiments, after each recursive sub-thread executes the post-processing of its corresponding processing flow, the recursive sub-thread sends the post-processing result to the main thread. The main thread acquires the post-processing results of the n processing flows and reports the image processing result to the camera application for displaying the image.
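The hand-off of post-processing results to the main thread can be sketched with a thread-safe queue. The patent does not specify the hand-off mechanism, so `queue.Queue` and the names below are assumptions for illustration only:

```python
import queue
import threading

def collect_results(n):
    """Main-thread side: gather n post-processing results from sub-threads."""
    results_q = queue.Queue()

    def flow(k):
        # ... pre-processing, exclusive resource processing, post-processing ...
        results_q.put((k, f"post-result-{k}"))          # send result to the main thread

    workers = [threading.Thread(target=flow, args=(k,)) for k in range(n)]
    for w in workers:
        w.start()
    gathered = dict(results_q.get() for _ in range(n))  # main thread blocks for n results
    for w in workers:
        w.join()
    return [gathered[k] for k in range(n)]              # then report, e.g. to the camera app
```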
In the embodiment of the invention, the image algorithm is divided into n processing flows, and the n processing flows are packaged into a recursive function. By recursively invoking the n processing flows, the exclusive resource processing of the n processing flows can be completely staggered, and the post-processing of the (k-1)th processing flow, the pre-processing of the (k+1)th processing flow, and the exclusive resource processing of the kth processing flow can be executed in parallel, so that the chip capability of the CPU is fully utilized, excessive CPU load pressure is avoided, and the chip capabilities of both the CPU and the exclusive hardware resource are fully exploited.
In the embodiment of the invention, the image algorithm is divided into n processing flows, and each processing flow includes pre-processing, exclusive resource processing, and post-processing. As shown in fig. 6, the pre-processing and post-processing in each processing flow are deployed to be executed on non-exclusive hardware resources (e.g., a CPU), and the exclusive resource processing is executed on an exclusive hardware resource (e.g., a GPU or an NPU). As shown in fig. 6, the 1st processing flow is recursively invoked after the pre-processing of the 0th processing flow is completed, at which point the 0th processing flow starts to occupy the exclusive hardware resource. As in the a+b period in fig. 6, the exclusive resource processing of the 0th processing flow and the pre-processing of the 1st processing flow are in a parallel state. Similarly, when the pre-processing of the 1st processing flow is completed, the 2nd processing flow is recursively invoked, and in this manner all n processing flows of the image algorithm can be recursively invoked. As shown in fig. 6, each time the exclusive hardware resource is called to execute exclusive resource processing, a global mutual exclusion lock is added to the exclusive hardware resource, so as to ensure that the exclusive hardware resource loads and runs only one processing flow at a time. As shown in fig. 6, when the pre-processing of the 1st processing flow is completed, the exclusive resource processing of the 0th processing flow has not yet finished, and because of the global mutual exclusion lock, the 1st processing flow waits for the 0th processing flow to release the exclusive hardware resource. That is, the 1st processing flow is in a waiting state during period b. During this waiting period, the 1st processing flow can recursively invoke the 2nd processing flow, so that the pre-processing of the 2nd processing flow runs while the 1st processing flow waits.
In some embodiments, after the 0 th process flow releases the exclusive hardware resource, i.e., during the period c, the 0 th process flow performs post-processing, the 1 st process flow invokes the exclusive hardware resource to perform exclusive resource processing, and the 2 nd process flow performs pre-processing.
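The timeline of fig. 6 can be reproduced with a small start-time calculation under the scheduling rule just described: pre(k) may start once pre(k-1) has finished (that is when the k-th recursive call happens), exclusive(k) needs both pre(k) finished and the global mutex released by flow k-1, and post(k) runs right after exclusive(k). The durations below are illustrative numbers, not values from the patent:

```python
def schedule(pre, exc, post):
    """Finish times of each stage of each flow under the fig. 6 pipelining rule."""
    n = len(pre)
    pre_end, exc_end, post_end = [0.0] * n, [0.0] * n, [0.0] * n
    for k in range(n):
        pre_start = pre_end[k - 1] if k else 0.0          # sub-thread k created here
        pre_end[k] = pre_start + pre[k]
        exc_start = max(pre_end[k],                       # own pre-processing done, and
                        exc_end[k - 1] if k else 0.0)     # global mutex released by k-1
        exc_end[k] = exc_start + exc[k]
        post_end[k] = exc_end[k] + post[k]                # post-processing follows directly
    return pre_end, exc_end, post_end

# Three flows, each with pre = 2, exclusive = 5, post = 1 time units.
pre_end, exc_end, post_end = schedule([2, 2, 2], [5, 5, 5], [1, 1, 1])
```

With these numbers the exclusive resource is busy without a gap from t=2 to t=17 (each exclusive stage starts exactly when the previous one ends), and the pipeline finishes at t=18, versus 24 time units for strictly sequential execution of the three flows.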
In the embodiment of the invention, the image algorithm is divided into n processing flows, and each processing flow is divided into pre-processing, exclusive resource processing, and post-processing. In the embodiment of the invention, the n processing flows are packaged into a recursive function, and a recursive sub-thread can be created for each processing flow according to the recursive function. By executing the created recursive sub-threads in parallel and controlling the creation timing of each recursive sub-thread, the post-processing of the (k-1)th processing flow, the pre-processing of the (k+1)th processing flow, and the exclusive resource processing of the kth processing flow can be executed in parallel, so that the exclusive resource processing of the n processing flows is completely staggered and the exclusive hardware resource is always in a working state, thereby fully exerting the chip capability of the exclusive hardware resource. Further, by scheduling the processing flows so that the post-processing of the (k-1)th processing flow and the pre-processing of the (k+1)th processing flow are executed in parallel, the embodiment of the invention fully exerts the chip capability of the CPU while avoiding excessive occupation of CPU resources and excessive CPU load.
In connection with the above embodiments, the image processing method according to the embodiment of the present invention may be generally described as:
An image processing method includes: running a first thread, wherein the first thread is used for executing the current processing flow of an image algorithm, the image algorithm is divided into a plurality of processing flows, each processing flow comprises preprocessing and exclusive resource processing, and the current processing flow is one of the plurality of processing flows;
when the preprocessing of the current processing flow is completed, the first thread creates a recursive sub-thread and executes the exclusive resource processing of the current processing flow through a target hardware resource when the target hardware resource is available;
the recursive sub-thread is used for executing the next processing flow after the current processing flow, and for creating a new recursive sub-thread when the pre-processing of that next processing flow is finished, until each of the multiple processing flows corresponds to a thread.
In some embodiments, a first thread is run, the first thread for executing a current processing flow of an image algorithm, comprising:
if the current processing flow is the first processing flow of the image algorithm, the first thread is a main thread;
If the current processing flow is a non-first processing flow of the image algorithm, the first thread is a recursive sub-thread, and the first thread is created by a last processing flow of the current processing flow.
In some embodiments, the exclusive resource processing of the current processing flow is performed in parallel with the pre-processing of the next processing flow.
In some embodiments, each of the process flows further includes post-processing, the post-processing performed after the exclusive resource processing;
when the exclusive resource processing of a current processing flow is executed by the target hardware resource, there is a period during which the post-processing of the last processing flow of the current processing flow and the pre-processing of the next processing flow are executed in parallel.
In some embodiments, before the exclusive resource processing of the current processing flow is performed by the target hardware resource when the target hardware resource is available, the method further comprises:
the first thread determining whether the target hardware resource is available;
and if the target hardware resource is not available, the first thread waits until the target hardware resource is released by the last processing flow.
In some embodiments, the method further comprises: when the target hardware resource is available, the first thread adds a mutual exclusion lock for the target hardware resource so that the target hardware resource only executes the exclusive resource processing of the current processing flow;
the method further comprises the steps of: and after the target hardware resource finishes executing the exclusive resource processing of the current processing flow, the first thread unlocks the target hardware resource so that the target hardware resource can be used by the next processing flow.
In some embodiments, the first thread adding a mutex lock to the target hardware resource includes:
and setting a mutual exclusion lock mark on the API interface of the target hardware resource, wherein the mutual exclusion lock mark is used for preventing other threads from detecting the API interface of the target hardware resource, or for making the API interface of the target hardware resource detected by other threads be in a non-callable state.
In some embodiments, each of the process flows further includes post-processing, the post-processing performed after the exclusive resource processing; the first processing flow of the multiple processing flows is executed by the main thread; the main thread is further to:
Determining whether the post-processing of the multiple processing flows is finished;
and if the execution is finished, determining an image processing result of the image algorithm.
The embodiment of the invention also provides an electronic device, including: a memory for storing program instructions and a processor for executing the program instructions, wherein the program instructions, when executed by the processor, trigger the electronic device to perform the image processing method described above.
The embodiment of the invention also provides a computer readable storage medium, wherein instructions are stored in the computer readable storage medium, and when the instructions run on a computer, the instructions cause the computer to execute the image processing method.
Embodiments of the present invention also provide a computer program product including instructions which, when executed on a computer or on at least one processor, cause the computer to perform the above-described image processing method.
The embodiment of the invention also provides a chip which comprises a processor and a data interface, wherein the processor reads the instructions stored in the memory through the data interface so as to execute the corresponding operation and/or flow of the image processing method.
Optionally, the chip further comprises a memory, the memory is connected with the processor through a circuit or a wire, and the processor is used for reading and executing the computer program in the memory. Further optionally, the chip further comprises a communication interface, and the processor is connected to the communication interface. The communication interface is used for receiving data and/or information to be processed, and the processor acquires the data and/or information from the communication interface and processes the data and/or information. The communication interface may be an input-output interface.
The memory may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, optical disc, digital versatile disc, Blu-ray disc, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
The electronic device, the computer storage medium, or the computer program product provided in the embodiments of the present application are configured to perform the corresponding methods provided above, and therefore, the advantages achieved by the electronic device, the computer storage medium, or the computer program product may refer to the advantages of the corresponding methods provided above, which are not described herein.
In the embodiments of the present application, "at least one" means one or more, and "a plurality" means two or more. "And/or" describes an association relation between associated objects and indicates that three relations may exist; for example, A and/or B may indicate that A exists alone, A and B both exist, or B exists alone, where A and B may be singular or plural. The character "/" generally indicates that the associated objects before and after it are in an "or" relation. "At least one of the following" and similar expressions mean any combination of these items, including any combination of single items or plural items. For example, at least one of a, b, and c may represent: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or multiple.
Those of ordinary skill in the art will appreciate that the various units and algorithm steps described in the embodiments disclosed herein can be implemented in electronic hardware or in a combination of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and the design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
It will be clear to those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described systems, apparatuses and units may refer to corresponding procedures in the foregoing method embodiments, and are not repeated herein.
In the several embodiments provided herein, any of the functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
While the foregoing is directed to embodiments of the present invention, other and further embodiments of the invention may be devised without departing from the basic scope thereof, and the scope thereof is determined by the claims that follow.

Claims (10)

1. An image processing method, comprising:
running a first thread, wherein the first thread is used for executing the current processing flow of an image algorithm, the image algorithm is divided into a plurality of processing flows, each processing flow comprises preprocessing and exclusive resource processing, and the current processing flow is one of the plurality of processing flows; wherein the preprocessing is done using non-exclusive resources; the exclusive resource processing is completed by using a target hardware resource, wherein the target hardware resource is an exclusive hardware resource;
when the preprocessing of the current processing flow is completed, the first thread creates a recursive sub-thread and executes the exclusive resource processing of the current processing flow through the target hardware resource when the target hardware resource is available;
The recursive sub-thread is used for executing the next processing flow of the current processing flow, and creating a new recursive sub-thread when the preprocessing execution of the next processing flow is finished until the multiple processing flows respectively correspond to corresponding threads;
after the first thread creates the recursive sub-thread, the recursive sub-thread runs in parallel with the first thread, and when the preprocessing of the next processing flow executed by the recursive sub-thread is completed, the recursive sub-thread executes the exclusive resource processing of the next processing flow through the target hardware resource after judging that the first thread has released the target hardware resource.
2. The method of claim 1, wherein running a first thread for executing a current processing flow of an image algorithm comprises:
if the current processing flow is the first processing flow of the image algorithm, the first thread is a main thread;
if the current processing flow is a non-first processing flow of the image algorithm, the first thread is a recursive sub-thread, and the first thread is created by a last processing flow of the current processing flow.
3. The method of claim 1, wherein the exclusive resource processing of a current process flow is performed in parallel with a pre-processing of the next process flow.
4. A method according to claim 1 or 3, wherein each of the process flows further comprises a post-process, the post-process being performed after the exclusive resource process;
when the exclusive resource processing of a current processing flow is executed by the target hardware resource, there is a parallel execution time for the post-processing of a last processing flow of the current processing flow and the pre-processing of the next processing flow.
5. The method of claim 1, wherein the method further comprises, before performing the exclusive resource processing of the current process flow by the target hardware resource when the target hardware resource is available:
the first thread determining whether the target hardware resource is available;
and if the target hardware resource is not available, the first thread waits until the target hardware resource is released by the last processing flow.
6. The method according to claim 1, wherein the method further comprises:
When the target hardware resource is available, the first thread adds a mutual exclusion lock for the target hardware resource so that the target hardware resource only executes the exclusive resource processing of the current processing flow;
the method further comprises the steps of:
and after the target hardware resource finishes executing the exclusive resource processing of the current processing flow, the first thread unlocks the target hardware resource so that the target hardware resource can be used by the next processing flow.
7. The method of claim 6, wherein the first thread adding a mutex lock to the target hardware resource comprises:
and setting a mutual exclusion lock mark for the API interface of the target hardware resource, wherein the mutual exclusion lock mark is used for enabling other threads not to detect the API interface of the target hardware resource or enabling the API interface of the target hardware resource detected by the other threads to be in a non-callable state.
8. The method of claim 1, wherein each of the process flows further comprises post-processing, the post-processing performed after the exclusive resource processing; the first processing flow of the multiple processing flows is executed by the main thread; the main thread is further to:
Determining whether the post-processing of the multiple processing flows is finished;
and if the execution is finished, determining an image processing result of the image algorithm.
9. An electronic device, comprising: a memory for storing program instructions and a processor for executing the program instructions, wherein the program instructions, when executed by the processor, trigger the electronic device to perform the method of any one of claims 1-8.
10. A computer readable storage medium, characterized in that the computer readable storage medium has stored therein a computer program which, when run on an electronic device, causes the electronic device to perform the method of any of the preceding claims 1-8.
CN202311195878.0A 2023-09-18 2023-09-18 Image processing method and apparatus Active CN116934572B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311195878.0A CN116934572B (en) 2023-09-18 2023-09-18 Image processing method and apparatus

Publications (2)

Publication Number Publication Date
CN116934572A CN116934572A (en) 2023-10-24
CN116934572B true CN116934572B (en) 2024-03-01

Family

ID=88380740


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112907437A (en) * 2021-03-26 2021-06-04 长沙景嘉微电子股份有限公司 Method and device for running multiple 3D processes, electronic equipment and storage medium
WO2022033024A1 (en) * 2020-08-12 2022-02-17 中国银联股份有限公司 Distributed training method and apparatus of deep learning model
CN114359020A (en) * 2021-12-30 2022-04-15 上海京像微电子有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN115629884A (en) * 2022-12-12 2023-01-20 荣耀终端有限公司 Thread scheduling method, electronic device and storage medium
CN116048745A (en) * 2022-11-08 2023-05-02 网易有道信息技术(北京)有限公司 Time-sharing scheduling method and system of multi-module GPU, electronic equipment and storage medium
WO2023108800A1 (en) * 2021-12-15 2023-06-22 中国科学院深圳先进技术研究院 Performance analysis method based on cpu-gpu heterogeneous architecture, and device and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005284749A (en) * 2004-03-30 2005-10-13 Kyushu Univ Parallel computer
US7346720B2 (en) * 2005-10-21 2008-03-18 Isilon Systems, Inc. Systems and methods for managing concurrent access requests to a shared resource


Non-Patent Citations (1)

Title
Research on Operating System Scheduling for Chip-Level Multithreaded Processors; Shao Lisong; Kong Jinzhu; Dai Huadong; Computer Engineering (15), pp. 283-285 *

Also Published As

Publication number Publication date
CN116934572A (en) 2023-10-24

Similar Documents

Publication Publication Date Title
KR102149187B1 (en) Electronic device and control method of the same
US20210358523A1 (en) Image processing method and image processing apparatus
CN114498028A (en) Data transmission method, device, equipment and storage medium
CN116934572B (en) Image processing method and apparatus
CN113627328A (en) Electronic device, image recognition method thereof, system on chip, and medium
WO2023124428A1 (en) Chip, accelerator card, electronic device and data processing method
CN116052701B (en) Audio processing method and electronic equipment
CN114443894A (en) Data processing method and device, electronic equipment and storage medium
CN114172596B (en) Channel noise detection method and related device
CN116048742A (en) Data processing method and electronic equipment
US20220301278A1 (en) Image processing method and apparatus, storage medium, and electronic device
CN114727082B (en) Image processing apparatus, image signal processor, image processing method, and medium
CN116028383B (en) Cache management method and electronic equipment
CN116723418B (en) Photographing method and related device
CN117389745B (en) Data processing method, electronic equipment and storage medium
CN115460343B (en) Image processing method, device and storage medium
CN113749614B (en) Skin detection method and apparatus
CN113966516A (en) Model-based signal reasoning method and device
CN117956264A (en) Shooting method, electronic device, storage medium, and program product
CN115830362A (en) Image processing method, apparatus, device, medium, and product
CN116703729A (en) Image processing method, terminal, storage medium and program product
CN117690177A (en) Face focusing method, face focusing device, electronic equipment and storage medium
CN117692751A (en) Image processing method, terminal, storage medium and program product
CN117156293A (en) Photographing method and related device
CN117714768A (en) Video display method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant