CN113807166A - Image processing method, device and storage medium

Image processing method, device and storage medium

Info

Publication number
CN113807166A
Authority
CN
China
Prior art keywords
image
target
processed
chip
frequency characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110876555.2A
Other languages
Chinese (zh)
Other versions
CN113807166B (en)
Inventor
刘瑞哲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Shushang Times Technology Co ltd
Original Assignee
Shenzhen Shushang Times Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Shushang Times Technology Co ltd
Priority to CN202110876555.2A
Priority to CN202410338655.3A (published as CN118334361A)
Publication of CN113807166A
Application granted
Publication of CN113807166B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/06 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N 3/063 Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00 General purpose image data processing
    • G06T 1/20 Processor architectures; Processor configuration, e.g. pipelining
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/40 Image enhancement or restoration using histogram techniques
    • G06T 5/90 Dynamic range modification of images or parts thereof
    • G06T 5/94 Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Neurology (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium, where the method includes: acquiring an image to be processed; performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set; and carrying out image recognition on the target feature set through an artificial intelligence chip to obtain a target recognition result. By adopting the embodiment of the application, the video processing efficiency can be improved.

Description

Image processing method, device and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method, an image processing apparatus, and a storage medium.
Background
With their widespread use, electronic devices (mobile phones, tablet computers, etc.) have developed toward diversification and personalization; they support more and more applications with increasingly powerful functions, and have become indispensable electronic products in users' daily lives.
At present, an electronic device may include a video processing chip (e.g., a Graphics Processing Unit (GPU)) and an artificial intelligence chip (AI chip). Video processing mainly consists of the video processing chip processing an image (e.g., image segmentation, graying, and image enhancement) and then transmitting the processed image to the AI chip for processing such as image recognition. When the AI chip performs image recognition, it first needs to extract features from the image and then perform operations such as matching on those features to obtain a result.
Disclosure of Invention
The embodiment of the application provides an image processing method, an image processing device and a storage medium, which can improve video processing efficiency.
In a first aspect, an embodiment of the present application provides an image processing method, where the method includes:
acquiring an image to be processed;
performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set;
and carrying out image recognition on the target feature set through an artificial intelligence chip to obtain a target recognition result.
In a second aspect, an embodiment of the present application provides an image processing apparatus, including: an acquisition unit, an extraction unit and an identification unit, wherein,
the acquisition unit is used for acquiring an image to be processed;
the extraction unit is used for extracting the features of the image to be processed through a video processing chip to obtain a target feature set;
and the identification unit is used for carrying out image identification on the target characteristic set through an artificial intelligence chip to obtain a target identification result.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, a communication interface, and one or more programs, where the one or more programs are stored in the memory and configured to be executed by the processor, and the programs include instructions for executing the steps in the first aspect of the embodiments of the present application; the processor includes a video processing chip or an artificial intelligence chip.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program for electronic data exchange, where the computer program enables a computer to perform some or all of the steps described in the first aspect of the embodiment of the present application.
In a fifth aspect, embodiments of the present application provide a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, where the computer program is operable to cause a computer to perform some or all of the steps as described in the first aspect of the embodiments of the present application. The computer program product may be a software installation package.
The embodiment of the application has the following beneficial effects:
It can be seen that, in the image processing method, the image processing apparatus, and the storage medium described in the embodiments of the present application, an image to be processed is acquired, a video processing chip performs feature extraction on the image to be processed to obtain a target feature set, and an artificial intelligence chip performs image recognition on the target feature set to obtain a target recognition result. Since the feature extraction is performed in the video processing chip, the amount of data to be processed by the artificial intelligence chip is reduced, which is beneficial to improving the video processing efficiency.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application;
FIG. 2 is a schematic flow chart of another image processing method provided in the embodiments of the present application;
fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 4 is a block diagram of functional units of an image processing apparatus according to an embodiment of the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first," "second," and the like in the description and claims of the present application and in the above-described drawings are used for distinguishing between different objects and not for describing a particular order. Furthermore, the terms "include" and "have," as well as any variations thereof, are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may include other steps or elements not listed or inherent to such process, method, article, or apparatus in one possible example.
Reference herein to "an embodiment" means that a particular feature, structure, or characteristic described in connection with the embodiment can be included in at least one embodiment of the application. The appearances of the phrase in various places in the specification are not necessarily all referring to the same embodiment, nor are separate or alternative embodiments mutually exclusive of other embodiments. It is explicitly and implicitly understood by one skilled in the art that the embodiments described herein can be combined with other embodiments.
The electronic device related to the embodiments of the present application may include various handheld devices having a wireless communication function (mobile phones, tablet computers, etc.), desktop computers, vehicle-mounted devices, wearable devices (smart watches, smart bands, wireless headsets, augmented reality/virtual reality devices, smart glasses), computing devices or other processing devices connected to wireless modems, and various forms of User Equipment (UE), Mobile Stations (MS), terminal devices, and the like. For convenience of description, the above-mentioned devices are collectively referred to as electronic devices; the electronic device may also be a server.
The following describes embodiments of the present application in detail.
Referring to fig. 1, fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application, where the image processing method includes:
101. and acquiring an image to be processed.
In this embodiment of the application, the image to be processed may be one or more images, and the image to be processed may be a grayscale image or a color image; for example, the image to be processed may be at least one frame image in the target video. The electronic device may include a video processing chip, which may be a Central Processing Unit (CPU) or a GPU, and an artificial intelligence chip (AI chip), which may be an embedded Neural-Network Processing Unit (NPU).
Optionally, in the step 101, acquiring an image to be processed may include the following steps:
11. acquiring target chip parameters of the artificial intelligence chip;
12. determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between preset chip parameters and shooting parameters;
13. acquiring target environment parameters;
14. determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between preset environment parameters and shooting parameters;
15. determining the intersection of the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
16. and shooting according to the target shooting parameter set to obtain the image to be processed.
In the embodiment of the present application, the chip parameter may be at least one of the following: chip model, chip specification, chip run, chip manufacturer, chip production date, chip core number, clock frequency, bandwidth, operating voltage, stability, design accuracy, and the like, without limitation. The shooting parameter may be at least one of the following: sensitivity, exposure duration, focal length, white balance parameters, zoom parameters, and the like, without limitation. The environmental parameter may be at least one of the following: ambient light, weather, ambient temperature, ambient humidity, atmospheric pressure, and the like, without limitation. The electronic device may pre-store a mapping relationship between preset chip parameters and shooting parameters and a mapping relationship between preset environment parameters and shooting parameters.
Specifically, the electronic device can obtain target chip parameters of the artificial intelligence chip, determine a first shooting parameter set corresponding to the target chip parameters according to a mapping relation between preset chip parameters and shooting parameters, further obtain target environment parameters, determine a second shooting parameter set corresponding to the target environment parameters according to a mapping relation between the preset environment parameters and the shooting parameters, then determine an intersection between the first shooting parameter set and the second shooting parameter set to obtain the target shooting parameter set, and finally shoot according to the target shooting parameter set to obtain an image to be processed.
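By way of non-limiting illustration, the selection logic of steps 11 to 16 can be sketched as follows. The mapping tables, parameter names and values below are assumptions introduced purely for illustration; only the lookup-and-intersection structure reflects the embodiment.

```python
# Minimal sketch of steps 11-16; the mapping tables and the capture step
# are hypothetical placeholders, only the set-intersection logic follows
# the embodiment.

# Preset mapping: chip parameters -> shooting parameters the chip handles well
CHIP_TO_SHOOTING = {
    ("npu-x1", "8-core"): {"iso_400", "exposure_1/60", "wb_auto"},
    ("npu-x2", "4-core"): {"iso_200", "exposure_1/120", "wb_auto"},
}

# Preset mapping: environment parameters -> shooting parameters suited to them
ENV_TO_SHOOTING = {
    ("low_light", "indoor"): {"iso_400", "exposure_1/30", "wb_auto"},
    ("daylight", "outdoor"): {"iso_100", "exposure_1/250", "wb_sunny"},
}

def select_shooting_parameters(chip_params, env_params):
    """Steps 12-15: look up both shooting parameter sets and intersect them."""
    first_set = CHIP_TO_SHOOTING.get(chip_params, set())
    second_set = ENV_TO_SHOOTING.get(env_params, set())
    return first_set & second_set  # target shooting parameter set

target_set = select_shooting_parameters(("npu-x1", "8-core"), ("low_light", "indoor"))
print(target_set)  # e.g. {'iso_400', 'wb_auto'}
# Step 16 would then capture the image to be processed with these parameters.
```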
102. And performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set.
In particular implementations, the target feature set may include at least one feature, and the feature may be at least one of the following: feature points, feature contours, feature lines, target regions, etc., which are not limited herein. The electronic device can preprocess the image to be processed through the video processing chip and then perform feature extraction on the preprocessed image to obtain the target feature set. The preprocessing may be at least one of the following: image segmentation, graying, image enhancement, and the like, without limitation.
In the embodiment of the application, the electronic device can perform feature extraction on the image to be processed through the video processing chip to obtain the target feature set, and the target feature set can then be input into the artificial intelligence chip for image recognition.
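As a rough, non-limiting illustration of step 102, the following sketch grays and enhances the image and then extracts feature points with a Harris corner detector (one of the preprocessing operations and feature extraction algorithms mentioned in this description); OpenCV is assumed as the implementation library, and the response threshold is an arbitrary example value.

```python
import cv2
import numpy as np

def extract_target_feature_set(image_bgr):
    """Preprocess (graying + enhancement) and extract feature points."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)   # graying
    enhanced = cv2.equalizeHist(gray)                     # simple image enhancement
    response = cv2.cornerHarris(np.float32(enhanced), 2, 3, 0.04)
    # Keep pixel coordinates whose corner response is strong enough
    ys, xs = np.where(response > 0.01 * response.max())
    feature_points = list(zip(xs.tolist(), ys.tolist()))
    return {"feature_points": feature_points}
```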
Optionally, in step 102, performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set, which may include the following steps:
A21, carrying out image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
A22, when the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
and A23, when the target image quality evaluation value is larger than the upper limit value of the first preset range, performing feature extraction on the image to be processed through the video processing chip to obtain the target feature set.
In specific implementation, the electronic device may perform image quality evaluation on the image to be processed by using at least one image quality evaluation index through the video processing chip to obtain a target image quality evaluation value, where the image quality evaluation index may be at least one of the following: information entropy, average gradient, average gray scale, sharpness, signal-to-noise ratio, edge preservation, etc., and is not limited herein. The first preset range may be set by the user or by default. The first preset range may be an empirical value.
In the embodiment of the application, when the target image quality evaluation value is in the first preset range, the image quality is moderate; the electronic device performs image enhancement processing on the image to be processed to obtain a first image so as to extract more features and improve the subsequent image recognition precision, and then performs feature extraction on the first image through the video processing chip to obtain the target feature set. Because the image to be processed has undergone image enhancement processing, more image features can be extracted. When the target image quality evaluation value is greater than the upper limit value of the first preset range, the image quality is good enough that sufficient features can be extracted to meet the requirement of image recognition, so image enhancement processing is not required; the electronic device can directly perform feature extraction on the image to be processed through the video processing chip to obtain the target feature set. Of course, when the target image quality evaluation value is smaller than the lower limit value of the first preset range, the image quality is too poor: no matter how the image is enhanced, the requirement of subsequent image recognition cannot be met, so the image to be processed may be acquired again, or the subsequent feature extraction and image recognition may simply not be performed.
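A schematic, non-limiting version of steps A21 to A23 is given below. The quality score used here is a single sharpness index (variance of the Laplacian) standing in for the image quality evaluation indexes listed above, and the range bounds are made-up example values.

```python
import cv2

LOWER, UPPER = 40.0, 120.0   # first preset range (example values only)

def quality_score(gray):
    # One possible quality index: sharpness via variance of the Laplacian
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def features_with_quality_gate(gray, enhance, extract):
    score = quality_score(gray)
    if score > UPPER:                  # quality good enough: extract directly
        return extract(gray)
    if LOWER <= score <= UPPER:        # moderate quality: enhance first
        return extract(enhance(gray))
    return None                        # too poor: re-acquire the image instead
```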
Optionally, in step A22, the image enhancement processing performed on the image to be processed to obtain the first image may include the following steps:
A221, performing multi-scale decomposition on the image to be processed to obtain a low-frequency characteristic component and a high-frequency characteristic component;
A222, determining a target energy ratio between the low-frequency characteristic component and the high-frequency characteristic component;
A223, when the target energy ratio is in a second preset range, performing global image enhancement processing on the image to be processed to obtain the first image;
A224, when the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the high-frequency characteristic component after the image enhancement processing and the low-frequency characteristic component to obtain the first image;
and A225, when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the low-frequency characteristic component after the image enhancement processing and the high-frequency characteristic component to obtain the first image.
In a specific implementation, an algorithm corresponding to the multi-scale decomposition in the embodiment of the present application may be at least one of the following: wavelet transform, laplace transform, contourlet transform, non-downsampled contourlet transform, ridge transform, shear wave transform, and the like, without limitation. The second preset range may be set by the user or by default. The second preset range may be an empirical value.
Specifically, the electronic device may perform multi-scale decomposition on the image to be processed to obtain a low-frequency characteristic component and a high-frequency characteristic component, where the low-frequency characteristic component reflects, to a certain extent, the main body of the image, and the high-frequency characteristic component reflects, to a certain extent, the detailed information of the image. Furthermore, the electronic device may determine a target energy ratio between the low-frequency characteristic component and the high-frequency characteristic component. When the target energy ratio is within the second preset range, the distribution between low frequency and high frequency is moderate, and global image enhancement processing may be performed on the image to be processed to obtain the first image. When the target energy ratio is larger than the upper limit value of the second preset range, detailed information is lacking; the electronic device can perform image enhancement processing on the high-frequency characteristic component, and perform a reconstruction operation corresponding to the multi-scale decomposition on the enhanced high-frequency characteristic component and the low-frequency characteristic component to obtain the first image. When the target energy ratio is smaller than the lower limit value of the second preset range, detailed information is rich and image enhancement needs to be performed on the low frequency; the electronic device can then perform image enhancement processing on the low-frequency characteristic component, and perform a reconstruction operation corresponding to the multi-scale decomposition on the enhanced low-frequency characteristic component and the high-frequency characteristic component to obtain the first image.
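The branching of steps A221 to A225 might be sketched as follows, taking a single-level Haar wavelet transform (PyWavelets) as the multi-scale decomposition; the energy-ratio bounds and the per-band gain factors are illustrative assumptions rather than values from the embodiment.

```python
import numpy as np
import pywt

RATIO_LOW, RATIO_HIGH = 4.0, 20.0   # second preset range (example values)

def enhance_by_frequency(img, global_enhance):
    # Multi-scale decomposition: low-frequency component cA, high-frequency components (cH, cV, cD)
    cA, (cH, cV, cD) = pywt.dwt2(img.astype(np.float64), "haar")
    low_energy = np.sum(cA ** 2)
    high_energy = np.sum(cH ** 2) + np.sum(cV ** 2) + np.sum(cD ** 2)
    ratio = low_energy / (high_energy + 1e-12)   # target energy ratio

    if RATIO_LOW <= ratio <= RATIO_HIGH:
        return global_enhance(img)               # balanced: global enhancement
    if ratio > RATIO_HIGH:                       # detail is lacking: boost high frequency
        cH, cV, cD = 1.5 * cH, 1.5 * cV, 1.5 * cD
    else:                                        # detail is rich: boost low frequency
        cA = 1.2 * cA
    # Reconstruction operation corresponding to the decomposition
    return pywt.idwt2((cA, (cH, cV, cD)), "haar")
```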
Optionally, in step A223, the global image enhancement processing performed on the image to be processed to obtain the first image may include the following steps:
A2231, determining a target difference value between a target image quality evaluation value and a reference image quality evaluation value, wherein the reference image quality evaluation value is greater than the upper limit value of the first preset range;
A2232, determining a target image enhancement algorithm identifier corresponding to the target difference value according to a preset mapping relationship between the difference value and the image enhancement algorithm identifier;
A2233, dividing the image to be processed into a plurality of areas, and determining the definition of each area in the plurality of areas to obtain a plurality of definitions;
A2234, determining a target mean square error corresponding to the plurality of definitions;
A2235, determining a target fine-tuning parameter corresponding to the target mean square error according to a preset mapping relation between the mean square error and the fine-tuning parameter;
A2236, obtaining a target image enhancement algorithm corresponding to the target image enhancement algorithm identification and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
A2237, fine-tuning the reference algorithm control parameter through the target fine-tuning parameter to obtain a target algorithm control parameter;
and A2238, performing image enhancement processing on the image to be processed according to the target image enhancement algorithm and the target algorithm control parameter to obtain the first image.
In the embodiment of the present application, the reference image quality evaluation value may be preset or set by default; it is greater than the upper limit of the first preset range, so if an image quality evaluation value is greater than or equal to the reference image quality evaluation value, the image quality is considered good. The electronic device may pre-store a mapping relationship between preset differences and image enhancement algorithm identifiers, and a mapping relationship between preset mean square errors and fine-tuning parameters. Different image enhancement algorithm identifiers correspond to different image enhancement algorithms, and the image enhancement algorithm may be at least one of the following: histogram equalization, gray-scale stretching, the Retinex algorithm, etc., without limitation. Each image enhancement algorithm has corresponding algorithm control parameters, which are used to control the degree of image enhancement or the region being enhanced; the algorithm control parameters are adjusted to prevent under-enhancement or over-enhancement.
Specifically, the electronic device may determine a target difference between the target image quality evaluation value and the reference image quality evaluation value; the reference image quality evaluation value can be regarded as the quality of an image with an ideal effect, so the target difference measures how far the current image falls short of that ideal. An appropriate algorithm is then selected according to this difference, that is, the target image enhancement algorithm identifier corresponding to the target difference is determined according to the mapping relationship between preset differences and image enhancement algorithm identifiers. Different differences indicate that different degrees of image enhancement are required for the image, so image enhancement processing can be applied in a targeted manner to prevent under-enhancement or over-enhancement.
Furthermore, the electronic device may divide the image to be processed into a plurality of regions, determine the definition of each region to obtain a plurality of definitions, determine the target mean square error corresponding to the plurality of definitions, and then determine the target fine-tuning parameter corresponding to the target mean square error according to the mapping relationship between preset mean square errors and fine-tuning parameters. The mean square error reflects the stability of the image and the relevance and consistency between its regions: in some cases, stains on the lens or lens damage may make the image non-uniform and break the consistency between regions, which may cause the algorithm to misjudge the image quality.
Further, the electronic device may obtain a target image enhancement algorithm corresponding to the target image enhancement algorithm identifier and a reference algorithm control parameter corresponding to the target image enhancement algorithm, and fine-tune the reference algorithm control parameter by using the target fine-tuning parameter to obtain the target algorithm control parameter, which is specifically as follows:
target algorithm control parameter = (1 + target fine-tuning parameter) × reference algorithm control parameter
Furthermore, the electronic device can perform image enhancement processing on the image to be processed according to the target image enhancement algorithm and the target algorithm control parameter to obtain the first image. In this way, the image enhancement effect can be improved, more image features can be extracted subsequently, and the image recognition precision can be improved.
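Steps A2231 to A2238 can be summarized by the following non-limiting sketch. The mapping tables, the region grid, and the sharpness proxy are hypothetical placeholders; only the fine-tuning formula matches the one given above.

```python
import numpy as np

# Hypothetical preset mappings (difference -> algorithm id, mean square error -> fine-tuning parameter)
DIFF_TO_ALGORITHM = [(10, "hist_eq"), (30, "gray_stretch"), (float("inf"), "retinex")]
MSE_TO_FINE_TUNE = [(50, 0.0), (200, 0.1), (float("inf"), 0.2)]
REFERENCE_CONTROL = {"hist_eq": 1.0, "gray_stretch": 1.2, "retinex": 0.8}

def lookup(table, value):
    # Return the result of the first threshold that the value does not exceed
    for threshold, result in table:
        if value <= threshold:
            return result

def region_sharpness_mse(gray, grid=4):
    # Divide into grid x grid regions, use per-region variance as a crude definition proxy,
    # then take the mean square error of those definitions
    h, w = gray.shape
    sharpness = [np.var(gray[i*h//grid:(i+1)*h//grid, j*w//grid:(j+1)*w//grid])
                 for i in range(grid) for j in range(grid)]
    return float(np.mean((np.array(sharpness) - np.mean(sharpness)) ** 2))

def global_enhance_params(gray, quality, reference_quality):
    algo_id = lookup(DIFF_TO_ALGORITHM, reference_quality - quality)   # A2231-A2232
    fine_tune = lookup(MSE_TO_FINE_TUNE, region_sharpness_mse(gray))   # A2233-A2235
    control = (1 + fine_tune) * REFERENCE_CONTROL[algo_id]             # A2236-A2237
    return algo_id, control   # A2238 would apply algo_id with this control parameter
```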
Optionally, in step 102, performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set may include the following steps:
B21, receiving an image recognition instruction, wherein the image recognition instruction carries image recognition requirement parameters, and the image recognition requirement parameters comprise at least one of the following parameters: image recognition accuracy, image recognition object type;
B22, determining a target feature extraction algorithm according to the image recognition requirement parameters;
and B23, performing feature extraction on the image to be processed according to the target feature extraction algorithm to obtain the target feature set.
In a specific implementation, the image recognition requirement parameters include at least one of the following: image recognition accuracy, image recognition object type, and the like. The image recognition object type may be at least one of the following, without limitation: human, animal, action, plant, expression, and so on. The target feature set may include a plurality of features, and each feature may be at least one of the following: feature points, feature contours, feature vectors, target regions, color distribution features. Different image recognition instructions correspond to different image recognition requirement parameters and rely on different image features; therefore, a suitable feature extraction algorithm can be selected based on the requirements of the image recognition, namely the image recognition requirement parameters. The feature extraction algorithm is used to extract features from the image, and those features are used to realize the image recognition. The feature extraction algorithm may be at least one of the following: a Harris corner detection algorithm, a scale-invariant feature extraction algorithm, the Hough transform, a region segmentation algorithm, etc., which are not limited herein.
Specifically, the electronic device may pre-store a mapping relationship between image recognition requirement parameters and feature extraction algorithms, determine the target feature extraction algorithm corresponding to the image recognition requirement parameters based on that mapping relationship, and then perform feature extraction on the image to be processed according to the target feature extraction algorithm to obtain the target feature set. In this way, a suitable feature extraction algorithm is deduced backwards from the purpose of the image recognition, so that the extracted features closely meet the requirements of the image recognition, which helps to improve the image recognition effect.
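As a non-limiting illustration of steps B21 to B23, the mapping from image recognition requirement parameters to a feature extraction algorithm can be as simple as a lookup table; the table entries and the fallback below are assumptions chosen only to mirror the algorithm names mentioned above.

```python
# Hypothetical preset mapping: (object type, accuracy tier) -> feature extraction algorithm
REQUIREMENT_TO_ALGORITHM = {
    ("human", "high"):   "scale_invariant_features",
    ("human", "normal"): "harris_corners",
    ("plant", "normal"): "region_segmentation",
    ("action", "high"):  "hough_transform",
}

def select_feature_extractor(recognition_instruction):
    """Steps B21-B22: read the requirement parameters carried by the instruction
    and pick the corresponding target feature extraction algorithm."""
    key = (recognition_instruction["object_type"],
           recognition_instruction["accuracy"])
    return REQUIREMENT_TO_ALGORITHM.get(key, "harris_corners")  # fallback is an assumption

algo = select_feature_extractor({"object_type": "human", "accuracy": "high"})
# Step B23 would then run `algo` on the image to be processed to obtain the target feature set.
```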
103. And carrying out image recognition on the target feature set through an artificial intelligence chip to obtain a target recognition result.
In specific implementation, the electronic device may perform image recognition on the target feature set through an artificial intelligence chip, for example, implement image classification, target recognition, pattern recognition, and the like, to obtain a target recognition result.
In the related art, video processing mainly includes processing an image (such as image segmentation, graying, and image enhancement) by a video processing chip such as a GPU, and then transmitting the processed image to an AI chip such as an NPU for processing such as image recognition; when the AI chip performs image recognition, it needs to extract features from the image first and then perform operations such as matching according to the features to obtain a result. Both the video processing chip and the AI chip can be implemented with domestically produced chips.
It can be seen that, in the image processing method described in the embodiment of the present application, the image to be processed is acquired, the video processing chip performs feature extraction on the image to be processed to obtain the target feature set, and the artificial intelligence chip performs image recognition on the target feature set to obtain the target recognition result. Since the feature extraction of the image is performed in the video processing chip, the amount of data to be processed by the artificial intelligence chip can be reduced, which is beneficial to improving the video processing efficiency.
Referring to fig. 2, fig. 2 is a schematic flowchart of an image processing method applied to an electronic device according to an embodiment of the present application, where as shown in the figure, the image processing method includes:
201. and acquiring an image to be processed.
202. And performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set, wherein the target feature set comprises a target feature point set and a target feature contour set.
The detailed description of steps 201 to 202 may refer to corresponding steps of the image processing method described in fig. 1, and will not be described herein again.
203. And carrying out image recognition on the target characteristic point set through an artificial intelligence chip to obtain a first recognition result set.
In a specific implementation, the artificial intelligence chip may run an artificial neural network algorithm, and the artificial neural network algorithm may be at least one of the following: a convolutional neural network model, a spiking neural network model, a fully-connected neural network model, a recurrent neural network model, and the like, without limitation. Image recognition of the target feature point set can be realized through the artificial neural network algorithm, that is, the target feature point set is input into the neural network model to obtain a plurality of recognition results, each recognition result corresponding to one label and one probability value; the plurality of recognition results and the probability value corresponding to each recognition result are used as the first recognition result set.
204. And carrying out image recognition on the target feature contour set through an artificial intelligence chip to obtain a second recognition result set.
In specific implementation, the image recognition of the target feature contour set can be realized through an artificial neural network algorithm, that is, the target feature contour set is input into a neural network model, so that a plurality of recognition results can be obtained, each recognition result corresponds to one label and one probability value, and the plurality of recognition results and the probability value corresponding to each recognition result are used as a second recognition result set.
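The two recognition passes of steps 203 and 204 both amount to running a feature set through a neural network model and reading back label/probability pairs. A minimal, non-limiting sketch (with a made-up model interface and example labels) is:

```python
import numpy as np

LABELS = ["person", "animal", "plant"]   # example labels only

def recognize(feature_set, model):
    """Run one feature set (points or contours) through the AI chip's model and
    return a recognition result set: one probability per label."""
    logits = model(np.asarray(feature_set, dtype=np.float32))
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                  # softmax over the labels
    return dict(zip(LABELS, probs.tolist()))

# first_results  = recognize(target_feature_point_set,   point_model)    # step 203
# second_results = recognize(target_feature_contour_set, contour_model)  # step 204
```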
205. And determining the target recognition result according to the first recognition result set and the second recognition result set.
In a specific implementation, the electronic device may determine a first weight corresponding to the first recognition result set according to the target image quality evaluation value. Specifically, a mapping relationship between image quality evaluation values and weights may be stored in the electronic device in advance, and the first weight corresponding to the target image quality evaluation value can be determined based on that mapping relationship; the second weight corresponding to the second recognition result set is 1 minus the first weight. The probability value of each recognition result in the first recognition result set is then weighted by the first weight to obtain a plurality of first weighted probability values, and the probability value of each recognition result in the second recognition result set is weighted by the second weight to obtain a plurality of second weighted probability values. The weighted probability values belonging to the same label are summed, the maximum value among the summed weighted probability values is selected, and the label corresponding to that maximum value is taken as the target recognition result.
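A non-limiting sketch of the fusion rule of step 205 follows: the first weight is assumed to come from a quality-to-weight lookup, the second weight is its complement, and the label with the largest summed weighted probability is taken as the target recognition result.

```python
def fuse_results(first_results, second_results, first_weight):
    """first_results / second_results: dicts mapping label -> probability.
    first_weight is determined from the target image quality evaluation value."""
    second_weight = 1.0 - first_weight
    combined = {}
    for label in set(first_results) | set(second_results):
        combined[label] = (first_weight * first_results.get(label, 0.0)
                           + second_weight * second_results.get(label, 0.0))
    # Target recognition result: the label whose summed weighted probability is largest
    return max(combined, key=combined.get)

result = fuse_results({"person": 0.7, "animal": 0.3},
                      {"person": 0.6, "animal": 0.4},
                      first_weight=0.55)   # -> "person"
```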
It can be seen that, in the image processing method described in the embodiment of the present application, an image to be processed is acquired, feature extraction is performed on the image to be processed through a video processing chip to obtain a target feature set including a target feature point set and a target feature contour set, image recognition is performed on the target feature point set through an artificial intelligence chip to obtain a first recognition result set, image recognition is performed on the target feature contour set through the artificial intelligence chip to obtain a second recognition result set, and a target recognition result is determined according to the first recognition result set and the second recognition result set. Since the feature extraction of the image is performed in the video processing chip, the amount of data to be processed by the artificial intelligence chip can be reduced, which is beneficial to improving the video processing efficiency.
Referring to fig. 3, in accordance with the above-mentioned embodiment, fig. 3 is a schematic structural diagram of an electronic device provided in this embodiment, as shown in the figure, the electronic device includes a processor, a memory, a communication interface, and one or more programs, the one or more programs are stored in the memory and configured to be executed by the processor, and the processor may be a video processing chip or an artificial intelligence chip, in this embodiment, the program includes instructions for performing the following steps:
acquiring an image to be processed;
performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set;
and carrying out image recognition on the target feature set through an artificial intelligence chip to obtain a target recognition result.
It can be seen that, in the electronic device described in the embodiment of the present application, an image to be processed is acquired, feature extraction is performed on the image to be processed through a video processing chip to obtain a target feature set, image recognition is performed on the target feature set through an artificial intelligence chip to obtain a target recognition result, and since the feature extraction of the image is performed in the video processing chip, the data volume of the artificial intelligence chip can be reduced, which is beneficial to improving the video processing efficiency.
Optionally, in the aspect that the feature extraction is performed on the image to be processed by the video processing chip to obtain the target feature set, the program includes instructions for executing the following steps:
performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
when the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
and when the target image quality evaluation value is larger than the upper limit value of the first preset range, performing feature extraction on the image to be processed through the video processing chip to obtain the target feature set.
Optionally, in the aspect of obtaining the first image by performing image enhancement processing on the image to be processed, the program includes instructions for executing the following steps:
carrying out multi-scale decomposition on the image to be processed to obtain low-frequency characteristic components and high-frequency characteristic components;
determining a target energy ratio between the low frequency feature component and the high frequency feature component;
when the target energy ratio is in a second preset range, carrying out global image enhancement processing on the image to be processed to obtain the first image;
when the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the high-frequency characteristic component after the image enhancement processing and the low-frequency characteristic component to obtain the first image;
and when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the low-frequency characteristic component after the image enhancement processing and the high-frequency characteristic component to obtain the first image.
Optionally, in the aspect of performing global image enhancement on the image to be processed to obtain the first image, the program includes instructions for executing the following steps:
determining a target difference value between a target image quality evaluation value and a reference image quality evaluation value, wherein the reference image quality evaluation value is larger than the upper limit value of the first preset range;
determining a target image enhancement algorithm identifier corresponding to the target difference value according to a preset mapping relation between the difference value and the image enhancement algorithm identifier;
dividing the image to be processed into a plurality of areas, and determining the definition of each area in the plurality of areas to obtain a plurality of definitions;
determining a target mean square error corresponding to the plurality of definitions;
determining a target fine-tuning parameter corresponding to the target mean square error according to a mapping relation between a preset mean square error and the fine-tuning parameter;
acquiring a target image enhancement algorithm corresponding to the target image enhancement algorithm identification and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
fine-tuning the reference algorithm control parameter through the target fine-tuning parameter to obtain a target algorithm control parameter;
and carrying out image enhancement processing on the image to be processed according to the target algorithm control parameter and the target image enhancement algorithm to obtain the first image.
Optionally, in the aspect of acquiring the image to be processed, the program includes instructions for performing the following steps:
acquiring target chip parameters of the artificial intelligence chip;
determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between preset chip parameters and shooting parameters;
acquiring target environment parameters;
determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between preset environment parameters and shooting parameters;
determining the intersection of the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
and shooting according to the target shooting parameter set to obtain the image to be processed.
Fig. 4 is a block diagram showing functional units of an image processing apparatus 400 according to an embodiment of the present application. The image processing apparatus 400 is applied to an electronic device, and the apparatus 400 includes: an acquisition unit 401, an extraction unit 402 and a recognition unit 403, wherein,
the acquiring unit 401 is configured to acquire an image to be processed;
the extracting unit 402 is configured to perform feature extraction on the image to be processed through a video processing chip to obtain a target feature set;
the identifying unit 403 is configured to perform image identification on the target feature set through an artificial intelligence chip to obtain a target identification result.
It can be seen that, in the image processing apparatus described in the embodiment of the present application, an image to be processed is acquired, a video processing chip performs feature extraction on the image to be processed to obtain a target feature set, and an artificial intelligence chip performs image recognition on the target feature set to obtain a target recognition result. Since the feature extraction of the image is performed in the video processing chip, the amount of data to be processed by the artificial intelligence chip can be reduced, which is beneficial to improving the video processing efficiency.
Optionally, in the aspect that the feature of the image to be processed is extracted by the video processing chip to obtain the target feature set, the extracting unit 402 is specifically configured to:
performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
when the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
and when the target image quality evaluation value is larger than the upper limit value of the first preset range, performing feature extraction on the image to be processed through the video processing chip to obtain the target feature set.
Optionally, in the aspect of performing image enhancement processing on the image to be processed to obtain a first image, the extracting unit 402 is specifically configured to:
carrying out multi-scale decomposition on the image to be processed to obtain low-frequency characteristic components and high-frequency characteristic components;
determining a target energy ratio between the low frequency feature component and the high frequency feature component;
when the target energy ratio is in a second preset range, carrying out global image enhancement processing on the image to be processed to obtain the first image;
when the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the high-frequency characteristic component after the image enhancement processing and the low-frequency characteristic component to obtain the first image;
and when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the low-frequency characteristic component after the image enhancement processing and the high-frequency characteristic component to obtain the first image.
Optionally, in the aspect of performing global image enhancement on the image to be processed to obtain the first image, the extracting unit 402 is specifically configured to:
determining a target difference value between a target image quality evaluation value and a reference image quality evaluation value, wherein the reference image quality evaluation value is larger than the upper limit value of the first preset range;
determining a target image enhancement algorithm identifier corresponding to the target difference value according to a preset mapping relation between the difference value and the image enhancement algorithm identifier;
dividing the image to be processed into a plurality of areas, and determining the definition of each area in the plurality of areas to obtain a plurality of definitions;
determining a target mean square error corresponding to the plurality of definitions;
determining a target fine-tuning parameter corresponding to the target mean square error according to a mapping relation between a preset mean square error and the fine-tuning parameter;
acquiring a target image enhancement algorithm corresponding to the target image enhancement algorithm identification and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
fine-tuning the reference algorithm control parameter through the target fine-tuning parameter to obtain a target algorithm control parameter;
and carrying out image enhancement processing on the image to be processed according to the target algorithm control parameter and the target image enhancement algorithm to obtain the first image.
Optionally, in the aspect of acquiring the image to be processed, the acquiring unit 401 is specifically configured to:
acquiring target chip parameters of the artificial intelligence chip;
determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between preset chip parameters and shooting parameters;
acquiring target environment parameters;
determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between preset environment parameters and shooting parameters;
determining the intersection of the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
and shooting according to the target shooting parameter set to obtain the image to be processed.
It is to be understood that the functions of each program module of the image processing apparatus of this embodiment may be specifically implemented according to the method in the foregoing method embodiment, and the specific implementation process may refer to the relevant description of the foregoing method embodiment, which is not described herein again.
Embodiments of the present application also provide a computer storage medium, where the computer storage medium stores a computer program for electronic data exchange, the computer program enabling a computer to execute part or all of the steps of any one of the methods described in the above method embodiments, and the computer includes an electronic device.
Embodiments of the present application also provide a computer program product comprising a non-transitory computer readable storage medium storing a computer program operable to cause a computer to perform some or all of the steps of any of the methods as described in the above method embodiments. The computer program product may be a software installation package, the computer comprising an electronic device.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts described, as some steps may occur in other orders or concurrently depending on the application. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required in this application.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable memory if it is implemented in the form of a software functional unit and sold or used as a stand-alone product. Based on such understanding, the technical solution of the present application in essence, or the part of it that contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product stored in a memory, including several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the above-mentioned methods of the embodiments of the present application. The aforementioned memory includes: a U-disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic or optical disk, and other various media capable of storing program codes.
Those skilled in the art will appreciate that all or part of the steps in the methods of the above embodiments may be implemented by associated hardware instructed by a program, which may be stored in a computer-readable memory, which may include: flash Memory disks, Read-Only memories (ROMs), Random Access Memories (RAMs), magnetic or optical disks, and the like.
The foregoing detailed description of the embodiments of the present application has been presented to illustrate the principles and implementations of the present application, and the above description of the embodiments is only provided to help understand the method and the core concept of the present application; meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (10)

1. An image processing method, characterized in that the method comprises:
acquiring an image to be processed;
performing feature extraction on the image to be processed through a video processing chip to obtain a target feature set;
and carrying out image recognition on the target feature set through an artificial intelligence chip to obtain a target recognition result.
2. The method of claim 1, wherein the performing feature extraction on the image to be processed by a video processing chip to obtain a target feature set comprises:
performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
when the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
and when the target image quality evaluation value is larger than the upper limit value of the first preset range, performing feature extraction on the image to be processed through the video processing chip to obtain the target feature set.
3. The method according to claim 2, wherein the performing image enhancement processing on the image to be processed to obtain a first image comprises:
carrying out multi-scale decomposition on the image to be processed to obtain low-frequency characteristic components and high-frequency characteristic components;
determining a target energy ratio between the low frequency feature component and the high frequency feature component;
when the target energy ratio is in a second preset range, carrying out global image enhancement processing on the image to be processed to obtain the first image;
when the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the enhanced high-frequency characteristic component and the low-frequency characteristic component to obtain the first image;
and when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the enhanced low-frequency characteristic component and the high-frequency characteristic component to obtain the first image.
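A sketch of the frequency-selective enhancement of claim 3. A single-level split into a box-blurred low-frequency component and its residual stands in for the unspecified multi-scale decomposition; the second preset range and the gain are illustrative values.

```python
import numpy as np

SECOND_PRESET_RANGE = (0.5, 2.0)  # illustrative energy-ratio bounds

def decompose(image_gray: np.ndarray):
    """One-level decomposition: box-blurred low-frequency part plus its high-frequency residual."""
    img = image_gray.astype(np.float32)
    k = 5
    pad = np.pad(img, ((k // 2, k // 2), (k // 2, k // 2)), mode="edge")
    low = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            low += pad[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    high = img - low
    return low, high

def enhance(component: np.ndarray, gain: float = 1.5) -> np.ndarray:
    return component * gain  # toy enhancement: a simple gain

def enhance_by_energy_ratio(image_gray: np.ndarray, global_enhance_fn):
    low, high = decompose(image_gray)
    ratio = float((low ** 2).sum() / ((high ** 2).sum() + 1e-8))  # target energy ratio
    lo, hi = SECOND_PRESET_RANGE
    if lo <= ratio <= hi:
        return global_enhance_fn(image_gray)           # balanced spectrum: enhance globally
    if ratio > hi:
        return np.clip(low + enhance(high), 0, 255)    # low frequencies dominate: boost detail
    return np.clip(enhance(low) + high, 0, 255)        # high frequencies dominate: boost base layer
```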
4. The method according to claim 3, wherein the performing global image enhancement processing on the image to be processed to obtain the first image comprises:
determining a target difference value between the target image quality evaluation value and a reference image quality evaluation value, wherein the reference image quality evaluation value is larger than the upper limit value of the first preset range;
determining a target image enhancement algorithm identifier corresponding to the target difference value according to a preset mapping relation between difference values and image enhancement algorithm identifiers;
dividing the image to be processed into a plurality of areas, and determining the sharpness of each of the plurality of areas to obtain a plurality of sharpness values;
determining a target mean square deviation of the plurality of sharpness values;
determining a target fine-tuning parameter corresponding to the target mean square deviation according to a preset mapping relation between mean square deviations and fine-tuning parameters;
acquiring a target image enhancement algorithm corresponding to the target image enhancement algorithm identification and a reference algorithm control parameter corresponding to the target image enhancement algorithm;
fine-tuning the reference algorithm control parameter through the target fine-tuning parameter to obtain a target algorithm control parameter;
and carrying out image enhancement processing on the image to be processed according to the target algorithm control parameter and the target image enhancement algorithm to obtain the first image.
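A sketch of the parameter-selection logic of claim 4. The reference quality value, the two mapping tables, the 2x2 region grid, the sharpness proxy, and the enhancement algorithms themselves are all illustrative placeholders; the claim only requires that such mappings and measurements exist.

```python
import numpy as np

REFERENCE_QUALITY = 0.8  # illustrative; must exceed the upper limit of the first preset range

# Illustrative mapping: quality-difference bucket -> enhancement algorithm identifier
DIFF_TO_ALGORITHM = {0: "mild_gamma", 1: "clahe_like", 2: "strong_stretch"}
# Illustrative mapping: sharpness mean-square-deviation bucket -> fine-tuning parameter
MSD_TO_FINE_TUNE = {0: 0.9, 1: 1.0, 2: 1.1}
# Illustrative reference control parameter for each algorithm
REFERENCE_PARAMS = {"mild_gamma": 1.1, "clahe_like": 2.0, "strong_stretch": 1.5}

def region_sharpness(region: np.ndarray) -> float:
    gy, gx = np.gradient(region.astype(np.float32))
    return float(np.mean(gx ** 2 + gy ** 2))  # gradient energy as a sharpness proxy

def global_enhance(image_gray: np.ndarray, quality: float) -> np.ndarray:
    # 1) Pick an algorithm identifier from the quality shortfall.
    diff = REFERENCE_QUALITY - quality
    algo_id = DIFF_TO_ALGORITHM[min(max(int(diff * 10), 0), 2)]
    # 2) Measure how unevenly sharp the image is across a 2x2 grid of regions.
    h, w = image_gray.shape
    regions = [image_gray[y:y + h // 2, x:x + w // 2]
               for y in (0, h // 2) for x in (0, w // 2)]
    sharpness = np.array([region_sharpness(r) for r in regions])
    msd = float(np.mean((sharpness - sharpness.mean()) ** 2))
    fine_tune = MSD_TO_FINE_TUNE[min(int(msd / 100.0), 2)]
    # 3) Fine-tune the algorithm's reference control parameter and apply the algorithm.
    param = REFERENCE_PARAMS[algo_id] * fine_tune
    if algo_id == "mild_gamma":
        return np.clip(255.0 * (image_gray / 255.0) ** (1.0 / param), 0, 255)
    return np.clip((image_gray - image_gray.mean()) * param + image_gray.mean(), 0, 255)
```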
5. The method according to any one of claims 1-4, wherein the acquiring the image to be processed comprises:
acquiring target chip parameters of the artificial intelligence chip;
determining a first shooting parameter set corresponding to the target chip parameter according to a mapping relation between preset chip parameters and shooting parameters;
acquiring target environment parameters;
determining a second shooting parameter set corresponding to the target environment parameter according to a mapping relation between preset environment parameters and shooting parameters;
determining the intersection of the first shooting parameter set and the second shooting parameter set to obtain a target shooting parameter set;
and shooting according to the target shooting parameter set to obtain the image to be processed.
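A sketch of the acquisition step of claim 5, in which the target shooting parameter set is the intersection of the set derived from the AI chip's parameters and the set derived from the environment parameters. The mapping tables, capability tiers, and parameter names are illustrative.

```python
# Illustrative mapping: chip capability tier -> shooting parameters the AI chip can keep up with
CHIP_TO_SHOOTING = {
    "npu_low":  {("resolution", "720p"), ("fps", 15), ("hdr", False)},
    "npu_high": {("resolution", "1080p"), ("resolution", "720p"), ("fps", 30), ("fps", 15), ("hdr", True)},
}
# Illustrative mapping: ambient-light level -> shooting parameters suited to the scene
ENV_TO_SHOOTING = {
    "low_light": {("resolution", "720p"), ("fps", 15), ("hdr", True)},
    "daylight":  {("resolution", "1080p"), ("fps", 30), ("hdr", False)},
}

def select_shooting_parameters(chip_tier: str, light_level: str) -> set:
    first_set = CHIP_TO_SHOOTING[chip_tier]    # from the target chip parameters
    second_set = ENV_TO_SHOOTING[light_level]  # from the target environment parameters
    return first_set & second_set              # intersection = target shooting parameter set

# Example: a low-end NPU in a dark scene keeps only the settings both mappings agree on.
print(select_shooting_parameters("npu_low", "low_light"))  # {('resolution', '720p'), ('fps', 15)}
```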
6. An image processing apparatus, characterized in that the apparatus comprises: an acquisition unit, an extraction unit and an identification unit, wherein,
the acquisition unit is used for acquiring an image to be processed;
the extraction unit is used for extracting the features of the image to be processed through a video processing chip to obtain a target feature set;
and the identification unit is used for carrying out image identification on the target characteristic set through an artificial intelligence chip to obtain a target identification result.
7. The apparatus according to claim 6, wherein, in terms of performing feature extraction on the image to be processed through the video processing chip to obtain a target feature set, the extraction unit is specifically configured to:
performing image quality evaluation on the image to be processed through the video processing chip to obtain a target image quality evaluation value;
when the target image quality evaluation value is in a first preset range, performing image enhancement processing on the image to be processed to obtain a first image, and performing feature extraction on the first image through the video processing chip to obtain the target feature set;
and when the target image quality evaluation value is larger than the upper limit value of the first preset range, performing feature extraction on the image to be processed through the video processing chip to obtain the target feature set.
8. The apparatus according to claim 7, wherein, in terms of performing the image enhancement processing on the image to be processed to obtain the first image, the extraction unit is specifically configured to:
carrying out multi-scale decomposition on the image to be processed to obtain low-frequency characteristic components and high-frequency characteristic components;
determining a target energy ratio between the low frequency feature component and the high frequency feature component;
when the target energy ratio is in a second preset range, carrying out global image enhancement processing on the image to be processed to obtain the first image;
when the target energy ratio is larger than the upper limit value of the second preset range, performing image enhancement processing on the high-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the enhanced high-frequency characteristic component and the low-frequency characteristic component to obtain the first image;
and when the target energy ratio is smaller than the lower limit value of the second preset range, performing image enhancement processing on the low-frequency characteristic component, and performing a reconstruction operation corresponding to the multi-scale decomposition on the enhanced low-frequency characteristic component and the high-frequency characteristic component to obtain the first image.
9. An electronic device, comprising a processor and a memory, wherein the memory is configured to store one or more programs configured to be executed by the processor, the programs comprising instructions for performing the steps in the method of any one of claims 1-5, and the processor comprises a video processing chip or an artificial intelligence chip.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores a computer program comprising program instructions that, when executed by a processor, cause the processor to carry out the method according to any one of claims 1-5.
CN202110876555.2A 2021-07-31 2021-07-31 Image processing method, device and storage medium Active CN113807166B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110876555.2A CN113807166B (en) 2021-07-31 2021-07-31 Image processing method, device and storage medium
CN202410338655.3A CN118334361A (en) 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110876555.2A CN113807166B (en) 2021-07-31 2021-07-31 Image processing method, device and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202410338655.3A Division CN118334361A (en) 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program

Publications (2)

Publication Number Publication Date
CN113807166A true CN113807166A (en) 2021-12-17
CN113807166B CN113807166B (en) 2024-03-08

Family

ID=78942733

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202110876555.2A Active CN113807166B (en) 2021-07-31 2021-07-31 Image processing method, device and storage medium
CN202410338655.3A Pending CN118334361A (en) 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202410338655.3A Pending CN118334361A (en) 2021-07-31 2021-07-31 Image processing method and device based on artificial intelligent chip, medium and program

Country Status (1)

Country Link
CN (2) CN113807166B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019001254A1 (en) * 2017-06-29 2019-01-03 Oppo广东移动通信有限公司 Method for iris liveness detection and related product
CN109801209A (en) * 2019-01-29 2019-05-24 北京旷视科技有限公司 Parameter prediction method, artificial intelligence chip, equipment and system
CN111160175A (en) * 2019-12-19 2020-05-15 中科寒武纪科技股份有限公司 Intelligent pedestrian violation behavior management method and related product
CN111681164A (en) * 2020-05-29 2020-09-18 广州市盛光微电子有限公司 Device and method for cruising of panoramic image in local end-to-end mode
CN111783375A (en) * 2020-06-30 2020-10-16 Oppo广东移动通信有限公司 Chip system and related device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114418824A (en) * 2022-01-27 2022-04-29 支付宝(杭州)信息技术有限公司 Image processing method, device and storage medium
CN114842579A (en) * 2022-04-26 2022-08-02 深圳市凯迪仕智能科技有限公司 Intelligent lock, image processing method and related product
CN114842579B (en) * 2022-04-26 2024-02-20 深圳市凯迪仕智能科技股份有限公司 Intelligent lock, image processing method and related products

Also Published As

Publication number Publication date
CN113807166B (en) 2024-03-08
CN118334361A (en) 2024-07-12

Similar Documents

Publication Publication Date Title
US10733421B2 (en) Method for processing video, electronic device and storage medium
CN107679448B (en) Eyeball action-analysing method, device and storage medium
CN107633204A (en) Face occlusion detection method, apparatus and storage medium
CN113807166B (en) Image processing method, device and storage medium
CN106650568B (en) Face recognition method and device
CN107944381B (en) Face tracking method, face tracking device, terminal and storage medium
CN110599554A (en) Method and device for identifying face skin color, storage medium and electronic device
CN111260655A (en) Image generation method and device based on deep neural network model
CN114299363A (en) Training method of image processing model, image classification method and device
CN109711287B (en) Face acquisition method and related product
CN111080665A (en) Image frame identification method, device and equipment and computer storage medium
CN110910400A (en) Image processing method, image processing device, storage medium and electronic equipment
CN111652878B (en) Image detection method, image detection device, computer equipment and storage medium
CN111444373B (en) Image retrieval method, device, medium and system thereof
CN113706550A (en) Image scene recognition and model training method and device and computer equipment
CN116959113A (en) Gait recognition method and device
CN116958582A (en) Data processing method and related device
CN115546554A (en) Sensitive image identification method, device, equipment and computer readable storage medium
CN112084874B (en) Object detection method and device and terminal equipment
CN109241928B (en) Method and computing device for recognizing heterogeneous irises
CN113760415A (en) Dial plate generation method and device, electronic equipment and computer readable storage medium
CN117152567B (en) Training method, classifying method and device of feature extraction network and electronic equipment
CN111488476A (en) Image pushing method, model training method and corresponding device
CN112749705B (en) Training model updating method and related equipment
CN110956190A (en) Image recognition method and device, computer device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant