CN110633740B - Image semantic matching method, terminal and computer readable storage medium - Google Patents

Image semantic matching method, terminal and computer readable storage medium

Info

Publication number
CN110633740B
Authority
CN
China
Prior art keywords
image
feature
label
tag
characteristic
Prior art date
Legal status
Active
Application number
CN201910824888.3A
Other languages
Chinese (zh)
Other versions
CN110633740A (en)
Inventor
王健宗
彭俊清
瞿晓阳
Current Assignee
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201910824888.3A
Publication of CN110633740A
Priority to PCT/CN2020/112352
Application granted
Publication of CN110633740B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/22: Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image semantic matching method, a terminal and a computer readable storage medium. A first feature tag set and a second feature tag set are obtained by parsing a first image and a second image respectively; each feature tag in the first feature tag set is matched against each feature tag in the second feature tag set to determine the feature tags shared by the two sets; and the semantic matching relationship between the first image and the second image is determined based on the shared feature tags. Because the first and second feature tag sets can be obtained by parsing a first image and a second image that are unrelated and unpaired, whether the first image and the second image have a semantic relationship can be determined from the feature tags the two sets share.

Description

Image semantic matching method, terminal and computer readable storage medium
Technical Field
The present invention relates to the field of electronic technology, and in particular, to an image semantic matching method, a terminal, and a computer readable storage medium.
Background
Semantic matching establishes semantic correspondences between different object instances or scenes. Most research on semantic matching has focused on paired images that already have some relationship to each other; for unrelated, unpaired images there is currently no method that achieves semantic matching.
Disclosure of Invention
The invention provides a method, a terminal and a computer readable storage medium for image semantic matching.
The invention provides an image semantic matching method, which comprises the following steps:
parsing a first image and a second image to obtain a first feature tag set and a second feature tag set respectively, wherein the first feature tag set comprises the feature tags of each subject in the first image, and the second feature tag set comprises the feature tags of each subject in the second image;
matching each feature tag in the first feature tag set with each feature tag in the second feature tag set, and determining the feature tags shared by the first feature tag set and the second feature tag set;
and determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set.
Optionally, the feature tags include a position tag, a pixel tag and an area tag of each subject in the image;
the position tag is the position of the subject in the image;
the pixel tag includes average brightness, grayscale, hue or color temperature information of the subject;
the area tag includes the area of the subject, or the ratio of the subject's area to the total area of the image.
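For illustration only, a per-subject feature tag of this kind could be held in a small record such as the following sketch; the field names are hypothetical assumptions, and the disclosure does not prescribe any particular data structure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class FeatureTag:
    """Hypothetical per-subject feature tag record (illustrative only).

    position: (row, col) of the subject in the image, e.g. its centroid.
    avg_brightness: average brightness of the subject's pixels (pixel tag);
                    grayscale, hue or color temperature could be stored likewise.
    area: number of pixels covered by the subject (area tag).
    area_ratio: subject area divided by the total image area.
    """
    position: Tuple[float, float]
    avg_brightness: float
    area: int
    area_ratio: float
```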
Optionally, parsing the first image and the second image to obtain the first feature tag set and the second feature tag set respectively includes:
determining the subjects in the first image and the second image;
determining the feature tags of the subjects in the first image and the second image based on the subjects in the first image and the second image;
and determining the feature tags of each subject in the first image as the feature tag set of the first image, and the feature tags of each subject in the second image as the feature tag set of the second image.
Optionally, matching each feature tag in the first feature tag set with each feature tag in the second feature tag set, and determining the feature tags shared by the first feature tag set and the second feature tag set includes:
matching each subject in the first image with each subject in the second image, and determining the subjects shared by the first image and the second image;
and comparing the feature tags of each shared subject in the two images to obtain the matching feature tags of that subject, and taking them as the feature tags shared by the first feature tag set and the second feature tag set.
Optionally, matching each feature tag in the first feature tag set with each feature tag in the second feature tag set, and determining the feature tags shared by the first feature tag set and the second feature tag set includes:
classifying each feature tag in the first feature tag set and each feature tag in the second feature tag set;
and comparing the feature tags of the same category in the first feature tag set and the second feature tag set respectively, to determine the feature tags shared by the first feature tag set and the second feature tag set.
Optionally, determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set includes:
judging that the first image and the second image have a semantic relationship when the number of shared feature tags is greater than or equal to a preset number of tags.
Optionally, determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set includes:
determining the important feature tags in the first feature tag set of the first image;
and judging that the first image and the second image have a semantic relationship when the feature tags shared by the first feature tag set and the second feature tag set include the important feature tags.
Optionally, determining the important feature tags in the first feature tag set of the first image includes:
determining the feature tags of the subject occupying the largest area in the first image as the important feature tags;
or, determining the feature tags of the brightest subject in the first image as the important feature tags;
or, determining the feature tags of each subject of the first image located in the center cell (the fifth cell) of a nine-square (3x3) grid over the image as the important feature tags.
Further, the invention also provides a device, which comprises a processor, a memory and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the image semantic matching method as described above.
Further, the present invention also provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the image semantic matching method as above.
The invention provides an image semantic matching method, a terminal and a computer readable storage medium. A first feature tag set and a second feature tag set can be obtained by parsing a first image and a second image that are unrelated and unpaired, and whether the first image and the second image have a semantic relationship can then be determined based on the feature tags shared by the first feature tag set and the second feature tag set.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings can be obtained from them by a person skilled in the art without inventive effort.
FIG. 1 is a basic flow chart of an image semantic matching method provided by an embodiment of the present invention;
FIG. 2 is a detailed flowchart of an image semantic matching method according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a device for implementing the image semantic matching method according to an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more comprehensible, the technical solutions in the embodiments of the present invention will be clearly described in conjunction with the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are only some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
Fig. 1 is a basic flowchart of an image semantic matching method provided in this embodiment, where the method includes:
s101, analyzing the first image and the second image to respectively obtain a first characteristic tag set and a second characteristic tag set.
In step S101, the first feature tag set includes the feature tags of each subject in the first image, and the second feature tag set includes the feature tags of each subject in the second image.
A subject is a conspicuous element of an image that stands out against the background, and an image may contain multiple subjects. For example, if an image shows a dog running on a lawn, the subject of that image is the running dog and the background is the lawn.
The feature tags may include one or more of a position tag, a pixel tag and an area tag of each subject in the image. In an example including all three, the position tag is the position of the subject in the image; the pixel tag includes average brightness, grayscale, hue or color temperature information of the subject; and the area tag includes the area of the subject, or the ratio of the subject's area to the total area of the image.
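As a minimal, non-limiting sketch of how such tags might be computed for one subject, assuming the subject has already been segmented into a boolean mask (the disclosure does not prescribe a segmentation method) and that images are numpy arrays; the function name and dictionary keys are illustrative assumptions.

```python
import numpy as np

def compute_feature_tags(image: np.ndarray, mask: np.ndarray) -> dict:
    """Compute illustrative position/pixel/area tags for one subject.

    image: H x W x 3 RGB array; mask: H x W boolean array marking the
    subject's pixels (obtained by some segmentation step not shown here).
    """
    coords = np.argwhere(mask)                 # (row, col) of subject pixels
    centroid = coords.mean(axis=0)             # position tag: subject centroid
    gray = image.mean(axis=2)                  # simple grayscale conversion
    avg_brightness = float(gray[mask].mean())  # pixel tag: average brightness
    area = int(mask.sum())                     # area tag: subject area in pixels
    area_ratio = area / mask.size              # or its share of the whole image
    return {
        "position": (float(centroid[0]), float(centroid[1])),
        "avg_brightness": avg_brightness,
        "area": area,
        "area_ratio": area_ratio,
    }
```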
It should be understood that the first image and the second image may be related, paired images, or unrelated, unpaired images. For related, paired images, existing semantic matching methods can determine whether the two images have a semantic relationship; for unrelated, unpaired images, existing methods cannot, and it is for this case that the invention provides the image semantic matching method.
S102, matching each feature tag in the first feature tag set with each feature tag in the second feature tag set, and determining feature tags shared by the first feature tag set and the second feature tag set.
It should be understood that the feature tags in a feature tag set are not recorded in an arbitrary order; they are grouped by subject. For example, if the first image contains subject 1 and subject 2, the corresponding first feature tag set contains the position tag, pixel tag and area tag of subject 1, and the position tag, pixel tag and area tag of subject 2.
Thus, the process of determining the feature tags common to the first feature tag set and the second feature tag set in step S102 may be either of the following:
(1) First match the first image and the second image subject by subject to determine the subjects common to both images, and then match the feature tags of each common subject, thereby determining the feature tags common to the first feature tag set and the second feature tag set.
(2) Because the feature tags comprise position tags, pixel tags and area tags, the tags in the first feature tag set and the second feature tag set can first be classified by type into a position tag set, a pixel tag set and an area tag set. The position tag set of the first feature tag set is then compared with the position tag set of the second feature tag set, and likewise for the pixel and area tag sets, thereby determining the feature tags common to the first feature tag set and the second feature tag set.
S103, determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set.
It should be noted that whether the first image and the second image have a semantic relationship can be judged by comparing the number of shared feature tags with a preset number of tags; alternatively, the first image and the second image can be judged to have a semantic relationship when the feature tags shared by the first feature tag set and the second feature tag set include the important feature tags of the first image.
The invention provides an image semantic matching method, a terminal and a computer readable storage medium. A first feature tag set and a second feature tag set can be obtained by parsing a first image and a second image that are unrelated and unpaired, and whether the first image and the second image have a semantic relationship can then be determined based on the feature tags shared by the first feature tag set and the second feature tag set.
Based on the above description, further embodiments of the image semantic matching method provided by the invention are described below.
Referring to fig. 2, fig. 2 is a flowchart illustrating an image semantic matching method according to a second embodiment of the present invention, where the method includes:
s201, determining a main body in the first image and the second image.
The subject in the image is conspicuous in the image, contrasted against the background, while there may be multiple subjects in the image.
S202, determining feature labels of the subjects in the first image and the second image based on the subjects in the first image and the second image.
The feature labels may include one or more of a location label, a pixel label, an area label of each subject in the image.
The position label is the position of the main body in the image; the pixel labels include average brightness, gray, hue, or color temperature information of a subject in the image; the area label includes the area of the subject, or the ratio of the subject to the total area of the image.
S203, determining the feature tags of each subject in the first image as the feature tag set of the first image, and the feature tags of each subject in the second image as the feature tag set of the second image.
It should be noted again that the first feature tag set includes the feature tags of each subject in the first image, and the second feature tag set includes the feature tags of each subject in the second image.
S204, matching each feature tag in the first feature tag set with each feature tag in the second feature tag set, and determining the feature tags shared by the first feature tag set and the second feature tag set.
The feature tags in a feature tag set are not recorded in an arbitrary order; they are grouped by subject. For example, if the first image contains subject 1 and subject 2, the corresponding first feature tag set contains the position tag, pixel tag and area tag of subject 1, and the position tag, pixel tag and area tag of subject 2.
The feature tags shared by the first feature tag set and the second feature tag set can be determined in either of the following two ways; this embodiment continues the description using the first as an example.
First approach:
S20411, matching each subject in the first image with each subject in the second image, and determining the subjects common to the first image and the second image.
S20412, comparing the feature tags of each common subject in the two images to obtain the matching feature tags of that subject, and taking them as the feature tags shared by the first feature tag set and the second feature tag set.
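A minimal sketch of this first approach, under the assumptions that each subject has been assigned a category name (e.g. by a detector, which the disclosure does not specify), that tag values are scalars, and that two tags are considered the same when they agree within a relative tolerance; the tolerance-based comparison and all names are illustrative.

```python
def match_by_subject(tags_a: dict, tags_b: dict, tol: float = 0.1) -> set:
    """Illustrative S20411/S20412: find tags shared via common subjects.

    tags_a / tags_b map a subject category (e.g. "dog") to a dict of its
    feature tags, e.g. {"area_ratio": 0.25, "avg_brightness": 130.0}.
    Returns the set of (subject, tag_name) pairs whose values agree within
    a relative tolerance -- the "shared feature tags" of the two sets.
    """
    shared = set()
    common_subjects = tags_a.keys() & tags_b.keys()       # S20411: common subjects
    for subject in common_subjects:                       # S20412: compare their tags
        for name, value_a in tags_a[subject].items():
            value_b = tags_b[subject].get(name)
            if not isinstance(value_a, (int, float)) or not isinstance(value_b, (int, float)):
                continue  # a position tuple would need a distance check instead
            scale = max(abs(value_a), abs(value_b), 1e-9)
            if abs(value_a - value_b) / scale <= tol:
                shared.add((subject, name))
    return shared

# Example: only the area ratios of the common "dog" subject agree within 10%.
a = {"dog": {"area_ratio": 0.25, "avg_brightness": 130.0}}
b = {"dog": {"area_ratio": 0.27, "avg_brightness": 90.0}, "tree": {"area_ratio": 0.1}}
print(match_by_subject(a, b))   # {('dog', 'area_ratio')}
```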
Second approach:
S20421, classifying each feature tag in the first feature tag set and each feature tag in the second feature tag set.
It should be understood that the feature tags fall into three types, namely position tags, pixel tags and area tags, so the classification yields a position tag set, a pixel tag set and an area tag set for each image.
S20422, comparing the feature tags of the same category in the first feature tag set and the second feature tag set respectively, and determining the feature tags shared by the first feature tag set and the second feature tag set.
For example, each position tag in the position tag set of the first feature tag set is compared with each position tag in the position tag set of the second feature tag set, and likewise for the pixel tag sets and the area tag sets, thereby determining the feature tags shared by the first feature tag set and the second feature tag set.
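A minimal sketch of this second approach, assuming each feature tag is represented as a (tag type, value) pair with scalar values, and that tags of the same type match when their values agree within a relative tolerance; these representations and thresholds are illustrative assumptions, not part of the disclosure.

```python
from collections import defaultdict

def match_by_category(set_a: list, set_b: list, tol: float = 0.1) -> list:
    """Illustrative S20421/S20422: group tags by type, then compare per type.

    set_a / set_b are flat lists of (tag_type, value) pairs, e.g.
    ("area_ratio", 0.25).  Tags are first classified by type, then each value
    in a category of the first set is compared with every value in the same
    category of the second set; matching pairs are the shared feature tags.
    """
    def classify(tags):
        groups = defaultdict(list)
        for tag_type, value in tags:
            groups[tag_type].append(value)                # S20421: classify by type
        return groups

    groups_a, groups_b = classify(set_a), classify(set_b)
    shared = []
    for tag_type in groups_a.keys() & groups_b.keys():    # S20422: per-category compare
        for va in groups_a[tag_type]:
            for vb in groups_b[tag_type]:
                scale = max(abs(va), abs(vb), 1e-9)
                if abs(va - vb) / scale <= tol:
                    shared.append((tag_type, va))
    return shared
```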
S205, determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set.
When the number of shared feature tags is greater than or equal to a preset number of tags, it is judged that the first image and the second image have a semantic relationship.
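A one-line sketch of this decision rule; the preset count of 3 is an arbitrary illustrative value, not one given in the disclosure.

```python
def has_semantic_relation(shared_tags, preset_count: int = 3) -> bool:
    """Judge a semantic relationship when the number of shared feature tags
    reaches a preset count (3 is only an example value)."""
    return len(shared_tags) >= preset_count
```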
In another example, step S205 may also be implemented by the following two steps:
s2051, determining important feature labels in the first feature label set of the first image.
It should be understood that the important feature labels are feature label sets corresponding to important subjects in the image, and the important subjects may be subjects with the largest occupied area in the image, subjects with the largest brightness, or subjects in a fifth grid in the nine grids.
Therefore, determining the important feature tag in the first feature tag set of the first image may be determining each feature tag of the main body with the largest occupied area in the first image as the important feature tag; the feature labels of the main body with the maximum brightness in the first image can be determined to be important feature labels; the feature label may also be an important feature label for determining each subject of the first image within a fifth one of the nine boxes.
S2052, when the feature labels shared by the first feature label set and the second feature label set comprise important feature labels, judging that the first image and the second image have semantic relation
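A minimal sketch of steps S2051 and S2052, assuming each subject record carries its area, average brightness and centroid position, that the image height and width are known for the nine-square grid test, and that the subject dictionary is non-empty; all names, and the choice of returning the first subject found in the center cell, are illustrative.

```python
def important_subject(subjects: dict, image_shape: tuple, rule: str = "area"):
    """S2051 (illustrative): pick the important subject of the first image by
    one of three rules: largest area, highest brightness, or centroid inside
    the center cell (fifth cell) of a 3x3 grid over the image.

    subjects maps a subject id to a dict with "area", "avg_brightness" and
    "position" (row, col); image_shape is (height, width).
    """
    if rule == "area":
        return max(subjects, key=lambda s: subjects[s]["area"])
    if rule == "brightness":
        return max(subjects, key=lambda s: subjects[s]["avg_brightness"])
    if rule == "center_cell":
        height, width = image_shape
        for sid, tags in subjects.items():
            row, col = tags["position"]
            if height / 3 <= row < 2 * height / 3 and width / 3 <= col < 2 * width / 3:
                return sid                     # first subject found in the center cell
    return None

def related_via_important_tags(important_tags: set, shared_tags: set) -> bool:
    """S2052 (illustrative): the images are judged semantically related when
    the important subject's tags all appear among the shared feature tags."""
    return bool(important_tags) and important_tags <= shared_tags
```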
The invention provides an image semantic matching method, a terminal and a computer readable storage medium. A first feature tag set and a second feature tag set can be obtained by parsing a first image and a second image that are unrelated and unpaired, and whether the first image and the second image have a semantic relationship can then be determined based on the feature tags shared by the first feature tag set and the second feature tag set.
The present embodiment also provides an apparatus, as shown in fig. 3, which includes a processor 31, a memory 32, and a communication bus 33, wherein:
the communication bus 33 is used to enable connection communication between the processor 31 and the memory 32;
the processor 31 is configured to execute an image semantic matching program stored in the memory 32 to implement the steps of the image semantic matching method in the respective embodiments described above.
The present embodiment also provides a computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the image semantic matching method in the respective embodiments as described above.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of combinations of actions, but those skilled in the art should understand that the present invention is not limited by the order of actions described, since some steps may be performed in another order or simultaneously in accordance with the present invention. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the actions and modules involved are not necessarily all required by the present invention.
Each of the foregoing embodiments is described with its own emphasis; for parts not detailed in one embodiment, reference may be made to the related description of other embodiments. The above embodiment numbers are for description only and do not indicate which embodiments are better or worse. Those skilled in the art may devise many other forms without departing from the spirit of the invention and the scope of the claims.

Claims (6)

1. A method of semantic matching of images, the method comprising:
determining subjects in a first image and a second image;
determining feature tags of the subjects in the first image and the second image based on the subjects in the first image and the second image, wherein the feature tags comprise a position tag, a pixel tag and an area tag of each subject in the image, the position tag being the position of the subject in the image, the pixel tag comprising average brightness, grayscale, hue or color temperature information of the subject, and the area tag comprising the area of the subject or the ratio of the subject's area to the total area of the image;
determining the feature tags of each subject in the first image as a first feature tag set, and determining the feature tags of each subject in the second image as a second feature tag set;
matching each subject in the first image with each subject in the second image, and determining the subjects common to the first image and the second image; comparing the feature tags of each common subject in the two images to obtain the matching feature tags of that subject, and taking them as the feature tags shared by the first feature tag set and the second feature tag set;
or classifying each feature tag in the first feature tag set and each feature tag in the second feature tag set, and comparing the feature tags of the same category in the first feature tag set and the second feature tag set respectively, to determine the feature tags shared by the first feature tag set and the second feature tag set;
and determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set.
2. The image semantic matching method according to claim 1, wherein the determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set comprises:
judging that the first image and the second image have a semantic relationship when the number of feature tags shared by the first feature tag set and the second feature tag set is greater than or equal to a preset number of tags.
3. The image semantic matching method according to claim 1, wherein the determining the semantic matching relationship between the first image and the second image based on the feature tags shared by the first feature tag set and the second feature tag set comprises:
determining important feature tags in the first feature tag set of the first image;
and judging that the first image and the second image have a semantic relationship when the feature tags shared by the first feature tag set and the second feature tag set include the important feature tags.
4. The image semantic matching method according to claim 3, wherein the determining important feature tags in the first feature tag set of the first image comprises:
determining the feature tags of the subject occupying the largest area in the first image as the important feature tags;
or, determining the feature tags of the brightest subject in the first image as the important feature tags;
or, determining the feature tags of each subject of the first image located in the center cell (the fifth cell) of a nine-square (3x3) grid over the image as the important feature tags.
5. An apparatus comprising a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the image semantic matching method according to any one of claims 1 to 4.
6. A computer-readable storage medium storing one or more programs executable by one or more processors to implement the steps of the image semantic matching method of any of claims 1-4.
CN201910824888.3A 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium Active CN110633740B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201910824888.3A CN110633740B (en) 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium
PCT/CN2020/112352 WO2021043092A1 (en) 2019-09-02 2020-08-31 Image semantic matching method and device, terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910824888.3A CN110633740B (en) 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN110633740A CN110633740A (en) 2019-12-31
CN110633740B (en) 2024-04-09

Family

ID=68969962

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910824888.3A Active CN110633740B (en) 2019-09-02 2019-09-02 Image semantic matching method, terminal and computer readable storage medium

Country Status (2)

Country Link
CN (1) CN110633740B (en)
WO (1) WO2021043092A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110633740B (en) * 2019-09-02 2024-04-09 平安科技(深圳)有限公司 Image semantic matching method, terminal and computer readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102598113A (en) * 2009-06-30 2012-07-18 安芯美特控股有限公司 Method circuit and system for matching an object or person present within two or more images
US8782077B1 (en) * 2011-06-10 2014-07-15 Google Inc. Query image search
CN107067030A (en) * 2017-03-29 2017-08-18 北京小米移动软件有限公司 The method and apparatus of similar pictures detection
CN108304435A (en) * 2017-09-08 2018-07-20 腾讯科技(深圳)有限公司 Information recommendation method, device, computer equipment and storage medium
CN109977253A (en) * 2019-03-29 2019-07-05 哈尔滨工业大学 A kind of fast image retrieval method and device based on semanteme and content
CN110059212A (en) * 2019-03-16 2019-07-26 平安科技(深圳)有限公司 Image search method, device, equipment and computer readable storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109166261B (en) * 2018-10-11 2022-06-07 平安科技(深圳)有限公司 Image processing method, device and equipment based on image recognition and storage medium
CN110633740B (en) * 2019-09-02 2024-04-09 平安科技(深圳)有限公司 Image semantic matching method, terminal and computer readable storage medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102598113A (en) * 2009-06-30 2012-07-18 安芯美特控股有限公司 Method circuit and system for matching an object or person present within two or more images
US8782077B1 (en) * 2011-06-10 2014-07-15 Google Inc. Query image search
CN107067030A (en) * 2017-03-29 2017-08-18 北京小米移动软件有限公司 The method and apparatus of similar pictures detection
CN108304435A (en) * 2017-09-08 2018-07-20 腾讯科技(深圳)有限公司 Information recommendation method, device, computer equipment and storage medium
CN110059212A (en) * 2019-03-16 2019-07-26 平安科技(深圳)有限公司 Image search method, device, equipment and computer readable storage medium
CN109977253A (en) * 2019-03-29 2019-07-05 哈尔滨工业大学 A kind of fast image retrieval method and device based on semanteme and content

Also Published As

Publication number Publication date
WO2021043092A1 (en) 2021-03-11
CN110633740A (en) 2019-12-31

Similar Documents

Publication Publication Date Title
US10282643B2 (en) Method and apparatus for obtaining semantic label of digital image
US20210174135A1 (en) Method of matching image and apparatus thereof, device, medium and program product
US10635942B2 (en) Method and apparatus for identifying a product
CN114169381A (en) Image annotation method and device, terminal equipment and storage medium
CN112308802A (en) Image analysis method and system based on big data
CN109558792B (en) Method and system for detecting internet logo content based on samples and features
CN111612000B (en) Commodity classification method and device, electronic equipment and storage medium
CN110633740B (en) Image semantic matching method, terminal and computer readable storage medium
CN112149570A (en) Multi-person living body detection method and device, electronic equipment and storage medium
US10963690B2 (en) Method for identifying main picture in web page
CN112949706A (en) OCR training data generation method and device, computer equipment and storage medium
CN110413869B (en) Method and device for pushing information
CN116861107A (en) Business content display method, device, equipment, medium and product
CN113627526B (en) Vehicle identification recognition method and device, electronic equipment and medium
CN110704658A (en) Method and device for searching image, computer storage medium and terminal
CN114677578A (en) Method and device for determining training sample data
CN114373068A (en) Industry-scene OCR model implementation system, method and equipment
CN109189789B (en) Method and device for displaying table
CN112329841A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN110880022A (en) Labeling method, labeling device and storage medium
CN113435454B (en) Data processing method, device and equipment
CN111127310B (en) Image processing method and device, electronic equipment and storage medium
CN111311603A (en) Method and apparatus for outputting target object number information
CN111914850A (en) Picture feature extraction method, device, server and medium
CN113449814B (en) Picture level classification method and system

Legal Events

Code Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant