CN115661368A - Image matching method, device, server and storage medium - Google Patents

Image matching method, device, server and storage medium

Info

Publication number
CN115661368A
Authority
CN
China
Prior art keywords
image
matching
matched
acquiring
trained
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211598037.XA
Other languages
Chinese (zh)
Other versions
CN115661368B (en)
Inventor
金岩
邱敏
刘继超
詹慧媚
甘琳
唐至威
冯谨强
胡国锋
付晓雪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hainayun IoT Technology Co Ltd
Qingdao Hainayun Digital Technology Co Ltd
Qingdao Hainayun Intelligent System Co Ltd
Original Assignee
Hainayun IoT Technology Co Ltd
Qingdao Hainayun Digital Technology Co Ltd
Qingdao Hainayun Intelligent System Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hainayun IoT Technology Co Ltd, Qingdao Hainayun Digital Technology Co Ltd, Qingdao Hainayun Intelligent System Co Ltd filed Critical Hainayun IoT Technology Co Ltd
Priority to CN202211598037.XA priority Critical patent/CN115661368B/en
Publication of CN115661368A publication Critical patent/CN115661368A/en
Application granted granted Critical
Publication of CN115661368B publication Critical patent/CN115661368B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00: Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30: Computing systems specially adapted for manufacturing

Abstract

The application provides an image matching method, an image matching device, a server and a storage medium. The method includes: acquiring an image set to be matched, and acquiring an image pre-pairing algorithm corresponding to the acquisition type of the image set; performing pre-pairing processing on the images in the image set to be matched to obtain a plurality of initial image matching pairs; for each image in the image set to be matched, acquiring a feature region block of the image, and acquiring a feature descriptor corresponding to the image by adopting pre-trained neural networks and a feature extraction network; and acquiring a matching result of the image set to be matched by adopting a pre-trained matching network, so as to realize image three-dimensional reconstruction processing based on the matching result. In this way, the participation of the neural networks improves the extraction efficiency of the feature descriptors corresponding to the images in the image set to be matched, improves the matching accuracy, and improves the efficiency and effect of three-dimensional reconstruction from the image set to be matched.

Description

Image matching method, device, server and storage medium
Technical Field
The present application relates to the field of image matching technologies, and in particular, to an image matching method, an image matching device, a server, and a storage medium.
Background
When the image information is used for three-dimensional reconstruction of the target object, sufficient feature information needs to be extracted according to the image information, and image matching is completed according to the feature information, so that more complete three-dimensional information of the target object can be calculated, and the three-dimensional reconstruction is realized. It can be seen that completing image matching is an important step in achieving three-dimensional reconstruction.
In the prior art, the Scale Invariant Feature Transform (SIFT) feature extraction algorithm is often adopted for image matching; its complexity is high, and it is difficult to meet real-time requirements. The Random Sample Consensus (RANSAC) algorithm that is often used for image matching has no upper limit on the number of iterations and frequently fails to reach an optimal solution, so image matching is difficult to realize, the time required for three-dimensional reconstruction is long, and the obtained effect is poor.
Therefore, rapidly and accurately completing image matching is still a technical problem to be solved urgently.
Disclosure of Invention
The application provides an image matching method, an image matching device, a server and a storage medium, which are used for solving the technical problems in the prior art that the feature extraction algorithm has high complexity and the image matching algorithm struggles to reach an optimal solution, so that images are difficult to match, three-dimensional reconstruction takes a long time, and its effect is poor.
In a first aspect, the present application provides an image matching method, including:
acquiring an image set to be matched, and acquiring an image pre-pairing algorithm corresponding to the acquisition type according to the acquisition type of the image set to be matched;
performing pre-pairing processing on the images in the image set to be matched according to the image pre-pairing algorithm to obtain a plurality of initial image matching pairs;
for each image in the image set to be matched, acquiring a feature region block of the image, and acquiring a feature descriptor corresponding to the image by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network;
and acquiring a matching result of the image set to be matched by adopting a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptors corresponding to each image in the image set to be matched, so as to realize image three-dimensional reconstruction processing based on the matching result.
In a possible implementation manner, the acquiring, for each image in the image set to be matched, a feature region block of the image includes:
for each image in the image set to be matched, performing Hessian matrix analysis on the image to obtain an analysis result;
acquiring an inner corner point corresponding to the image according to the analysis result;
and taking the inner corner point as a center, acquiring a feature region block of the image according to a first preset pixel length.
In a possible implementation manner, the performing Hessian matrix analysis on the image to obtain an analysis result includes:
for each pixel point (x, y) of the image I, calculating the second-order partial derivatives of I in the x direction and the y direction and the second-order mixed partial derivative in the xy direction, and obtaining the Hessian matrix
H(x, y) = [ I_xx(x, y)  I_xy(x, y) ; I_xy(x, y)  I_yy(x, y) ];
for each pixel point (x, y), respectively obtaining the corresponding Hessian response value R(x, y) = det H(x, y) = I_xx·I_yy - I_xy^2;
and arranging the response values R(x, y) of the pixel points according to their corresponding positions on the image to obtain the analysis result.
In a possible implementation manner, the acquiring an inner corner point corresponding to the image according to the analysis result includes:
according to the analysis result, adopting the formula
K(x, y) = R(x, y) / (1 + I_x^2 + I_y^2)^2
to acquire the Gaussian curvature K(x, y) corresponding to each pixel point (x, y) of the image I, and taking the points where the Gaussian curvature K is maximal as corner points of the image;
taking each corner point as a center, acquiring the region extending to its vicinity by a second preset pixel length, and taking the points in the region where R is less than zero as inner corner points;
wherein R is the Hessian response value corresponding to the pixel point (x, y) in the image; I_x is the first derivative of the image I in the x direction; and I_y is the first derivative of the image I in the y direction.
In a possible implementation manner, the acquiring a feature descriptor corresponding to the image by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network includes:
performing direction and deformation correction processing on the feature region block by adopting the pre-trained Orientation neural network and the pre-trained AffineNet neural network to obtain a feature region image corresponding to the corrected feature region block;
and extracting a preset number of feature descriptors corresponding to the feature region image by adopting the HardNet feature extraction network according to the feature region image.
In one possible implementation, the method further includes:
for each image in the image set to be matched, acquiring a gray level image corresponding to the image, and acquiring a color gradient corresponding to the image by adopting a Harris algorithm according to the gray level image;
then, the acquiring a feature region block of the image according to a first preset pixel length with the inner corner point as a center includes:
taking the inner corner point as a center, acquiring a feature region block of the image according to the first preset pixel length and the color gradient.
In a second aspect, the present application provides an image matching apparatus, comprising:
the image acquisition module is used for acquiring an image set to be matched and acquiring an image pre-pairing algorithm corresponding to the acquisition type according to the acquisition type of the image set to be matched;
the pre-pairing module is used for performing pre-pairing processing on the images in the image set to be matched according to the image pre-pairing algorithm to obtain a plurality of initial image matching pairs;
the feature extraction module is used for acquiring a feature region block of each image in the image set to be matched, and acquiring a feature descriptor corresponding to the image by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network;
and the image matching module is used for acquiring the matching result of the image set to be matched by adopting a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptors corresponding to each image in the image set to be matched, so as to realize image three-dimensional reconstruction processing based on the matching result.
In addition, optionally, the feature extraction module is specifically configured to:
for each image in the image set to be matched, performing Hessian matrix analysis on the image to obtain an analysis result;
acquiring an inner corner point corresponding to the image according to the analysis result;
and taking the inner corner point as a center, acquiring a feature region block of the image according to a first preset pixel length.
Optionally, the feature extraction module is specifically configured to:
for each pixel point (x, y) of the image I, calculating the second-order partial derivatives of I in the x direction and the y direction and the second-order mixed partial derivative in the xy direction, and obtaining the Hessian matrix
H(x, y) = [ I_xx(x, y)  I_xy(x, y) ; I_xy(x, y)  I_yy(x, y) ];
for each pixel point (x, y), respectively obtaining the corresponding Hessian response value R(x, y) = det H(x, y) = I_xx·I_yy - I_xy^2;
and arranging the response values R(x, y) of the pixel points according to their corresponding positions on the image to obtain the analysis result.
Optionally, the feature extraction module is specifically configured to:
according to the analysis result, adopting the formula
K(x, y) = R(x, y) / (1 + I_x^2 + I_y^2)^2
to acquire the Gaussian curvature K(x, y) corresponding to each pixel point (x, y) of the image I, and taking the points where the Gaussian curvature K is maximal as corner points of the image;
taking each corner point as a center, acquiring the region extending to its vicinity by a second preset pixel length, and taking the points in the region where R is less than zero as inner corner points;
wherein R is the Hessian response value corresponding to the pixel point (x, y) in the image; I_x is the first derivative of the image I in the x direction; and I_y is the first derivative of the image I in the y direction.
Optionally, the image matching module is specifically configured to:
performing direction and deformation correction processing on the feature region block by adopting the pre-trained Orientation neural network and the pre-trained AffineNet neural network to obtain a feature region image corresponding to the corrected feature region block;
and extracting a preset number of feature descriptors corresponding to the feature region image by adopting the HardNet feature extraction network according to the feature region image.
Optionally, the feature extraction module is further configured to:
for each image in the image set to be matched, acquiring a gray level image corresponding to the image, and acquiring a color gradient corresponding to the image by adopting a Harris algorithm according to the gray level image;
the feature extraction module is specifically configured to:
and taking the inner corner point as a center, and acquiring a characteristic region block of the image according to the first preset pixel length and the color gradient.
In a third aspect, the present application provides a server, comprising:
a processor and a memory;
the memory is used for storing executable instructions of the processor;
wherein the processor is configured to perform the method of the first aspect via execution of the executable instructions.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon computer-executable instructions for implementing the method according to the first aspect when executed by a processor.
The application provides an image matching method, an image matching device, a server and a storage medium. The method includes: acquiring an image set to be matched, and acquiring an image pre-pairing algorithm corresponding to the acquisition type according to the acquisition type of the image set to be matched; performing pre-pairing processing on the images in the image set to be matched according to the image pre-pairing algorithm to obtain a plurality of initial image matching pairs; for each image in the image set to be matched, acquiring a feature region block of the image, and acquiring a feature descriptor corresponding to the image by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network; and acquiring a matching result of the image set to be matched by adopting a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptors corresponding to each image in the image set to be matched, so as to realize image three-dimensional reconstruction processing based on the matching result. In this way, the images in the image set to be matched are pre-paired, the feature descriptors corresponding to the images are acquired by the pre-trained Orientation neural network, the pre-trained AffineNet neural network and the HardNet feature extraction network, and the pre-paired initial image pairs together with their feature descriptors are input into the pre-trained SuperGlue matching network to obtain the matching result of the image set to be matched. The participation of the neural networks improves the extraction efficiency of the feature descriptors corresponding to the images in the image set to be matched, improves the matching accuracy, and improves the efficiency and effect of three-dimensional reconstruction from the image set to be matched.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description show some embodiments of the present application, and that those skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 is a schematic flowchart of a first embodiment of an image matching method provided in the present application;
fig. 2 is a schematic flowchart of a second embodiment of an image matching method provided in the present application;
FIG. 3 is a schematic structural diagram of an embodiment of an image matching apparatus provided in the present application;
fig. 4 is a schematic structural diagram of an embodiment of a server provided in the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by persons skilled in the art based on the embodiments in the present application in light of the present disclosure, are within the scope of protection of the present application.
The terms "first," "second," "third," "fourth," and the like (if any) in the description of the present application and in the drawings described above are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
When the target object is three-dimensionally reconstructed from its image information, accurate three-dimensional information corresponding to the target object needs to be calculated from two-dimensional image information of the target object, and in order to obtain the three-dimensional information, pixel identification and alignment, that is, image matching, needs to be performed on the target object. In the prior art, the SIFT feature extraction algorithm commonly used in the image matching process has high complexity, its calculation consumes considerable time and resources, and it is difficult to meet real-time requirements. The RANSAC algorithm commonly used for matching the extracted feature descriptors iteratively estimates the parameters of a model fitted to data from a point set comprising inliers and outliers; however, the number of iterations of the algorithm has no upper limit, the calculation amount is large, and an optimal solution frequently cannot be obtained, so the matching effect is unstable. Three-dimensional reconstruction based on image matching with these two algorithms is therefore time-consuming and yields poor results.
Therefore, the technical idea of the present application is: how to improve the image matching efficiency and accuracy so as to save time and improve the reconstruction effect when performing three-dimensional reconstruction according to image information.
Hereinafter, the technical means of the present application will be described in detail by specific examples. It should be noted that the following specific embodiments may be combined with each other, and the same or similar concepts or processes may not be described in detail in some embodiments.
Fig. 1 is a schematic flowchart of a first embodiment of an image matching method provided in the present application. Referring to fig. 1, the method includes:
s101, obtaining an image set to be matched, and obtaining an image pre-matching algorithm corresponding to the collection type according to the collection type of the image set to be matched.
In this embodiment, the image set to be matched is a set of images captured of a target object to be three-dimensionally reconstructed. The images in the set may be captured by corresponding devices at different times, positions and angles, and may reflect characteristics of the target object in different dimensions, so that, relative to a front view of the target object in each direction, each image in the image set to be matched has different degrees of deformation in direction and angle. Optionally, the device may be a digital camera, a mobile phone, a tablet computer, or another device with an image capturing function.
In this embodiment, the acquisition type of the image set to be matched includes the type of device used for acquiring the images, the environment in which the images are acquired, the number of acquired images, and the like, and the candidate image pre-pairing algorithms include the exhaustive method, the bag-of-words method, the sequential method, the Global Positioning System (GPS) assisted clustering method, and the like. If the number of images in the image set to be matched does not exceed 200, the exhaustive method is acquired as the pre-pairing algorithm for the image set; if the images in the image set to be matched are acquired by the same device and different images are separated by the same physical distance, the sequential method is acquired as the pre-pairing algorithm; if the images are acquired in an outdoor environment and the acquisition device can record the GPS position of each acquisition, the GPS assisted clustering method is acquired as the pre-pairing algorithm; and if the images are acquired in an outdoor environment but no GPS position is available, the bag-of-words method is acquired as the pre-pairing algorithm.
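For illustration only, the selection rules in this paragraph can be sketched in Python as follows; the class, field and function names are hypothetical labels introduced here, and only the 200-image limit and the four conditions come from the description above:

```python
# Sketch (assumed names): choosing a pre-pairing algorithm from the
# acquisition type, following the rules described in this embodiment.
from dataclasses import dataclass

@dataclass
class AcquisitionType:
    num_images: int
    same_device_fixed_spacing: bool  # same device, equal physical spacing
    outdoor: bool
    has_gps: bool

def select_prepairing_algorithm(acq: AcquisitionType) -> str:
    if acq.num_images <= 200:
        return "exhaustive"
    if acq.same_device_fixed_spacing:
        return "sequential"
    if acq.outdoor and acq.has_gps:
        return "gps_assisted_clustering"
    return "bag_of_words"
```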
Step S102, performing pre-pairing processing on the images in the image set to be matched according to the image pre-pairing algorithm, and obtaining a plurality of initial image matching pairs.
In this embodiment, according to the pre-pairing algorithm acquired according to the acquisition type of the image set to be matched, pre-pairing processing is performed on the images in the image set to be matched, so as to obtain a plurality of pairwise-matched initial image matching pairs corresponding to the image set to be matched.
In this embodiment, if the exhaustive method is acquired as the image pre-pairing algorithm of the image set to be matched, each image in the image set to be matched is pre-paired with all of the remaining images; if the sequential method is acquired, each image in the image set to be matched is pre-paired only with the images whose physical positions at capture time are closest to it; if the GPS assisted clustering method is acquired, the images in the image set to be matched may be grouped according to their GPS information and then pre-paired, the two images with the most similar GPS information being paired preferentially; and if the bag-of-words method is acquired, image pre-pairing is carried out based on a bag-of-words model.
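A minimal sketch of the first two strategies, assuming the images are indexed in capture order; the window parameter of the sequential method is an assumption, since the text only requires pairing with the images closest in physical position:

```python
from itertools import combinations

def exhaustive_pairs(n_images: int):
    # every image is pre-paired with all of the remaining images
    return list(combinations(range(n_images), 2))

def sequential_pairs(n_images: int, window: int = 1):
    # each image is pre-paired only with its nearest neighbours in capture order
    return [(i, j) for i in range(n_images)
            for j in range(i + 1, min(i + 1 + window, n_images))]
```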
Step S103, for each image in the image set to be matched, acquiring a feature region block of the image, and acquiring a feature descriptor corresponding to the image by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network.
In this embodiment, for each image in the image set to be matched, a feature region block of the image is acquired, and a feature descriptor corresponding to the image is acquired by using a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network. The pre-trained neural networks have strong capabilities of integrating information, self-learning, self-organization and self-adaptation, can coordinate many kinds of input information well, offer good timeliness when a plurality of images are processed in parallel, and have strong robustness.
Step S104, acquiring a matching result of the image set to be matched by adopting a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptors corresponding to each image in the image set to be matched, so as to realize image three-dimensional reconstruction processing based on the matching result.
In this embodiment, the plurality of initial image matching pairs and the feature descriptors corresponding to the images in each matching pair are input into the pre-trained SuperGlue matching network. If, for an input initial image matching pair, the ratio of the feature descriptors that can be matched to the feature descriptors corresponding to the images in the pair exceeds a preset threshold, it is determined that the two images in the initial image matching pair match, and the matching result is output as the information of the successfully matched initial image matching pairs, so as to realize image three-dimensional reconstruction processing based on the matching result. Optionally, the preset threshold is labeled match_threshold, and its specific parameter value may be set to 0.2.
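The acceptance rule of this paragraph can be sketched as follows. The -1 convention for unmatched keypoints mirrors the output format of the public SuperGlue implementation, but note that in that implementation match_threshold filters the confidence of individual matches; the ratio test below follows the patent's description, so treat it as an interpretation rather than the library's behavior:

```python
import numpy as np

def pair_matches(matches0: np.ndarray, num_kpts0: int, num_kpts1: int,
                 match_threshold: float = 0.2) -> bool:
    # matches0[i] = index of the keypoint in image 1 matched to keypoint i
    # of image 0, or -1 if keypoint i found no match
    num_matched = int((matches0 >= 0).sum())
    ratio = num_matched / max(min(num_kpts0, num_kpts1), 1)
    return ratio > match_threshold
```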
In this embodiment, the pre-trained SuperGlue matching network is obtained by training the original SuperGlue matching network, taking as the input of the training set the initial matching pairs of historical image sets together with the feature descriptors obtained by the pre-trained Orientation neural network, the pre-trained AffineNet neural network and the HardNet feature extraction network, and taking as the output of the training set the labeled matching results of those initial matching pairs. Because the feature descriptors acquired by the HardNet feature extraction network in the training inputs are richer and have higher confidence than those acquired by the original SuperPoint feature extraction network, the pre-trained SuperGlue matching network achieves higher and more accurate matching precision than the original SuperGlue matching network.
In this embodiment, an image pre-pairing algorithm corresponding to the acquisition type is acquired by acquiring an image set to be matched and according to the acquisition type of the image set to be matched; pre-pairing processing is performed on the images in the image set to be matched according to the image pre-pairing algorithm to obtain a plurality of initial image matching pairs; for each image in the image set to be matched, a feature region block of the image is acquired, and a feature descriptor corresponding to the image is acquired by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network; and a matching result of the image set to be matched is acquired by adopting a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptors corresponding to each image in the image set to be matched, so as to realize image three-dimensional reconstruction processing based on the matching result. With the method provided by this embodiment, the images in the image set to be matched are pre-paired, the feature descriptors corresponding to the images are acquired with the pre-trained Orientation neural network, the pre-trained AffineNet neural network and the HardNet feature extraction network, and finally the pre-paired initial image pairs and their corresponding feature descriptors are input into the pre-trained SuperGlue matching network to obtain the matching result of the image set to be matched. The participation of the neural networks improves the extraction efficiency of the feature descriptors corresponding to each image in the image set to be matched, improves the matching accuracy, and improves the efficiency and effect of three-dimensional reconstruction from the image set to be matched.
On the basis of the above embodiment, a specific implementation manner of acquiring the feature descriptor corresponding to the image by adopting the pre-trained Orientation neural network, the pre-trained AffineNet neural network and the HardNet feature extraction network in step S103 includes:
performing direction and deformation correction processing on the feature region block by adopting the pre-trained Orientation neural network and the pre-trained AffineNet neural network to obtain a feature region image corresponding to the corrected feature region block;
and extracting a preset number of feature descriptors corresponding to the feature region image by adopting the HardNet feature extraction network according to the feature region image.
In this embodiment, the pre-trained Orientation neural network and the pre-trained AffineNet neural network are obtained by training the Orientation neural network and the AffineNet neural network, taking as the input of the training set feature region blocks with different degrees of direction and deformation caused by different acquisition directions and angles, and taking as the output of the training set the corrected corresponding feature region images. The training set can be adjusted according to the acquisition type of the image set to be matched. Illustratively, when an outdoor building is the target object, the different lenses of a five-lens camera have a fixed angular relation that enriches the shooting angles of the data source, and an unmanned aerial vehicle can shoot from a high position to avoid interference from occluding obstacles, so the training set can use images acquired by an unmanned aerial vehicle carrying a five-lens camera as the source image set of the feature region blocks; when an indoor still object is the target object, an image acquisition environment with a single background color should be set up, and images captured by the same device should be used as the source image set of the feature region blocks, so as to control variables as much as possible.
In this embodiment, the HardNet feature extraction network pads all convolutional layers (except the last convolutional layer) with a ring of zeros to keep the feature map size unchanged, and contains no pooling layer, so as to preserve the performance of the feature descriptors. A preset number of feature descriptors corresponding to the feature region image are extracted with the HardNet feature extraction network. Optionally, the preset number may be 128 dimensions.
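For concreteness, a HardNet-style descriptor network might look like the following PyTorch sketch: zero-padded 3x3 convolutions keep the feature map size, there is no pooling layer, and a final unpadded convolution collapses a 32x32 patch into a 128-dimensional unit-length descriptor. The layer widths follow the published HardNet architecture; the patent does not spell these details out, so treat them as assumptions:

```python
import torch
import torch.nn as nn

class HardNetLike(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        def block(cin, cout, stride=1):
            # 3x3 convolution padded with zeros so the spatial size is kept
            return [nn.Conv2d(cin, cout, 3, stride=stride, padding=1, bias=False),
                    nn.BatchNorm2d(cout), nn.ReLU(inplace=True)]
        self.features = nn.Sequential(
            *block(1, 32), *block(32, 32),
            *block(32, 64, stride=2), *block(64, 64),
            *block(64, 128, stride=2), *block(128, 128),
            nn.Conv2d(128, dim, 8, bias=False),  # last layer: no padding
        )

    def forward(self, patch32: torch.Tensor) -> torch.Tensor:
        d = self.features(patch32).flatten(1)      # (N, 128)
        return nn.functional.normalize(d, dim=1)   # unit-length descriptors
```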
Fig. 2 is a schematic flowchart of a second embodiment of an image matching method provided in the present application. Referring to fig. 2, on the basis of the above-mentioned embodiment of fig. 1, in step S103, for each image in the image set to be matched, a specific implementation manner of obtaining the feature region block of the image is as follows:
step S201, for each image in the image set to be matched, carrying out black plug Hessain matrix analysis on the image, and obtaining an analysis result.
In one possible implementation manner, step S201 further includes:
for each pixel point (x, y) of the image I, calculating the second-order partial derivatives of I in the x direction and the y direction and the second-order mixed partial derivative in the xy direction, and obtaining the Hessian matrix
H(x, y) = [ I_xx(x, y)  I_xy(x, y) ; I_xy(x, y)  I_yy(x, y) ];
for each pixel point (x, y), respectively obtaining the corresponding Hessian response value R(x, y) = det H(x, y) = I_xx·I_yy - I_xy^2;
and arranging the response values R(x, y) of the pixel points according to their corresponding positions on the image to obtain the analysis result.
In this embodiment, for each image I in the image set to be matched, for each pixel point (x, y) of I, the second-order partial derivatives of I in the x direction and the y direction and the second-order mixed partial derivative in the xy direction are calculated to obtain the Hessian matrix H(x, y); the Hessian response value R(x, y) corresponding to each pixel point (x, y) is obtained; and the response values R(x, y) of the pixel points are arranged according to their corresponding positions on the image to obtain a Hessian matrix analysis result of the same size as the analyzed image.
Step S202, acquiring inner corner points corresponding to the images according to the analysis result.
In a specific implementation manner, step S202 further includes:
according to the analysis result, the formula
K(x, y) = R(x, y) / (1 + I_x^2 + I_y^2)^2
is adopted to acquire the Gaussian curvature K(x, y) corresponding to each pixel point (x, y) of the image I, and the points where the Gaussian curvature K is maximal serve as corner points of the image.
Taking each corner point as a center, the region extending to its vicinity by a second preset pixel length is acquired, and the points in the region where R is less than zero are inner corner points.
Wherein R is the Hessian response value corresponding to the pixel point (x, y) in the image; I_x is the first derivative of the image I in the x direction; and I_y is the first derivative of the image I in the y direction.
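For reference, the formula above is the standard Gaussian curvature of the image surface z = I(x, y); the following note (not part of the original text) makes the link to the Hessian response explicit:

```latex
% Gaussian curvature of the surface z = I(x, y):
%   K = (Ixx*Iyy - Ixy^2) / (1 + Ix^2 + Iy^2)^2
% With the Hessian response R = det H = Ixx*Iyy - Ixy^2, this is exactly
% the formula used above.
\[
K(x, y) = \frac{I_{xx}I_{yy} - I_{xy}^{2}}{\left(1 + I_{x}^{2} + I_{y}^{2}\right)^{2}}
        = \frac{R(x, y)}{\left(1 + I_{x}^{2} + I_{y}^{2}\right)^{2}}
\]
```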
In this embodiment, a point where the Hessian response value R is a local maximum corresponds to a point where the Gaussian curvature K is maximal, so the points where R is a local maximum are defined as the corner points. Each image in the image set to be matched has a plurality of corner points. Corner points are important features of an image: they effectively reduce the data volume of the information while keeping the important features of the image, effectively improve the calculation speed, facilitate reliable image matching, and play an important role in understanding and analyzing images and graphics.
In this embodiment, in order that the obtained feature region block contains more effective feature points, the region extending to the vicinity of each corner point by a second preset pixel length is acquired, and the points in the region where R is less than zero are taken as inner corner points. An inner corner point is a local maximum point of the region extending near the corner point by the second preset pixel length; it contains more image information of the target object, is more stable than the corner point, and more easily yields an effective feature region block. Optionally, the second preset pixel length may be 4 pixels.
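Putting the two steps together, corner and inner corner selection might be sketched as below; reading "maximal Gaussian curvature" as R being a positive local maximum is an interpretation, and the 4-pixel window follows the optional value above:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def inner_corner_points(R: np.ndarray, half: int = 4):
    # Corner points: pixels where the Hessian response R is a positive
    # local maximum (K and R share their sign, so K is maximal there).
    corners = (R == maximum_filter(R, size=3)) & (R > 0)
    h, w = R.shape
    inner = []
    for y, x in zip(*np.nonzero(corners)):
        y0, y1 = max(y - half, 0), min(y + half + 1, h)
        x0, x1 = max(x - half, 0), min(x + half + 1, w)
        # points with R < 0 inside the surrounding window are inner corners
        ys, xs = np.nonzero(R[y0:y1, x0:x1] < 0)
        inner.extend((y0 + yy, x0 + xx) for yy, xx in zip(ys, xs))
    return inner
```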
Step S203, taking the inner corner point as a center, acquiring a feature region block of the image according to the first preset pixel length.
In this embodiment, the feature region blocks corresponding to one inner corner point may be a plurality of feature region blocks obtained according to different first preset pixel lengths. Optionally, the first preset pixel length may be any pixel length in the range from 15 pixels to 25 pixels, and 6 different pixel lengths may be selected in this range to obtain 6 feature region blocks of different sizes.
In this embodiment, an inner corner point containing more image information is determined according to the Hessian matrix analysis result of each image in the image set to be matched, and one or more feature region blocks of the image are obtained according to the first preset pixel length with the inner corner point as the center, that is, as the seed point. A feature region block acquired with an inner corner point as its center point carries more effective features of the target object, which can guarantee the validity of the subsequently acquired feature descriptors, improve the image matching accuracy, and ensure the imaging effect of the three-dimensional reconstruction.
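As a sketch under the assumptions above, multi-scale feature region blocks around one inner corner point could be cut as follows; the six specific lengths, and treating the first preset pixel length as a half-width, are assumptions within the stated 15-25 pixel range:

```python
import numpy as np

def feature_region_blocks(image: np.ndarray, center: tuple,
                          half_lengths=(15, 17, 19, 21, 23, 25)):
    # Cut square feature region blocks of several sizes centered on one
    # inner corner point (the seed point), clipped at the image border.
    y, x = center
    h, w = image.shape[:2]
    blocks = []
    for r in half_lengths:
        y0, y1 = max(y - r, 0), min(y + r + 1, h)
        x0, x1 = max(x - r, 0), min(x + r + 1, w)
        blocks.append(image[y0:y1, x0:x1].copy())
    return blocks
```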
On the basis of the embodiment of fig. 2, the method further includes:
and for each image in the image set to be matched, acquiring a gray image corresponding to the image, and acquiring a color gradient corresponding to the image by adopting a Harris algorithm according to the gray image.
Then, the acquiring a feature region block of the image according to the first preset pixel length with the inner corner point as the center includes:
taking the inner corner point as a center, acquiring a feature region block of the image according to the first preset pixel length and the color gradient.
In this embodiment, the images in the image set to be matched captured by devices with an image capturing function are R (red) G (green) B (blue) color images. In the RGB model, if the three channel values of a pixel are equal, that is, R = G = B, the color information of the pixel can be represented by a single gray-scale value, whose range is 0 to 255. After the gray-scale image of each image in the image set to be matched is acquired, the calculation amount for the image is reduced, which facilitates adopting the Harris algorithm to acquire the color gradient corresponding to the image. The color gradient can embody the image information represented by the different colors in the image.
In this embodiment, a feature region block containing image color gradient information can be acquired according to the first preset pixel length and the color gradient, with an inner corner point as the center. The first preset pixel length may be any length from 15 pixels to 25 pixels.
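A short OpenCV sketch of this step: convert the color image to a single-channel gray-scale image (values 0 to 255) and compute the Harris response on it, which stands in for the color gradient used above; the Harris parameter values are assumptions:

```python
import cv2
import numpy as np

def harris_response(image_bgr: np.ndarray) -> np.ndarray:
    # grayscale conversion collapses R, G, B into one 0-255 channel,
    # reducing the calculation amount before the Harris algorithm is run
    gray = np.float32(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY))
    return cv2.cornerHarris(gray, blockSize=2, ksize=3, k=0.04)
```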
Fig. 3 is a schematic structural diagram of an embodiment of an image matching apparatus provided in the present application. Referring to fig. 3, the image matching apparatus 300 includes: an image acquisition module 301, a pre-pairing module 302, a feature extraction module 303 and an image matching module 304. The image acquisition module 301 is configured to acquire an image set to be matched, and acquire an image pre-pairing algorithm corresponding to the acquisition type according to the acquisition type of the image set to be matched; the pre-pairing module 302 is configured to perform pre-pairing processing on the images in the image set to be matched according to the image pre-pairing algorithm to obtain a plurality of initial image matching pairs; the feature extraction module 303 is configured to acquire a feature region block of each image in the image set to be matched, and acquire a feature descriptor corresponding to the image by using a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network; and the image matching module 304 is configured to acquire a matching result of the image set to be matched by using a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptor corresponding to each image in the image set to be matched, so as to implement image three-dimensional reconstruction processing based on the matching result.
In addition, optionally, the feature extraction module 303 is specifically configured to: for each image in the image set to be matched, perform Hessian matrix analysis on the image to obtain an analysis result; acquire an inner corner point corresponding to the image according to the analysis result; and, taking the inner corner point as a center, acquire a feature region block of the image according to the first preset pixel length.
Optionally, the feature extraction module 303 is specifically configured to: for images
Figure 611872DEST_PATH_IMAGE001
Each pixel point (x, y) of (1), calculating an image
Figure 33626DEST_PATH_IMAGE001
Obtaining a Hessain matrix by using second-order partial derivatives in the x direction and the y direction and second-order derivatives in the xy direction
Figure 987938DEST_PATH_IMAGE002
(ii) a For each pixel point (x, y), respectively obtaining a Hessain response value corresponding to each pixel point (x, y)
Figure 127933DEST_PATH_IMAGE003
(ii) a Corresponding each pixel point
Figure 707950DEST_PATH_IMAGE004
And arranging according to the corresponding position relation of the pixel points on the image to obtain an analysis result.
Optionally, the feature extraction module 303 is specifically configured to: according to the analysis result, the formula is adopted:
Figure 374423DEST_PATH_IMAGE012
acquiring an image
Figure 873538DEST_PATH_IMAGE001
Gaussian curvature corresponding to each pixel point (x, y)
Figure 376194DEST_PATH_IMAGE006
(ii) a Curvature of gauss
Figure 618957DEST_PATH_IMAGE006
The enlarged points are taken as angular points of the image; obtaining the area corresponding to the second preset pixel length extending to the vicinity by taking the corner point as the center
Figure 898192DEST_PATH_IMAGE004
Points less than zero are internal angle points; wherein the content of the first and second substances,
Figure 568208DEST_PATH_IMAGE007
the Hessain response value corresponding to the pixel point (x, y) in the image is obtained;
Figure 292581DEST_PATH_IMAGE008
as an image
Figure 339035DEST_PATH_IMAGE001
First derivative in x-direction;
Figure 980100DEST_PATH_IMAGE009
as an image
Figure 821017DEST_PATH_IMAGE001
The first derivative in the y direction.
Optionally, the image matching module 304 is specifically configured to: perform direction and deformation correction processing on the feature region block by adopting the pre-trained Orientation neural network and the pre-trained AffineNet neural network to obtain a feature region image corresponding to the corrected feature region block; and extract a preset number of feature descriptors corresponding to the feature region image by adopting the HardNet feature extraction network according to the feature region image.
Optionally, the feature extraction module 303 is further configured to: acquire a gray-scale image corresponding to each image in the image set to be matched, and acquire a color gradient corresponding to the image by adopting the Harris algorithm according to the gray-scale image; the feature extraction module 303 is specifically configured to: taking the inner corner point as a center, acquire a feature region block of the image according to the first preset pixel length and the color gradient.
The image matching device provided in the embodiment of the present application can implement the technical solution of any one of the above method embodiments, and the implementation principle and the beneficial effect thereof are similar, and are not described herein again.
Fig. 4 is a schematic structural diagram of an embodiment of a server provided in the present application. Referring to fig. 4, the server 400 includes: a processor 401 and a memory 402.
The memory 402 is used to store executable instructions for the processor 401.
Wherein the processor 401 is configured to execute the technical solution in any of the foregoing method embodiments via executing the executable instructions.
The server provided by the present application is used for executing the technical solution in any of the foregoing method embodiments, and the implementation principle and the technical effect are similar, which are not described herein again.
The present application further provides a computer-readable storage medium, where computer-executable instructions are stored in the computer-readable storage medium, and when the computer-executable instructions are executed by a processor, the computer-executable instructions are used to implement the technical solution in any one of the foregoing method embodiments, which are similar in implementation principle and technical effect and are not described herein again.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present application.

Claims (14)

1. An image matching method, comprising:
acquiring an image set to be matched, and acquiring an image pre-pairing algorithm corresponding to the acquisition type according to the acquisition type of the image set to be matched;
performing pre-pairing processing on the images in the image set to be matched according to the image pre-pairing algorithm to obtain a plurality of initial image matching pairs;
for each image in the image set to be matched, acquiring a feature region block of the image, and acquiring a feature descriptor corresponding to the image by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network;
and acquiring a matching result of the image set to be matched by adopting a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptors corresponding to each image in the image set to be matched, so as to realize image three-dimensional reconstruction processing based on the matching result.
2. The method according to claim 1, wherein the acquiring, for each image in the image set to be matched, a feature region block of the image comprises:
for each image in the image set to be matched, performing Hessian matrix analysis on the image to obtain an analysis result;
acquiring an inner corner point corresponding to the image according to the analysis result;
and taking the inner corner point as a center, acquiring a feature region block of the image according to a first preset pixel length.
3. The method of claim 2, wherein the performing Hessian matrix analysis on the image to obtain an analysis result comprises:
for each pixel point (x, y) of the image I, calculating the second-order partial derivatives of I in the x direction and the y direction and the second-order mixed partial derivative in the xy direction, and obtaining the Hessian matrix H(x, y) = [ I_xx(x, y)  I_xy(x, y) ; I_xy(x, y)  I_yy(x, y) ];
for each pixel point (x, y), respectively obtaining the corresponding Hessian response value R(x, y) = det H(x, y) = I_xx·I_yy - I_xy^2;
and arranging the response values R(x, y) of the pixel points according to their corresponding positions on the image to obtain the analysis result.
4. The method of claim 3, wherein acquiring the inner corner point corresponding to the image according to the analysis result comprises:
according to the analysis result, adopting the formula
K(x, y) = R(x, y) / (1 + I_x^2 + I_y^2)^2
to acquire the Gaussian curvature K(x, y) corresponding to each pixel point (x, y) of the image I, the points where the Gaussian curvature K is maximal being taken as corner points of the image;
taking each corner point as a center, acquiring the region extending to its vicinity by a second preset pixel length, the points in the region where R is less than zero being inner corner points;
wherein R is the Hessian response value corresponding to the pixel point (x, y) in the image; I_x is the first derivative of the image I in the x direction; and I_y is the first derivative of the image I in the y direction.
5. The method as claimed in any one of claims 1 to 4, wherein the acquiring a feature descriptor corresponding to the image by adopting the pre-trained Orientation neural network, the pre-trained AffineNet neural network and the HardNet feature extraction network comprises:
performing direction and deformation correction processing on the feature region block by adopting the pre-trained Orientation neural network and the pre-trained AffineNet neural network to obtain a feature region image corresponding to the corrected feature region block;
and extracting a preset number of feature descriptors corresponding to the feature region image by adopting the HardNet feature extraction network according to the feature region image.
6. The method of claim 2, further comprising:
for each image in the image set to be matched, acquiring a gray level image corresponding to the image, and acquiring a color gradient corresponding to the image by adopting a Harris algorithm according to the gray level image;
then, the acquiring a feature region block of the image according to a first preset pixel length with the inner corner point as a center comprises:
taking the inner corner point as a center, acquiring a feature region block of the image according to the first preset pixel length and the color gradient.
7. An image matching apparatus, characterized by comprising:
the image acquisition module is used for acquiring an image set to be matched and acquiring an image pre-pairing algorithm corresponding to the acquisition type according to the acquisition type of the image set to be matched;
the pre-pairing module is used for performing pre-pairing processing on the images in the image set to be matched according to the image pre-pairing algorithm to obtain a plurality of initial image matching pairs;
the feature extraction module is used for acquiring a feature region block of each image in the image set to be matched, and acquiring a feature descriptor corresponding to the image by adopting a pre-trained Orientation neural network, a pre-trained AffineNet neural network and a HardNet feature extraction network;
and the image matching module is used for acquiring the matching result of the image set to be matched by adopting a pre-trained SuperGlue matching network according to the plurality of initial image matching pairs and the feature descriptors corresponding to each image in the image set to be matched, so as to realize image three-dimensional reconstruction processing based on the matching result.
8. The apparatus of claim 7, wherein the feature extraction module is specifically configured to:
for each image in the image set to be matched, performing Hessian matrix analysis on the image to obtain an analysis result;
acquiring an inner corner point corresponding to the image according to the analysis result;
and taking the inner corner point as a center, acquiring a feature region block of the image according to a first preset pixel length.
9. The apparatus of claim 8, wherein the feature extraction module is specifically configured to:
for each pixel point (x, y) of the image I, calculating the second-order partial derivatives of I in the x direction and the y direction and the second-order mixed partial derivative in the xy direction, and obtaining the Hessian matrix H(x, y) = [ I_xx(x, y)  I_xy(x, y) ; I_xy(x, y)  I_yy(x, y) ];
for each pixel point (x, y), respectively obtaining the corresponding Hessian response value R(x, y) = det H(x, y) = I_xx·I_yy - I_xy^2;
and arranging the response values R(x, y) of the pixel points according to their corresponding positions on the image to obtain the analysis result.
10. The apparatus of claim 9, wherein the feature extraction module is specifically configured to:
according to the analysis result, adopting the formula
K(x, y) = R(x, y) / (1 + I_x^2 + I_y^2)^2
to acquire the Gaussian curvature K(x, y) corresponding to each pixel point (x, y) of the image I, the points where the Gaussian curvature K is maximal being taken as corner points of the image;
taking each corner point as a center, acquiring the region extending to its vicinity by a second preset pixel length, the points in the region where R is less than zero being inner corner points;
wherein R is the Hessian response value corresponding to the pixel point (x, y) in the image; I_x is the first derivative of the image I in the x direction; and I_y is the first derivative of the image I in the y direction.
11. The apparatus according to any one of claims 7 to 10, wherein the image matching module is specifically configured to:
performing direction and deformation correction processing on the feature region block by adopting the pre-trained Orientation neural network and the pre-trained AffineNet neural network to obtain a feature region image corresponding to the corrected feature region block;
and extracting a preset number of feature descriptors corresponding to the feature region image by adopting the HardNet feature extraction network according to the feature region image.
12. The apparatus of claim 8, wherein the feature extraction module is further configured to:
for each image in the image set to be matched, acquiring a gray image corresponding to the image, and acquiring a color gradient corresponding to the image by adopting a Harris algorithm according to the gray image;
the feature extraction module is specifically configured to:
and taking the inner corner point as a center, acquiring a feature region block of the image according to the first preset pixel length and the color gradient.
13. A server, comprising:
a processor and a memory;
the memory is used for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any of claims 1-6 via execution of the executable instructions.
14. A computer-readable storage medium having computer-executable instructions stored therein, which when executed by a processor, are configured to implement the method of any one of claims 1 to 6.
CN202211598037.XA 2022-12-14 2022-12-14 Image matching method, device, server and storage medium Active CN115661368B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211598037.XA CN115661368B (en) 2022-12-14 2022-12-14 Image matching method, device, server and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211598037.XA CN115661368B (en) 2022-12-14 2022-12-14 Image matching method, device, server and storage medium

Publications (2)

Publication Number Publication Date
CN115661368A (en) 2023-01-31
CN115661368B (en) 2023-04-11

Family

ID=85022936

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211598037.XA Active CN115661368B (en) 2022-12-14 2022-12-14 Image matching method, device, server and storage medium

Country Status (1)

Country Link
CN (1) CN115661368B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100309225A1 (en) * 2009-06-03 2010-12-09 Gray Douglas R Image matching for mobile augmented reality
US20130208997A1 (en) * 2010-11-02 2013-08-15 Zte Corporation Method and Apparatus for Combining Panoramic Image
WO2020206903A1 (en) * 2019-04-08 2020-10-15 平安科技(深圳)有限公司 Image matching method and device, and computer readable storage medium
CN110009722A (en) * 2019-04-16 2019-07-12 成都四方伟业软件股份有限公司 Three-dimensional rebuilding method and device
CN110781911A (en) * 2019-08-15 2020-02-11 腾讯科技(深圳)有限公司 Image matching method, device, equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
SHALLOM R: "Estimation of orientation and affine transformations of a 3-dimensional object" *
MIAO Dun; LIU Yanping: "Research on a hybrid image feature detection and matching algorithm" *

Also Published As

Publication number Publication date
CN115661368B (en) 2023-04-11

Similar Documents

Publication Publication Date Title
CN109886997B (en) Identification frame determining method and device based on target detection and terminal equipment
US9530073B2 (en) Efficient descriptor extraction over multiple levels of an image scale space
WO2020228446A1 (en) Model training method and apparatus, and terminal and storage medium
US10708525B2 (en) Systems and methods for processing low light images
CN109902548B (en) Object attribute identification method and device, computing equipment and system
JP2016513843A (en) Reduction of object detection time by using feature spatial localization
CN112712518B (en) Fish counting method and device, electronic equipment and storage medium
CN108345821B (en) Face tracking method and device
WO2022179549A1 (en) Calibration method and apparatus, computer device, and storage medium
CN111369605A (en) Infrared and visible light image registration method and system based on edge features
WO2020014913A1 (en) Method for measuring volume of object, related device, and computer readable storage medium
CN114332183A (en) Image registration method and device, computer equipment and storage medium
Badki et al. Robust radiometric calibration for dynamic scenes in the wild
CN113298870B (en) Object posture tracking method and device, terminal equipment and storage medium
CN113454684A (en) Key point calibration method and device
CN115661368B (en) Image matching method, device, server and storage medium
CN110288691B (en) Method, apparatus, electronic device and computer-readable storage medium for rendering image
CN106683044B (en) Image splicing method and device of multi-channel optical detection system
CN110751163B (en) Target positioning method and device, computer readable storage medium and electronic equipment
CN110969657B (en) Gun ball coordinate association method and device, electronic equipment and storage medium
CN110135474A (en) A kind of oblique aerial image matching method and system based on deep learning
CN114913246A (en) Camera calibration method and device, electronic equipment and storage medium
CN114549857A (en) Image information identification method and device, computer equipment and storage medium
CN114387353A (en) Camera calibration method, calibration device and computer readable storage medium
CN113920196A (en) Visual positioning method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant