CN111192302A - Feature matching method based on motion smoothness and RANSAC algorithm - Google Patents


Info

Publication number
CN111192302A
CN111192302A
Authority
CN
China
Prior art keywords
matching
grid
feature
ransac algorithm
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010007373.7A
Other languages
Chinese (zh)
Inventor
程向红 (Cheng Xianghong)
李俊杰 (Li Junjie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Southeast University
Original Assignee
Southeast University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Southeast University
Priority to CN202010007373.7A
Publication of CN111192302A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • G06T7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods

Abstract

The invention discloses a feature matching method based on motion smoothness and the RANSAC algorithm, comprising the following steps: (1) read the two pictures to be matched and extract features from both; (2) after feature extraction, perform primary matching with a brute-force matcher; (3) grid both pictures and, from the match counts of the grid regions, find the grid pairs most likely to represent the same area; (4) judge the correct-match rate of each grid region according to motion smoothness and extract the grid regions with an extremely low mismatch rate; (5) from the feature points in the extracted grid regions, compute the corresponding homography matrix with the RANSAC algorithm; (6) screen the primary matching result through the computed homography matrix to obtain high-quality matching points. The method effectively improves the recall of matching, improves real-time performance, and reduces the time spent on matching.

Description

Feature matching method based on motion smoothness and RANSAC algorithm
Technical field:
The invention relates to a feature matching method based on motion smoothness and the RANSAC algorithm, belonging to the technical fields of image processing and visual navigation.
Background art:
At present, visual sensors are widely applied in scenarios such as visual navigation and integrated navigation owing to their low cost and ease of use. Image matching is a foundational technology of visual navigation, and feature-based image matching algorithms are the mainstream of current image matching technology. The main idea is to find feature points in the two images, describe each feature point, and judge whether two points match by comparing the similarity of their descriptors.
Feature extraction and matching mainly refers to extracting features from the images and matching the two images according to those features to obtain the relation between them. For matching between feature points, the traditional method is brute-force matching; however, because image textures repeat and images may be rotated or scaled, many mismatches easily occur and matching performance is poor. Distinguishing a mismatch from a correct match is the key problem of a matching algorithm. The current traditional approach is to perform rough matching with a brute-force matcher and then filter the matches with the Random Sample Consensus (RANSAC) algorithm. However, RANSAC obtains a reliable model only with a certain probability; when mismatches are too numerous the algorithm may fail, and because the input data are large, many iterations are required and real-time performance is poor.
Disclosure of Invention
The invention aims to provide a feature matching method based on motion smoothness and the RANSAC algorithm that effectively improves the recall of matching, improves real-time performance, and reduces the time spent on matching.
The above purpose is realized by the following technical scheme:
a feature matching method based on motion smoothness and RANSAC algorithm comprises the following steps:
(1) reading the two pictures to be matched and extracting features from both;
(2) after feature extraction, performing primary matching with a brute-force matcher;
(3) gridding the two pictures and finding, from the match counts of the grid regions, the grid pairs most likely to represent the same area;
(4) judging the correct-match rate of each grid region according to motion smoothness and extracting the grid regions with an extremely low mismatch rate;
(5) computing the corresponding homography matrix with the RANSAC algorithm from the feature points in the extracted grid regions;
(6) screening the primary matching result through the computed homography matrix to obtain high-quality matching points.
Preferably, in step (2), the brute-force matching method is as follows: for each selected feature point, compute the distances between its descriptor and the descriptors of all feature points in the image to be matched, sort the resulting distances, and match the feature point with the closest one.
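The brute-force step can be sketched in a few lines. In this illustrative Python sketch, descriptors are modeled as integers holding ORB-style binary strings and compared with the Hamming distance; both choices are assumptions of the example rather than requirements of the method (a real implementation would typically use OpenCV's BFMatcher):

```python
# Minimal sketch of the brute-force matching step: for each descriptor of
# image A, rank all descriptors of image B by distance and keep the nearest.

def hamming(d1: int, d2: int) -> int:
    """Hamming distance between two binary descriptors stored as ints."""
    return bin(d1 ^ d2).count("1")

def brute_force_match(desc_a, desc_b):
    """Return (index_in_A, index_in_B) pairs, nearest neighbour only."""
    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted(range(len(desc_b)), key=lambda j: hamming(da, desc_b[j]))
        matches.append((i, ranked[0]))
    return matches

if __name__ == "__main__":
    a = [0b1100, 0b0011]
    b = [0b0111, 0b1101]
    print(brute_force_match(a, b))  # [(0, 1), (1, 0)]
```

As the background section notes, this primary matching is fast but produces many mismatches on repetitive textures, which is why the later screening stages exist.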
Preferably, in step (3), the two images are gridded; the significance of gridding is that the number of matches between each grid cell and every cell of the image to be matched is counted, and the cell pair with the largest match count is regarded as the pair most likely to represent the same scene.
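The gridding step can be illustrated as follows. The grid size G, the image sizes, and the coordinate format of the matches are assumptions of this sketch, not values fixed by the text:

```python
# Sketch of the gridding step: bin each match's keypoint coordinates into
# G x G cells in both images, then pick the cell pair that accumulates the
# most matches as the pair most likely to depict the same scene.

from collections import Counter

def cell(pt, width, height, G=4):
    """Map an (x, y) point to its grid-cell index."""
    x, y = pt
    return (min(int(x * G / width), G - 1), min(int(y * G / height), G - 1))

def best_grid_pair(matches, size_a, size_b, G=4):
    """matches: list of ((xa, ya), (xb, yb)) matched keypoint coordinates."""
    counts = Counter()
    for pa, pb in matches:
        counts[(cell(pa, *size_a, G), cell(pb, *size_b, G))] += 1
    pair, n = counts.most_common(1)[0]
    return pair, n

if __name__ == "__main__":
    m = [((10, 10), (12, 11)), ((15, 12), (17, 14)), ((90, 90), (5, 5))]
    print(best_grid_pair(m, (100, 100), (100, 100)))  # (((0, 0), (0, 0)), 2)
```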
Preferably, in step (4), the motion-smoothness screening theory is as follows: motion smoothness implies that enough supporting matches exist in the neighborhood around a correct matching point, whereas if a match is wrong, such support does not exist or there are few supporting matches. Let {A, B} be two images that have been matched, with N corresponding match pairs M = {x_1, x_2, …, x_i, …, x_N}. For these N matches, the probability of a mismatch is estimated by analyzing the number of supporting matches near each match. |M_{ab}| denotes the number of matches in region {a, b}. Let S_i be the number of supporting matches in the neighborhood of match x_i, referred to as its support. If each feature-point match is independent, the probability distribution of the support S_i can be approximated by a binomial distribution:
$$S_i \sim \begin{cases} B(m, P_t), & x_i \text{ is a correct match} \\ B(m, P_f), & x_i \text{ is a false match} \end{cases}$$
where P_t and P_f are the probabilities of a correct match and of a false match respectively, and m is the number of feature matches in the neighborhood of x_i.
For practical use, the analysis of a single match is expanded to an entire matching neighborhood: the image is divided into grid cells, as shown in fig. 2, the matches in a single cell are treated as a whole, and the neighborhood of a cell is defined as the 8 cells surrounding it. For the region {a, b} in the figure, the support S_{ab} is given by:
$$S_{ab} = \sum_{k=1}^{K} \left| M_{a_k b_k} \right|$$

where |M_{a_k b_k}| denotes the number of matching points in region {a_k, b_k}. The support S_{ab} then satisfies the distribution:

$$S_{ab} \sim \begin{cases} B(Kn, P_t), & \text{cell pair } \{a, b\} \text{ is correct} \\ B(Kn, P_f), & \text{cell pair } \{a, b\} \text{ is false} \end{cases}$$
where n is the average number of feature matches per grid cell and K is the number of cells including the neighborhood, with K = 9. The formula above shows that both true and false matches receive support in their neighborhoods, but the distributions of S_{ab} differ markedly; the corresponding expectation E and standard deviation D are:
$$E_t = K n P_t, \qquad D_t = \sqrt{K n P_t (1 - P_t)}$$
$$E_f = K n P_f, \qquad D_f = \sqrt{K n P_f (1 - P_f)}$$
where E_t and D_t are the expectation and standard deviation of the correct-match distribution, and E_f and D_f are those of the false-match distribution. As shown in fig. 3, the probability density function is bimodal, so a threshold can be chosen that distinguishes true from false matches to some extent. From fig. 3, a threshold is defined:
$$T = \gamma \sqrt{n}$$
By the law of the binomial distribution, the probability begins to decrease once the abscissa exceeds the expectation. The size of the threshold T depends on the adjustment factor γ. When S_{ab} > T, the mismatch rate of region {a, b} can be considered extremely low. In theory, the larger the threshold, the smaller the mismatch rate of the extracted grid regions; in practice the effect depends on the number of feature points, and mismatches cannot be eliminated completely.
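The support test can be sketched as below: S_{ab} sums the matches over a cell pair and its 8 neighbours (K = 9), and the pair is accepted when S_{ab} > T. The threshold form T = γ·√n and the toy match counts are illustrative assumptions of this sketch:

```python
# Sketch of the motion-smoothness test over a 3x3 grid neighbourhood.

import math

def support(match_count, a, b):
    """Sum matches over the 3x3 neighbourhood of cell pair (a, b).
    match_count[(a_k, b_k)] holds |M_{a_k b_k}| for displaced cell pairs."""
    s = 0
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            ak = (a[0] + dx, a[1] + dy)
            bk = (b[0] + dx, b[1] + dy)
            s += match_count.get((ak, bk), 0)
    return s

def is_low_mismatch(match_count, a, b, n, gamma=10):
    """Accept the cell pair when its support exceeds T = gamma * sqrt(n)."""
    return support(match_count, a, b) > gamma * math.sqrt(n)

if __name__ == "__main__":
    # dense, motion-consistent support around cell pair ((1,1), (1,1))
    mc = {((1 + dx, 1 + dy), (1 + dx, 1 + dy)): 5
          for dx in (-1, 0, 1) for dy in (-1, 0, 1)}
    print(support(mc, (1, 1), (1, 1)))               # 45
    print(is_low_mismatch(mc, (1, 1), (1, 1), n=5))  # True (45 > 10*sqrt(5))
```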
Preferably, in step (5), the feature points extracted from the grid regions with low mismatch rate are input into the Random Sample Consensus (RANSAC) algorithm, and the homography matrix H corresponding to the region is computed; the homography matrix describes the transformation between the correct matching points of the two images.
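The RANSAC consensus loop can be sketched as below. For brevity, the model fitted here is a pure translation rather than the full 8-degree-of-freedom homography (whose least-squares fit requires an SVD), so this is only a structural sketch of the hypothesize-and-verify loop, with iteration count and tolerance chosen arbitrarily:

```python
# RANSAC sketch: repeatedly sample a minimal set, fit a model, count
# inliers, and keep the model with the largest consensus set.

import random

def ransac_translation(matches, iters=200, tol=2.0, seed=0):
    """matches: list of ((xa, ya), (xb, yb)). Returns the (dx, dy) model
    with the largest consensus set and that set of inlier matches."""
    rng = random.Random(seed)
    best_model, best_inliers = None, []
    for _ in range(iters):
        (xa, ya), (xb, yb) = rng.choice(matches)   # minimal sample: 1 pair
        dx, dy = xb - xa, yb - ya
        inliers = [m for m in matches
                   if abs(m[1][0] - m[0][0] - dx) <= tol
                   and abs(m[1][1] - m[0][1] - dy) <= tol]
        if len(inliers) > len(best_inliers):
            best_model, best_inliers = (dx, dy), inliers
    return best_model, best_inliers

if __name__ == "__main__":
    good = [((i, i), (i + 5, i + 3)) for i in range(8)]   # true shift (5, 3)
    bad = [((1, 9), (40, 2)), ((7, 2), (0, 30))]          # outliers
    model, inliers = ransac_translation(good + bad)
    print(model, len(inliers))  # (5, 3) 8
```

Because the method runs RANSAC only on the low-mismatch regions, the input contains far fewer outliers than the raw primary matching, which is what reduces the iteration count and failure probability mentioned in the background.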
Preferably, in step (6), the primary matching result is screened with the homography matrix: first compute the position of each original-image feature point after transformation by the homography matrix H, set an error threshold, compare the transformed position with the feature-point position of the corresponding actual primary match, compute the position error, and complete the screening. The position error between two pixel points (x, y) and (x', y') is defined as:
$$E = \sqrt{(x - x')^2 + (y - y')^2}$$
Since a certain error remains between the computed homography matrix and the actual homography between the two images, the threshold can be chosen slightly loosely; in general, a match is retained when E < 3.
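The screening step follows directly from the error definition. In this sketch H is a plain 3x3 list of lists, and the example matrix is an arbitrary translation-only homography chosen for the demonstration, not one computed by RANSAC:

```python
# Sketch of the final screening: keep a primary match only when the
# reprojection error E between H*(x, y) and the matched point (x', y')
# is below the threshold (E < 3 in the text).

import math

def project(H, pt):
    """Apply a 3x3 homography to a 2D point (homogeneous normalisation)."""
    x, y = pt
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def screen(matches, H, thresh=3.0):
    kept = []
    for pa, pb in matches:
        xp, yp = project(H, pa)
        err = math.hypot(xp - pb[0], yp - pb[1])  # E = sqrt((x-x')^2+(y-y')^2)
        if err < thresh:
            kept.append((pa, pb))
    return kept

if __name__ == "__main__":
    H = [[1, 0, 5], [0, 1, 3], [0, 0, 1]]  # pure-translation homography
    matches = [((0, 0), (5, 3)), ((2, 2), (7.5, 5.5)), ((4, 4), (20, 1))]
    print(screen(matches, H))  # [((0, 0), (5, 3)), ((2, 2), (7.5, 5.5))]
```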
The beneficial effects of the invention are: after primary matching of the images, regions with an extremely low mismatch rate are extracted according to motion smoothness, the corresponding homography matrix is computed with the RANSAC algorithm, and finally the primary matching result is screened with the computed homography matrix to obtain high-quality matching points.
Drawings
FIG. 1 is a block diagram of a feature matching method based on motion smoothness and RANSAC algorithm;
FIG. 2 is a schematic diagram of an image grid neighborhood;
FIG. 3 is a schematic diagram of a probability distribution;
FIG. 4 is a graph illustrating the matching effect of the present invention on a test picture set;
FIG. 5 is a comparison of the robustness of the present invention with other methods, wherein FIG. 5(a) compares the accuracy P of the method of the present invention with the GMS and RANSAC algorithms, FIG. 5(b) compares the recall R of the method of the present invention with the GMS and RANSAC algorithms, and FIG. 5(c) compares the F values of the method of the present invention with the GMS and RANSAC algorithms, where F = (2PR)/(P + R).
Detailed Description
The present invention will be better understood from the following examples. As shown in fig. 1, the invention relates to a feature matching method based on a motion-smoothness constraint and the RANSAC algorithm; the specific steps are as follows:
(1) reading the two pictures to be matched and extracting features from both;
(2) after feature extraction, performing primary matching with a brute-force matcher;
(3) gridding the two pictures and finding, from the match counts of the grid regions, the grid pairs most likely to represent the same area;
(4) judging the correct-match rate of each grid region according to motion smoothness and extracting the grid regions with an extremely low mismatch rate;
(5) computing the corresponding homography matrix with the RANSAC algorithm from the feature points in the extracted grid regions;
(6) screening the primary matching result through the computed homography matrix to obtain high-quality matching points.
In step (2), the brute-force matching method is as follows: for each selected feature point, compute the distances between its descriptor and the descriptors of all feature points in the image to be matched, sort the resulting distances, and match the feature point with the closest one.
In step (3), the two images are gridded; the significance of gridding is that the number of matches between each grid cell and every cell of the image to be matched is counted, and the cell pair with the largest match count is regarded as the pair most likely to represent the same scene.
In step (4), the motion-smoothness screening theory is as follows: motion smoothness implies that enough supporting matches exist in the neighborhood around a correct matching point, whereas if a match is wrong, such support does not exist or there are few supporting matches. Let {A, B} be two images that have been matched, with N corresponding match pairs M = {x_1, x_2, …, x_i, …, x_N}. For these N matches, the probability of a mismatch is estimated by analyzing the number of supporting matches near each match. |M_{ab}| denotes the number of matches in region {a, b}. Let S_i be the number of supporting matches in the neighborhood of match x_i, referred to as its support. If each feature-point match is independent, the probability distribution of the support S_i can be approximated by a binomial distribution:

$$S_i \sim \begin{cases} B(m, P_t), & x_i \text{ is a correct match} \\ B(m, P_f), & x_i \text{ is a false match} \end{cases}$$

where P_t and P_f are the probabilities of a correct match and of a false match respectively, and m is the number of feature matches in the neighborhood of x_i.
For practical use, the analysis of a single match is expanded to an entire matching neighborhood: the image is divided into grid cells, as shown in fig. 2, the matches in a single cell are treated as a whole, and the neighborhood of a cell is defined as the 8 cells surrounding it. For the grid region {a, b}, the support S_{ab} is given by:
$$S_{ab} = \sum_{k=1}^{K} \left| M_{a_k b_k} \right|$$

where |M_{a_k b_k}| denotes the number of matching points in region {a_k, b_k}. The support S_{ab} then satisfies the distribution:

$$S_{ab} \sim \begin{cases} B(Kn, P_t), & \text{cell pair } \{a, b\} \text{ is correct} \\ B(Kn, P_f), & \text{cell pair } \{a, b\} \text{ is false} \end{cases}$$
where n is the average number of feature matches per grid cell and K is the number of cells including the neighborhood, with K = 9. The formula above shows that both true and false matches receive support in their neighborhoods, but the distributions of S_{ab} differ markedly; the corresponding expectation E and standard deviation D are:
$$E_t = K n P_t, \qquad D_t = \sqrt{K n P_t (1 - P_t)}$$
$$E_f = K n P_f, \qquad D_f = \sqrt{K n P_f (1 - P_f)}$$
where E_t and D_t are the expectation and standard deviation of the correct-match distribution, and E_f and D_f are those of the false-match distribution. As shown in fig. 3, the probability density function is bimodal, so a threshold can be chosen that distinguishes true from false matches to some extent. From fig. 3, a threshold is defined:
$$T = \gamma \sqrt{n}$$
By the law of the binomial distribution, the probability begins to decrease once the abscissa exceeds the expectation. The size of the threshold T depends on the adjustment factor γ. When S > T, the mismatch rate of the region is considered extremely low. In theory, the larger the threshold, the smaller the mismatch rate of the extracted grid regions; in practice the effect depends on the number of feature points, and mismatches cannot be eliminated completely. In this embodiment, the adjustment factor γ is set to 10.
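Under the assumed threshold form T = γ·√n (the text itself states only that T depends on γ), the embodiment's choice γ = 10 gives, for example:

```python
# Worked arithmetic for the embodiment's threshold choice, assuming
# T = gamma * sqrt(n) with gamma = 10 (the exact formula is inferred).

import math

def threshold(n, gamma=10):
    return gamma * math.sqrt(n)

if __name__ == "__main__":
    # with an average of n = 4 matches per grid cell:
    print(threshold(4))  # 20.0, so a cell pair needs S_ab > 20 to pass
```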
In step (5), the feature points extracted from the grid regions with low mismatch rate are input into the Random Sample Consensus (RANSAC) algorithm, and the homography matrix H corresponding to the region is computed; the homography matrix describes the transformation between the correct matching points of the two images.
In step (6), the primary matching result is screened with the homography matrix: first compute the position of each original-image feature point after transformation by the homography matrix H, set an error threshold, compare the transformed position with the feature-point position of the corresponding actual primary match, compute the position error, and complete the screening. The position error between two pixel points (x, y) and (x', y') is defined as:

$$E = \sqrt{(x - x')^2 + (y - y')^2}$$

Since a certain error remains between the computed homography matrix and the actual homography between the two images, the threshold can be chosen slightly loosely; in general, a match is retained when E < 3, which concludes the entire matching and screening process.
The feasibility of the invention was verified by the following experiments:
(1) The experiments were implemented in C++ under Ubuntu on a computer configured with an Intel(R) Core(TM) i5-8250U CPU @ 1.60 GHz (1.80 GHz) and 8.00 GB of RAM; feature points were extracted with the traditional ORB algorithm.
(2) The adjustment factor γ was set to 10.
(3) The image dataset of the experiment came from the Oxford University standard dataset.
(4) The robustness of the invention was compared with that of other methods.
The feasibility of the method was verified by simulation (as shown in figs. 4 and 5). As can be seen from fig. 4, the method of the invention achieves a good matching effect and high accuracy. As can be seen from fig. 5, compared with the traditional GMS and RANSAC algorithms, the method has more stable accuracy, higher than that of the GMS algorithm, and a significant advantage in recall over the other two algorithms; its F value is also higher than that of the other two algorithms.
The foregoing is only a preferred embodiment of the present invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the principle of the invention, and such modifications and improvements should also be regarded as falling within the protection scope of the invention.

Claims (6)

1. A feature matching method based on motion smoothness and the RANSAC algorithm, characterized by comprising the following steps:
(1) reading the two pictures to be matched and extracting features from both;
(2) after feature extraction, performing primary matching with a brute-force matcher;
(3) gridding the two pictures and finding, from the match counts of the grid regions, the grid pairs most likely to represent the same area;
(4) judging the correct-match rate of each grid region according to a motion-smoothness screening theory and extracting the grid regions with an extremely low mismatch rate;
(5) computing the corresponding homography matrix with the RANSAC algorithm from the feature points in the extracted grid regions;
(6) screening the primary matching result through the computed homography matrix to obtain high-quality matching points.
2. The feature matching method based on motion smoothness and the RANSAC algorithm according to claim 1, wherein in step (2) the brute-force matching method is as follows: for each selected feature point, compute the distances between its descriptor and the descriptors of all feature points in the image to be matched, sort the resulting distances, and match the feature point with the closest one.
3. The feature matching method based on motion smoothness and the RANSAC algorithm according to claim 1, wherein the specific method of step (3) is to grid the two images, count the number of matches between each grid cell and every cell of the image to be matched, and regard the cell pair with the largest match count as the matching pair most likely to represent the same scene.
4. The feature matching method based on motion smoothness and the RANSAC algorithm according to claim 1, wherein in step (4) the motion-smoothness screening theory is as follows: motion smoothness implies that enough supporting matches exist in the neighborhood around a correct matching point, whereas if a match is wrong, such support does not exist or there are few supporting matches;
let {A, B} be two images that have been matched, the corresponding number of match pairs being N, with M = {x_1, x_2, …, x_i, …, x_N}; for these N matches, the probability of a mismatch is estimated by analyzing the number of supporting matches near each match; |M_{ab}| denotes the number of matches in region {a, b}; let S_i be the number of supporting matches in the neighborhood of match x_i, called its support; since each point match is independent, the distribution of S_i is approximated by a binomial distribution:

$$S_i \sim \begin{cases} B(m, P_t), & x_i \text{ is a correct match} \\ B(m, P_f), & x_i \text{ is a false match} \end{cases} \quad (1)$$

where P_t and P_f are the probabilities of a correct match and of a false match respectively, and m is the number of feature matches in the neighborhood of x_i;
for practical use, the analysis of a single match is expanded to an entire matching neighborhood: the image is divided into grid cells, the matches in a single cell are treated as a whole, and the neighborhood of a cell is defined as the 8 cells surrounding it; for the grid region {a, b}, the support S_{ab} is given by:
$$S_{ab} = \sum_{k=1}^{K} \left| M_{a_k b_k} \right| \quad (2)$$

where |M_{a_k b_k}| denotes the number of matching points in region {a_k, b_k}; the support S_{ab} satisfies the distribution:

$$S_{ab} \sim \begin{cases} B(Kn, P_t), & \text{cell pair } \{a, b\} \text{ is correct} \\ B(Kn, P_f), & \text{cell pair } \{a, b\} \text{ is false} \end{cases} \quad (3)$$
where n is the average number of feature matches per grid cell and K is the number of cells including the neighborhood; formula (3) shows that both true and false matches receive support in their neighborhoods, but the distributions of S_{ab} differ markedly; the corresponding expectation E and standard deviation D are:

$$E_t = K n P_t, \qquad D_t = \sqrt{K n P_t (1 - P_t)}$$
$$E_f = K n P_f, \qquad D_f = \sqrt{K n P_f (1 - P_f)} \quad (4)$$

where E_t and D_t are the expectation and standard deviation of the correct-match distribution and E_f and D_f are those of the false-match distribution; since the probability density function is bimodal, a threshold distinguishing true from false matches to some extent is selected and defined as:

$$T = \gamma \sqrt{n} \quad (5)$$

according to the binomial distribution law, the probability begins to decrease once the abscissa exceeds the expectation; the size of the threshold T depends on the adjustment factor γ, and when S_{ab} > T, the mismatch rate of region {a, b} is considered extremely low.
5. The feature matching method based on motion smoothness and the RANSAC algorithm according to claim 1, wherein the specific method of step (5) is to input the feature points extracted from the grid regions with low mismatch rate into the RANSAC algorithm and compute the homography matrix H corresponding to the region, the homography matrix H describing the transformation between the correct matching points of the two images.
6. The feature matching method based on motion smoothness and the RANSAC algorithm according to claim 1, wherein the specific method of step (6) is to screen the primary matching result with the homography matrix: first compute the position of each original-image feature point after transformation by the homography matrix H; considering the error present in the homography computation, set an error threshold, compare the transformed position with the feature-point position of the corresponding actual primary match, compute the position error, and complete the screening; the position error between two pixel points (x, y) and (x', y') is defined as:

$$E = \sqrt{(x - x')^2 + (y - y')^2} \quad (6)$$

when E < 3, the match is retained as a high-quality matching point.
CN202010007373.7A (priority date 2020-01-02, filing date 2020-01-02): Feature matching method based on motion smoothness and RANSAC algorithm, published as CN111192302A (en), pending

Priority Applications (1)

Application Number: CN202010007373.7A; Priority Date: 2020-01-02; Filing Date: 2020-01-02; Title: Feature matching method based on motion smoothness and RANSAC algorithm (CN111192302A)

Applications Claiming Priority (1)

Application Number: CN202010007373.7A; Priority Date: 2020-01-02; Filing Date: 2020-01-02; Title: Feature matching method based on motion smoothness and RANSAC algorithm (CN111192302A)

Publications (1)

Publication Number: CN111192302A (en); Publication Date: 2020-05-22

Family

ID=70710653

Family Applications (1)

Application Number: CN202010007373.7A; Priority Date: 2020-01-02; Filing Date: 2020-01-02; Title: Feature matching method based on motion smoothness and RANSAC algorithm (CN111192302A, pending)

Country Status (1)

Country Link
CN (1) CN111192302A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140362240A1 (en) * 2013-06-07 2014-12-11 Apple Inc. Robust Image Feature Based Video Stabilization and Smoothing
CN110009739A (en) * 2019-01-29 2019-07-12 浙江省北大信息技术高等研究院 The extraction and coding method of the motion feature of the digital retina of mobile camera
CN110111248A (en) * 2019-03-15 2019-08-09 西安电子科技大学 A kind of image split-joint method based on characteristic point, virtual reality system, camera


Non-Patent Citations (1)

Title
程向红 (Cheng Xianghong), 李俊杰 (Li Junjie): "基于运动平滑性与RANSAC优化的图像特征匹配算法" [Image feature matching algorithm based on motion smoothness and RANSAC optimization], 《中国惯性技术学报》 [Journal of Chinese Inertial Technology] *

Cited By (3)

Publication number Priority date Publication date Assignee Title
CN112749888A (en) * 2020-12-31 2021-05-04 浙江省方大标准信息有限公司 Random sample consensus (RANSAC) algorithm-based multivariate random spot check method, system and device
CN112749888B (en) * 2020-12-31 2023-10-03 浙江省标准化研究院(金砖国家标准化(浙江)研究中心、浙江省物品编码中心) Multi-element random spot check method, system and device based on RANSAC algorithm
CN112818853A (en) * 2021-02-01 2021-05-18 中国第一汽车股份有限公司 Traffic element identification method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
Zhang et al. Cross-based local stereo matching using orthogonal integral images
CN110210560B (en) Incremental training method, classification method and device, equipment and medium of classification network
CN107194408B (en) Target tracking method of mixed block sparse cooperation model
WO2018090937A1 (en) Image processing method, terminal and storage medium
US9025889B2 (en) Method, apparatus and computer program product for providing pattern detection with unknown noise levels
US8774508B2 (en) Local feature amount calculating device, method of calculating local feature amount, corresponding point searching apparatus, and method of searching corresponding point
CN112116610A (en) Remote sensing image segmentation method, system, terminal and storage medium
WO2022077863A1 (en) Visual positioning method, and method for training related model, related apparatus, and device
US20050152604A1 (en) Template matching method and target image area extraction apparatus
CN111222548A (en) Similar image detection method, device, equipment and storage medium
CN107038432B (en) Fingerprint image direction field extraction method based on frequency information
CN113177592B (en) Image segmentation method and device, computer equipment and storage medium
CN111192302A (en) Feature matching method based on motion smoothness and RANSAC algorithm
CN111105452A (en) High-low resolution fusion stereo matching method based on binocular vision
CN113643365A (en) Camera pose estimation method, device, equipment and readable storage medium
CN114419406A (en) Image change detection method, training method, device and computer equipment
CN114549861A (en) Target matching method based on feature point and convolution optimization calculation and storage medium
CN108694411A (en) A method of identification similar image
CN110826554A (en) Infrared target detection method
CN110766708B (en) Image comparison method based on contour similarity
CN111814884A (en) Target detection network model upgrading method based on deformable convolution
CN108763261B (en) Graph retrieval method
CN115147296A (en) Hyperspectral image correction method, device, computer equipment and storage medium
CN111369425A (en) Image processing method, image processing device, electronic equipment and computer readable medium
CN113362342B (en) Image segmentation method and related device thereof

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
RJ01: Rejection of invention patent application after publication (application publication date: 2020-05-22)