CN108550165A - An image matching method based on local invariant features
- Publication number: CN108550165A
- Application number: CN201810221834.3A
- Authority: CN (China)
- Legal status: Pending (the legal status is an assumption by Google Patents, not a legal conclusion)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
Abstract
The present invention provides an image matching method based on local invariant features. (1) Compute the integral image and the determinant of the Hessian matrix for the initial image; (2) build a scale-space pyramid and locate the feature points; (3) determine the principal direction of each feature point with Haar wavelets, completing feature point extraction; (4) compute the rotation-invariant LBP features of the image region around each feature point and construct the feature descriptors; (5) complete coarse feature matching with a nearest-neighbor method on the Euclidean distance; (6) reject the mismatched points remaining after coarse matching with the random sample consensus method, completing fine feature matching. Under the condition of guaranteed matching time and accuracy, the method of the invention also remains robust when the image undergoes scale, illumination and rotation changes.
Description
Technical Field
The invention relates to an image feature extraction and image processing method, and in particular to an image matching method.
Background
Image matching refers to the image analysis and processing technology that aligns two images containing the same scene and determines the correspondence between them; it is widely applied in navigation, map and terrain matching, biometric recognition, character recognition, medical image analysis, computer vision and other fields. In practical applications, the images to be matched are often acquired by different sensors at different times and under different conditions, so there are differences between them such as translation, scale, rotation, illumination, noise and viewing angle, which poses a great challenge to image matching methods. Methods based on pixel gray values (e.g., SSAD, NNPROD) have the drawback of being sensitive to scale, rotation, illumination and viewing-angle transformations of the image, in contrast to methods based on local features (e.g., corner points, SURF (Speeded-Up Robust Features)). The local invariant features of an image are invariant under various image transformations (such as geometric and illumination transformations), have low redundancy and good distinctiveness, and require no image segmentation; they are widely used in image matching, object recognition, image classification, image retrieval and other fields.
By applying local features to image matching, the complicated image matching problem can be converted into a measurement problem on feature vectors, improving the speed and robustness of the algorithm. The basic idea of an image matching method based on local invariant features is to detect a set of feature points in the image, generate feature vectors from the feature points and their local neighborhood information, and measure the similarity between the feature vectors to complete the matching. SIFT (Scale-Invariant Feature Transform), published in IJCV in 2004, and SURF, proposed later to improve SIFT's running speed and based on the Hessian matrix and Haar wavelets, are the two most representative local feature algorithms in the field of image matching, but they are inefficient and not very accurate on images with illumination changes. The LBP (Local Binary Patterns) method, which is robust to illumination change, is a binary description expressing the size relationship between a pixel of a gray image and its surrounding pixels, and was initially applied to image texture description. In recent years many extended LBPs have been proposed that are computationally simple and partially invariant to scale, rotation, and lightness changes.
In image matching algorithms that combine SIFT with rotation-invariant LBP, the rotation-invariant LBP features describe the local image regions around the SIFT key points. Such a method is strongly robust to image transformations such as scale, rotation and illumination, but its running speed cannot meet high real-time requirements.
In the patent document entitled "an image matching method combining an LBP feature extraction method and an SURF feature extraction method", the Haar feature descriptors of the original SURF method are retained when the feature points are described and are combined with LBP rotation-invariant descriptors. This gives a better matching effect than the original SURF method, but carrying two feature descriptions increases the complexity of the method and reduces its running speed to some extent.
Surveys of local invariant features note that the matching methods applied in fields such as image and video retrieval and target recognition include threshold-based matching, nearest-neighbor matching and nearest-neighbor-distance-ratio matching. These three methods each have advantages and disadvantages: threshold-based matching is simple and computationally cheap, while nearest-neighbor-distance-ratio matching has high accuracy.
Owing to the various geometric and photometric transformations between images, noise, quantization errors, and similar local structures that may exist in the images, wrong matches may still remain in a feature matching result based on similarity measurement alone. The random sample consensus method, first proposed by Fischler and Bolles in 1981, computes the mathematical model parameters of a data set that contains abnormal data so as to obtain valid sample data. It is widely used in image registration and stitching, and finds the optimal parameter model in a data set containing "outliers" (points that do not fit the optimal model) by continuous iteration.
Disclosure of Invention
The invention aims to provide an image matching method based on local invariant features, which has certain robustness when the scale, illumination and rotation of an image change under the condition of ensuring the matching time and accuracy.
The purpose of the invention is realized as follows:
(1) solving the determinant of an integral image and a Hessian matrix from the initial image;
(2) establishing a scale space pyramid and positioning feature points;
(3) determining the main direction of the feature points through Haar wavelets to complete feature point extraction;
(4) calculating the rotation invariant LBP characteristics of the image area around each characteristic point, and constructing a characteristic descriptor;
(5) utilizing a nearest neighbor method of Euclidean distance to complete coarse matching of features;
(6) eliminating, by the random sample consensus method, the mismatched points remaining after the coarse matching, to complete fine matching of the features.
Compared with the background technology, the invention is mainly characterized in that:
1. the invention introduces the rotation invariant LBP characteristics into the description of the key points, constructs a description method with simple calculation and low dimensionality for the description of the key points, and enhances the illumination robustness of the image matching method.
2. In the feature point extraction part, the feature point extraction method of SURF is adopted, using the Hessian matrix and the integral image for fast calculation, which effectively remedies the low calculation speed of the gradient-based SIFT method while preserving the matching performance.
3. In the invention, when the characteristics are described, the LBP rotation invariant descriptor is adopted to replace a Haar characteristic descriptor in the original SURF method, so that the method has good effects on the rotation change, the illumination change and the visual angle change, and simultaneously, the speed of image matching is not influenced.
4. In order to improve the accuracy of image matching, the invention adopts the Euclidean distance nearest neighbor method with higher accuracy to carry out rough matching.
5. In the invention, during fine matching, a random sampling consistency method is adopted to eliminate the mismatching in the coarse matching result, thereby further improving the accuracy of image matching.
The invention studies the key problems of extraction and description of local feature points, feature matching and mismatch elimination, and provides an image matching method based on local invariant features that maintains good image matching performance together with good scale, illumination and rotation robustness. Under the condition of guaranteed matching time and accuracy, the method also remains robust when the image undergoes scale, illumination and rotation changes.
The beneficial effects of the invention are: in the feature point extraction part, the Hessian matrix and the integral image are used for fast calculation and a scale image pyramid is constructed, ensuring the speed and scale invariance of the matching; in the feature description part, rotation-invariant LBP features are introduced into the description of the key points, building a computationally simple, low-dimensional description that strengthens scale and illumination robustness; in the coarse matching part of the feature corner points, a nearest-neighbor method on the Euclidean distance keeps corner matching fast; in the fine matching part of the feature corner points, the random sample consensus method removes the mismatched points remaining after coarse matching, optimizing the matching result and raising the corner matching accuracy.
Drawings
FIG. 1 is a flow chart of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be further described with reference to the accompanying drawings by way of example.
With reference to fig. 1, the image matching method based on local feature point extraction and description of the present invention includes the following steps:
the method comprises the following steps: image feature point extraction, comprising:
(1) Computing the integral image and the determinant of the Hessian matrix of the initial image
Traverse the initial image to be matched to obtain its integral image, then obtain the determinant of the Hessian matrix at every point of the image.
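As a minimal illustration (not the patent's own code), the integral image and a constant-time box sum over it can be sketched in NumPy; the function names here are chosen for this sketch only:

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows then columns: ii[r, c] is the sum of all
    pixels above and to the left of (r, c), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1+1, c0:c1+1] from at most four integral-image lookups."""
    total = ii[r1, c1]
    if r0 > 0:
        total -= ii[r0 - 1, c1]
    if c0 > 0:
        total -= ii[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
```

This constant cost per rectangle, independent of its size, is what makes the box-filter approximation of the Hessian fast.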
For a given pixel (x, y) in image 1 to be matched, with gray value f(x, y), the Hessian matrix H(f(x, y)) of the pixel is:
H(f(x, y)) = \begin{bmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\ \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial y^2} \end{bmatrix}   (1)
The discriminant det(H) of the H matrix (short for the Hessian matrix, likewise below) is:
det(H) = \frac{\partial^2 f}{\partial x^2} \cdot \frac{\partial^2 f}{\partial y^2} - \left( \frac{\partial^2 f}{\partial x \partial y} \right)^2   (2)
where \frac{\partial^2 f}{\partial x^2}, \frac{\partial^2 f}{\partial y^2} and \frac{\partial^2 f}{\partial x \partial y} are the second partial derivatives of the pixel value f(x, y).
The value of the discriminant equals the product of the eigenvalues of the H matrix, so all points can be classified by the sign of the result: whether a point is an extreme point is judged from whether the discriminant is positive or negative.
An H matrix can be computed for every pixel. To give the feature points scale independence, Gaussian filtering must be performed before the H matrix is constructed. After filtering, the H matrix is computed; the Hessian matrix H(x, σ) of pixel point x at scale σ is defined as:
H(x, σ) = \begin{bmatrix} L_{xx}(x, σ) & L_{xy}(x, σ) \\ L_{xy}(x, σ) & L_{yy}(x, σ) \end{bmatrix}   (3)
where L_{xx}(x, σ) is the convolution of the second-order derivative \frac{\partial^2}{\partial x^2} g(σ) of the Gaussian kernel with image 1 at point x, L_{xy}(x, σ) and L_{yy}(x, σ) are computed in the same way, and g(σ) = \frac{1}{2πσ^2} e^{-(x^2 + y^2)/(2σ^2)}.
In this way, the determinant of the H matrix can be calculated for every pixel of the image, and the feature points can be distinguished by this determinant value. Replacing L_{xx}, L_{xy} and L_{yy} by the results D_{xx}, D_{xy} and D_{yy} of convolving box filters with the image yields the approximate Hessian matrix H_{approx}, whose determinant det(H_{approx}) is:
det(H_{approx}) = D_{xx} D_{yy} − (0.9 D_{xy})^2   (4)
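The relation in Eq. (4) is a one-liner once the box-filter responses are available; a sketch, with D_{xx}, D_{yy}, D_{xy} assumed precomputed (computing them is step (1)'s integral-image work):

```python
import numpy as np

def hessian_response(dxx, dyy, dxy, w=0.9):
    """det(H_approx) = Dxx*Dyy - (0.9*Dxy)^2, as in Eq. (4); operates
    elementwise, so it works on whole response maps as well as scalars."""
    return dxx * dyy - (w * dxy) ** 2

# Tiny synthetic response maps: a pixel is kept as a feature candidate
# only where this determinant exceeds a threshold.
dxx = np.array([[2.0, 1.0], [0.5, 3.0]])
dyy = np.array([[3.0, 1.0], [2.0, 2.0]])
dxy = np.array([[1.0, 0.0], [1.0, 2.0]])
resp = hessian_response(dxx, dyy, dxy)
```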
(2) establishing a scale space pyramid and positioning feature points
To achieve scale invariance, a scale-space pyramid must be built. Keeping the image size unchanged, the scale image pyramid is built by changing the template size of the box filter. The concrete construction is: the image scale-space pyramid has four layers and each layer is filtered four times. The first filter template of layer 1 is given as 9 × 9, and adjacent templates in layer 1 differ by 6 pixels, so the first to fourth filter templates of layer 1 are 9 × 9, 15 × 15, 21 × 21 and 27 × 27. Adjacent filter templates differ by 12 pixels in layer 2, by 24 pixels in layer 3 and by 48 pixels in layer 4, and the first template size of each layer equals the second template size of the previous layer. Hence the first to fourth filter templates are 15 × 15, 27 × 27, 39 × 39 and 51 × 51 in layer 2; 27 × 27, 51 × 51, 75 × 75 and 99 × 99 in layer 3; and 51 × 51, 99 × 99, 147 × 147 and 195 × 195 in layer 4. The approximate scale corresponding to each filter is computed by formula (5):
σ_approx = 1.2 × (current template size) / 9   (5)
where σ_approx denotes the scale.
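The template sizes above follow a simple doubling rule and can be generated programmatically. A sketch, where Eq. (5) is assumed to be the standard SURF relation σ = 1.2 · size / 9 (the formula itself did not survive in this text):

```python
def filter_sizes(n_layers=4, base=9, base_step=6, per_layer=4):
    """Box-filter template sizes per layer: the step between adjacent
    templates doubles each layer, and each layer starts at the second
    template size of the previous layer."""
    sizes, first, step = [], base, base_step
    for _ in range(n_layers):
        layer = [first + i * step for i in range(per_layer)]
        sizes.append(layer)
        first, step = layer[1], step * 2
    return sizes

def sigma_approx(size, base=9, base_sigma=1.2):
    # Assumed form of Eq. (5): the 9x9 template corresponds to sigma = 1.2.
    return base_sigma * size / base

pyramid = filter_sizes()
```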
The feature point positioning process is: set a threshold on the response value of the Hessian matrix and remove all points below it; then, by non-maximum suppression, select as feature points those whose response is larger (or smaller) than the responses of all neighboring points on the same layer and on the adjacent layers; finally, fit a three-dimensional quadratic function to determine the positions and scales of the feature points precisely.
(3) Determining principal directions of feature points by Haar wavelets
In a circular neighborhood centered on the feature point with radius 6σ (σ being the scale value of the feature point), the Haar wavelet responses in the X and Y directions are computed with Haar wavelet templates of side length 4σ, and the filtered region is weighted with a Gaussian of standard deviation 2σ. A sector with central angle π/3, centered on the feature point, then sweeps a full circle over the neighborhood; for each position the sum of the Haar wavelet responses of the image points inside the π/3 sector is computed, and the direction of the maximum summed response is taken as the principal direction of the feature point. Here X and Y are the two axes of the planar rectangular coordinate system in which the circular neighborhood lies.
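The sweeping-sector step can be sketched as follows. This is an illustrative reduction, not the patent's implementation: the Gaussian-weighted responses `dx`, `dy` inside the 6σ neighborhood are assumed precomputed, and the sector is stepped at a fixed angular resolution:

```python
import numpy as np

def dominant_orientation(dx, dy, window=np.pi / 3, n_steps=72):
    """Slide a pi/3 sector around the circle; sum the (dx, dy) responses
    whose angle falls inside the sector and return the direction of the
    longest summed vector."""
    ang = np.arctan2(dy, dx)
    best_len, best_dir = -1.0, 0.0
    for k in range(n_steps):
        start = -np.pi + 2 * np.pi * k / n_steps
        d = (ang - start) % (2 * np.pi)  # sector membership with wrap-around
        mask = d < window
        sx, sy = dx[mask].sum(), dy[mask].sum()
        length = sx * sx + sy * sy
        if length > best_len:
            best_len, best_dir = length, np.arctan2(sy, sx)
    return best_dir

# All responses point close to angle 0, so the principal direction is ~0.
dx = np.array([1.0, 1.0, 1.0])
dy = np.array([0.0, 0.1, -0.1])
theta = dominant_orientation(dx, dy)
```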
Step two: feature descriptor generation, comprising:
the basic LBP feature is a binary description of a 3 x 3 neighborhood, with the disadvantage of being rotation-dependent, and in order to obtain rotation invariance, the present invention uses a rotation-invariant LBP feature description of the region around the keypoint.
Let p_i(r, c, σ, θ) be a key point obtained in the key point extraction of step one, where (r, c) are its position coordinates in the original image and σ and θ are the scale and direction of p_i respectively. According to the size of σ, an image region of 9 × 9 size, centered on p_i, is taken from the corresponding layer of the Gaussian pyramid as the region to be described. To obtain rotation invariance, the image region is rotated to the reference direction by the angle θ. The rotation-invariant LBP features of the region to be described are computed as follows:
(1) Within the 8 × 8 part of the region to be described, take each pixel point p_j as a center and compute the rotation-invariant LBP feature centered on it, denoted lbp_j (j = 1, 2, …, 64).
(2) Intuitively, the farther a pixel point p_j lies from the center, the less information it contributes to describing p_i, so lbp_j is weighted; the weighting factor w_j is:
w_j = exp{−[(r_j − r_i)² + (c_j − c_i)²] / (2σ₀²)} / (2πσ₀²)   (6)
where (r_j, c_j) and (r_i, c_i) are the coordinates of pixel point p_j and of the center point p_i in the image region to be described, and σ₀ is a chosen constant.
(3) All computed weighted LBP feature values are concatenated into a one-dimensional vector, denoted T_i:
T_i = [w_1 · lbp_1   w_2 · lbp_2   …   w_64 · lbp_64]   (7)
(4) To eliminate the influence of illumination variation, T_i is normalized (the dropped formula is assumed here to be the usual unit-norm normalization), i.e.
T_i ← T_i / ‖T_i‖   (8)
In summary, the resulting 64-dimensional vector T_i is the description of the region around the key point p_i.
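Steps (1)-(4) can be sketched as below. Two loudly labeled assumptions: the patch sampled here is 10 × 10 so that each of the 8 × 8 = 64 interior pixels has a full 3 × 3 neighborhood (the text's 9 × 9 / 8 × 8 figures leave this detail open), and the normalization is the unit L2 norm:

```python
import numpy as np

def ri_lbp(patch3):
    """Rotation-invariant LBP of a 3x3 patch: threshold the 8 neighbours
    against the centre, then take the minimum code over all cyclic bit
    rotations (the standard 'ri' mapping)."""
    c = patch3[1, 1]
    nb = [patch3[0, 0], patch3[0, 1], patch3[0, 2], patch3[1, 2],
          patch3[2, 2], patch3[2, 1], patch3[2, 0], patch3[1, 0]]
    bits = [1 if v >= c else 0 for v in nb]
    best = 255
    for k in range(8):
        rot = bits[k:] + bits[:k]
        best = min(best, sum(b << i for i, b in enumerate(rot)))
    return best

def describe(region, sigma0=2.0):
    """64-dim descriptor of a 10x10 region: Gaussian-weighted (Eq. (6))
    rotation-invariant LBP of each interior pixel, L2-normalised (Eq. (8))."""
    rows, cols = region.shape
    ci, cj = (rows - 1) / 2, (cols - 1) / 2
    feats = []
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            w = (np.exp(-((r - ci) ** 2 + (c - cj) ** 2) / (2 * sigma0 ** 2))
                 / (2 * np.pi * sigma0 ** 2))
            feats.append(w * ri_lbp(region[r - 1:r + 2, c - 1:c + 2]))
    v = np.array(feats, dtype=np.float64)
    n = np.linalg.norm(v)
    return v / n if n > 0 else v

v = describe(np.arange(100, dtype=float).reshape(10, 10))
vnorm = float(np.linalg.norm(v))
```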
Step three: coarse matching of image features, comprising:
After the description vectors of the key points have been generated, the invention adopts the distance
d(T_A, T_B) = ‖T_A − T_B‖_1 = Σ_{i=1}^{n} |a_i − b_i|   (9)
as the similarity measure between key points, where T_A = [a_1 a_2 … a_n] and T_B = [b_1 b_2 … b_n] are the description vectors of key points A and B respectively. Matching strategy: take a key point A in image 1 and find the two key points B and C in image 2 whose description vectors are nearest to it. If the ratio of the nearest distance ‖T_A − T_B‖_1 to the second-nearest distance ‖T_A − T_C‖_1 is smaller than a threshold t, i.e.
‖T_A − T_B‖_1 / ‖T_A − T_C‖_1 < t   (10)
then key point A is considered to match the nearest key point B.
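The coarse-matching strategy can be sketched as follows. Note one reading choice: the text names the Euclidean distance but Eq. (9)'s ‖·‖₁ notation is the city-block (L1) distance, so L1 is used here; the function name is this sketch's, not the patent's:

```python
import numpy as np

def coarse_match(desc1, desc2, t=0.7):
    """For each descriptor in desc1, find its two nearest neighbours in
    desc2 under the L1 distance of Eq. (9); accept the nearest one when the
    nearest/second-nearest distance ratio is below the threshold t."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.abs(desc2 - d).sum(axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # a zero second distance means an exact duplicate: accept
        if dists[second] == 0 or dists[nearest] / dists[second] < t:
            matches.append((i, int(nearest)))
    return matches

desc1 = np.array([[0.0, 0.0], [1.0, 1.0]])
desc2 = np.array([[0.0, 0.1], [5.0, 5.0], [1.0, 1.0]])
pairs = coarse_match(desc1, desc2)
```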
Step four: fine matching of image features, comprising:
The random sample consensus method computes the mathematical model parameters of a data set that contains abnormal data, so as to obtain valid sample data. Its basic assumption is that the sample contains both correct data (data fitting the model) and anomalous data (data not fitting the model), and that, given a set of correct data, there is a way to compute the model parameters that fit them.
Let f be the set of corresponding matched feature points in the two images 1 and 2 to be stitched. Removing the mismatched points with the random sample consensus method proceeds as follows:
(1) Randomly select three pairs of non-collinear matching points from the matched feature point set f and compute their transformation matrix M.
(2) Compute the corresponding points from the transformation matrix M; e.g. for the point pair (a, b), compute b′ = M(a).
(3) Compute the distance between b′ and b; if the distance is within the threshold, (a, b) is an inlier, otherwise it is an outlier.
(4) If the number of inliers meets the condition, exit; if after k iterations the number of inliers has not met the requirement, take the model with the maximum number of inliers. If the exit condition is not met, go to step four (1).
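The loop above can be sketched as a fixed-iteration variant (a common simplification: rather than an early-exit condition, it simply keeps the model with the most inliers; the affine fit via least squares is this sketch's choice for M):

```python
import numpy as np

def fit_affine(src, dst):
    """2x3 affine matrix M from >= 3 point pairs via least squares,
    so that dst ~= [x, y, 1] @ M.T for each pair."""
    A = np.hstack([src, np.ones((len(src), 1))])
    X, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return X.T

def ransac_affine(src, dst, n_iter=200, thresh=2.0, seed=0):
    """RANSAC: repeatedly fit M to 3 random non-collinear pairs and keep
    the model whose reprojection error is below thresh for the most pairs."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    A_full = np.hstack([src, np.ones((len(src), 1))])
    for _ in range(n_iter):
        idx = rng.choice(len(src), size=3, replace=False)
        if np.linalg.matrix_rank(A_full[idx]) < 3:
            continue  # collinear sample, skip
        M = fit_affine(src[idx], dst[idx])
        err = np.linalg.norm(A_full @ M.T - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return best_inliers

src = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [2, 3],
                [4, 1], [3, 3], [5, 2], [2, 5], [4, 4]], dtype=float)
dst = src + np.array([5.0, -3.0])  # a pure translation
dst[0] += 100.0                    # one gross mismatch
inliers = ransac_affine(src, dst)
```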
Claims (1)
1. An image matching method based on local invariant features is characterized in that:
(1) solving the determinant of an integral image and a Hessian matrix from the initial image;
(2) establishing a scale space pyramid and positioning feature points;
(3) determining the main direction of the feature points through Haar wavelets to complete feature point extraction;
(4) calculating the rotation invariant LBP characteristics of the image area around each characteristic point, and constructing a characteristic descriptor;
(5) utilizing a nearest neighbor method of Euclidean distance to complete coarse matching of features;
(6) eliminating, by the random sample consensus method, the mismatched points remaining after the coarse matching, to complete fine matching of the features.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810221834.3A CN108550165A (en) | 2018-03-18 | 2018-03-18 | A kind of image matching method based on local invariant feature |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108550165A true CN108550165A (en) | 2018-09-18 |
Family
ID=63516598
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727239A (en) * | 2018-12-27 | 2019-05-07 | 北京航天福道高技术股份有限公司 | Based on SURF feature to the method for registering of inspection figure and reference map |
CN109815822A (en) * | 2018-12-27 | 2019-05-28 | 北京航天福道高技术股份有限公司 | Inspection figure components target identification method based on Generalized Hough Transform |
CN109816674A (en) * | 2018-12-27 | 2019-05-28 | 北京航天福道高技术股份有限公司 | Registration figure edge extracting method based on Canny operator |
CN110189368A (en) * | 2019-05-31 | 2019-08-30 | 努比亚技术有限公司 | Method for registering images, mobile terminal and computer readable storage medium |
CN110208795A (en) * | 2019-06-13 | 2019-09-06 | 成都汇蓉国科微系统技术有限公司 | A kind of low slow small target detection identifying system of mobile platform high-precision and method |
CN111238488A (en) * | 2020-03-18 | 2020-06-05 | 湖南云顶智能科技有限公司 | Aircraft accurate positioning method based on heterogeneous image matching |
CN115588204A (en) * | 2022-09-23 | 2023-01-10 | 神州数码系统集成服务有限公司 | Single character image matching and identifying method based on DS evidence theory |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103903237A (en) * | 2014-03-21 | 2014-07-02 | 上海大学 | Dual-frequency identification sonar image sequence splicing method |
CN104933434A (en) * | 2015-06-16 | 2015-09-23 | 同济大学 | Image matching method combining length between perpendiculars (LBP) feature extraction method and surf feature extraction method |
CN105608671A (en) * | 2015-12-30 | 2016-05-25 | 哈尔滨工业大学 | Image connection method based on SURF algorithm |
CN106657789A (en) * | 2016-12-29 | 2017-05-10 | 核动力运行研究所 | Thread panoramic image synthesis method |
CN107481273A (en) * | 2017-07-12 | 2017-12-15 | 南京航空航天大学 | A kind of Spacecraft Autonomous Navigation rapid image matching method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180918 |