CN109285110A - Infrared and visible light image registration method and system based on robust matching and transformation - Google Patents
- Publication number
- CN109285110A (application CN201811068867.5A)
- Authority
- CN
- China
- Prior art keywords
- matching
- point
- transformation
- infrared
- feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/14—Transformations for image registration, e.g. adjusting or mapping for alignment of images
- G06T3/147—Transformations for image registration, e.g. adjusting or mapping for alignment of images using affine transformations
Abstract
The present invention provides an infrared and visible light image registration method and system based on robust matching and transformation. A robust feature point detection algorithm and feature descriptor are used to extract descriptor sets from the infrared and visible light images to be registered, and initial matches are established; false matches are filtered out using a constraint on the stability of the feature point neighborhood structure; the parameters of the affine transformation model between the images to be matched are robustly estimated from the matching relation of the feature points; and the infrared image is transformed by interpolation to complete the registration. The invention takes into account the modal and scale differences between infrared and visible light images, filters false matches during feature matching using the stability of the feature point neighborhood structure, and obtains the transformation model parameters by parameter estimation based on a uniform spatial constraint, so that it is robust both to feature extraction across modalities and to feature matching affected by strong noise.
Description
Technical Field
The invention relates to the technical field of image registration, in particular to an infrared visible light image registration technical scheme based on PIIFD (partial intensity invariant feature descriptor) and robust transformation estimation.
Background
Image registration is an important technology in the field of image processing, and is a process of matching and superimposing two or more images acquired at different times, different sensors (imaging devices) or under different imaging conditions (weather, illumination, camera position, angle, and the like). The key of image registration is to find the transformation relation of the space domain between two or more images, so that the coordinates of corresponding points on the images on different images are unified.
Over the past decades, researchers have proposed many approaches to the image registration problem. From the feature extraction perspective, the classical SIFT method (D. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004), ORB method (E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "ORB: An efficient alternative to SIFT or SURF," in ICCV, 2011, pp. 2564-2571) and SURF method (H. Bay, T. Tuytelaars, and L. Van Gool, "SURF: Speeded up robust features," in Proc. 9th Eur. Conf. Comput. Vis., 2006, pp. 404-417) all perform well on same-modality images, but fail almost completely in multi-modality registration. The proposal of the PIIFD descriptor (J. Chen, J. Tian, N. Lee, J. Zheng, R. T. Smith, and A. F. Laine, "A partial intensity invariant feature descriptor for multimodal retinal image registration," IEEE Transactions on Biomedical Engineering, vol. 57, 1707, 2010) makes it possible to extract features common to images of different modalities. However, PIIFD was originally designed to match retinal images, so it does not take the scale difference of images into account, and is therefore difficult to apply directly to the infrared and visible light registration field.
From the point of view of feature matching, these methods can be roughly divided into two categories: region-based matching methods and feature-based matching methods. The former searches for matching information by comparing the similarity of raw grey values within a certain area of the two images; the latter uses the descriptor similarity of local features or spatial geometric constraints to find matching point pairs. When an image contains only a small amount of salient detail, the grey values provide more information than local shape and structure, so the region-based approach matches better. However, the region-based method is computationally intensive and is not applicable under image distortion and photometric changes. The feature-based method, by contrast, is more robust, can handle images with complex distortion, and is widely applied.
How to find corresponding matching points in the two images to form matching point pairs and ensure the correctness of the matching point pairs is the key of the image matching method.
Region-based matching methods mainly comprise correlation methods, Fourier methods and mutual information methods. The main idea of the correlation method is to compute the similarity of corresponding windows in the two images and take the pair with the greatest similarity as a matching point pair. However, the correlation method cannot be applied to textureless areas with insignificant similarity, and is computationally expensive. The Fourier method uses the frequency-domain Fourier representation of the image; compared with the classical correlation method it is more efficient to compute and robust to frequency-dependent noise, but it has limitations when processing images with different spectral structures. The mutual information method matches well, but it cannot guarantee a global maximum over the whole search space, which inevitably reduces its robustness.
Feature-based matching methods generally adopt a two-step strategy. In the first step, a set of initial matching point pairs is determined by the similarity of the feature descriptors; most of these initial pairs are correct matches, but a considerable number of false matches is inevitable. In the second step, false matches are removed through geometric constraints, finally yielding the correct matching point pairs and the geometric parameters of the transformation between the two images. Typical examples of this strategy include, among methods relying on parametric models, the RANSAC method (M. A. Fischler and R. C. Bolles, "Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography," Commun. ACM, vol. 24, no. 6, pp. 381-395, Jun. 1981), the MLESAC method (P. H. S. Torr and A. Zisserman, "MLESAC: A new robust estimator with application to estimating image geometry," Comput. Vis. Image Understand., vol. 78, no. 1, pp. 138-156, Apr. 2000) and the VFC method (J. Ma, J. Zhao, J. Tian, A. L. Yuille, and Z. Tu, "Robust point matching via vector field consensus," IEEE Trans. Image Process., vol. 23, no. 4, pp. 1706-1721, Apr. 2014), and, among methods based on non-parametric models, the GS method (H. Liu and S. Yan, "Common visual pattern discovery via spatially coherent correspondences," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2010, pp. 1609-1616) and the ICF method (X. Li and Z. Hu, "Rejecting mismatches by correspondence function," Int. J. Comput. Vis., vol. 89, no. 1, pp. 1-17, Aug. 2010).
The registration of infrared and visible light images belongs to multi-modality image registration, in which feature extraction is difficult: the traditional feature extraction methods fail, and even PIIFD has significant limitations. Moreover, owing to the complexity of the feature extraction stage, the feature matching stage is characterised by a high outlier proportion and a low number of correct matches. A registration method that is more robust in both feature extraction and feature matching is therefore needed.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an infrared visible light image registration technical scheme based on PIIFD descriptor and robust transformation estimation.
In order to achieve the above object, the present invention provides a method for registering infrared and visible light images based on robust matching and transformation, comprising the following steps,
step 1, extracting a feature descriptor set of an infrared image and a visible image to be registered by using a robust feature point detection algorithm and a feature descriptor, and establishing initial matching, comprising the following substeps,
step 1.1, respectively detecting feature points on the infrared image and the visible light image by using a feature point detection algorithm;
step 1.2, extracting feature descriptors at H scales around each feature point on the infrared image to obtain a descriptor set D1, and extracting feature descriptors at H scales around each feature point on the visible light image to obtain a descriptor set D2, H being a preset value; the PIIFD feature descriptor is adopted;
step 1.3, using the BBF strategy, finding for every descriptor in D1 its match in D2 to obtain a set M1, and for every descriptor in D2 its match in D1 to obtain a set M2; the common elements of M1 and M2 are selected as the initial matches, establishing a matching point set containing N0 pairs;
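As a concrete illustration of step 1.3 (not the patent's own code), the sketch below computes mutual nearest-neighbor matches between two descriptor sets with plain numpy; a brute-force distance search stands in for the BBF k-d tree query, and the function name is ours:

```python
import numpy as np

def mutual_nn_matches(d1, d2):
    """Index pairs (i, j) such that d1[i] and d2[j] are mutual nearest
    neighbors in Euclidean distance (the intersection of M1 and M2)."""
    # Pairwise squared Euclidean distances between the two descriptor sets.
    dist = ((d1[:, None, :] - d2[None, :, :]) ** 2).sum(-1)
    nn12 = dist.argmin(axis=1)   # best match in D2 for each descriptor of D1
    nn21 = dist.argmin(axis=0)   # best match in D1 for each descriptor of D2
    # Keep only the pairs that choose each other.
    return [(i, j) for i, j in enumerate(nn12) if nn21[j] == i]
```

The brute-force search returns the same match set as an exact k-d tree query; BBF is preferred in practice only because it is faster in high-dimensional descriptor spaces.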
Step 2, using the constraint of the structural stability of the feature point neighborhood to filter out error matching, comprising the following substeps,
step 2.1, for each pair of matching points xi, i = 1, …, N0 and yi, i = 1, …, N0, the K nearest points are found respectively; the set of K neighboring points of xi is denoted N(xi), with xij the j-th neighboring point of xi, and the set of K neighboring points of yi is denoted N(yi), with yij the j-th neighboring point of yi; K is a preset value;
step 2.2, calculating the cost ci of accepting each pair of matches according to the degree of change of the neighborhood structure;
step 2.3, according to a preset threshold λ, a match whose cost ci satisfies ci ≤ λ is judged to be a correct match; N initial matching point pairs are thus obtained, the point set on the infrared image being recorded as X = {x1, …, xN}^T and the corresponding matched point set on the visible light image as Y = {y1, …, yN}^T;
Step 3, robustly estimating the parameters of the affine transformation model between the images to be matched according to the matching relation of the feature points, comprising the following substeps,
step 3.1, establishing a transformation mathematical model corresponding to affine geometric transformation between the images to be matched and a posterior probability mathematical model corresponding to a posterior probability of correct matching of the matching point pair;
step 3.2, solving the model parameters according to the point sets X = {x1, …, xN}^T and Y = {y1, …, yN}^T, the model parameters including the affine transformation parameters;
and step 4, transforming the infrared image by interpolation using the affine transformation parameters calculated in step 3.2, to complete the registration.
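A minimal numpy sketch of the interpolation step, assuming (row, column) pixel coordinates and bilinear interpolation; in practice a library routine such as OpenCV's warpAffine would be used instead (the function name and boundary handling here are illustrative choices, not the patent's):

```python
import numpy as np

def warp_affine(img, A, t, out_shape=None):
    """Warp img under y = A x + t by inverse mapping with bilinear
    interpolation; output pixels that map outside the source are 0."""
    H, W = img.shape if out_shape is None else out_shape
    Ainv = np.linalg.inv(A)
    rows, cols = np.mgrid[0:H, 0:W]
    # For each output pixel y, find its source location x = A^-1 (y - t).
    src = (np.stack([rows, cols], -1).reshape(-1, 2) - t) @ Ainv.T
    r, c = src[:, 0], src[:, 1]
    r0, c0 = np.floor(r).astype(int), np.floor(c).astype(int)
    fr, fc = r - r0, c - c0
    out = np.zeros(H * W)
    ok = (r0 >= 0) & (r0 < img.shape[0] - 1) & (c0 >= 0) & (c0 < img.shape[1] - 1)
    i, j = r0[ok], c0[ok]
    # Bilinear blend of the four surrounding source pixels.
    out[ok] = (img[i, j] * (1 - fr[ok]) * (1 - fc[ok])
               + img[i + 1, j] * fr[ok] * (1 - fc[ok])
               + img[i, j + 1] * (1 - fr[ok]) * fc[ok]
               + img[i + 1, j + 1] * fr[ok] * fc[ok])
    return out.reshape(H, W)
```

Inverse mapping (looping over output pixels rather than input pixels) avoids holes in the warped image, which is why it is the standard formulation for this step.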
In step 1.1, the feature point detection algorithm adopts a Harris corner point detection algorithm.
Furthermore, in step 2.2, the cost of accepting each pair of matches is calculated according to the degree of change of the neighborhood structure as

c_i = Σ_{j: y_j ∈ N(y_i)} d(x_i, x_j) + Σ_{j: x_j ∈ N(x_i)} d(y_i, y_j)

where y_j ∈ N(y_i) indicates that y_j belongs to the neighborhood of y_i; if for a match (x_j, y_j) with y_j ∈ N(y_i) the corresponding x_j belongs to the neighborhood of x_i, then the match (x_j, y_j) satisfies the neighborhood stability constraint that (x_i, y_i) should satisfy, and the variable d(x_i, x_j) takes the value 0; otherwise it takes the value 1;

likewise, x_j ∈ N(x_i) indicates that x_j belongs to the neighborhood of x_i; if for a match (x_j, y_j) with x_j ∈ N(x_i) the corresponding y_j belongs to the neighborhood of y_i, then the match (x_j, y_j) satisfies the neighborhood stability constraint that (x_i, y_i) should satisfy, and the variable d(y_i, y_j) takes the value 0; otherwise it takes the value 1.
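The neighborhood-stability cost of step 2.2 can be sketched as follows; reading c_i as the sum of the two indicator terms over the K-nearest-neighbor sets is our reconstruction of the description, and the function names are ours:

```python
import numpy as np

def match_costs(X, Y, K=4):
    """Cost c_i of accepting the i-th match (x_i, y_i): every neighbor of
    y_i whose counterpart is not a neighbor of x_i contributes 1, and
    symmetrically for neighbors of x_i."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)

    def knn(P):
        # Index sets of the K nearest points of every point in P.
        d = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        np.fill_diagonal(d, np.inf)          # a point is not its own neighbor
        return np.argsort(d, axis=1)[:, :K]

    nx, ny = knn(X), knn(Y)
    costs = np.empty(len(X))
    for i in range(len(X)):
        sx, sy = set(nx[i]), set(ny[i])
        # d(x_i, x_j) over j with y_j in N(y_i), plus d(y_i, y_j) over
        # j with x_j in N(x_i).
        costs[i] = sum(j not in sx for j in sy) + sum(j not in sy for j in sx)
    return costs
```

A correct match surrounded by correct matches scores 0, because its neighbors agree on both images; a mismatch lands among unrelated points on one side and its cost approaches 2K.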
In step 3.1, for affine geometric transformation between images to be matched, a transformation mathematical model is established as follows:
y=f(x)=Ax+t
setting two images to be matched as an infrared image a and a visible light image b, wherein x and y are coordinate vectors of pixels on the infrared image a and the visible light image b respectively, f (x) represents an affine transformation relation, A and t are affine transformation parameters, A is a 2 x 2 matrix, and t is a 2 x 1 vector;
For the initial matching point pairs X = {x1, …, xN}^T and Y = {y1, …, yN}^T, the posterior probability p_n that the n-th pair of matching points is a correct match is given by the posterior probability model

p_n = γ exp(−‖y_n − f(x_n)‖²/(2σ²)) / [ γ exp(−‖y_n − f(x_n)‖²/(2σ²)) + (1 − γ)·2πσ²/b ]

where x_n denotes the initial matching point on the infrared image and y_n the initial matching point on the visible light image, n = 1, …, N; γ and σ are the model parameters of the posterior probability model, e (the base of the exponential) is a mathematical constant, and b is a preset coefficient.
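A direct transcription of this Gaussian-inlier / uniform-outlier posterior; the exact normalization of the outlier term, (1 − γ)·2πσ²/b, is our reading of the description (γ, σ² and b are the model parameters named in the text):

```python
import numpy as np

def posterior(X, Y, A, t, gamma, sigma2, b=1.0):
    """Posterior probability p_n that the n-th pair (x_n, y_n) is a
    correct match under y = A x + t, per the mixture model above."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    r2 = ((Y - (X @ A.T + t)) ** 2).sum(axis=1)       # squared residuals
    inlier = gamma * np.exp(-r2 / (2.0 * sigma2))     # Gaussian inlier term
    outlier = (1.0 - gamma) * 2.0 * np.pi * sigma2 / b  # uniform outlier term
    return inlier / (inlier + outlier)
```

Pairs whose residual is small relative to σ get p_n near 1; pairs far from the current transformation are pushed toward 0 and thus carry little weight in the next parameter update.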
Furthermore, in step 3.2, solving the model parameters comprises the following sub-steps,
step 3.2.1, initialization: let γ = γ0, A = I2×2, t = 0, P = IN×N, where γ0 is the initial value of γ, and set the current iteration number k = 1; the model parameter σ is calculated using the following model parameter formula:

σ² = tr[(Y − T)^T P (Y − T)] / (2 tr(P))

where the matrix T = (f(x1), …, f(xN))^T and tr(·) denotes the trace of a matrix;
step 3.2.2, updating the matrix P: using the posterior probability model obtained in step 3.1, the posterior probabilities p1, …, pN that the N pairs of matching points are correct matches are calculated, and P = diag(p1, …, pN), where diag(·) denotes a diagonal matrix;
step 3.2.3, calculating the affine transformation parameters A and t as follows:

A = (Ŷ^T P X̂)(X̂^T P X̂)^(-1)
t = μ_y − A μ_x

where μ_x = X^T P 1 / tr(P) and μ_y = Y^T P 1 / tr(P) are the posterior-probability-weighted mean coordinate vectors, X̂ = X − 1 μ_x^T and Ŷ = Y − 1 μ_y^T are the centered coordinate matrices, and 1 is an N × 1 all-ones vector;
step 3.2.4, according to the affine transformation parameters A and t obtained in step 3.2.3, recalculating the model parameters γ and σ of the posterior probability model: the parameter γ is calculated as γ = tr(P)/N, and σ is recalculated using the model parameter formula of step 3.2.1;
step 3.2.5, judging the convergence condition: the current objective value L is calculated, and when k = k_max or (L − L_old)/L_old ≤ ε the iteration stops and step 4 is entered; otherwise k = k + 1 and the procedure returns to step 3.2.2; where k_max is the maximum number of iterations, ε is the convergence threshold, and L_old denotes the value of L calculated in the previous execution of step 3.2.5.
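Putting steps 3.2.1-3.2.5 together, a compact EM-style sketch; the default γ0, b, k_max and ε, the clamping of σ², and the convergence test on the change of (A, t) rather than on L are illustrative choices, not values fixed by the patent:

```python
import numpy as np

def estimate_affine(X, Y, gamma=0.9, b=1.0, k_max=50, eps=1e-8):
    """EM-style robust estimation of y = A x + t; the vector p holds
    the posterior inlier probabilities weighting the affine fit."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    N = len(X)
    A, t = np.eye(2), np.zeros(2)                     # step 3.2.1
    sigma2 = ((Y - X) ** 2).sum() / (2 * N)
    p = np.ones(N)
    for _ in range(k_max):
        # Step 3.2.2: posterior probability of each pair being correct.
        r2 = ((Y - (X @ A.T + t)) ** 2).sum(axis=1)
        inlier = gamma * np.exp(-r2 / (2 * sigma2))
        outlier = (1 - gamma) * 2 * np.pi * sigma2 / b
        p = inlier / (inlier + outlier)
        # Step 3.2.3: posterior-weighted least-squares affine fit.
        w = p / p.sum()
        mu_x, mu_y = w @ X, w @ Y                     # weighted mean coordinates
        Xc, Yc = X - mu_x, Y - mu_y                   # centered coordinates
        A_new = (Yc * p[:, None]).T @ Xc @ np.linalg.inv((Xc * p[:, None]).T @ Xc)
        t_new = mu_y - A_new @ mu_x
        # Step 3.2.4: update the mixture parameters sigma and gamma.
        r2 = ((Y - (X @ A_new.T + t_new)) ** 2).sum(axis=1)
        sigma2 = max((p * r2).sum() / (2 * p.sum()), 1e-12)
        gamma = p.mean()
        # Step 3.2.5: stop when the transformation no longer changes.
        done = np.abs(A_new - A).max() + np.abs(t_new - t).max() < eps
        A, t = A_new, t_new
        if done:
            break
    return A, t, p
```

Each E-step reweights the matches by how well they fit the current transformation, and each M-step refits the transformation under those weights, so a residual mismatch that survived step 2 is progressively driven to near-zero weight.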
The invention also correspondingly provides an infrared visible light image registration system based on robust matching and transformation, which comprises the following modules,
the characteristic extraction module is used for extracting a characteristic descriptor set of the infrared image and the visible image to be registered by using a robust characteristic point detection algorithm and a characteristic descriptor and establishing initial matching and comprises the following sub-modules,
the characteristic point detection submodule is used for detecting characteristic points on the infrared image and the visible light image respectively by using a characteristic point detection algorithm;
a feature descriptor submodule for extracting feature descriptors at H scales around each feature point on the infrared image to obtain a descriptor set D1, and extracting feature descriptors at H scales around each feature point on the visible light image to obtain a descriptor set D2, H being a preset value; the PIIFD feature descriptor is adopted;
a feature matching submodule for finding, using the BBF strategy, for every descriptor in D1 its match in D2 to obtain a set M1, and for every descriptor in D2 its match in D1 to obtain a set M2; the common elements of M1 and M2 are selected as the initial matches, establishing a matching point set containing N0 pairs;
An error matching filtering module for filtering error matching using constraints on the structural stability of the neighborhood of feature points, comprising the following modules,
a neighboring point submodule for finding, for each pair of matching points xi, i = 1, …, N0 and yi, i = 1, …, N0, the K nearest points respectively, the set of K neighboring points of xi being denoted N(xi), with xij the j-th neighboring point of xi, and the set of K neighboring points of yi being denoted N(yi), with yij the j-th neighboring point of yi, K being a preset value; a cost calculation submodule for calculating the cost ci of accepting each pair of matches according to the degree of change of the neighborhood structure;
a threshold judgment submodule for judging, according to a preset threshold λ, that a match whose cost ci satisfies ci ≤ λ is a correct match; N initial matching point pairs are thus obtained, the point set on the infrared image being recorded as X = {x1, …, xN}^T and the corresponding matched point set on the visible light image as Y = {y1, …, yN}^T;
The robust parameter estimation module is used for robustly estimating the parameters of the affine transformation model between the images to be matched according to the matching relation of the characteristic points and comprises the following sub-modules,
the model construction submodule is used for establishing a transformation mathematical model corresponding to affine geometric transformation between the images to be matched and a posterior probability mathematical model corresponding to the posterior probability of correct matching of the matching point pair;
a parameter solving submodule for solving the model parameters according to the point sets X = {x1, …, xN}^T and Y = {y1, …, yN}^T, the model parameters including the affine transformation parameters;
and the image transformation module is used for transforming the infrared image in an interpolation mode by using affine transformation parameters obtained by the parameter solving submodule and finishing registration.
In the feature point detection submodule, the feature point detection algorithm adopts a Harris corner point detection algorithm.
In the cost calculation submodule, the cost of accepting each pair of matches is calculated according to the degree of change of the neighborhood structure as

c_i = Σ_{j: y_j ∈ N(y_i)} d(x_i, x_j) + Σ_{j: x_j ∈ N(x_i)} d(y_i, y_j)

where y_j ∈ N(y_i) indicates that y_j belongs to the neighborhood of y_i; if for a match (x_j, y_j) with y_j ∈ N(y_i) the corresponding x_j belongs to the neighborhood of x_i, then the match (x_j, y_j) satisfies the neighborhood stability constraint that (x_i, y_i) should satisfy, and the variable d(x_i, x_j) takes the value 0; otherwise it takes the value 1;

likewise, x_j ∈ N(x_i) indicates that x_j belongs to the neighborhood of x_i; if for a match (x_j, y_j) with x_j ∈ N(x_i) the corresponding y_j belongs to the neighborhood of y_i, then the match (x_j, y_j) satisfies the neighborhood stability constraint that (x_i, y_i) should satisfy, and the variable d(y_i, y_j) takes the value 0; otherwise it takes the value 1.
In the model construction sub-module, for affine geometric transformation between images to be matched, a transformation mathematical model is established as follows:
y=f(x)=Ax+t
setting two images to be matched as an infrared image a and a visible light image b, wherein x and y are coordinate vectors of pixels on the infrared image a and the visible light image b respectively, f (x) represents an affine transformation relation, A and t are affine transformation parameters, A is a 2 x 2 matrix, and t is a 2 x 1 vector;
For the initial matching point pairs X = {x1, …, xN}^T and Y = {y1, …, yN}^T, the posterior probability p_n that the n-th pair of matching points is a correct match is given by the posterior probability model

p_n = γ exp(−‖y_n − f(x_n)‖²/(2σ²)) / [ γ exp(−‖y_n − f(x_n)‖²/(2σ²)) + (1 − γ)·2πσ²/b ]

where x_n denotes the initial matching point on the infrared image and y_n the initial matching point on the visible light image, n = 1, …, N; γ and σ are the model parameters of the posterior probability model, e (the base of the exponential) is a mathematical constant, and b is a preset coefficient.
Moreover, the parameter solving submodule comprises the following units,
an initialization unit for initialization: let γ = γ0, A = I2×2, t = 0, P = IN×N, where γ0 is the initial value of γ, and set the current iteration number k = 1; the model parameter σ is calculated using the following model parameter formula:

σ² = tr[(Y − T)^T P (Y − T)] / (2 tr(P))

where the matrix T = (f(x1), …, f(xN))^T and tr(·) denotes the trace of a matrix;
an updating unit for updating the matrix P: using the posterior probability model obtained in the model construction submodule, the posterior probabilities p1, …, pN that the N pairs of matching points are correct matches are calculated, and P = diag(p1, …, pN), where diag(·) denotes a diagonal matrix;
a first parameter calculation unit for calculating the affine transformation parameters A and t as follows:

A = (Ŷ^T P X̂)(X̂^T P X̂)^(-1)
t = μ_y − A μ_x

where μ_x = X^T P 1 / tr(P) and μ_y = Y^T P 1 / tr(P) are the posterior-probability-weighted mean coordinate vectors, X̂ = X − 1 μ_x^T and Ŷ = Y − 1 μ_y^T are the centered coordinate matrices, and 1 is an N × 1 all-ones vector;
a second parameter calculation unit for recalculating the model parameters γ and σ of the posterior probability model according to the affine transformation parameters A and t obtained by the first parameter calculation unit: the parameter γ is calculated as γ = tr(P)/N, and σ is recalculated using the model parameter formula of the initialization unit;
an iteration judgment unit for judging the convergence condition: the current objective value L is calculated, and when k = k_max or (L − L_old)/L_old ≤ ε the iteration stops and the image transformation module is instructed to work; otherwise k = k + 1 and the updating unit is instructed to work; where k_max is the maximum number of iterations, ε is the convergence threshold, and L_old denotes the value of L calculated in the previous execution of the iteration judgment unit.
The invention has the following advantages:
1. The invention provides an improved feature extraction and matching method for infrared and visible light images. Compared with the commonly used traditional methods, it can extract more effective features shared between images of different modalities, has a degree of scale invariance, and can establish more credible initial matches.
2. The invention constructs a mismatch filtering algorithm based on the neighborhood stability of feature points and estimates the parameters within a maximum likelihood framework, so it can cope with a large proportion of mismatches and has stronger robustness.
Drawings
FIG. 1 is a flow chart of an embodiment of the present invention.
Detailed Description
The technical solution of the present invention is further described in detail below with reference to the accompanying drawings and examples.
The method provided by the invention first extracts multi-scale features of the images to be registered using a robust corner detection algorithm and a feature descriptor, then filters out false matches using the stability of the feature point neighborhood structure, then performs mathematical modeling by Bayesian maximum likelihood estimation with hidden variables to estimate the affine transformation parameters between the images, and finally spatially transforms the image to complete the registration task. The method takes into account the modal and scale differences of infrared and visible light images, filters false matches using the stability of the feature point neighborhood structure during feature matching, and obtains the transformation model parameters with a parameter estimation algorithm based on the EM algorithm and a consistent spatial constraint; it is therefore robust both to feature extraction across modalities and to feature matching affected by strong noise.
Referring to fig. 1, the method provided by the embodiment of the present invention mainly includes 4 steps:
step 1, extracting a feature descriptor set of an infrared image and a visible image to be registered by using a robust feature point detection algorithm and a descriptor, and establishing initial matching, comprising the following substeps,
step 1.1, feature points are detected on the infrared and visible light images respectively using the Harris corner detection algorithm, extracting feature point sets containing L and M feature points respectively. In a specific implementation, a person skilled in the art can pre-specify approximate values of L and M; about 300 is adopted in this embodiment. The Harris corner detection algorithm is prior art and is not described in detail here;
step 1.2, PIIFD feature descriptors at H scales are extracted around each feature point, constructing sets containing HL and HM feature descriptors respectively, where a_hl denotes the hl-th feature descriptor of the infrared image and b_hm the hm-th feature descriptor of the visible light image. The value of H can be specified by a person skilled in the art according to the difference in image scales; 5 is adopted in this embodiment;
step 1.3, find for every descriptor in D1 its match in D2, and for every descriptor in D2 its match in D1, creating sets M1 and M2 containing HL and HM matches respectively; the common elements of M1 and M2 are selected as the initial matches, establishing a matching point set containing N0 elements, where xi and yi are corresponding matching points on the infrared and visible light images. In a specific implementation, a person skilled in the art can choose any nearest neighbor search algorithm to find the match of a descriptor in D1 or D2. In this embodiment, the BBF (best-bin-first) strategy is used to search for the nearest neighbor of a descriptor in the Euclidean distance sense as its match. BBF is an improved k-d tree nearest neighbor query algorithm that uses a priority queue to determine the search order efficiently, speeding up the algorithm; it returns the nearest neighbor, or a close approximation of it, as the search result, and is particularly suitable for searching high-dimensional spaces. The BBF algorithm is prior art and is not repeated here.
Step 2, using the constraint of the structural stability of the feature point neighborhood to filter out error matching, comprising the following substeps,
step 2.1, for each pair of matching points xi, i = 1, …, N0 and yi, i = 1, …, N0, the K nearest points are found respectively to establish neighboring point sets, where N(xi) denotes the set of K neighboring points of xi and N(yi) the set of K neighboring points of yi. In a specific implementation, a person skilled in the art can preset the value of K; K = 4 in this embodiment. xij denotes the j-th neighboring point of xi and yij the j-th neighboring point of yi;
step 2.2, the cost of accepting each pair of matches is calculated according to the degree of change of the neighborhood structure:

c_i = Σ_{j: y_j ∈ N(y_i)} d(x_i, x_j) + Σ_{j: x_j ∈ N(x_i)} d(y_i, y_j)

where c_i, as a cost function, measures the degree to which the i-th match (x_i, y_i) deviates from the neighborhood structure stability constraint. The variable d(x_i, x_j) measures whether the match (x_j, y_j) satisfies the neighborhood stability constraint that (x_i, y_i) should satisfy: y_j ∈ N(y_i) means that y_j belongs to the neighborhood of y_i (i.e. its set of K nearest points); if for such a match (x_j, y_j) the corresponding x_j correspondingly belongs to the neighborhood of x_i, then (x_j, y_j) satisfies the constraint and does not increase the cost c_i, i.e. d(x_i, x_j) = 0 if x_j ∈ N(x_i), and 1 otherwise.

d(y_i, y_j) plays the same role as d(x_i, x_j), but because mismatches exist the two neighborhood relations are generally not equivalent, so the two terms are calculated separately as parts of the cost. The variable d(y_i, y_j) likewise measures whether the match (x_j, y_j) satisfies the neighborhood stability constraint that (x_i, y_i) should satisfy: x_j ∈ N(x_i) means that x_j belongs to the neighborhood of x_i; if for such a match the corresponding y_j belongs to the neighborhood of y_i, then (x_j, y_j) satisfies the constraint and does not increase the cost c_i, i.e. d(y_i, y_j) = 0 if y_j ∈ N(y_i), and 1 otherwise.
Step 2.3, set a threshold λ to judge whether a match is accepted as correct, and use a parameter pi to indicate the correctness of each match: pi = 1 means the i-th match is correct, and pi = 0 means the i-th match is erroneous, with

pi = 1 if ci ≤ λ, and 0 otherwise.

The value of λ can be preset by those skilled in the art; it is set to 6 in this embodiment. The set of correct matches is then determined by:

Γ* = { i | pi = 1, i = 1, ..., N0 }

wherein Γ* is the set of correct matches.
N initial matching point pairs are obtained in this way; the point set on the infrared image is recorded as X = {x1, …, xN}^T, and the correspondingly matched point set on the visible light image as Y = {y1, …, yN}^T.
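Step 2 can be sketched as below. The cost is assumed to count, for match i, the K-neighbors of yi whose counterparts are not K-neighbors of xi, and vice versa, consistent with the descriptions of d(xi, xj) and d(yi, yj); matches with cost at most λ are kept. The function name and toy data are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def neighborhood_cost_filter(X, Y, K=4, lam=6):
    """Keep match i when its neighbourhood-stability cost c_i <= lam.
    c_i adds 1 for each K-neighbour of y_i whose counterpart is not a
    K-neighbour of x_i, and vice versa (assumed reading of step 2.2)."""
    nx = cKDTree(X).query(X, k=K + 1)[1][:, 1:]   # drop the point itself
    ny = cKDTree(Y).query(Y, k=K + 1)[1][:, 1:]
    keep = []
    for i in range(len(X)):
        sx, sy = set(nx[i]), set(ny[i])
        c = sum(j not in sx for j in sy) + sum(j not in sy for j in sx)
        if c <= lam:
            keep.append(i)
    return keep

# two tight clusters; corrupting match 0 destroys its neighbourhood
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.5, 0.0],
              [5.0, 5.0], [5.0, 6.2], [6.5, 5.0]])
Y = X.copy()
Y[0] = [100.0, 100.0]
keep = neighborhood_cost_filter(X, Y, K=2, lam=2)   # match 0 is rejected
```

A correct match keeps (nearly) the same neighbor indices on both images, so its cost stays near zero even when a few of its neighbors are themselves mismatched.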
Step 3, robustly estimating the parameters of the affine transformation model between the images to be matched according to the matching relation of the feature points; in this embodiment the affine model parameters are solved by a maximum-likelihood estimation and optimization method, comprising the following substeps,
step 3.1, establishing a model corresponding to the geometric transformation between the images to be matched and a model corresponding to the posterior probability that the matching point pair is correctly matched, and realizing the following steps,
for affine geometric transformation between images to be matched, a transformation mathematical model is established as follows:
y=f(x)=Ax+t
setting two images to be matched as an infrared image a and a visible light image b, wherein x and y are coordinate vectors of pixels on the infrared image a and the visible light image b respectively, f (x) represents an affine transformation relation, A and t are affine transformation parameters to be solved, A is a 2 x 2 matrix, and t is a 2 x 1 vector;
Given the obtained initial matching point pairs X = {x1, …, xN}^T and Y = {y1, …, yN}^T, the posterior probability pn that the n-th pair of matching points (xn, yn) is a correct match is modeled as

pn = γ·gn / (γ·gn + (1 − γ)·b), with gn = (1/(2πσ²))·e^(−||yn − f(xn)||²/(2σ²))

wherein xn represents the initial matching point on the infrared image, yn represents the initial matching point on the visible light image, n = 1, …, N, γ and σ are model parameters, and e is a mathematical constant; b is a preset coefficient, which can be specified in advance by those skilled in the art or treated as a variable parameter; b = 0.1 in this embodiment;
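The posterior model in this text is given mainly through its parameter names (γ, σ, b, e); the sketch below assumes the standard Gaussian-inlier / uniform-outlier mixture consistent with those parameters. Function and variable names are illustrative.

```python
import numpy as np

def posterior_inlier_prob(X, Y, A, t, gamma, sigma, b=0.1):
    """p_n that match n is correct: Gaussian inlier likelihood with
    variance sigma^2 against a uniform outlier density b (assumed form)."""
    R = Y - (X @ A.T + t)                       # residuals y_n - f(x_n)
    g = np.exp(-np.sum(R**2, axis=1) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return gamma * g / (gamma * g + (1 - gamma) * b)

A, t = np.eye(2), np.zeros(2)
X = np.array([[0.0, 0.0], [1.0, 1.0]])
Y = X.copy()
Y[1] += 10.0                                    # second pair is an outlier
p = posterior_inlier_prob(X, Y, A, t, gamma=0.9, sigma=1.0)
```

A zero residual yields a posterior near 1, while a large residual drives the Gaussian term toward zero and the posterior toward 0.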
step 3.2, solving the model parameters according to the point sets X = {x1, …, xN}^T and Y = {y1, …, yN}^T, comprising the following substeps,
step 3.2.1, initialization, including setting γ = γ0 (in practice, the initial value γ0 of γ can be set by those skilled in the art; γ0 = 0.9 in this example), A = I_{2×2} (the identity matrix of dimension 2×2, with all diagonal elements 1), t = 0, and P = I_{N×N} (the identity matrix of dimension N×N, with all diagonal elements 1); letting the current iteration number k = 1; and calculating σ with the following model parameter formula,

σ² = tr((Y − T)^T P (Y − T)) / (2·tr(P))

wherein the matrix T = (f(x1), …, f(xN))^T, and tr() represents the trace of the matrix;
step 3.2.2, updating the matrix P, including adopting the posterior probability mathematical model obtained in step 3.1 and calculating the posterior probabilities p1, …, pN that the N pairs of matching points are respectively correctly matched; let P = diag(p1, …, pN), where diag() denotes a diagonal matrix;
step 3.2.3, calculating the affine transformation parameters A and t as follows:

A = Ŷ^T·P·X̂·(X̂^T·P·X̂)^(−1)

t = μy − A·μx

wherein μx = X^T·P·1 / tr(P) and μy = Y^T·P·1 / tr(P) are the average coordinate vectors weighted by the posterior probabilities, X̂ = X − 1·μx^T and Ŷ = Y − 1·μy^T are the centered coordinate matrices, and 1 is an N×1 all-ones vector;
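Step 3.2.3 amounts to a posterior-weighted least-squares fit. The closed form below is the assumed normal-equation solution of min_A,t Σ p_n·||y_n − A·x_n − t||²; it reproduces t = μy − A·μx as stated in the text. Names are illustrative.

```python
import numpy as np

def weighted_affine(X, Y, p):
    """Closed-form A, t minimising sum_n p_n * ||y_n - A x_n - t||^2
    (posterior-weighted least squares; the normal equations are assumed)."""
    w = p / p.sum()
    mu_x, mu_y = w @ X, w @ Y                   # weighted mean coordinates
    Xc, Yc = X - mu_x, Y - mu_y                 # centred coordinate matrices
    W = np.diag(p)
    A = (Yc.T @ W @ Xc) @ np.linalg.inv(Xc.T @ W @ Xc)
    t = mu_y - A @ mu_x
    return A, t

# sanity check: exact affine data is recovered exactly
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
A0 = np.array([[1.2, 0.3], [-0.2, 0.9]])
t0 = np.array([2.0, -1.0])
Y = X @ A0.T + t0
A_hat, t_hat = weighted_affine(X, Y, np.ones(4))
```

Matches with low posterior probability contribute little to the normal equations, which is what makes the fit robust.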
step 3.2.4, according to the affine transformation parameters A and t calculated in step 3.2.3, recalculating the model parameters γ and σ of the posterior probability mathematical model as follows: the model parameter γ is calculated using the formula

γ = tr(P) / N

and σ is recalculated with the model parameter formula of step 3.2.1;
step 3.2.5, determining the convergence condition, which includes calculating the current value L; when k = kmax or |L − Lold| / |Lold| ≤ ε is satisfied, stopping the iteration and entering step 4; otherwise setting k = k + 1 and returning to step 3.2.2.

Here kmax is the maximum number of iterations; in practice, those skilled in the art can preset its value, taken as 50 in this embodiment. ε is a convergence threshold, which can likewise be preset in a specific implementation, for example 0.0001.
With the preset parameters above and the matching posterior probabilities known, L is the expectation of the log-likelihood function of the observed data under the current model parameters. Lold denotes the L calculated in the previous execution of step 3.2.5; the first time step 3.2.5 is executed, Lold may be set to a large initial value, for example 10^6.
The advantage of such a calculation is that, when L converges, the optimality of the parameters to be determined is guaranteed in the sense of maximizing the likelihood.
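Steps 3.2.1 to 3.2.5 combine into an EM-style loop, sketched below. The σ² and γ updates use the standard mixture-model forms suggested by the text (posterior-weighted mean squared residual; mean posterior), and the likelihood L of the patent is replaced by a simple weighted surrogate for the stopping test; this is a sketch under those assumptions, not the patent's exact formulas.

```python
import numpy as np

def estimate_affine_em(X, Y, gamma0=0.9, b=0.1, k_max=50, eps=1e-4):
    """EM-style robust affine estimation, a sketch of steps 3.2.1-3.2.5."""
    N = len(X)
    A, t, gamma = np.eye(2), np.zeros(2), gamma0
    p, L_old = np.ones(N), None
    for _ in range(k_max):
        R = Y - (X @ A.T + t)
        r2 = np.sum(R**2, axis=1)
        sigma2 = max((p * r2).sum() / (2 * p.sum()), 1e-12)
        # E-step: posterior that each match is an inlier
        g = np.exp(-r2 / (2 * sigma2)) / (2 * np.pi * sigma2)
        p = gamma * g / (gamma * g + (1 - gamma) * b)
        # M-step: posterior-weighted affine fit, then mixing weight gamma
        w = p / p.sum()
        mu_x, mu_y = w @ X, w @ Y
        Xc, Yc = X - mu_x, Y - mu_y
        W = np.diag(p)
        A = (Yc.T @ W @ Xc) @ np.linalg.inv(Xc.T @ W @ Xc)
        t = mu_y - A @ mu_x
        gamma = float(np.clip(p.mean(), 0.05, 0.95))
        # surrogate for L: posterior-weighted data log-likelihood term
        L = -(p * np.sum((Y - (X @ A.T + t))**2, axis=1)).sum() / (2 * sigma2)
        if L_old is not None and abs(L - L_old) <= eps * abs(L_old) + eps:
            break
        L_old = L
    return A, t, p

# six exact inliers plus one gross outlier
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
              [1.0, 1.0], [2.0, 1.0], [1.0, 2.0], [3.0, 3.0]])
A0 = np.array([[1.0, 0.2], [-0.1, 1.1]])
t0 = np.array([5.0, 3.0])
Y = X @ A0.T + t0
Y[-1] += np.array([4.0, -3.0])
A_est, t_est, p = estimate_affine_em(X, Y)
```

As σ² shrinks around the inlier residuals, the outlier's posterior collapses toward zero and the fit converges to the inlier-only solution.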
Step 4, performing the registration transformation on the infrared image: according to the affine transformation parameters A and t calculated in step 3.2, a geometric image transformation is applied. Assuming a pixel of the registered infrared image has coordinate vector x = (x0, y0)^T, the coordinate at which it is mapped back to the original infrared image is y = (x1, y1)^T = A^(−1)(x − t); the pixel values are computed with the bicubic interpolation algorithm, and the registration transformation is completed by scanning line by line. The geometric image transformation algorithm is prior art and is not repeated here. Bicubic interpolation creates smoother image edges than bilinear interpolation.
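The inverse-mapping warp of step 4 can be sketched with SciPy's `map_coordinates`; order=3 cubic-spline interpolation is used here as a close stand-in for bicubic interpolation. Names are illustrative.

```python
import numpy as np
from scipy import ndimage

def warp_affine_bicubic(img, A, t, out_shape=None):
    """Inverse-mapping warp: each output pixel x samples the source
    image at A^{-1}(x - t), with cubic-spline (order=3) interpolation."""
    H, W = out_shape if out_shape is not None else img.shape
    Ainv = np.linalg.inv(A)
    cols, rows = np.meshgrid(np.arange(W), np.arange(H))
    pts = np.stack([cols.ravel(), rows.ravel()]).astype(float)  # (x, y)
    src = Ainv @ (pts - t[:, None])              # source (x, y) per pixel
    out = ndimage.map_coordinates(img, [src[1], src[0]],        # (row, col)
                                  order=3, mode='constant', cval=0.0)
    return out.reshape(H, W)

# pure translation by t = (3, 2): out[r, c] should equal img[r-2, c-3]
img = np.arange(144, dtype=float).reshape(12, 12)
out = warp_affine_bicubic(img, np.eye(2), np.array([3.0, 2.0]))
```

Inverse mapping (sampling the source for each output pixel) avoids the holes that forward mapping would leave in the registered image.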
In a specific implementation, the above process can be run automatically in software. The invention can also be provided as a corresponding modular system; an embodiment of the invention accordingly provides an infrared visible light image registration system based on robust matching and transformation, comprising the following modules,
a feature extraction module for extracting the feature descriptor sets of the infrared image and the visible light image to be registered by using a robust feature point detection algorithm and feature descriptors, and establishing the initial matches, comprising the following submodules,
the characteristic point detection submodule is used for detecting characteristic points on the infrared image and the visible light image respectively by using a characteristic point detection algorithm;
a feature descriptor module for extracting feature descriptors at H scales around each feature point on the infrared image to obtain a set D1 of feature descriptors, and extracting feature descriptors at H scales around each feature point on the visible light image to obtain a set D2 of feature descriptors, H being a preset value, the feature descriptors being PIIFD feature descriptors;
a feature matching submodule for finding, by using the BBF strategy, the matches in D2 of all descriptors in D1 to obtain a set M1 and the matches in D1 of all descriptors in D2 to obtain a set M2, and selecting the matches common to M1 and M2 as the initial matches, establishing a set of matching points containing N0 elements;
An error matching filtering module for filtering error matching using constraints on the structural stability of the neighborhood of feature points, comprising the following modules,
a neighboring matching point submodule for finding, for each pair of matching points xi and yi, i = 1, ..., N0, the K nearest points, obtaining the set of K points nearest to the matching point xi, with xij denoting the j-th neighboring point of xi, and the set of K points nearest to the matching point yi, with yij denoting the j-th neighboring point of yi, K being a preset value; and a cost calculation submodule for calculating the cost ci of accepting each pair of matches according to the degree of change of the neighborhood structure;
a threshold judgment submodule for judging, according to a preset threshold λ, that a match whose cost ci is less than or equal to λ is a correct match, thereby obtaining N initial matching point pairs, the point set on the infrared image being recorded as X = {x1, …, xN}^T and the correspondingly matched point set on the visible light image as Y = {y1, …, yN}^T;
The robust parameter estimation module is used for robustly estimating the parameters of the affine transformation model between the images to be matched according to the matching relation of the characteristic points and comprises the following sub-modules,
the model construction submodule is used for establishing a transformation mathematical model corresponding to affine geometric transformation between the images to be matched and a posterior probability mathematical model corresponding to the posterior probability of correct matching of the matching point pair;
a parameter solving submodule for solving the model parameters, including the affine transformation parameters, according to the point sets X = {x1, …, xN}^T and Y = {y1, …, yN}^T;
and the image transformation module is used for transforming the infrared image in an interpolation mode by using affine transformation parameters obtained by the parameter solving submodule and finishing registration.
The specific implementation of each module refers to the method steps, and the detailed description of the invention is omitted.
To facilitate understanding of the technical solution of the invention, SIFT and the original single-scale PIIFD were selected and compared with the present invention for infrared and visible light image feature extraction. An effective match is determined as follows: after feature extraction, the descriptors are matched initially; the feature point coordinates of the infrared image are converted to the corresponding visible light image coordinates through the true transformation matrix; the distance d between these converted coordinates and the coordinates of the matched visible light feature point is calculated; if d ≤ 5, the match is judged effective. It can be seen that the feature extraction method of the invention yields the largest number of effective matches.
Table 1: Comparison of method effects

| Method | Average number of effective matches |
| --- | --- |
| SIFT | 0.44 |
| Original PIIFD | 9.16 |
| The invention | 16.24 |
The RANSAC, ICF and VFC methods were selected and compared with the method of the invention for image matching. The comparison results are shown in the table below, where the accuracy is the proportion of correct matching point pairs among the matching point pairs finally returned by each method. It can be seen that the method of the invention has the shortest running time and a far higher accuracy than the other methods.
Table 2: Comparison of method results

| Method | Average time (seconds) | Accuracy |
| --- | --- | --- |
| RANSAC | 0.2840 | 46.57% |
| ICF | 0.1776 | 27.33% |
| VFC | 0.0409 | 18.09% |
| The invention | 0.0123 | 77.75% |
The specific embodiments described herein are merely illustrative of the spirit of the invention. Various modifications or additions may be made to the described embodiments or alternatives may be employed by those skilled in the art without departing from the spirit or scope of the invention as defined in the appended claims.
Claims (10)
1. An infrared visible light image registration method based on robust matching and transformation, characterized in that it comprises the following steps,
step 1, extracting a feature descriptor set of an infrared image and a visible image to be registered by using a robust feature point detection algorithm and a feature descriptor, and establishing initial matching, comprising the following substeps,
step 1.1, respectively detecting feature points on the infrared image and the visible light image by using a feature point detection algorithm;
step 1.2, extracting feature descriptors at H scales around each feature point on the infrared image to obtain a set D1 of feature descriptors, and extracting feature descriptors at H scales around each feature point on the visible light image to obtain a set D2 of feature descriptors, H being a preset value, the feature descriptors being PIIFD feature descriptors;
step 1.3, using the BBF strategy, finding the matches in D2 of all descriptors in D1 to obtain a set M1 and the matches in D1 of all descriptors in D2 to obtain a set M2, and selecting the matches common to M1 and M2 as the initial matches, establishing a set of matching points containing N0 elements;
Step 2, using the constraint of the structural stability of the feature point neighborhood to filter out error matching, comprising the following substeps,
step 2.1, for each pair of matching points xi and yi, i = 1, ..., N0, finding the K nearest points, obtaining the set of K points nearest to the matching point xi, with xij denoting the j-th neighboring point of xi, and the set of K points nearest to the matching point yi, with yij denoting the j-th neighboring point of yi, K being a preset value;
step 2.2, calculating the cost ci of accepting each pair of matches according to the degree of change of the neighborhood structure;
step 2.3, judging, according to a preset threshold λ, that a match whose cost ci is less than or equal to λ is a correct match, thereby obtaining N initial matching point pairs, and recording the point set on the infrared image as X = {x1, …, xN}^T and the correspondingly matched point set on the visible light image as Y = {y1, …, yN}^T;
Step 3, estimating parameters of an affine transformation model between the images to be matched according to the matching relation robustness of the characteristic points, comprising the following substeps,
step 3.1, establishing a transformation mathematical model corresponding to affine geometric transformation between the images to be matched and a posterior probability mathematical model corresponding to a posterior probability of correct matching of the matching point pair;
step 3.2, solving the model parameters, which include the affine transformation parameters, according to the point sets X = {x1, …, xN}^T and Y = {y1, …, yN}^T;
and 4, transforming the infrared image by using the affine transformation parameters calculated in the step 3.2 in an interpolation mode to complete registration.
2. The infrared-visible image registration method based on robust matching and transformation as claimed in claim 1, wherein: in step 1.1, the feature point detection algorithm adopts a Harris corner detection algorithm.
3. The infrared-visible image registration method based on robust matching and transformation as claimed in claim 1, wherein: in step 2.2, the cost of accepting each pair of matches is calculated according to the degree of change of the neighborhood structure as follows,
wherein, for a match (xj, yj) whose yj belongs to the neighborhood of yi (i.e., the set of K points nearest to yi): if the corresponding xj belongs to the neighborhood of xi, the match (xj, yj) satisfies the neighborhood stability constraint due to the match (xi, yi), and the variable d(xi, xj) takes the value 0; otherwise it takes the value 1;

and for a match (xj, yj) whose xj belongs to the neighborhood of xi: if the corresponding yj belongs to the neighborhood of yi, the match (xj, yj) satisfies the neighborhood stability constraint due to the match (xi, yi), and the variable d(yi, yj) takes the value 0; otherwise it takes the value 1.
4. The infrared-visible image registration method based on robust matching and transformation as claimed in claim 1, wherein: in step 3.1, for affine geometric transformation between images to be matched, a transformation mathematical model is established as follows:
y=f(x)=Ax+t
setting two images to be matched as an infrared image a and a visible light image b, wherein x and y are coordinate vectors of pixels on the infrared image a and the visible light image b respectively, f (x) represents an affine transformation relation, A and t are affine transformation parameters, A is a 2 x 2 matrix, and t is a 2 x 1 vector;
for the initial matching point pairs X = {x1, …, xN}^T and Y = {y1, …, yN}^T, the posterior probability pn that the n-th pair of matching points is a correct match is calculated with a posterior probability mathematical model, wherein xn represents the initial matching point on the infrared image, yn represents the initial matching point on the visible light image, n = 1, …, N, γ and σ are model parameters of the posterior probability mathematical model, e is a mathematical constant, and b is a preset coefficient.
5. The infrared-visible image registration method based on robust matching and transformation as claimed in claim 4, wherein: in step 3.2, solving the model parameters comprises the following sub-steps,
step 3.2.1, initialization, including setting γ = γ0, A = I_{2×2}, t = 0, P = I_{N×N}, γ0 being the initial value of γ;
the current iteration number k is 1, the model parameter sigma is calculated by adopting the following model parameter formula,
wherein the matrix T = (f(x1), …, f(xN))^T, and tr() represents the trace of the matrix;
step 3.2.2, updating the matrix P, including adopting the posterior probability mathematical model obtained in step 3.1 and calculating the posterior probabilities p1, …, pN that the N pairs of matching points are respectively correctly matched; let P = diag(p1, …, pN), where diag() denotes a diagonal matrix;
step 3.2.3, calculating affine transformation parameters A, t as follows:
t=μy-Aμx
wherein μx and μy are the average coordinate vectors weighted by the posterior probabilities, X̂ and Ŷ are the centered coordinate matrices, and λ is an N×1 all-ones vector;
step 3.2.4, according to the affine transformation parameters A and t obtained in the step 3.2.3, recalculating the model parameters gamma and sigma of the posterior probability mathematical model as follows,
the parameter gamma is calculated using the following formula,
calculating sigma by adopting a model parameter formula in the step 3.2.1;
step 3.2.5, determining the convergence condition, including calculating the current value L; when k = kmax or |L − Lold| / |Lold| ≤ ε is satisfied, stopping the iteration and entering step 4; otherwise setting k = k + 1 and returning to step 3.2.2;

wherein kmax is the maximum number of iterations, ε is the convergence threshold, and Lold denotes the L calculated in the previous execution of step 3.2.5.
6. An infrared visible light image registration system based on robust matching and transformation, characterized in that it comprises the following modules,
a feature extraction module for extracting the feature descriptor sets of the infrared image and the visible light image to be registered by using a robust feature point detection algorithm and feature descriptors, and establishing the initial matches, comprising the following submodules,
the characteristic point detection submodule is used for detecting characteristic points on the infrared image and the visible light image respectively by using a characteristic point detection algorithm;
a feature descriptor module for extracting feature descriptors at H scales around each feature point on the infrared image to obtain a set D1 of feature descriptors, and extracting feature descriptors at H scales around each feature point on the visible light image to obtain a set D2 of feature descriptors, H being a preset value, the feature descriptors being PIIFD feature descriptors;
a feature matching submodule for finding, by using the BBF strategy, the matches in D2 of all descriptors in D1 to obtain a set M1 and the matches in D1 of all descriptors in D2 to obtain a set M2, and selecting the matches common to M1 and M2 as the initial matches, establishing a set of matching points containing N0 elements;
An error matching filtering module for filtering error matching using constraints on the structural stability of the neighborhood of feature points, comprising the following modules,
a neighboring matching point submodule for finding, for each pair of matching points xi and yi, i = 1, ..., N0, the K nearest points, obtaining the set of K points nearest to the matching point xi, with xij denoting the j-th neighboring point of xi, and the set of K points nearest to the matching point yi, with yij denoting the j-th neighboring point of yi, K being a preset value; and a cost calculation submodule for calculating the cost ci of accepting each pair of matches according to the degree of change of the neighborhood structure;
a threshold judgment submodule for judging, according to a preset threshold λ, that a match whose cost ci is less than or equal to λ is a correct match, thereby obtaining N initial matching point pairs, the point set on the infrared image being recorded as X = {x1, …, xN}^T and the correspondingly matched point set on the visible light image as Y = {y1, …, yN}^T;
The robust parameter estimation module is used for robustly estimating the parameters of the affine transformation model between the images to be matched according to the matching relation of the characteristic points and comprises the following sub-modules,
the model construction submodule is used for establishing a transformation mathematical model corresponding to affine geometric transformation between the images to be matched and a posterior probability mathematical model corresponding to the posterior probability of correct matching of the matching point pair;
a parameter solving submodule for solving the model parameters, including the affine transformation parameters, according to the point sets X = {x1, …, xN}^T and Y = {y1, …, yN}^T;
and the image transformation module is used for transforming the infrared image in an interpolation mode by using affine transformation parameters obtained by the parameter solving submodule and finishing registration.
7. The infrared-visible image registration system based on robust matching and transformation as claimed in claim 6, wherein: in the feature point detection submodule, the feature point detection algorithm is a Harris corner detection algorithm.
8. The infrared-visible image registration system based on robust matching and transformation as claimed in claim 6, wherein: in the cost calculation submodule, the cost of accepting each pair of matches is calculated according to the degree of change of the neighborhood structure as follows,
wherein, for a match (xj, yj) whose yj belongs to the neighborhood of yi (i.e., the set of K points nearest to yi): if the corresponding xj belongs to the neighborhood of xi, the match (xj, yj) satisfies the neighborhood stability constraint due to the match (xi, yi), and the variable d(xi, xj) takes the value 0; otherwise it takes the value 1;

and for a match (xj, yj) whose xj belongs to the neighborhood of xi: if the corresponding yj belongs to the neighborhood of yi, the match (xj, yj) satisfies the neighborhood stability constraint due to the match (xi, yi), and the variable d(yi, yj) takes the value 0; otherwise it takes the value 1.
9. The infrared-visible image registration system based on robust matching and transformation as claimed in claim 6, wherein: in the model construction submodule, aiming at affine geometric transformation between images to be matched, a transformation mathematical model is established as follows:
y=f(x)=Ax+t
setting two images to be matched as an infrared image a and a visible light image b, wherein x and y are coordinate vectors of pixels on the infrared image a and the visible light image b respectively, f (x) represents an affine transformation relation, A and t are affine transformation parameters, A is a 2 x 2 matrix, and t is a 2 x 1 vector;
for the initial matching point pairs X = {x1, …, xN}^T and Y = {y1, …, yN}^T, the posterior probability pn that the n-th pair of matching points is a correct match is calculated with a posterior probability mathematical model, wherein xn represents the initial matching point on the infrared image, yn represents the initial matching point on the visible light image, n = 1, …, N, γ and σ are model parameters of the posterior probability mathematical model, e is a mathematical constant, and b is a preset coefficient.
10. The infrared-visible image registration system based on robust matching and transformation as claimed in claim 9, wherein: the parameter solving submodule comprises the following units,
an initialization unit for initialization, including setting γ = γ0, A = I_{2×2}, t = 0, P = I_{N×N}, γ0 being the initial value of γ;
the current iteration number k is 1, the model parameter sigma is calculated by adopting the following model parameter formula,
wherein the matrix T = (f(x1), …, f(xN))^T, and tr() represents the trace of the matrix;
an updating unit for updating the matrix P, including calculating, by using the posterior probability mathematical model obtained in the model construction submodule, the posterior probabilities p1, …, pN that the N pairs of matching points are respectively correctly matched, and letting P = diag(p1, …, pN), where diag() denotes a diagonal matrix;
a first parameter calculation unit configured to calculate affine transformation parameters a, t as follows:
t=μy-Aμx
wherein μx and μy are the average coordinate vectors weighted by the posterior probabilities, X̂ and Ŷ are the centered coordinate matrices, and λ is an N×1 all-ones vector;
a second parameter calculating unit for recalculating the model parameters γ, σ of the posterior probability mathematical model based on the affine transformation parameters a, t obtained by the first parameter calculating unit as follows,
the parameter gamma is calculated using the following formula,
calculating σ by using the model parameter formula of the initialization unit;
an iteration judgment unit for determining the convergence condition, including calculating the current value L; when k = kmax or |L − Lold| / |Lold| ≤ ε is satisfied, stopping the iteration and instructing the image transformation module to work; otherwise setting k = k + 1 and instructing the updating unit to work;

wherein kmax is the maximum number of iterations, ε is the convergence threshold, and Lold denotes the L calculated in the previous execution of the iteration judgment unit.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811068867.5A CN109285110B (en) | 2018-09-13 | 2018-09-13 | Infrared visible light image registration method and system based on robust matching and transformation |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109285110A true CN109285110A (en) | 2019-01-29 |
CN109285110B CN109285110B (en) | 2023-04-21 |
Family
ID=65180596
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811068867.5A Active CN109285110B (en) | 2018-09-13 | 2018-09-13 | Infrared visible light image registration method and system based on robust matching and transformation |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109285110B (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110008964A (en) * | 2019-03-28 | 2019-07-12 | 上海交通大学 | The corner feature of heterologous image extracts and description method |
CN110148161A (en) * | 2019-04-12 | 2019-08-20 | 中国地质大学(武汉) | A kind of remote sensing images error hiding elimination method and system |
CN110223330A (en) * | 2019-06-12 | 2019-09-10 | 国网河北省电力有限公司沧州供电分公司 | A kind of method for registering and system of visible light and infrared image |
CN110728296A (en) * | 2019-09-03 | 2020-01-24 | 华东师范大学 | Two-step random sampling consistency method and system for accelerating feature point matching |
CN111311657A (en) * | 2020-03-12 | 2020-06-19 | 广东电网有限责任公司广州供电局 | Infrared image homologous registration method based on improved corner main direction distribution |
CN113095385A (en) * | 2021-03-31 | 2021-07-09 | 安徽工业大学 | Multimode image matching method based on global and local feature description |
CN113591597A (en) * | 2021-07-07 | 2021-11-02 | 东莞市鑫泰仪器仪表有限公司 | Intelligent public security information system based on thermal imaging |
CN113792788A (en) * | 2021-09-14 | 2021-12-14 | 安徽工业大学 | Infrared and visible light image matching method based on multi-feature similarity fusion |
CN116385502A (en) * | 2023-03-09 | 2023-07-04 | 武汉大学 | Image registration method based on region search under geometric constraint |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2012058902A1 (en) * | 2010-11-02 | 2012-05-10 | 中兴通讯股份有限公司 | Method and apparatus for combining panoramic image |
CN105469110A (en) * | 2015-11-19 | 2016-04-06 | 武汉大学 | Non-rigid transformation image characteristic matching method based on local linear transfer and system |
CN105488754A (en) * | 2015-11-19 | 2016-04-13 | 武汉大学 | Local linear migration and affine transformation based image feature matching method and system |
CN107680054A (en) * | 2017-09-26 | 2018-02-09 | 长春理工大学 | Multisource image anastomosing method under haze environment |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110008964A (en) * | 2019-03-28 | 2019-07-12 | 上海交通大学 | The corner feature of heterologous image extracts and description method |
CN110148161A (en) * | 2019-04-12 | 2019-08-20 | 中国地质大学(武汉) | A kind of remote sensing images error hiding elimination method and system |
CN110223330B (en) * | 2019-06-12 | 2021-04-09 | 国网河北省电力有限公司沧州供电分公司 | Registration method and system for visible light and infrared images |
CN110223330A (en) * | 2019-06-12 | 2019-09-10 | 国网河北省电力有限公司沧州供电分公司 | A kind of method for registering and system of visible light and infrared image |
CN110728296B (en) * | 2019-09-03 | 2022-04-05 | 华东师范大学 | Two-step random sampling consistency method and system for accelerating feature point matching |
CN110728296A (en) * | 2019-09-03 | 2020-01-24 | 华东师范大学 | Two-step random sampling consistency method and system for accelerating feature point matching |
CN111311657A (en) * | 2020-03-12 | 2020-06-19 | 广东电网有限责任公司广州供电局 | Infrared image homologous registration method based on improved corner principal direction distribution |
CN111311657B (en) * | 2020-03-12 | 2023-04-25 | 广东电网有限责任公司广州供电局 | Infrared image homologous registration method based on improved corner principal direction distribution |
CN113095385A (en) * | 2021-03-31 | 2021-07-09 | 安徽工业大学 | Multimode image matching method based on global and local feature description |
CN113095385B (en) * | 2021-03-31 | 2023-04-18 | 安徽工业大学 | Multimode image matching method based on global and local feature description |
CN113591597A (en) * | 2021-07-07 | 2021-11-02 | 东莞市鑫泰仪器仪表有限公司 | Intelligent public security information system based on thermal imaging |
CN113792788A (en) * | 2021-09-14 | 2021-12-14 | 安徽工业大学 | Infrared and visible light image matching method based on multi-feature similarity fusion |
CN113792788B (en) * | 2021-09-14 | 2024-04-16 | 安徽工业大学 | Infrared and visible light image matching method based on multi-feature similarity fusion |
CN116385502A (en) * | 2023-03-09 | 2023-07-04 | 武汉大学 | Image registration method based on region search under geometric constraint |
CN116385502B (en) * | 2023-03-09 | 2024-04-19 | 武汉大学 | Image registration method based on region search under geometric constraint |
Also Published As
Publication number | Publication date |
---|---|
CN109285110B (en) | 2023-04-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109285110B (en) | Infrared and visible light image registration method and system based on robust matching and transformation | |
CN108388896B (en) | License plate identification method based on dynamic time sequence convolution neural network | |
Paragios et al. | Matching distance functions: A shape-to-area variational approach for global-to-local registration | |
Aldoma et al. | Multimodal cue integration through hypotheses verification for rgb-d object recognition and 6dof pose estimation | |
US7409108B2 (en) | Method and system for hybrid rigid registration of 2D/3D medical images | |
CN104601964B (en) | Indoor pedestrian target tracking method and system across cameras with non-overlapping fields of view | |
CN109903313B (en) | Real-time pose tracking method based on target three-dimensional model | |
Huachao et al. | Robust and precise registration of oblique images based on scale-invariant feature transformation algorithm | |
CN106981077A (en) | Infrared image and visible light image registration method based on DCE and LSS | |
CN113361542B (en) | Local feature extraction method based on deep learning | |
CN105469110B (en) | Non-rigid transformation image feature matching method and system based on local linear migration | |
CN112784712B (en) | Missing child early warning implementation method and device based on real-time monitoring | |
Li et al. | A robust shape model for multi-view car alignment | |
CN113724379B (en) | Three-dimensional reconstruction method and device for fusing image and laser point cloud | |
CN111192194A (en) | Panoramic image stitching method for curtain-wall building facades | |
CN111009005A (en) | Scene classification point cloud rough registration method combining geometric information and photometric information | |
CN113592923A (en) | Batch image registration method based on depth local feature matching | |
CN105488754B (en) | Image feature matching method and system based on local linear migration and affine transformation | |
Yang et al. | Non-rigid point set registration via global and local constraints | |
Dai et al. | A novel two-stage algorithm for accurate registration of 3-D point clouds | |
CN109448031B (en) | Image registration method and system based on Gaussian field constraint and manifold regularization | |
Tran et al. | 3D point cloud registration based on the vector field representation | |
Wuhrer et al. | Posture invariant surface description and feature extraction | |
CN105469112B (en) | Image feature matching method and system based on local linear migration and rigid model | |
Zhang et al. | Edge-driven object matching for UAV images and satellite SAR images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||