CN104021568B - Automatic registering method of visible lights and infrared images based on polygon approximation of contour - Google Patents

Automatic registering method of visible lights and infrared images based on polygon approximation of contour

Info

Publication number
CN104021568B
CN104021568B CN201410294262.3A
Authority
CN
China
Prior art keywords
contour
image
point
pixel
points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201410294262.3A
Other languages
Chinese (zh)
Other versions
CN104021568A (en)
Inventor
李振华
徐胜男
江耿红
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201410294262.3A
Publication of CN104021568A
Application granted
Publication of CN104021568B


Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an automatic registration method of visible light and infrared images based on polygon approximation of a contour, belonging to the field of multi-source image registration. The method mainly comprises the following five steps: I, applying an edge detection operator and a contour tracing algorithm to extract the main contours of an image; II, carrying out polygon approximation on the extracted contours; III, matching the fitted contours and selecting control points; IV, selecting an image transformation model and estimating the transformation parameters; and V, carrying out the corresponding transformation and interpolation operations on the input image according to the transformation parameters. The method provided by the invention has high registration precision and speed and can effectively solve the automatic registration problem of visible light and infrared images under rigid body transformation.

Description

Automatic registration method of visible light and infrared image based on contour polygon fitting
Technical Field
The invention relates to an automatic registration method of visible light and infrared images based on contour polygon fitting, and belongs to the field of multi-source image registration.
Background
Unlike single-sensor images, visible light and infrared images from different sensors differ greatly in gray-scale values, image contrast, sensitive objects, etc., which increases the difficulty and complexity of image registration. Common image registration methods fall into two categories: gray-scale-based methods and feature-based methods. Because the gray-scale characteristics of images acquired by different sensors are inconsistent, it is difficult to register such images with a gray-scale-based method. Feature-based registration methods extract the salient features of an image, such as contours and corner points, compress the image's information content, register quickly, and are robust to gray-level transformations. With the development of multi-source image fusion technology, feature-based registration has found wide application in the field of image registration. Such a method mainly comprises four aspects: feature extraction, feature matching, transformation model selection with parameter calculation, and coordinate transformation with interpolation.
Among line features, contour features are the most widely applied. The article: Dai X L, Khorram S. A Feature-based Image Registration Algorithm Using Improved Chain-code Representation Combined with Invariant Moments [J]. IEEE Transactions on Geoscience and Remote Sensing, 1999, 37(5): 2351-2362, achieves high image registration accuracy, but requires that good closed contours can be detected in the input images; this registration method is not applicable when matching closed contours cannot be detected. The article: Li H, Manjunath B S, Mitra S K. A Contour-based Approach to Multisensor Image Registration [J]. IEEE Transactions on Image Processing, 1995, 4(3): 320-334, proposes a contour-based multi-sensor image registration method that matches open and closed contours separately, selecting the matched corner points of open contours and the matched centroids of closed contours as control points. The method makes full use of the information of both open and closed contours, and its registration precision is relatively high; however, the algorithm is complex, and the registration accuracy is strongly affected by corner detection.
Disclosure of Invention
The invention aims to provide an automatic registration method of visible light and infrared images based on contour polygon fitting, aiming at the defects of the existing multi-source image registration method.
In order to achieve the purpose, the invention adopts the following technical scheme:
a method for automatically registering visible light and infrared images based on contour polygon fitting comprises the following steps:
1) respectively extracting main outlines of the reference image and the image to be registered;
2) the extracted contours contain many redundant points and noise, which not only increases the matching difficulty but also, for complex contours, easily introduces errors into the registration process; polygon fitting is therefore performed on the extracted contours to simplify complex contours and remove the redundant points and noise in them;
3) carrying out contour matching and control point selection;
4) selecting an image transformation model, and estimating image transformation parameters according to the control points;
5) according to the estimated transformation parameters, resampling and interpolation operations are performed on the image to be registered. Calculating the coordinates of each pixel of the registration result image one by one from the transformation parameters generally yields non-integer pixel coordinates; this problem is solved with an interpolation operation.
The contour extraction in step 1) is mainly divided into two parts: edge detection and contour tracking. The infrared image is taken as the reference image and the visible light image as the image to be registered. Although the infrared and visible images have low gray-scale similarity, they share common target information, and the contours of that common target information are strongly correlated. A Canny operator is used to detect the contours of the reference image and of the image to be registered, which have relatively high correlation. The contours are described with Freeman chain codes, and contour tracking stores the coordinates and chain code of each boundary point. Describing a contour by chain codes requires recording the starting point of the contour and the list of chain code values of each point on the contour relative to the previous point. Taking the 8-neighborhood chain code as an example, the contour starting point is first searched from left to right and from top to bottom; the second contour point is then searched among the neighbors in counterclockwise order, and the remaining contour points likewise in counterclockwise order, until the next contour starting point is found. The traced contours are of varying lengths and contain numerous chaotic texture edges. To improve efficiency and obtain the useful contours most favorable for matching, a contour length threshold T_C can be set in combination with the extraction effect during contour tracking; only contours whose length exceeds the threshold are retained.
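As an illustrative sketch (not the patent's implementation), the chain-code bookkeeping described above can be written as follows; the direction convention (code 0 = step to the right, codes increasing counterclockwise) and all names are assumptions:

```python
# Freeman 8-direction chain codes.  Assumed convention (illustrative):
# code 0 = step (+1, 0), codes increase counterclockwise.
DIRECTIONS = [(1, 0), (1, 1), (0, 1), (-1, 1), (-1, 0), (-1, -1), (0, -1), (1, -1)]
CODE_OF = {step: code for code, step in enumerate(DIRECTIONS)}

def to_chain_code(points):
    """Encode an 8-connected boundary as (start point, Freeman chain codes).

    points: list of (x, y) coordinates of successive boundary pixels, as
    stored during contour tracking.
    """
    codes = []
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        step = (x1 - x0, y1 - y0)
        if step not in CODE_OF:
            raise ValueError("contour is not 8-connected at %r -> %r" % ((x0, y0), (x1, y1)))
        codes.append(CODE_OF[step])
    return points[0], codes

def keep_long_contours(contours, t_c):
    """Keep only traced contours whose pixel count exceeds the threshold T_C."""
    return [c for c in contours if len(c) > t_c]
```

The start point plus the code list fully reconstructs the boundary, which is why only these two items need to be stored per contour.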
The polygon fitting of the contours in step 2) adopts an iterative endpoint fitting method; see the paper: Zhang Sai, Zhang Hua, Zhang Xinhong. A fast algorithm for polygon fitting in image processing [J]. Computer Development and Applications, 2001, 4(10): 474-. In this fitting algorithm, the more iterations, the higher the fitting accuracy and the closer the fitted polygon is to the original contour line. The algorithm fits one contour mainly as follows:
(1) setting a distance threshold T;
(2) selecting a starting point A and an end point B of a contour line as two end points of a fitting polygon;
(3) calculating the distances from all points on the contour segment between A and B to the line segment AB, selecting the point C with the maximum distance, and denoting the maximum distance value H;
(4) comparing H with T: if H > T, C is an endpoint of the fitting polygon, and the algorithm continues with step (5); if H ≤ T, the algorithm exits, indicating that there is no further endpoint on this contour segment;
(5) the endpoint C divides the contour AB into the two parts AC and CB, and the endpoints on the two partial contours are found by repeating steps (2), (3), (4) and (5). All endpoints on the curve AB (A, B, C, D, …) are thus found and connected in sequence to obtain the final fitting polygon; the endpoints A, B, C, D, … are the polygon vertices.
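The five steps above can be sketched as a short recursive routine; this is the standard iterative endpoint (Douglas-Peucker) scheme, and the function names are illustrative assumptions, not the patent's code:

```python
import math

def point_line_distance(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:                 # degenerate chord
        return math.hypot(px - ax, py - ay)
    return abs(dy * (px - ax) - dx * (py - ay)) / math.hypot(dx, dy)

def iterative_endpoint_fit(points, t):
    """Iterative endpoint polygon fitting of an ordered contour.

    t is the distance threshold T; the returned vertices are the
    polygon endpoints A, B, C, D, ... used later as feature points.
    """
    if len(points) < 3:
        return list(points)
    a, b = points[0], points[-1]
    # step (3): farthest interior point C from chord AB, max distance H
    h, idx = max((point_line_distance(p, a, b), i)
                 for i, p in enumerate(points[1:-1], start=1))
    if h <= t:                              # step (4): no further endpoint
        return [a, b]
    # step (5): split at C and process both halves recursively
    left = iterative_endpoint_fit(points[:idx + 1], t)
    right = iterative_endpoint_fit(points[idx:], t)
    return left[:-1] + right                # avoid duplicating C
```

On a contour bending at a right angle, the routine keeps exactly the two ends and the corner, which matches the claim that corners survive as polygon vertices.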
The contour matching in step 3) adopts chain-code matching. A digitized contour curve can be represented by the eight-direction Freeman chain code. As shown in FIG. 2, P_i denotes the current pixel, i the pixel index value, and C_i the chain code value of P_i, with C_i ∈ {0, 1, …, 7}. If the pixel following P_i lies in position b_6, then C_i = 6. Let the reference image and the image to be registered be the first and second images respectively; the chain code of each contour in both images is obtained according to the Freeman coding scheme, so the polygon-fitted contours of the two images are represented by chain codes.
Assume chain code {a_i} represents a contour A of length N_A in the first image, and chain code {b_i} represents a contour B of length N_B in the second image. Contour segments of length n are intercepted starting from the (k+1)-th pixel on A and the (l+1)-th pixel on B respectively, where the chain code value of the (k+1)-th pixel on A is a_k and that of the (l+1)-th pixel on B is b_l. The matching degree of the two contour segments is defined as:
in the formula,wherein i, j ∈ [0, n-1 ]],Dkl nRepresenting a length of nA, B degree of matching of two profile segments, ak+jDenotes the chain code value of the (k + j +1) th pixel on A, bl+jA chain code value representing the (l + j +1) th pixel on B; setting a matching degree threshold TDWhen is coming into contact withTwo profile segments intercepted are shown as matching.
To obtain control points, matching feature points on the matching contours must be selected. The characteristic of each contour is a collection of small contour-feature segments centered at feature points. Therefore, a contour segment of a certain length near a feature point can be selected as the matching unit used to find the matching contour segment in the other image. That is, with a feature point on the contour as center, T_L pixels are selected before and after it, giving a feature contour segment of length (2T_L + 1); the other image is traversed, the matching degree is computed continuously, and among the candidates whose matching degree reaches T_D, the segment with the maximum matching degree is the matching contour segment; its corresponding feature point is the matching feature point.
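A minimal sketch of the matching-unit search described above, assuming the matching degree is the fraction of positions at which the two chain-code segments agree; all function names are illustrative:

```python
def matching_degree(a, b, k, l, n):
    """Matching degree D_kl^n of two chain-code segments of length n.

    a, b: chain-code lists of contours A and B; the segments start at
    indices k and l.  The degree is the fraction of agreeing codes.
    """
    d = sum(1 for j in range(n) if a[k + j] == b[l + j])
    return d / n

def best_match(feature_codes, other, t_d):
    """Slide a feature segment along another contour's chain code.

    Returns (best start index, best degree) among positions whose
    degree reaches the threshold T_D, or None if no position qualifies.
    """
    n = len(feature_codes)
    best = None
    for l in range(len(other) - n + 1):
        deg = matching_degree(feature_codes, other, 0, l, n)
        if deg >= t_d and (best is None or deg > best[1]):
            best = (l, deg)
    return best
```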
The detection of the feature points greatly influences the registration accuracy of the image. Observing the extracted contour images, the contour feature points mainly comprise corner points, tangent points and inflection points. Corner points have large curvature and are relatively easy to extract; tangent points and inflection points have smaller curvature and are difficult to extract. With a corner detection method based on Freeman chain codes, missed detections always occur, which increases the difficulty of setting the matching threshold parameters; moreover, when the number of matched corner points is small, the registration accuracy may be reduced. The corner detection algorithm is detailed in: Freuk, Guo Lei, Zhao Tianyun, et al. Curve matching method described by Freeman chain codes [J]. Computer Engineering and Applications, 2012, 48(4): 5-8. As can be seen from the polygon fitting process, all corner points, tangent points and inflection points are contained among the polygon vertices. Therefore, the polygon vertices can be selected as the feature points, avoiding missed detection and giving a better registration effect. The specific operation flowchart is shown in FIG. 3.
The transformation model in step 4) adopts a rigid body transformation model:

x_i' = x_i·cosθ − y_i·sinθ + Δx
y_i' = x_i·sinθ + y_i·cosθ + Δy

where (x_i', y_i') are the coordinates of a control point in the image to be registered and (x_i, y_i) the coordinates of the matching control point in the reference image; Δx is the horizontal displacement in pixels; Δy is the vertical displacement in pixels; θ is the rotation angle in degrees.
A transformation parameter matrix is defined as

M = [ cosθ  −sinθ  Δx
      sinθ   cosθ  Δy ],

the control point matrix of the image to be registered as

Y = [ x_1'  x_2'  …  x_m'
      y_1'  y_2'  …  y_m' ],

and the reference image control point matrix as

X = [ x_1  x_2  …  x_m
      y_1  y_2  …  y_m
      1    1    …  1 ],

where m is the number of matched control point pairs, m > 2, and (x_1', y_1') and (x_1, y_1), (x_2', y_2') and (x_2, y_2), …, (x_m', y_m') and (x_m, y_m) are the matched control point pairs. Then

Y = M·X,

and the transformation parameter matrix M can be calculated from this equation.
The transformation parameter matrix is estimated with a trimmed least squares algorithm, which mainly comprises the following steps:
(1) assuming the m control point pairs form a set P, the least squares solution of the transformation matrix M over all points in P is:

M = Y·X^T·(X·X^T)^(-1)
(2) using the transformation matrix M obtained in (1) and the matrix X, the estimate Ŷ = M·X is obtained, and the error between the estimated value and the actual value of each control point pair is calculated:

Error_i = sqrt( (x_i' − x̂_i')^2 + (y_i' − ŷ_i')^2 )

where (x_i', y_i') is the actual value of control point i in the image to be registered and (x̂_i', ŷ_i') is its estimate, i ∈ {1, 2, …, m}.
(3) The matching control point pair with the maximum error Error_max is deleted and the set is updated to P'. A new transformation matrix M' is then calculated from the updated P' by least squares.
(4) An error threshold T_E is set, and steps (2) and (3) are repeated until Error_max < T_E, giving the final transformation matrix.
Provided the outliers number no more than half of the points, the trimmed least squares method has strong fault tolerance: it continuously removes outliers and correctly estimates the transformation parameters.
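The trimmed least squares loop of steps (1)-(4) can be sketched as follows; the normal-equation solution M = Y·X^T·(X·X^T)^(-1) is computed with a hand-rolled 3x3 inverse, and all names are illustrative assumptions, not the patent's code:

```python
import math

def invert3(m):
    """Inverse of a 3x3 matrix via the adjugate and determinant."""
    def cof(r, c):
        rs = [i for i in range(3) if i != r]
        cs = [j for j in range(3) if j != c]
        minor = m[rs[0]][cs[0]] * m[rs[1]][cs[1]] - m[rs[0]][cs[1]] * m[rs[1]][cs[0]]
        return (-1) ** (r + c) * minor
    det = sum(m[0][c] * cof(0, c) for c in range(3))
    return [[cof(c, r) / det for c in range(3)] for r in range(3)]

def lstsq_transform(ref, reg):
    """Least-squares 2x3 matrix M with [x'; y'] = M [x; y; 1]."""
    m = len(ref)
    X = [[p[0] for p in ref], [p[1] for p in ref], [1.0] * m]
    Y = [[p[0] for p in reg], [p[1] for p in reg]]
    XXt = [[sum(X[i][k] * X[j][k] for k in range(m)) for j in range(3)] for i in range(3)]
    YXt = [[sum(Y[i][k] * X[j][k] for k in range(m)) for j in range(3)] for i in range(2)]
    inv = invert3(XXt)
    return [[sum(YXt[i][k] * inv[k][j] for k in range(3)) for j in range(3)] for i in range(2)]

def trimmed_lstsq(ref, reg, t_e):
    """Refit after dropping the pair with the largest residual Error_max,
    until Error_max < T_E (outliers assumed fewer than half the points)."""
    ref, reg = list(ref), list(reg)
    while True:
        M = lstsq_transform(ref, reg)
        errs = []
        for (x, y), (xp, yp) in zip(ref, reg):
            ex = M[0][0] * x + M[0][1] * y + M[0][2] - xp
            ey = M[1][0] * x + M[1][1] * y + M[1][2] - yp
            errs.append(math.hypot(ex, ey))
        worst = max(range(len(errs)), key=errs.__getitem__)
        if errs[worst] < t_e or len(ref) <= 3:
            return M
        del ref[worst], reg[worst]
```

With eight correct translation pairs and one gross outlier, the loop deletes the outlier on the first pass and then fits the remaining pairs exactly.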
The interpolation algorithm in step 5) is bilinear interpolation. After the transformation parameters are obtained, the corresponding coordinate transformation is applied to the image to be registered. Calculating the coordinates of each pixel of the registration result image one by one from the transformation parameters generally yields non-integer coordinates, which is solved by interpolation. Bilinear interpolation has a small computational cost, largely eliminates the sawtooth phenomenon, and gives a good interpolation effect. Its essence is to estimate the pixel value at the current non-integer point from the weighted pixel values of its 4 integer-coordinate neighbors; a schematic of the algorithm is shown in FIG. 4.
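A minimal sketch of bilinear interpolation as described, assuming a gray image stored as nested lists indexed img[y][x]; names are illustrative:

```python
import math

def bilinear(img, x, y):
    """Gray value at a non-integer (x, y), estimated from the weighted
    values of the 4 surrounding integer pixels; img is indexed img[y][x]."""
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1 = min(x0 + 1, len(img[0]) - 1)       # clamp at the image border
    y1 = min(y0 + 1, len(img) - 1)
    fx, fy = x - x0, y - y0
    top = (1 - fx) * img[y0][x0] + fx * img[y0][x1]
    bottom = (1 - fx) * img[y1][x0] + fx * img[y1][x1]
    return (1 - fy) * top + fy * bottom
```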
TABLE 1
Table 1 shows the results of an experimental comparison of the method of the invention with the method of the document Li H, Manjunath B S, Mitra S K. A Contour-based Approach to Multisensor Image Registration [J]. IEEE Transactions on Image Processing, 1995, 4(3): 320-334, and with manual registration. The experimental results show that the parameter estimates of the method of the invention are closest to the actual parameter values, and the RMSE of the method of the invention is the smallest; compared with the other two methods, the registration accuracy of the method of the invention is therefore the highest. The registration results of the three images to be registered by the method of the present invention are shown in FIGS. 13-15.
The invention has high registration precision and high speed, and can effectively solve the automatic registration problem of the visible light image and the infrared image under rigid body transformation.
Drawings
FIG. 1 is a schematic diagram of a curvilinear polygon fit;
FIG. 2 is a schematic diagram of the direction values and the direction of the Freeman chain codes;
FIG. 3 is a flow chart of selecting matching polygon vertices;
FIG. 4 is a schematic diagram of bilinear interpolation (● represents integer pixels in an image);
FIG. 5 is an input infrared image;
FIG. 6 is an input visible light image;
FIG. 7 is a first image to be registered;
FIG. 8 is a second image to be registered;
FIG. 9 is a third image to be registered;
FIG. 10 is a graph of contour matching between a reference image and an image to be registered (the left is the reference image and the right is the image to be registered);
FIG. 11 is a second contour matching graph of the reference image and the image to be registered (the left is the reference image, and the right is the image to be registered);
FIG. 12 is a three-contour matching graph of a reference image and an image to be registered (the left is the reference image, and the right is the image to be registered);
FIG. 13 is a graph of the registration results for image one to be registered;
FIG. 14 is a diagram of the registration result of the second image to be registered;
fig. 15 is a registration result diagram of the image to be registered three.
Detailed Description
The invention is further illustrated with reference to the figures and examples.
Step 1: input the reference image and the image to be registered. As shown in FIGS. 5-6, two registered 734×473 infrared and visible light images are given. Taking the infrared image as the reference image, the following three geometric transformations are applied to the visible light image: (1) rotation 3° counterclockwise; (2) horizontal displacement 5, vertical displacement 5; (3) horizontal displacement 5, vertical displacement 5, and counterclockwise rotation 3°. The three images to be registered thus obtained are shown in FIGS. 7-9.
Step 2: extract the contour information of the reference image and the three images to be registered. First, edge detection is performed on each image with the Canny operator to obtain contour images of the reference image and the images to be registered carrying large correlated information. The detected contours are then tracked and the coordinates of each boundary point are stored. In this process, short contours and isolated points in the image are ignored, and only contours whose length exceeds a threshold T_C are retained; here T_C is taken as 80 (the contour matching graphs are shown in FIGS. 10-12). If the input image is noisy, it should be filtered and denoised before edge detection.
Step 3: perform polygon fitting on the extracted contours. Comparing the fitting effect of different thresholds shows that the larger T is, the more severely the fitted curve segments deform, especially in uneven areas of the original contour. Selecting T = 2 removes the redundant points on the contour while largely maintaining the shape of the original contour curve. At the same time, the vertices of the fitted contour polygons are stored. Here the polygon vertices are selected as the feature points of the contour: they cover corner points, tangent points, inflection points and other common feature points, so there is no missed detection, and the feature points can be stored during polygon fitting, reducing the algorithm complexity.
Step 4: encode the polygon-fitted contours with Freeman chain codes and select, centered at each feature point, T_L pixels before and after it, giving a feature chain-code segment of length (2T_L + 1) as the matching unit for contour matching. Let the reference image and the image to be registered be the first and second images respectively. Assume chain code {a_i} represents a contour A of length N_A in the first image, and chain code {b_i} represents a contour B of length N_B in the second image. Contour segments of length n are intercepted starting from the (k+1)-th pixel on A and the (l+1)-th pixel on B respectively, where the chain code value of the (k+1)-th pixel on A is a_k and that of the (l+1)-th pixel on B is b_l. The matching degree of the two contour segments is defined as:
D_kl^n = (1/n) · Σ_{j=0}^{n−1} d_j,  with d_j = 1 if a_{k+j} = b_{l+j} and d_j = 0 otherwise, i, j ∈ [0, n−1].

Here D_kl^n denotes the matching degree of the two contour segments of length n on A and B, a_{k+j} denotes the chain code value of the (k+j+1)-th pixel on A, and b_{l+j} the chain code value of the (l+j+1)-th pixel on B. A matching degree threshold T_D is set; when D_kl^n ≥ T_D, the two intercepted contour segments are matched. The feature points corresponding to the matched contour segments are the matching feature points.
Step 5: select the image transformation model and estimate the transformation parameters. A rigid body transformation between the reference image and the image to be registered is assumed:

x_i' = x_i·cosθ − y_i·sinθ + Δx
y_i' = x_i·sinθ + y_i·cosθ + Δy

where (x_i', y_i') are the coordinates of a control point in the image to be registered and (x_i, y_i) the coordinates of the matching control point in the reference image; Δx is the horizontal displacement in pixels; Δy is the vertical displacement in pixels; θ is the rotation angle in degrees.
A transformation parameter matrix is defined as

M = [ cosθ  −sinθ  Δx
      sinθ   cosθ  Δy ],

the control point matrix of the image to be registered as

Y = [ x_1'  x_2'  …  x_m'
      y_1'  y_2'  …  y_m' ],

and the reference image control point matrix as

X = [ x_1  x_2  …  x_m
      y_1  y_2  …  y_m
      1    1    …  1 ],

where m is the number of matched control point pairs, m > 2, and (x_1', y_1') and (x_1, y_1), (x_2', y_2') and (x_2, y_2), …, (x_m', y_m') and (x_m, y_m) are the matched control point pairs. Then Y = M·X.
the feature points which are mismatched are continuously removed by using a cutting least square method, and a transformation parameter matrix of the image is estimated according to the selected control points, as shown in fig. 10-12, which are contour matching result graphs of three images to be registered, wherein the matched control points after being selected are represented by '■'.
Step 6: and performing resampling and interpolation operation on the three images to be registered according to the calculated transformation parameters. Fig. 13-15 show the registration results of three images to be registered.
Step 7: describe the registration accuracy. The registration accuracy is described in terms of the root mean square error (RMSE); the smaller the RMSE value, the higher the registration accuracy. The RMSE is calculated as:

RMSE = sqrt( (1/n') · Σ_{i=1}^{n'} [ (x_i' − (x_i·cosθ − y_i·sinθ + Δx))^2 + (y_i' − (x_i·sinθ + y_i·cosθ + Δy))^2 ] )

where n' is the final number of control point pairs, with n' > 2 in the experiment to ensure a correct solution of the transformation parameters; (x_i, y_i) are the control point coordinates in the reference image; (x_i', y_i') are the control point coordinates in the image to be registered; Δx is the horizontal displacement in pixels; Δy is the vertical displacement in pixels; θ is the rotation angle in degrees.
The content of the invention is limited to the registration of visible light images and infrared images under rigid body transformation, and other transformation models are not within the spirit and principle of the invention.
In addition, the invention imposes no requirement on the number of closed contours extracted from the input images.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the scope of the present invention, and it should be understood by those skilled in the art that various modifications and variations can be made without inventive efforts by those skilled in the art based on the technical solution of the present invention.

Claims (5)

1. A method for automatically registering visible light and infrared images based on contour polygon fitting is characterized by comprising the following steps:
1) respectively extracting main outlines of the reference image and the image to be registered;
extracting the main contours is divided into two parts, edge detection and contour tracking; edge detection adopts the Canny operator, which gives the best compromise between noise suppression and edge localization, and all contours are tracked in the counterclockwise direction;
2) performing polygon fitting on the main profile extracted in the step 1) by adopting an iterative endpoint fitting algorithm, simplifying the complex profile on the premise of keeping the profile characteristics, and removing redundant points and noise in the profile;
3) matching the fitted contours and selecting control points; in order to obtain the control points, matching feature points on the matching contour need to be selected, and polygon vertexes are selected as the feature points;
4) selecting an image transformation model, and estimating image transformation parameters according to the control points;
5) resampling the image to be registered and carrying out interpolation operation;
calculating the coordinates of each pixel in the registration result image one by one according to the transformation parameters obtained in step 4), and obtaining the gray value of each pixel in the registration result image with an interpolation algorithm.
2. The method for automatic registration of visible and infrared images based on contour polygon fitting according to claim 1, characterized in that: no requirement is imposed on the number of closed contours extracted from the input images in step 1), and automatic registration of visible light and infrared images under rigid body transformation can be achieved provided clear main contours of the images are obtained.
3. The method for automatic registration of visible and infrared images based on contour polygon fitting according to claim 1, characterized in that: performing polygon fitting on the extracted main profile by adopting an iterative endpoint fitting algorithm in the step 2), wherein the method for fitting one profile mainly comprises the following steps:
(1) setting a distance threshold T;
(2) selecting a starting point A and an end point B of a contour line as two end points of a fitting polygon;
(3) calculating the distances from all points on the contour segment between A and B to the line segment AB, selecting the point C with the maximum distance, and denoting the maximum distance value H;
(4) comparing H with T: if H > T, C is an endpoint of the fitting polygon, and the algorithm continues with step (5); if H ≤ T, the algorithm exits, there being no further endpoint on this contour segment;
(5) the endpoint C divides the contour AB into the two parts AC and CB, and the endpoints on the two partial contours are found by repeating steps (2), (3), (4) and (5); all endpoints on the curve AB (A, B, C, D, …) are thus found and connected in sequence to obtain the final fitting polygon, the endpoints A, B, C, D, … being the polygon vertices.
4. The method for automatic registration of visible and infrared images based on contour polygon fitting according to claim 1, characterized in that: the outline matching in the step 3) adopts a chain code matching mode;
firstly, according to the Freeman chain-code coding scheme, each fitted contour is represented by its Freeman chain code; the reference image and the image to be registered are taken as the first and second images respectively; assume chain code {a_i} represents a contour A of length N_A in the first image, and chain code {b_i} represents a contour B of length N_B in the second image; contour segments of length n are intercepted starting from the (k+1)-th pixel on A and the (l+1)-th pixel on B respectively, where the chain code value of the (k+1)-th pixel on A is a_k and that of the (l+1)-th pixel on B is b_l; the matching degree of the two contour segments is defined as:
D_kl^n = (1/n) · Σ_{j=0}^{n−1} d_j,  with d_j = 1 if a_{k+j} = b_{l+j} and d_j = 0 otherwise, i, j ∈ [0, n−1]; D_kl^n denotes the matching degree of the two contour segments of length n on A and B; a_{k+j} denotes the chain code value of the (k+j+1)-th pixel on A and b_{l+j} the chain code value of the (l+j+1)-th pixel on B; a matching degree threshold T_D is set, and when D_kl^n ≥ T_D the two intercepted contour segments are matched;
the characteristic of each contour is a set of several small contour-feature segments centered at feature points, so that a contour segment near a feature point can be selected as the matching unit with which to search for the matching contour segment in the other image; with the polygon vertices of step 2) as centers, the same number of pixels before and after each vertex form a feature contour segment, and the matched contour segment is found in the other image through the contour matching degree calculation, the corresponding feature point being the matching feature point.
5. The method for automatic registration of visible and infrared images based on contour polygon fitting according to claim 1, characterized in that: in the step 4), the forward transformation from the pixel points of the image to be registered to the pixel points of the registration result image cannot ensure that each pixel point of the image to be matched has a corresponding pixel point in the registration result image, namely, an unassigned pixel point may appear in the registration result image; in order to avoid the situation, inverse transformation is adopted, namely, for each pixel point in the reference image, the pixel point corresponding to the pixel point in the image to be registered is solved reversely;
the rigid body transformation model is:

x' = x·cosθ − y·sinθ + Δx
y' = x·sinθ + y·cosθ + Δy

wherein (x, y) are the coordinates of the control points in the image to be registered and (x', y') are the coordinates of the control points in the reference image; Δx is the horizontal displacement in pixels; Δy is the vertical displacement in pixels; θ is the rotation angle in degrees;
defining the transformation parameter matrix:

M = [ cosθ  −sinθ  Δx ]
    [ sinθ   cosθ  Δy ]

the control point matrix of the image to be registered:

X = [ x_1  x_2  …  x_m ]
    [ y_1  y_2  …  y_m ]
    [ 1    1    …  1   ]

and the reference image control point matrix:

Y = [ x'_1  x'_2  …  x'_m ]
    [ y'_1  y'_2  …  y'_m ]

wherein m is the number of matched control point pairs, m > 2, and (x_i, y_i) and (x'_i, y'_i) are matched control point pairs; the transformation model is then written as Y = MX;
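The matrix definitions above can be checked numerically: multiplying the parameter matrix M by the homogeneous control point matrix X of the image to be registered reproduces the reference image control point matrix Y (the angle and displacement values below are arbitrary examples):

```python
import numpy as np

theta = np.radians(30.0)
dx, dy = 5.0, -2.0
c, s = np.cos(theta), np.sin(theta)

# 2x3 transformation parameter matrix M
M = np.array([[c, -s, dx],
              [s,  c, dy]])

# control points (x_i, y_i) of the image to be registered, homogeneous form (3xm)
pts = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [7.0, 3.0]]).T
X = np.vstack([pts, np.ones(pts.shape[1])])

# reference image control point matrix (2xm)
Y = M @ X
```

The origin (0, 0) maps to (Δx, Δy), confirming that the third row of X carries the translation.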
estimating the transformation parameter matrix by a trimmed least squares algorithm, which mainly comprises the following steps:
(1) assuming that the m pairs of control points form a set P, the least squares solution of the transformation matrix M over all points in P is calculated as:

M = Y·Xᵀ·(X·Xᵀ)⁻¹
(2) using the transformation matrix M obtained in (1) and the matrix X, the estimate Ŷ = MX is obtained, and the error between the estimated value and the actual value of each control point is calculated:

Error_i = ‖(x'_i, y'_i) − (x̂_i, ŷ_i)‖

wherein (x'_i, y'_i) is the actual value of control point i in the reference image, (x̂_i, ŷ_i) is the estimated value of control point i, and i ∈ {1, 2, …, m};
(3) the matched control point pair corresponding to the maximum error Error_max is deleted and the set is updated to P'; a new transformation matrix M' is then calculated from the updated P' by the least squares method;
(4) an error threshold T_E is set and steps (2) and (3) are repeated until Error_max < T_E, yielding the final transformation matrix;
under the condition that the outliers number no more than half of the control points, the trimmed least squares algorithm has strong fault tolerance: it progressively removes the outliers and correctly estimates the transformation parameters.
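The trimmed least squares loop of steps (1)–(4) can be sketched in numpy as follows (the function name and the stopping details are illustrative assumptions):

```python
import numpy as np

def trimmed_least_squares(X, Y, T_E):
    """Repeatedly solve Y ≈ M X by least squares and drop the control point
    pair with the largest residual until the maximum error falls below T_E.
    X: 3xm homogeneous points (image to be registered); Y: 2xm reference points."""
    X, Y = X.copy(), Y.copy()
    while X.shape[1] > 3:
        M = Y @ X.T @ np.linalg.inv(X @ X.T)     # least squares solution
        err = np.linalg.norm(Y - M @ X, axis=0)  # per-point Euclidean error
        worst = np.argmax(err)
        if err[worst] < T_E:
            break
        X = np.delete(X, worst, axis=1)          # remove the worst outlier pair
        Y = np.delete(Y, worst, axis=1)
    return Y @ X.T @ np.linalg.inv(X @ X.T)
```

With a single gross outlier among otherwise exact correspondences, the loop removes the outlier on the first pass and then recovers the exact parameters.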
CN201410294262.3A 2014-06-25 2014-06-25 Automatic registering method of visible lights and infrared images based on polygon approximation of contour Expired - Fee Related CN104021568B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410294262.3A CN104021568B (en) 2014-06-25 2014-06-25 Automatic registering method of visible lights and infrared images based on polygon approximation of contour

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410294262.3A CN104021568B (en) 2014-06-25 2014-06-25 Automatic registering method of visible lights and infrared images based on polygon approximation of contour

Publications (2)

Publication Number Publication Date
CN104021568A CN104021568A (en) 2014-09-03
CN104021568B true CN104021568B (en) 2017-02-15

Family

ID=51438306

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410294262.3A Expired - Fee Related CN104021568B (en) 2014-06-25 2014-06-25 Automatic registering method of visible lights and infrared images based on polygon approximation of contour

Country Status (1)

Country Link
CN (1) CN104021568B (en)

Families Citing this family (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105447849A (en) * 2015-11-12 2016-03-30 北京建筑大学 Remote-sensing image segmentation water body polygon outline generalization method
CN106353288A (en) * 2016-08-30 2017-01-25 常州正易晟网络科技有限公司 Finished product chemical residue detection device and method based on fluorescent analysis
CN106353319A (en) * 2016-08-30 2017-01-25 常州正易晟网络科技有限公司 Device and method for automatically analyzing texture and process of sewing thread based on video recognition
CN106338263A (en) * 2016-08-30 2017-01-18 常州正易晟网络科技有限公司 Metalwork surface flatness detecting device and method
CN106504225A (en) * 2016-09-27 2017-03-15 深圳增强现实技术有限公司 A kind of recognition methodss of regular polygon and device
CN106548467B (en) * 2016-10-31 2019-05-14 广州飒特红外股份有限公司 The method and device of infrared image and visual image fusion
CN107240128B (en) * 2017-05-09 2020-09-11 北京理工大学 X-ray and color photo registration method based on contour features
WO2018214151A1 (en) * 2017-05-26 2018-11-29 深圳配天智能技术研究院有限公司 Image processing method, terminal device and computer storage medium
CN107368828A (en) * 2017-07-24 2017-11-21 中国人民解放军装甲兵工程学院 High definition paper IMAQ decomposing system and method
CN107633528A (en) * 2017-08-22 2018-01-26 北京致臻智造科技有限公司 A kind of rigid body recognition methods and system
CN107845059A (en) * 2017-10-12 2018-03-27 北京宇航时代科技发展有限公司 Human meridian point's state dynamically normalized digital analysis system and method
CN108416839B (en) * 2018-03-08 2022-04-08 云南电网有限责任公司电力科学研究院 Three-dimensional reconstruction method and system for contour line of multiple X-ray rotating images
CN109166098A (en) * 2018-07-18 2019-01-08 上海理工大学 Work-piece burr detection method based on image procossing
CN110246173B (en) * 2018-08-14 2023-11-03 浙江大华技术股份有限公司 Method and device for judging shape area
CN109886878B (en) * 2019-03-20 2020-11-03 中南大学 Infrared image splicing method based on coarse-to-fine registration
CN110082355A (en) * 2019-04-08 2019-08-02 安徽驭风风电设备有限公司 A kind of wind electricity blade detection system
CN110245575B (en) * 2019-05-21 2023-04-25 东华大学 Human body type parameter capturing method based on human body contour line
CN110414470B (en) * 2019-08-05 2023-05-09 深圳市矽赫科技有限公司 Inspection method based on terahertz and visible light
CN110596120A (en) * 2019-09-06 2019-12-20 深圳新视智科技术有限公司 Glass boundary defect detection method, device, terminal and storage medium
CN110766730B (en) * 2019-10-18 2023-02-28 上海联影智能医疗科技有限公司 Image registration and follow-up evaluation method, storage medium and computer equipment
CN113409371B (en) * 2021-06-25 2023-04-07 浙江商汤科技开发有限公司 Image registration method and related device and equipment

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567979A (en) * 2012-01-20 2012-07-11 南京航空航天大学 Vehicle-mounted infrared night vision system and multi-source images fusing method thereof
CN103514606A (en) * 2013-10-14 2014-01-15 武汉大学 Heterology remote sensing image registration method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Contour-based multi-sensor image matching algorithm under rigid-body transformation; Chen Guiyou et al.; Systems Engineering and Electronics; 2007-07-31; Vol. 29, No. 7; Section 1 *

Also Published As

Publication number Publication date
CN104021568A (en) 2014-09-03

Similar Documents

Publication Publication Date Title
CN104021568B (en) Automatic registering method of visible lights and infrared images based on polygon approximation of contour
CN109903327B (en) Target size measurement method of sparse point cloud
CN109658515B (en) Point cloud meshing method, device, equipment and computer storage medium
Khaloo et al. Robust normal estimation and region growing segmentation of infrastructure 3D point cloud models
Choi et al. RGB-D edge detection and edge-based registration
Fantoni et al. Accurate and automatic alignment of range surfaces
CN110688947B (en) Method for synchronously realizing human face three-dimensional point cloud feature point positioning and human face segmentation
CN108225319B (en) Monocular vision rapid relative pose estimation system and method based on target characteristics
CN108830888B (en) Coarse matching method based on improved multi-scale covariance matrix characteristic descriptor
Chen et al. Robust affine-invariant line matching for high resolution remote sensing images
CN106981077A (en) Infrared image and visible light image registration method based on DCE and LSS
CN106296587B (en) Splicing method of tire mold images
US20150348269A1 (en) Object orientation estimation
Petit et al. Augmenting markerless complex 3D objects by combining geometrical and color edge information
Kerautret et al. 3D geometric analysis of tubular objects based on surface normal accumulation
Chen et al. A novel Fourier descriptor based image alignment algorithm for automatic optical inspection
CN109064473B (en) 2.5D ultrasonic panoramic image segmentation method
CN111259788A (en) Method and device for detecting head and neck inflection point and computer equipment
CN117237428B (en) Data registration method, device and medium for three-dimensional point cloud
Dambreville et al. A geometric approach to joint 2D region-based segmentation and 3D pose estimation using a 3D shape prior
CN115222912A (en) Target pose estimation method and device, computing equipment and storage medium
Sakai et al. Phase-based window matching with geometric correction for multi-view stereo
CN102679871B (en) Rapid detection method of sub-pixel precision industrial object
Mokhtarian Silhouette-based object recognition with occlusion through curvature scale space
Parmar et al. An efficient technique for subpixel accuracy using integrated feature based image registration

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20170215

Termination date: 20190625
