CN112669360B - Multi-source image registration method based on non-closed multi-dimensional contour feature sequence - Google Patents


Info

Publication number
CN112669360B
Authority
CN
China
Prior art keywords
image
edge
curve
line segment
source image
Prior art date
Legal status: Active
Application number
CN202011379734.7A
Other languages: Chinese (zh)
Other versions: CN112669360A (en)
Inventor
曾操
牟一飞
朱圣棋
廖桂生
李力新
陶海红
许京伟
Current Assignee: Xidian University
Original Assignee: Xidian University
Priority date
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN202011379734.7A priority Critical patent/CN112669360B/en
Publication of CN112669360A publication Critical patent/CN112669360A/en
Application granted granted Critical
Publication of CN112669360B publication Critical patent/CN112669360B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention provides a multi-source image registration method based on a non-closed multi-dimensional contour feature sequence. The method first denoises the multi-source image, performs edge extraction on the denoised image with the Sobel operator to obtain a preliminary contour curve, and repairs the extracted contour curve as far as possible through erosion and dilation operations to obtain a non-closed curve. Concave-convex points on the non-closed curve are then estimated with an edge-curve concave-convex point search method, the invariant features of the multi-source image edge curves are described with a broken-line-segment fitting method, the infrared image is initially registered with the optical image as reference, and finally the optimal transformation model parameters are found by an iterative optimization method, realizing multi-source image registration. Tests on measured data verify that the method achieves pixel-level registration accuracy for multi-source images with poor image features and low image quality, and improves the utilization of image information.

Description

Multi-source image registration method based on non-closed multi-dimensional contour feature sequence
Technical Field
The invention belongs to the technical field of multi-source image registration, and particularly relates to a multi-source image registration method based on a non-closed multi-dimensional contour feature sequence.
Background
In practice, due to different shooting environments, different imaging mechanisms, and the movement and mechanical jitter of the imaging platform, multi-source images often cannot provide a sufficient number of point features or complete contour features, and conventional methods struggle to register and fuse such images. From a large number of infrared and visible-light image registration experiments and analysis of multi-source image scene characteristics, the following can be concluded: the details of ground-object targets sometimes differ greatly between the infrared image and the visible-light image, and ground-object targets in the infrared image are generally blurred because of its imaging principle and resolution. However, the edge features of ground-object targets in the two types of images are roughly consistent in position at a large scale. Although the boundaries of some ground-object targets are blurred in the multi-source images (infrared and SAR) and cannot be extracted, as long as enough edge geometric features exist, registration of the multi-source images can be achieved using these features, so that the same ground-object target in the two images is aligned under the same coordinate system.
The most common conventional edge-feature-based multi-source image registration method is the contour-feature-based method. Its basic idea is to first extract contours from the image, then search for closed contours in the image to be registered and the reference image, fit polygons to those closed contours, and finally find the matching relation between the two images from the fitted polygons to achieve registration. In reality, however, the infrared image and the visible-light image show obvious differences in grayscale and scene detail, and the contours of objects in the same scene are rarely completely closed in both the optical and infrared images. Moreover, the scene contained in the optical image is generally larger than that of the infrared image, so the contour of an object may be completely closed in the optical image while, under the influence of the shooting environment, lens angle, sharpness, and other factors, the same contour in the infrared image is often not closed or is only part of the complete contour. Conventional methods based on complete contour features cannot register multi-source image data under these conditions.
Disclosure of Invention
In order to solve the above problems in the prior art, the invention provides a multi-source image registration method based on a non-closed multi-dimensional contour feature sequence. The technical problem to be solved by the invention is realized by the following technical scheme:
the invention provides a multi-source image registration method based on a non-closed multi-dimensional contour feature sequence, which comprises the following steps:
s1: acquiring a multi-source image, wherein the multi-source image comprises an optical image and an infrared image;
s2: carrying out denoising processing on the multi-source image by using wavelet denoising to obtain a denoised image after denoising processing;
s3: extracting edge curve characteristics of the noise-reduced image to obtain an edge profile curve;
s4: carrying out concave-convex point estimation on the edge contour curve by using an improved concave-convex point estimation method to obtain concave-convex points on the edge contour curve;
s5: sequentially connecting concave and convex points on the curve along the curve to obtain a plurality of broken line segments;
s6: fitting each broken line segment in the optical image and each broken line segment in the infrared image by using a broken line segment fitting method to obtain a first combined line segment and a second combined line segment, wherein the first combined line segment and the second combined line segment are sequentially connected;
s7: forming the first combined line segment into a first set of edge curve feature sequences of an optical image and forming the second combined line segment into a second set of edge curve feature sequences of an infrared image;
s8: based on the first edge curve characteristic sequence set and the second edge curve characteristic sequence set, carrying out initial registration on the infrared image and the optical image to obtain an image subjected to initial registration and an initial transformation matrix representing an initial corresponding relation when the infrared image and the optical image are registered;
s9: traversing and searching the initially registered image by using an iterative optimization method, determining a pixel point which does not accord with a pixel value precision threshold, determining a parameter of the pixel point in the initial transformation matrix, and adjusting the parameter until the pixel point accords with the pixel value precision threshold to obtain an optimal transformation matrix;
s10: and registering the infrared image with the optical image by using the optimal transformation matrix.
Optionally, the step of S2 includes:
s21: performing wavelet transformation on the multisource image to obtain a wavelet-transformed multisource image;
s22: determining signals and noise in the multi-source image after wavelet transformation based on different behaviors of the signals and the noise;
s23: and reconstructing the multi-source image based on the signals and the noise in the multi-source image to obtain the noise-reduced multi-source image.
Optionally, the step S3 includes:
s31: distinguishing the foreground and the background of the multi-source image subjected to noise reduction by using a target filter;
s32: performing edge extraction on the denoised multi-source image by using a Sobel operator;
s33: performing erosion and dilation operations on the edge-extracted image to obtain edge curve features;
s34: and connecting the edge curve features to form an edge feature curve.
Optionally, the step S4 includes:
s41: estimating concave-convex points on the edge contour curve by adopting a method combining direction determination and a second derivative;
s42: counting the number of concave and convex points on each contour curve;
s43: for each edge contour curve, when the number of concave-convex points on the edge contour curve is less than a preset number threshold, rejecting the edge contour curve, and when the number of concave-convex points on the edge contour curve is not less than the preset number threshold, reserving the edge contour curve;
s44: traversing each retained edge contour curve; when more than a preset number of concave-convex points appear within a preset range of an edge contour curve, determining that the edge contour curve has a sawtooth segment;
s45: taking the average of the extreme points of the erroneous sawtooth segment as a new concave-convex point.
Optionally, the step S6 includes:
s61: setting a threshold value K;
s62: for the optical image and the infrared image respectively, dropping a perpendicular from each point of each arc of the several broken line segments to the corresponding chord, finding the point on the arc with the longest perpendicular, and recording the length of that perpendicular segment and the position of the corresponding point on the arc;
s63: if the vertical line segment exceeds a set threshold value K, a point corresponding to the broken line segment is used as a vertex of the broken line segment;
s64: sequentially connecting all the points determined on the broken line segment to obtain a first combined line segment and a second combined line segment.
Optionally, the step of S7 includes:
and calculating and counting the ratio of each adjacent edge of the first combined line segment and the included angle of each adjacent edge to form a first edge curve characteristic sequence set, and calculating and counting the ratio of each adjacent edge of the second combined line segment and the included angle of each adjacent edge to form a second edge curve characteristic sequence set.
Optionally, the step S9 includes:
s81: traversing and searching the image after the initial registration by using an iterative optimization method, determining a pixel point which does not accord with a pixel value precision threshold, and determining a parameter of the pixel point in the initial transformation matrix;
s82: taking the rotation angle and the position translation amount of the initial transformation matrix in the transformation process as initial parameters;
s83: determining parameters of the pixel point in the initial transformation matrix, selecting proper step size according to a pixel value precision threshold, and changing parameters corresponding to the pixel point which does not accord with the pixel value precision threshold in an interval angle range and a translation range containing the initial parameters to obtain a plurality of transformed matrixes to be transformed;
s84: registering the optical image and the infrared image based on each matrix to be transformed to obtain a registered image;
s85: when the root mean square error between the vertices of the fitted broken line segments in the registered image and the vertices of the corresponding broken line segments in the reference image is smaller than the preset root mean square error, determining the matrix to be transformed as the optimal transformation matrix.
The invention discloses a multi-source image registration method based on a non-closed multi-dimensional contour feature sequence. The method first denoises the multi-source image, performs edge extraction on the denoised image with the Sobel operator to obtain a preliminary contour curve, and repairs the extracted contour curve as far as possible through erosion and dilation operations to obtain a non-closed curve. Concave-convex points on the non-closed curve are then estimated with an edge-curve concave-convex point search method, the invariant features of the multi-source image edge curves are described with a broken-line-segment fitting method, the infrared image is initially registered with the optical image as reference, and finally the optimal transformation model parameters are found by an iterative optimization method, realizing multi-source image registration. The method therefore achieves pixel-level registration accuracy for multi-source images with poor image features and low image quality, and improves the utilization of image information.
The present invention will be described in further detail with reference to the accompanying drawings and examples.
Drawings
Fig. 1 is a flowchart of a multi-source image registration method based on a non-closed multi-dimensional profile feature sequence according to an embodiment of the present invention;
FIG. 2a is an optical image of a street;
FIG. 2b is an infrared image of a street;
FIG. 2c is a partially magnified image of FIG. 2b;
FIG. 3a is the optical image of FIG. 2a after noise reduction;
FIG. 3b is the infrared image of FIG. 2b after noise reduction;
FIG. 3c is the noise-reduced image of FIG. 2c;
FIG. 4a is an optical image of a street image profile extraction;
FIG. 4b is an infrared image of a street image profile extraction;
FIG. 5a is the preliminary registration result for the registered image;
FIG. 5b is the preliminary registration result shown as an overlay;
FIG. 5c is the preliminary registration result of an enlarged view of the superimposed images;
FIG. 6a is a continuous edge graph of the process of searching for the extreme points of the edge curve;
FIG. 6b is a graph showing the presence of a sawtooth error;
FIG. 7a is an original curve image;
FIG. 7b is a graph of fit determination;
FIG. 7c is a graph showing the results of a broken line segment fitting;
FIG. 8a is a diagram of an original broken line segment;
FIG. 8b is a line segment view after rotation and enlargement;
FIG. 9a is the optimized registration result for the registered image;
FIG. 9b is the optimized registration result shown as an overlay;
fig. 9c is the optimized registration result of an enlarged view of the superimposed image.
Detailed Description
The present invention will be described in further detail with reference to specific examples, but the embodiments of the present invention are not limited thereto.
Example one
As shown in fig. 1, a multi-source image registration method based on a non-closed multi-dimensional contour feature sequence provided in an embodiment of the present invention includes:
s1: a multi-source image is acquired,
the multi-source image comprises an optical image and an infrared image; for example, optical and infrared images of automobiles on a road may be selected.
Referring to figs. 2a to 2c, fig. 2a is an optical image of a street and fig. 2b is an infrared image of a street; fig. 2c is a partial enlargement. The optical image size is 1920 × 1080 pixels and the infrared image size is 640 × 480 pixels.
S2: carrying out denoising processing on the multisource image by using wavelet denoising to obtain a denoised image after denoising processing;
s3: extracting edge curve characteristics of the noise-reduced image to obtain an edge profile curve;
s4: carrying out concave-convex point estimation on the edge contour curve by using an improved concave-convex point estimation method to obtain concave-convex points on the edge contour curve;
s5: sequentially connecting concave and convex points on the curve along the curve to obtain a plurality of broken line segments;
s6: fitting each broken line segment in the optical image and each broken line segment in the infrared image by using a broken line segment fitting method to obtain a first combined line segment and a second combined line segment, wherein the first combined line segment and the second combined line segment are sequentially connected;
s7: combining the first combined line segments into a first set of edge curve feature sequences of the optical image and the second combined line segments into a second set of edge curve feature sequences of the infrared image;
s8: based on the first edge curve characteristic sequence set and the second edge curve characteristic sequence set, performing initial registration on the infrared image and the optical image to obtain an image after the initial registration and an initial transformation matrix representing an initial corresponding relation when the infrared image and the optical image are registered;
s9: traversing and searching the initially registered image by using an iterative optimization method, determining a pixel point which does not accord with a pixel value precision threshold, determining a parameter of the pixel point in an initial transformation matrix, and adjusting the parameter until the pixel point accords with the pixel value precision threshold to obtain an optimal transformation matrix;
s10: and registering the infrared image with the optical image by using the optimal transformation matrix.
The invention discloses a multi-source image registration method based on a non-closed multi-dimensional contour feature sequence. The method first denoises the multi-source image, performs edge extraction on the denoised image with the Sobel operator to obtain a preliminary contour curve, and repairs the extracted contour curve as far as possible through erosion and dilation operations to obtain a non-closed curve. Concave-convex points on the non-closed curve are then estimated with an edge-curve concave-convex point search method, the invariant features of the multi-source image edge curves are described with a broken-line-segment fitting method, the infrared image is initially registered with the optical image as reference, and finally the optimal transformation model parameters are found by an iterative optimization method, realizing multi-source image registration. The method therefore achieves pixel-level registration accuracy for multi-source images with poor image features and low image quality, and improves the utilization of image information.
Example two
As an alternative embodiment of the present invention, the step S2 includes:
s21: performing wavelet transformation on the multisource image to obtain a wavelet-transformed multisource image;
s22: determining signals and noise in the multi-source image after wavelet transformation based on different behaviors of the signals and the noise;
s23: and reconstructing the multisource image based on the signals and the noise in the multisource image to obtain the denoised multisource image.
Referring to figs. 3a to 3c, fig. 3a is the wavelet-denoised image of fig. 2a, fig. 3b is the wavelet-denoised image of fig. 2b, and fig. 3c is the wavelet-denoised image of fig. 2c. Comparing fig. 3b with fig. 3c shows that the detail of the infrared image is obviously enhanced and the edge texture is more distinct, so that better edge curve features can be extracted. A denoising algorithm based on the wavelet transform generally comprises three steps: first, wavelet-transform the multi-source image data; second, separate the noise from the signal using their different behavior under the transform; third, reconstruct the image data from the retained coefficients.
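The three-step wavelet denoising described above can be sketched as follows. This is a minimal one-level 2-D Haar transform with soft thresholding, for illustration only: the function names (`haar2d`, `wavelet_denoise`), the Haar basis, and the soft-threshold rule are assumptions, not the patent's actual implementation.

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation (ll) and detail bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def ihaar2d(ll, lh, hl, hh):
    """Exact inverse of haar2d."""
    h, w = ll.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.empty((2 * h, 2 * w))
    out[0::2, :] = a + d; out[1::2, :] = a - d
    return out

def soft(x, t):
    """Soft threshold: shrink coefficients toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def wavelet_denoise(img, threshold):
    """Transform, threshold the detail bands (where noise concentrates),
    keep the approximation band, and reconstruct."""
    ll, lh, hl, hh = haar2d(img)
    return ihaar2d(ll, soft(lh, threshold), soft(hl, threshold), soft(hh, threshold))
```

With threshold 0 the reconstruction is exact; a positive threshold suppresses small detail coefficients, which is where noise tends to concentrate relative to the signal.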
EXAMPLE III
As an alternative embodiment of the present invention, the step S3 includes:
s31: distinguishing the foreground and the background of the multi-source image subjected to noise reduction by using a target filter;
s32: performing edge extraction on the denoised multi-source image by using a Sobel operator;
s33: performing erosion and dilation operations on the edge-extracted image to obtain edge curve features;
s34: and connecting the edge curve features to form an edge feature curve.
It can be understood that the edge contour of an object in the image can be detected through a local search window, because the gray value of the image is substantially unchanged when, and only when, the local search window moves along the edge curve of a ground object, while the gray value changes obviously when the window moves in any other direction. The target filter is designed as follows:

$$f(x,y)=\begin{cases}255,& g(x,y)>M\\ 0,& g(x,y)\le M\end{cases}$$

where M is the threshold of the target filter and g(x, y) is the gray value of the image.
When the gray value is larger than M it is recorded as 255, and when it is smaller than M it is recorded as 0. The foreground of the input image is separated from the background by the target filter, and uninteresting background regions are removed. Edge extraction is then performed on the denoised image with the Sobel operator. The operator comprises two 3×3 kernels, one horizontal and one vertical; convolving each with the image yields approximate horizontal and vertical brightness differences. If A denotes the original image and Gx and Gy denote the horizontal and vertical edge-detection results respectively, the formulas are as follows:
$$G_x=\begin{bmatrix}-1&0&+1\\-2&0&+2\\-1&0&+1\end{bmatrix}*A \qquad\text{and}\qquad G_y=\begin{bmatrix}-1&-2&-1\\0&0&0\\+1&+2&+1\end{bmatrix}*A$$
the approximate values of the transverse and longitudinal gradients for each pixel of the image can be combined using the following formula to calculate the magnitude of the gradient.
Figure BDA0002809042840000103
The gradient direction can then be calculated as:

$$\theta=\arctan\left(\frac{G_y}{G_x}\right)$$
For example, if the angle θ equals zero at a pixel, the image has a vertical (longitudinal) edge there that is darker on the left and brighter on the right.
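The Sobel gradient computation above can be sketched in a few lines of NumPy. This is a minimal illustration (the `sobel` name and the valid-region-only output are assumptions); the kernel flip of true convolution is omitted, which only changes the sign of Gx and Gy, not the magnitude.

```python
import numpy as np

KX = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])   # horizontal differences
KY = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]])   # vertical differences

def sobel(img):
    """Return gradient magnitude and direction for the valid interior region."""
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            patch = img[i:i + h - 2, j:j + w - 2]
            gx += KX[i, j] * patch
            gy += KY[i, j] * patch
    mag = np.sqrt(gx ** 2 + gy ** 2)        # G = sqrt(Gx^2 + Gy^2)
    theta = np.arctan2(gy, gx)              # gradient direction
    return mag, theta
```

On a step image that is dark on the left and bright on the right, the strongest response sits on the edge column and the direction there is θ = 0, matching the vertical-edge interpretation above.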
The edge-extracted image is then processed with the erosion operation

$$A\ominus B=\{\,p \mid (p+b)\in A,\ \forall b\in B\,\}$$

and the dilation operation

$$A\oplus B=\{\,p+b \mid p\in A,\ b\in B\,\}$$

(where A is the input binary image and B is the structuring element) to obtain edge curve features that are as complete and continuous as possible.
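The erosion and dilation operations can be sketched for binary images as below; a dilation followed by an erosion (a closing) is what repairs small breaks in the extracted edges. The 3×3 square structuring element and the complement-based erosion (which treats pixels outside the image as background for dilation) are illustrative assumptions.

```python
import numpy as np

def dilate(img):
    """Binary dilation with a 3x3 square structuring element."""
    p = np.pad(img, 1)                      # zero border: outside is background
    out = np.zeros_like(img)
    for di in (0, 1, 2):
        for dj in (0, 1, 2):
            out |= p[di:di + img.shape[0], dj:dj + img.shape[1]]
    return out

def erode(img):
    """Binary erosion via duality: complement of the dilated complement."""
    return 1 - dilate(1 - img)

def close_gaps(edge_img):
    """Closing (dilate then erode) bridges one-pixel gaps in edge curves."""
    return erode(dilate(edge_img))
```

A broken horizontal edge with a one-pixel gap is reconnected by the closing, while pixels far from the edge stay background.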
Referring to figs. 4a and 4b, fig. 4a is the contour-feature extraction of fig. 3a and fig. 4b is the contour-feature extraction of fig. 3b. The image contour features in figs. 4a and 4b are screened, the contour curves are repaired with the dilation-erosion algorithm, overly short contour curves are removed, the concave-convex points on the curve features are accurately estimated, broken-line fitting is performed, and preliminary registration is carried out. The registration result is shown in fig. 5a, the superimposed image in fig. 5b, and the magnified image in fig. 5c. The magnified fig. 5c shows that although the optical and infrared images are preliminarily registered, a certain dislocation and ghosting remain in the images, for example at roads and bridges, and larger errors exist.
Example four
As an alternative embodiment of the present invention, the step S4 includes:
s41: estimating concave-convex points on the edge contour curve by adopting a method combining direction determination and a second derivative;
s42: counting the number of concave and convex points on each contour curve;
s43: for each edge contour curve, when the number of concave-convex points on the edge contour curve is less than a preset number threshold, rejecting the edge contour curve, and when the number of concave-convex points on the edge contour curve is not less than the preset number threshold, reserving the edge contour curve;
s44: traversing each retained edge contour curve; when more than a preset number of concave-convex points appear within a preset range of an edge contour curve, determining that the edge contour curve has a sawtooth segment;
s45: taking the average of the extreme points of the erroneous sawtooth segment as a new concave-convex point.
It can be understood that the concave-convex points on the edge curve are preliminarily estimated by combining direction determination with the second derivative. By the derivative definition: if the derivative f′(x) of the function y = f(x) is itself differentiable at x, then the derivative of y′ is called the second derivative of y = f(x) at the point x, written y″ or f″(x). By the limit definition, the second derivative of f(x) at x₀ is the derivative of y = f′(x) at x₀, i.e.

$$f''(x_0)=\lim_{\Delta x\to 0}\frac{f'(x_0+\Delta x)-f'(x_0)}{\Delta x}$$

According to the concavity of the curve, when f″(x₀) > 0 the curve is concave up at x₀, and when f″(x₀) < 0 it is concave down. The number of concave-convex points on each curve is counted; if the number is less than 3 the curve is rejected, otherwise it is retained. The whole curve is then traversed with a window of a certain range (generally 2% of the pixel size of the picture); if concave-convex points appear multiple times within such a range, the curve is judged to have sawtooth there, referring to fig. 6b. The average of the extreme points of the erroneous sawtooth segment is taken as a new concave-convex point.
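The direction-plus-second-derivative estimation, together with the sawtooth averaging of step S45, can be sketched on a curve sampled as y(x). The discrete second difference stands in for f″(x); the function names and the exact merging rule are assumptions for illustration.

```python
def extreme_points(ys):
    """Indices where the first difference changes sign (curve extrema),
    classified by the sign of the discrete second difference."""
    pts = []
    for i in range(1, len(ys) - 1):
        left, right = ys[i] - ys[i - 1], ys[i + 1] - ys[i]
        if left * right < 0:                          # direction changes at i
            d2 = ys[i + 1] - 2 * ys[i] + ys[i - 1]    # f''(x) approximation
            pts.append((i, 'concave' if d2 > 0 else 'convex'))
    return pts

def _avg(cluster):
    idx = round(sum(p[0] for p in cluster) / len(cluster))
    return (idx, cluster[0][1])

def merge_sawtooth(pts, window):
    """Replace clusters of extrema closer than `window` (a sawtooth segment)
    by their average position, as in step S45."""
    merged, cluster = [], [pts[0]]
    for p in pts[1:]:
        if p[0] - cluster[-1][0] <= window:
            cluster.append(p)
        else:
            merged.append(_avg(cluster))
            cluster = [p]
    merged.append(_avg(cluster))
    return merged
```

A jagged run of alternating extrema collapses to a single representative point, while an isolated extremum is kept as-is.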
As shown in figs. 6a and 6b, fig. 6a is a continuous edge curve from the extreme-point search and fig. 6b is a curve with a sawtooth error. The right curve in figs. 6a and 6b is the left curve after rotation and magnification; in addition, the edge curve extracted near point E has sawtooth caused by the imaging mechanism or the shooting environment. Concave-convex point estimation with the improved method shows that the point positions are basically consistent and satisfy rotation-magnification invariance, so they can serve as invariant features for image registration, while also providing a certain robustness and anti-interference capability.
EXAMPLE five
As an alternative embodiment of the present invention, the step S6 includes:
s61: setting a threshold value K;
s62: for the optical image and the infrared image respectively, dropping a perpendicular from each point of each arc of the several broken line segments to the corresponding chord, finding the point on the arc with the longest perpendicular, and recording the length of that perpendicular segment and the position of the corresponding point on the arc;
s63: if the vertical line segment exceeds a set threshold value K, a point corresponding to the broken line segment is used as a vertex of the broken line segment;
s64: sequentially connecting all the points determined on the broken line segment to obtain a first combined line segment and a second combined line segment.
Referring to fig. 7a, the estimated concave-convex points on the curve are connected in sequence along the curve. K is an empirically selected threshold. From each point on each arc a perpendicular is dropped to the corresponding chord; the point on the arc with the longest perpendicular is found, and the length of the perpendicular segment and the position of the corresponding point on the arc are recorded as k_i (i = 1, 2, 3, ...), corresponding to fig. 7b. If a perpendicular segment k_i exceeds the set threshold K, the corresponding point on the curve is taken as a vertex of the broken line segment. Finally, all the points determined on the curve are connected in sequence to complete the broken-line-segment fitting, corresponding to fig. 7c.
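Steps S61-S64 describe a split-at-farthest-point scheme: keep splitting each arc at the point farthest from its chord while that distance exceeds K. A recursive sketch (essentially the Ramer-Douglas-Peucker idea, with hypothetical function names) is:

```python
import math

def point_line_dist(p, a, b):
    """Perpendicular distance from point p to the chord a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    num = abs((bx - ax) * (ay - py) - (ax - px) * (by - ay))
    den = math.hypot(bx - ax, by - ay)
    return num / den

def fit_polyline(points, k):
    """Recursively split the arc at the point farthest from its chord
    whenever that perpendicular distance exceeds the threshold K."""
    a, b = points[0], points[-1]
    dists = [point_line_dist(p, a, b) for p in points[1:-1]]
    if not dists or max(dists) <= k:
        return [a, b]                         # arc well-approximated by chord
    i = dists.index(max(dists)) + 1           # farthest point becomes a vertex
    left = fit_polyline(points[:i + 1], k)
    return left[:-1] + fit_polyline(points[i:], k)
```

A sharp corner survives the fit while near-collinear wiggles below the threshold are dropped.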
EXAMPLE six
As an alternative embodiment of the present invention, the step S7 includes:
and calculating and counting the ratio of each adjacent edge of the first combined line segment and the included angle of each adjacent edge to form a first edge curve characteristic sequence set, and calculating and counting the ratio of each adjacent edge of the second combined line segment and the included angle of each adjacent edge to form a second edge curve characteristic sequence set.
As shown in figs. 7a to 7c, the ratio of each pair of adjacent sides of a broken line segment and the included angle between them are calculated and counted to establish a feature description sequence, and feature description sequences are established for all fitted broken line segments in the same way. An initial matching relation (transformation matrix) is then established from the feature description sequences to achieve initial registration of the multi-source images. The transformation matrix and its parameters are as follows:
$$T=\begin{bmatrix} k_x\cos\theta & -k_y\sin\theta & s_x \\ k_x\sin\theta & k_y\cos\theta & s_y \\ 0 & 0 & 1 \end{bmatrix}$$
Here k_x denotes the scaling ratio from the infrared image to the optical image along the x-axis and k_y the scaling ratio along the y-axis; θ denotes the rotation angle from the infrared image to the optical image; s_x and s_y are the numbers of translated pixels from the infrared image to the optical image along the x and y axes, respectively. However, a large error initially remains, as in fig. 5c, and the following optimization is made for the registration parameters.
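The feature description sequence of adjacent-side ratios and included angles can be sketched as below, together with a numerical check of its invariance. Note the exact invariance demonstrated here assumes uniform scaling (k_x = k_y) plus rotation and translation; the function names are illustrative.

```python
import math

def feature_sequence(vertices):
    """Adjacent-side length ratios and included angles of a broken line
    segment; both are invariant to rotation and uniform scaling."""
    segs = [(bx - ax, by - ay)
            for (ax, ay), (bx, by) in zip(vertices, vertices[1:])]
    feats = []
    for (ux, uy), (vx, vy) in zip(segs, segs[1:]):
        lu, lv = math.hypot(ux, uy), math.hypot(vx, vy)
        cos_t = (ux * vx + uy * vy) / (lu * lv)
        angle = math.acos(max(-1.0, min(1.0, cos_t)))   # included angle
        feats.append((lu / lv, angle))                  # (ratio, angle) pair
    return feats

def similarity(points, scale, angle, tx, ty):
    """Rotate by `angle`, scale uniformly, then translate by (tx, ty)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(scale * (c * x - s * y) + tx, scale * (s * x + c * y) + ty)
            for x, y in points]
```

Applying an arbitrary rotation, uniform scaling, and translation leaves the (ratio, angle) sequence numerically unchanged, which is what makes it usable for matching between the optical and infrared images.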
EXAMPLE seven
As an alternative embodiment of the present invention, the step S9 includes:
S81: traversing the initially registered image using an iterative optimization method, determining the pixel points that do not meet the pixel-value precision threshold, and determining the parameters of those pixel points in the initial transformation matrix;
S82: taking the rotation angle and the position translation amount of the initial transformation matrix as initial parameters;
S83: selecting a suitable step size according to the pixel-value precision threshold, and varying the parameters corresponding to the non-conforming pixel points within an angle interval and a translation range containing the initial parameters to obtain a plurality of candidate transformation matrices;
S84: registering the optical image and the infrared image with each candidate transformation matrix to obtain registered images;
S85: when the root mean square error between the vertices of the fitted broken line segments in a registered image and the corresponding vertices in the reference image is smaller than a preset root mean square error, determining that candidate matrix as the optimal transformation matrix.
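The refinement loop in S81-S85 can be sketched as a grid search around the initial parameters, keeping the candidate whose transformed polyline vertices best match the reference vertices. The step sizes, search ranges, and helper names below are illustrative assumptions, not taken from the patent:

```python
import itertools
import math

def rmse(pa, pb):
    """Root mean square error between two matched point sets."""
    return math.sqrt(sum((ax - bx) ** 2 + (ay - by) ** 2
                         for (ax, ay), (bx, by) in zip(pa, pb)) / len(pa))

def refine(verts, ref_verts, theta0, sx0, sy0, d_theta=0.01, d_s=1.0, n=5):
    """Grid search over rotation/translation offsets around the initial
    parameters; return the candidate minimizing vertex RMSE."""
    best, best_err = (theta0, sx0, sy0), float("inf")
    for dt, dx, dy in itertools.product(range(-n, n + 1), repeat=3):
        th = theta0 + dt * d_theta
        sx, sy = sx0 + dx * d_s, sy0 + dy * d_s
        c, s = math.cos(th), math.sin(th)
        moved = [(c * x - s * y + sx, s * x + c * y + sy) for x, y in verts]
        err = rmse(moved, ref_verts)
        if err < best_err:
            best, best_err = (th, sx, sy), err
    return best, best_err

verts_demo = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0)]
```

If the reference vertices were produced by a rotation/translation lying on the grid, the search recovers those parameters with near-zero residual, which is how the sketch can be sanity-checked.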
The broken line segment features are shown in figs. 8a-8b; fig. 8b is enlarged and rotated relative to fig. 8a. As the figures show, the ratio of the lengths of adjacent segments does not change under rotation and enlargement, i.e.
l_1 / l_2 = l_1′ / l_2′
and the angle between adjacent segments does not change, i.e. θ = θ_1. The adjacent-side ratio and the included angle are therefore invariant to rotation and scaling and can be used as feature descriptors.
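This invariance can be checked numerically. The sketch below (helper names are illustrative, not from the patent) computes the adjacent-side length ratios and included angles of a polyline and verifies that they are preserved under rotation combined with uniform scaling:

```python
import math

def polyline_features(pts):
    """Adjacent-side length ratios and included angles of a polyline."""
    segs = [(bx - ax, by - ay) for (ax, ay), (bx, by) in zip(pts, pts[1:])]
    lens = [math.hypot(dx, dy) for dx, dy in segs]
    ratios = [a / b for a, b in zip(lens, lens[1:])]
    angles = []
    for (u, lu), (v, lv) in zip(zip(segs, lens), zip(segs[1:], lens[1:])):
        cos_t = (u[0] * v[0] + u[1] * v[1]) / (lu * lv)
        angles.append(math.acos(max(-1.0, min(1.0, cos_t))))  # clamp rounding
    return ratios, angles

def rotate_scale(pts, theta, k):
    """Rotate by theta and scale uniformly by k."""
    c, s = math.cos(theta), math.sin(theta)
    return [(k * (c * x - s * y), k * (s * x + c * y)) for x, y in pts]

pts_demo = [(0, 0), (4, 0), (4, 3), (1, 5)]
r1, a1 = polyline_features(pts_demo)
r2, a2 = polyline_features(rotate_scale(pts_demo, 0.7, 2.5))
# r1 ≈ r2 and a1 ≈ a2 up to floating-point error.
```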
To analyze and evaluate the registration accuracy of the non-closed-contour image registration method more precisely, vertices of the broken line segments fitted to selected edge curves in 10 sets of infrared images were chosen arbitrarily, those points were transformed with the geometric transformation parameter model, and the root mean square error (RMSE) between them and the corresponding points in the optical image was calculated; the results are shown in Table 1.
TABLE 1 registration accuracy evaluation Table
(Table 1 is rendered as an image in the original publication; it lists the RMSE values for the 10 image sets.)
Referring to figs. 9a-9c, the results of registering the infrared image and the optical image are shown: figs. 9a-9c are, in sequence, the registered image, the superimposed display, and a magnified view of the superimposed image after optimization. Comparing fig. 5c with fig. 9c, the superimposed display in fig. 5c shows misalignment, for example of the diagonal support bar, whereas fig. 9c shows no ghosting or region misalignment; the vehicles, roads, trees, and so on are essentially aligned, so the image registration performs well.
Furthermore, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless specifically defined otherwise.
While the present application has been described in connection with various embodiments, other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed application, from a review of the drawings, the disclosure, and the appended claims. In the claims, the word "comprising" does not exclude other elements or steps, and the word "a" or "an" does not exclude a plurality. A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
The foregoing is a more detailed description of the invention in connection with specific preferred embodiments and it is not intended that the invention be limited to these specific details. For those skilled in the art to which the invention pertains, several simple deductions or substitutions can be made without departing from the spirit of the invention, and all shall be considered as belonging to the protection scope of the invention.

Claims (6)

1. A multi-source image registration method based on a non-closed multi-dimensional profile feature sequence is characterized by comprising the following steps:
S1: acquiring a multi-source image, wherein the multi-source image comprises an optical image and an infrared image;
S2: carrying out denoising processing on the multi-source image by using wavelet denoising to obtain a denoised image;
S3: extracting edge curve features of the denoised image to obtain an edge contour curve;
S4: carrying out concave-convex point estimation on the edge contour curve by using an improved concave-convex point estimation method to obtain the concave-convex points on the edge contour curve;
S5: sequentially connecting the concave-convex points along the curve to obtain a plurality of broken line segments;
S6: fitting each broken line segment in the optical image and each broken line segment in the infrared image by using a broken line segment fitting method to obtain a first combined line segment and a second combined line segment, wherein the first combined line segment sequentially connects the broken line segments of the optical image, and the second combined line segment sequentially connects the broken line segments of the infrared image;
S7: forming a first edge curve feature sequence set of the optical image from the first combined line segment, and a second edge curve feature sequence set of the infrared image from the second combined line segment;
S8: based on the first edge curve feature sequence set and the second edge curve feature sequence set, performing initial registration of the infrared image and the optical image to obtain an initially registered image and an initial transformation matrix representing the initial correspondence between the infrared image and the optical image;
S9: traversing the initially registered image using an iterative optimization method, determining the pixel points that do not meet a pixel-value precision threshold, determining the parameters of those pixel points in the initial transformation matrix, and adjusting the parameters until the pixel points meet the pixel-value precision threshold to obtain an optimal transformation matrix;
S10: registering the infrared image and the optical image using the optimal transformation matrix;
the step S4 comprises:
S41: estimating the concave-convex points on the edge contour curve by a method combining direction determination with the second derivative;
S42: counting the number of concave-convex points on each contour curve;
S43: for each edge contour curve, rejecting the curve when the number of concave-convex points on it is less than a preset number threshold, and retaining it when the number is not less than the preset number threshold;
S44: traversing each retained edge contour curve, and determining that the curve has a sawtooth section when it contains more than a preset number of concave-convex points within a preset range;
S45: taking the average of the extreme points of the erroneous sawtooth section as a new concave-convex point.
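As a rough illustration of the idea in S41 (the patent's exact estimator is not given), the sign of the turn at each vertex of a discretized curve plays the role of a second derivative combined with travel direction: a sign change marks a flip between concave and convex. The helper below is an assumption-laden sketch:

```python
def inflection_points(pts):
    """Return indices of vertices where the turn direction changes sign —
    a discrete analogue of combining direction with the second derivative."""
    def cross(a, b, c):  # z-component of (b - a) x (c - b)
        return (b[0] - a[0]) * (c[1] - b[1]) - (b[1] - a[1]) * (c[0] - b[0])
    turns = [cross(pts[i - 1], pts[i], pts[i + 1])
             for i in range(1, len(pts) - 1)]
    # turns[k] is the signed turn at vertex k + 1; a sign change between
    # consecutive turns means concavity has flipped there.
    return [k + 1 for k in range(1, len(turns)) if turns[k - 1] * turns[k] < 0]

# An S-shaped polyline: concavity flips once, near the middle vertex.
curve = [(0, 0), (1, 1), (2, 1.8), (3, 2.4), (4, 2.8), (5, 3.4), (6, 4.4)]
print(inflection_points(curve))  # [4]
```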
2. The multi-source image registration method of claim 1, wherein the step of S2 comprises:
S21: performing wavelet transformation on the multi-source image to obtain a wavelet-transformed multi-source image;
S22: distinguishing signal from noise in the wavelet-transformed multi-source image based on their different behaviors;
S23: reconstructing the multi-source image based on the signal and noise in it to obtain the denoised multi-source image.
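A minimal 1-D sketch of the wavelet denoising idea in S21-S23, using a single-level Haar transform with soft thresholding of the detail coefficients. The patent's actual wavelet family, decomposition depth, and threshold rule are not specified, so those choices here are assumptions; a real implementation would apply a 2-D multi-level transform to the image:

```python
import math

def haar_denoise(signal, thresh):
    """One-level Haar wavelet soft-threshold denoising (1-D sketch)."""
    approx = [(a + b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    detail = [(a - b) / math.sqrt(2) for a, b in zip(signal[::2], signal[1::2])]
    # Noise concentrates in small detail coefficients; shrink them to zero.
    detail = [math.copysign(max(abs(d) - thresh, 0.0), d) for d in detail]
    out = []
    for a, d in zip(approx, detail):          # inverse Haar transform
        out += [(a + d) / math.sqrt(2), (a - d) / math.sqrt(2)]
    return out

# A near-constant signal with a small noise blip: the blip's detail
# coefficient falls below the threshold and is smoothed away.
print(haar_denoise([1.0, 1.0, 1.0, 1.2], thresh=0.5))
```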
3. The multi-source image registration method of claim 1, wherein the step of S3 comprises:
S31: distinguishing the foreground and background of the denoised multi-source image using a target filter;
S32: performing edge extraction on the denoised multi-source image using a Sobel operator;
S33: performing erosion and dilation operations on the edge-extracted image to obtain edge curve features;
S34: connecting the edge curve features to form an edge feature curve.
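A pure-Python sketch of the Sobel edge extraction in S32 (the 3x3 kernels are the standard ones; the erosion/dilation repair of S33 is omitted here):

```python
def sobel_magnitude(img):
    """Gradient magnitude of a 2-D grid via 3x3 Sobel kernels."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(1, h - 1):                    # interior pixels only
        for j in range(1, w - 1):
            gx = sum(kx[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            gy = sum(ky[a][b] * img[i - 1 + a][j - 1 + b]
                     for a in range(3) for b in range(3))
            out[i][j] = (gx * gx + gy * gy) ** 0.5
    return out

# A vertical step edge: the response concentrates on the boundary columns.
img = [[0, 0, 1, 1] for _ in range(4)]
mag = sobel_magnitude(img)
```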
4. The multi-source image registration method of claim 1, wherein the step of S6 comprises:
S61: setting a threshold value K;
S62: for the optical image and the infrared image respectively, dropping a perpendicular from each point of each arc of the plurality of broken line segments to the corresponding chord, finding the point on the arc with the longest perpendicular, and recording the length of that perpendicular segment and the position of the corresponding point on the arc;
S63: if the perpendicular segment exceeds the set threshold K, taking the corresponding point on the arc as a vertex of the fitted broken line segment;
S64: sequentially connecting all the points determined on the broken line segments to obtain the first combined line segment and the second combined line segment.
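The chord-perpendicular test in S61-S64 resembles the Ramer-Douglas-Peucker algorithm; the recursive sketch below is an illustrative reading of those steps, not the patent's exact procedure (helper names are assumptions):

```python
def fit_polyline(pts, K):
    """Keep the point farthest from the chord if its perpendicular distance
    exceeds K, then recurse on both halves (Douglas-Peucker-style)."""
    (x1, y1), (x2, y2) = pts[0], pts[-1]
    chord = ((x2 - x1) ** 2 + (y2 - y1) ** 2) ** 0.5 or 1.0  # degenerate guard
    def dist(p):  # perpendicular distance from p to the chord
        return abs((x2 - x1) * (y1 - p[1]) - (x1 - p[0]) * (y2 - y1)) / chord
    idx = max(range(1, len(pts) - 1), key=lambda i: dist(pts[i]), default=None)
    if idx is None or dist(pts[idx]) <= K:
        return [pts[0], pts[-1]]
    left = fit_polyline(pts[:idx + 1], K)
    return left[:-1] + fit_polyline(pts[idx:], K)

# The prominent middle vertex survives; the small wiggles are dropped.
arc = [(0, 0), (1, 0.1), (2, 2.0), (3, 0.1), (4, 0)]
print(fit_polyline(arc, K=1.0))  # → [(0, 0), (2, 2.0), (4, 0)]
```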
5. The multi-source image registration method according to claim 1, wherein the step of S7 comprises:
calculating and counting the ratio of each pair of adjacent sides of the first combined line segment and the included angle between each pair of adjacent sides to form the first edge curve feature sequence set, and doing the same for the second combined line segment to form the second edge curve feature sequence set.
6. The multi-source image registration method according to claim 1, wherein the step of S9 comprises:
S81: traversing the initially registered image using an iterative optimization method, determining the pixel points that do not meet the pixel-value precision threshold, and determining the parameters of those pixel points in the initial transformation matrix;
S82: taking the rotation angle and the position translation amount of the initial transformation matrix as initial parameters;
S83: selecting a suitable step size according to the pixel-value precision threshold, and varying the parameters corresponding to the non-conforming pixel points within an angle interval and a translation range containing the initial parameters to obtain a plurality of candidate transformation matrices;
S84: registering the optical image and the infrared image with each candidate transformation matrix to obtain registered images;
S85: when the root mean square error between the vertices of the fitted broken line segments in a registered image and the corresponding vertices in the reference image is smaller than a preset root mean square error, determining that candidate matrix as the optimal transformation matrix.
CN202011379734.7A 2020-11-30 2020-11-30 Multi-source image registration method based on non-closed multi-dimensional contour feature sequence Active CN112669360B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011379734.7A CN112669360B (en) 2020-11-30 2020-11-30 Multi-source image registration method based on non-closed multi-dimensional contour feature sequence

Publications (2)

Publication Number Publication Date
CN112669360A CN112669360A (en) 2021-04-16
CN112669360B true CN112669360B (en) 2023-03-10


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117079397B (en) * 2023-09-27 2024-03-26 青海民族大学 Wild human and animal safety early warning method based on video monitoring

Citations (10)

Publication number Priority date Publication date Assignee Title
CN103225827A (en) * 2013-04-08 2013-07-31 中山炫能燃气科技股份有限公司 Efficient infrared wave thermal energy and hot flue gas shunt absorption conversion system
CN104050660A (en) * 2014-05-26 2014-09-17 华中科技大学 Method for measuring workpiece round edges
CN104200463A (en) * 2014-08-06 2014-12-10 西安电子科技大学 Fourier-Merlin transform and maximum mutual information theory based image registration method
CN104318548A (en) * 2014-10-10 2015-01-28 西安电子科技大学 Rapid image registration implementation method based on space sparsity and SIFT feature extraction
CN106780528A (en) * 2016-12-01 2017-05-31 广西师范大学 Image symmetrical shaft detection method based on edge matching
CN109308715A (en) * 2018-09-19 2019-02-05 电子科技大学 A kind of optical imagery method for registering combined based on point feature and line feature
CN109829489A (en) * 2019-01-18 2019-05-31 刘凯欣 A kind of cultural relic fragments recombination method and device based on multilayer feature
CN110472658A (en) * 2019-07-05 2019-11-19 哈尔滨工程大学 A kind of the level fusion and extracting method of the detection of moving-target multi-source
CN111145228A (en) * 2019-12-23 2020-05-12 西安电子科技大学 Heterogeneous image registration method based on local contour point and shape feature fusion
CN111709170A (en) * 2020-06-05 2020-09-25 北京师范大学 Separation method, equipment and storage medium for tropical and non-tropical cyclone precipitation

Non-Patent Citations (2)

Title
Pairwise Matching for 3D Fragment Reassembly Based on Boundary Curves and Concave-Convex Patches;Qunhui Li等;《 IEEE Access》;20191223;第8卷;第6153-6167页 *
Research on Scattered Point Cloud Registration Technology Based on Linear Constraints; Bai Chen; China Master's Theses Full-text Database, Information Science and Technology; 20160115; vol. 2016, no. 1; section 2.2 *


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant