CN111080529A - Unmanned aerial vehicle aerial image splicing method for enhancing robustness - Google Patents
- Publication number: CN111080529A
- Application number: CN201911338858.8A
- Authority: CN (China)
- Legal status: Withdrawn
Classifications
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
- G06T7/35—Determination of transform parameters for the alignment of images, i.e. image registration using statistical methods
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
- G06T2207/20028—Bilateral filtering
- G06T2207/20221—Image fusion; Image merging
Abstract
An unmanned aerial vehicle aerial image splicing method for enhancing robustness belongs to the technical field of image processing. The method comprises the following steps: S1, inputting two aerial images with an overlapping area and preprocessing them; S2, processing the aerial images with a SURF feature detection algorithm to obtain aerial image feature points; S3, performing feature description of the detected feature points with a long-distance FREAK algorithm and extracting features of the aerial images; S4, matching the extracted features of the aerial images based on the K nearest neighbor algorithm; S5, purifying the matched feature point pairs with a random sample consensus algorithm and calculating the transformation matrix model parameters between the reference image and the image to be spliced; and S6, processing the image to be spliced with the coordinate transformation matrix and completing the aerial image splicing with a weighted average fusion method. The method achieves a good splicing effect, is robust to aerial images with large rotation changes, offers better comprehensive performance than other classical algorithms, and can meet the practical requirements of aerial image splicing.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a method for splicing aerial images of an unmanned aerial vehicle with enhanced robustness.
Background
In recent years, unmanned aerial vehicle (UAV) aerial photography has become an important means of information acquisition in many fields, owing to its flexibility, convenience, and accurate, efficient data collection. Because the camera viewing angle and flight altitude of a UAV are limited, a single aerial image is often unable to satisfy information acquisition needs; two or more images with overlapping portions therefore need to be spliced and fused to obtain a high-accuracy, detail-rich full-view image.
Image registration is the most important part of image splicing. Its main idea is to use the overlapping area in a group of aerial images to find a suitable mathematical model and transformation parameters, establish an affine relation between the images, and finally bring the matched feature points into the same coordinate system. According to the image information used, registration algorithms can be divided into three categories: gray-level-based registration, feature-based registration, and registration based on image understanding and interpretation. Feature-based image registration algorithms are more stable and faster and are widely applied; they mainly include the Harris and SUSAN operators and the SIFT, SURF, ORB and BRISK algorithms. Although each of these algorithms has advantages, none is ideally suited to actual aerial image registration: the SIFT algorithm is highly robust but memory-hungry, the SURF algorithm has strong comprehensive performance but easily loses details, and the ORB algorithm is fast but poorly robust. Aerial images obtained by a UAV have high resolution, and the image sequences do not lie in the same plane, so differences in illumination, scale and the like exist, placing high requirements on the comprehensive performance of an aerial image splicing algorithm.
This project is funded by the national science fund project (61773087).
Disclosure of Invention
The invention aims to provide an unmanned aerial vehicle aerial image splicing method for enhancing robustness, aiming at the problems of serious detail loss and poor robustness of the current unmanned aerial vehicle aerial image splicing method.
In order to achieve the purpose, the invention adopts the technical scheme that:
an unmanned aerial vehicle aerial image splicing method for enhancing robustness comprises the following steps:
s1: inputting two aerial images with overlapped areas, and preprocessing the two aerial images;
s2: processing the aerial images by using an edge-enhanced SURF (Canny-SURF) feature detection algorithm to obtain feature points of the two aerial images;
s3: performing feature description on the detected feature points by using a long-distance FREAK (L-FREAK) algorithm, and extracting features of the aerial image;
s4: matching the extracted features of the aerial images by a K-Nearest Neighbor (KNN) based algorithm;
s5: purifying the matched feature point pairs by using a RANdom SAmple Consensus (RANSAC) algorithm, improving the matching accuracy, and calculating the parameters of a transformation matrix model between the reference image and the image to be spliced;
s6: and processing the images to be spliced by using a coordinate transformation matrix, and performing weighted average fusion on the processed images to be spliced and the reference image in the same coordinate system by adopting a weighted average fusion algorithm to finish the aerial image splicing work.
Further, in S1, the image is preprocessed: the input aerial image is smoothed with a bilateral filter, whose expression is

h(x) = k⁻¹(x) ∬ f(ξ) · c(ξ, x) · s(f(ξ), f(x)) dξ,   k(x) = ∬ c(ξ, x) · s(f(ξ), f(x)) dξ

where k is the normalization factor, x and ξ are respectively the central pixel and a neighborhood pixel, f(ξ) is the gray value of the neighborhood pixel, and the weights c(ξ, x) and s(f(ξ), f(x)) decrease with the geometric distance and the gray-level difference between the two pixels, respectively.
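As an illustration of the bilateral filter just described, here is a brute-force NumPy sketch (not code from the patent; the window radius and the two sigma parameters are assumed values). Each output pixel is a normalized sum of neighborhood pixels weighted by spatial closeness c and gray-level similarity s:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=25.0):
    """Brute-force bilateral filter: each output pixel is a normalized sum
    of neighborhood pixels f(xi), weighted by spatial closeness c(xi, x)
    and gray-level similarity s(f(xi), f(x))."""
    img = img.astype(np.float64)
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            y0, y1 = max(0, y - radius), min(h, y + radius + 1)
            x0, x1 = max(0, x - radius), min(w, x + radius + 1)
            patch = img[y0:y1, x0:x1]
            yy, xx = np.mgrid[y0:y1, x0:x1]
            c = np.exp(-((yy - y) ** 2 + (xx - x) ** 2) / (2 * sigma_s ** 2))
            s = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_r ** 2))
            wgt = c * s
            out[y, x] = (wgt * patch).sum() / wgt.sum()  # k(x) normalizes
    return out
```

Because s penalizes large gray-level differences, flat areas are smoothed while strong edges survive, which is why the patent uses this filter before edge-sensitive feature detection.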
Further, in S2, the aerial image obtained in step S1 is processed with the edge-enhanced SURF (Canny-SURF) feature detection algorithm, and feature point detection is performed on the aerial image. The method specifically comprises the following steps:
s2.1: image feature points are detected using the SURF algorithm.
S2.1.1: Construct the Hessian matrix. Each feature point in the image yields a specific Hessian matrix. Let l(x, y) be a pixel point of the input image; the Hessian matrix of the image at scale σ can be expressed as:

H(l, σ) =
| Lxx(l, σ)   Lxy(l, σ) |
| Lxy(l, σ)   Lyy(l, σ) |

where L(x, σ) is the convolution of the second-order derivative of the Gaussian function g(σ) with the image at the pixel.
In order to ensure that the acquired feature points are scale invariant, the input image needs to be filtered first. The SURF algorithm replaces Gaussian filtering with box filtering and balances the error between the exact and approximate values through a weighting coefficient ω whose size is determined by the scale, finally giving the Hessian matrix determinant:

det(H) = Lxx·Lyy − (ω·Lxy)²

Whether a point is a feature point can then be judged from the value of det(H).
S2.1.2: and constructing a scale space. The scale space is usually represented by a gaussian pyramid, and the SURF algorithm gradually increases the size of the filter to construct the gaussian pyramid on the premise of ensuring that the size of each group of images is not changed.
S2.1.3: Feature point positioning. After the pixel points are processed with the Hessian matrix, each candidate extremum is compared with the 26 pixels in its neighborhood across its own scale layer and the two adjacent layers, preliminarily determining the feature point positions. The feature points are then refined by interpolation, and inaccurately located or unstable points are screened out, leaving the feature points used for registration.
S2.1.4: Main direction assignment. The main direction of a feature point is determined by counting Haar wavelet responses in a circular area centered on the feature point. Specifically, a sector with a central angle of 60 degrees is rotated around the circle at a fixed step, the horizontal and vertical Haar wavelet responses of the points inside each unit sector are summed, and the direction of the sector with the largest summed response is taken as the main direction of the feature point.
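The Hessian-determinant response of S2.1.1–S2.1.3 can be sketched as follows. This toy version uses plain finite-difference second derivatives at a single scale in place of SURF's box-filtered Gaussian derivatives, so it is a simplified stand-in rather than the patented detector:

```python
import numpy as np

def hessian_det(img, omega=0.9):
    """Interest measure det(H) = Lxx*Lyy - (omega*Lxy)**2, with the
    second derivatives approximated by finite differences (a stand-in
    for SURF's box filters); candidate feature points are its maxima."""
    img = img.astype(np.float64)
    Lxx = np.zeros_like(img); Lyy = np.zeros_like(img); Lxy = np.zeros_like(img)
    Lxx[:, 1:-1] = img[:, 2:] - 2 * img[:, 1:-1] + img[:, :-2]
    Lyy[1:-1, :] = img[2:, :] - 2 * img[1:-1, :] + img[:-2, :]
    Lxy[1:-1, 1:-1] = (img[2:, 2:] - img[2:, :-2]
                       - img[:-2, 2:] + img[:-2, :-2]) / 4.0
    return Lxx * Lyy - (omega * Lxy) ** 2
```

On a small image with a single bright blob, the response peaks at the blob center, which is exactly the behavior the feature-point localization step exploits.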
S2.2: To address problems of the SURF algorithm such as strong edge suppression and inaccurate localization, the Canny algorithm is used to detect edges independently.
S2.2.1: Gradient calculation. The gradient direction and magnitude of each pixel are determined from the finite differences Gx and Gy of the first-order partial derivatives in the x and y directions:

G = √(Gx² + Gy²),   θ = arctan(Gy / Gx)
s2.2.2: and (5) purifying the characteristic points. Firstly, non-maximum suppression is carried out, the amplitude G of the target pixel in the gradient direction theta is compared with the amplitudes of two pixels in the neighborhood, if the amplitude of the target pixel is maximum, the target pixel is reserved, and if the amplitude of the target pixel is not maximum, the target pixel is discarded. The image is then segmented using a dual threshold method to remove false edges due to noise and gray scale variations. When the image is divided by using the double thresholds, the points with gradient values smaller than the low threshold are restrained, and the rest pixel points are strong edges and weak edges, wherein the strong edges are real edges, and false edges caused by noise and gray contained in the weak edges need to be removed.
S2.2.3: and (4) lagging edge tracking. In general, the true edges of the strong and weak edges should be connected to each other, and the edges caused by noise generally exist separately. The lag edge tracking is that in the 3 x 3 neighborhood of the weak edge, if the strong edge point exists, the strong edge point is retained, otherwise, the inhibition is carried out, and the edges are all connected finally through continuous recursive tracking.
S2.3: and (4) combining the feature point sets obtained in the S2.1 and the S2.2 by using a Canny-SURF algorithm, and removing repeated feature points to complete feature detection.
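The double-threshold segmentation and lagging edge tracking of S2.2.2–S2.2.3 can be sketched as follows; the iterative loop stands in for the recursive tracking described above, and the function name and thresholds are illustrative assumptions:

```python
import numpy as np

def hysteresis_threshold(mag, low, high):
    """Double threshold + lagging edge tracking: pixels with magnitude
    >= high are strong edges; pixels in [low, high) are weak edges and
    are kept only if tracking connects them (via an 8-neighborhood) to
    a strong edge."""
    strong = mag >= high
    weak = (mag >= low) & ~strong
    edges = strong.copy()
    changed = True
    while changed:                      # iterative stand-in for recursion
        changed = False
        ys, xs = np.nonzero(weak & ~edges)
        for y, x in zip(ys, xs):
            y0, y1 = max(0, y - 1), min(mag.shape[0], y + 2)
            x0, x1 = max(0, x - 1), min(mag.shape[1], x + 2)
            if edges[y0:y1, x0:x1].any():
                edges[y, x] = True
                changed = True
    return edges
```

A weak pixel adjacent to a strong edge survives, while an isolated weak pixel (typically noise) is suppressed, matching the strong/weak edge reasoning above.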
Further, in S3, feature points detected in S2 are described by using a long-distance FREAK (L-FREAK) algorithm, and image features are extracted. The method comprises the following specific steps:
s3.1: and sampling the characteristic points. The L-FREAK algorithm is a binary descriptor simulating the human visual system, and the sampling mode is to search sampling points in a circular area around a characteristic point in an exponential decreasing mode by taking the characteristic point as the center of a circle.
S3.2: Construct the feature descriptor. After sampling, intensity comparisons produce a binary string that forms the FREAK descriptor, denoted F:

F = Σ_{0 ≤ α < N} 2^α · T(Pα)

where α is the bit index, N is the desired descriptor length, Pα is a pair of sampling points, and T(Pα) is defined on the pair as

T(Pα) = 1 if I(Pα¹) − I(Pα²) > 0, and T(Pα) = 0 otherwise

where I(Pα¹) and I(Pα²) are the gray values of the two sampling points in the pair Pα.
Sampling in the manner of S3.1, each feature point corresponds to N(N − 1)/2 sampling-point pairs, which produces information redundancy and occupies a large amount of memory. So after the binary descriptor is obtained, it needs to be screened to improve its discriminability: a matrix is constructed with one pair per column, and the first 512 columns, ranked by their column means, are taken as the feature descriptor.
S3.3: Determine the main direction based on long-distance feature-point pairs. The traditional FREAK algorithm selects 45 sampling-point pairs around the feature point to extract direction information. To improve the robustness of the algorithm on pictures with larger rotations, the L-FREAK algorithm borrows the long-distance point-pair idea of the BRISK algorithm and uses only the direction information of 30 long-distance sampling-point pairs to determine the main direction. The calculation of the feature-point gradient O and the angle information θ can be expressed as:

O = (1/M) · Σ_{Po ∈ G} ( I(Po¹) − I(Po²) ) · (Po¹ − Po²) / ‖Po¹ − Po²‖

θ = arctan(Oy / Ox)

where M is the number of selected point pairs, G is the set of point pairs, Po carries the coordinate information of a sampling-point pair, and Po¹, Po² are the positions of the two sampling points in the pair Po.
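The long-distance orientation computation of S3.3 can be sketched in NumPy as follows; the `pairs` offsets are a toy stand-in for the 30 long-distance sampling-point pairs, not the actual L-FREAK pattern:

```python
import numpy as np

def principal_orientation(img, kp, pairs):
    """Main direction from long-distance pairs: sum each pair's intensity
    difference weighted by the unit vector between its two sample positions,
    average over the pairs, then take the angle of the resulting gradient O.
    `pairs` holds (dy1, dx1, dy2, dx2) offsets around keypoint kp = (y, x)."""
    y, x = kp
    o = np.zeros(2)
    for dy1, dx1, dy2, dx2 in pairs:
        p1 = np.array([x + dx1, y + dy1], dtype=float)
        p2 = np.array([x + dx2, y + dy2], dtype=float)
        diff = float(img[y + dy1, x + dx1]) - float(img[y + dy2, x + dx2])
        o += diff * (p1 - p2) / np.linalg.norm(p1 - p2)
    o /= len(pairs)                      # the 1/M normalization
    return np.arctan2(o[1], o[0])        # angle of O
```

On an image whose intensity increases purely along x, the recovered orientation is 0, i.e. the gradient direction, which is the intended behavior of the main-direction step.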
S3.4: Coarse feature matching. For the binary descriptors generated in S3.3, the similarity between feature points of the reference image and the image to be spliced is measured by their Hamming distance, giving the coarse matching point pairs.
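The binary intensity test of S3.2 and the Hamming-distance measure of S3.4 can be sketched as follows (the offsets are illustrative, not FREAK's retinal sampling pattern, and no Gaussian smoothing is applied):

```python
import numpy as np

def binary_descriptor(img, kp, pairs):
    """One bit per sampling-point pair: the bit is 1 when the first sample
    of the pair is brighter than the second (the T(P) test above).
    `pairs` holds (dy1, dx1, dy2, dx2) offsets around keypoint kp = (y, x)."""
    y, x = kp
    bits = []
    for dy1, dx1, dy2, dx2 in pairs:
        bits.append(1 if img[y + dy1, x + dx1] > img[y + dy2, x + dx2] else 0)
    return np.array(bits, dtype=np.uint8)

def hamming(a, b):
    """Hamming distance: the similarity measure used for coarse matching."""
    return int(np.count_nonzero(a != b))
```

On an intensity ramp the left-right comparisons come out 1 and the reversed comparison comes out 0, so the descriptor encodes local gradient polarity in a form that Hamming distance can compare cheaply.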
Further, in the feature matching of S4, the coarse matches obtained after S2 and S3 are refined with a KNN algorithm. Specifically, let A and B be the feature sets of the reference image and the image to be spliced. For any descriptor Ai in A, find the nearest and second-nearest descriptors Bi and Bj in B, at distances d1 and d2 respectively. If d2 > μ·d1, Ai is considered to match Bi; the threshold μ takes the value 2.
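The KNN ratio test of S4 can be sketched as follows; brute-force Hamming distances between binary descriptors are used, the μ = 2 threshold matches the text, and everything else is an illustrative assumption:

```python
import numpy as np

def ratio_match(desc_a, desc_b, mu=2.0):
    """Match each descriptor in desc_a to its nearest neighbor in desc_b,
    keeping the match only when the second-nearest distance d2 exceeds
    mu * d1 (rejecting ambiguous matches). Descriptors are binary rows;
    distances are Hamming distances."""
    matches = []
    for i, a in enumerate(desc_a):
        dists = np.count_nonzero(desc_b != a, axis=1)
        j1, j2 = np.argsort(dists, kind="stable")[:2]
        d1, d2 = dists[j1], dists[j2]
        if d2 > mu * d1:            # unambiguous nearest neighbor
            matches.append((i, int(j1)))
    return matches
```

A descriptor whose two best candidates are equally close is discarded, which is the point of the ratio constraint: only clearly unambiguous correspondences reach the RANSAC stage.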
Further, in S5, the matched feature points are refined with the RANdom SAmple Consensus (RANSAC) algorithm and the transformation matrix parameters are calculated. The specific steps are as follows:
S5.1: Let t1 = (x1, y1, 1) and t2 = (x2, y2, 1) be the projections of the same scene point in the reference image and in the image to be spliced respectively; the projections satisfy:

t2 = H·t1

where the matrix H is the two-dimensional projective transformation matrix from the reference image to the image to be spliced:

H =
| m0  m1  m2 |
| m3  m4  m5 |
| m6  m7  1  |

where m0, m1, m3, m4 describe the scale and rotation of the picture, m2 is the horizontal displacement, m5 is the vertical displacement, and m6, m7 are the deformation in the horizontal and vertical directions. There are 8 parameters to determine in total, so 4 matching point pairs are needed to solve for them; because mismatched points exist, a purification step must be performed first.
S5.2: First, 4 pairs of matching points are randomly selected (with no 3 of them collinear) and the transformation matrix H is solved. The positions of the remaining matching points in the image to be spliced are then computed with the obtained H, and d denotes the distance between each projected point and its actual corresponding point. A threshold D is set; a point with d < D is defined as an interior point (inlier). These steps are repeated many times, and the H corresponding to the set containing the most interior points is taken as the optimal coordinate transformation matrix. The threshold D is generally chosen according to the specific problem and data set.
Further, the weighted average fusion method used in S6 is specifically as follows:
Suppose f1(x, y) and f2(x, y) respectively denote the reference image and the image to be spliced after the projective transformation of S5. The fused image f(x, y) can be expressed as:

f(x, y) = f1(x, y) in the region covered only by f1; a1·f1(x, y) + a2·f2(x, y) in the overlap region; f2(x, y) in the region covered only by f2

where a1 and a2 are the weights assigned to the pixel points of the two images and satisfy a1 + a2 = 1 (a1, a2 > 0). a1 can be taken as the ratio of the distance from (x, y) to the overlap boundary to the width of the overlap region; as a1 varies gradually from 1 to 0 across the overlap, the overlap area is fused smoothly and seamlessly, and the final panoramic aerial image is obtained.
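The weighted average fusion of S6 can be sketched in one dimension as follows; the weight a1 falls linearly from 1 to 0 across the overlap columns (the function name and single-row "images" are illustrative assumptions):

```python
import numpy as np

def blend_overlap(left, right, overlap):
    """Weighted-average fusion: across the `overlap` columns the weight a1
    of the reference image falls linearly from 1 to 0 (a2 = 1 - a1), so the
    seam fades smoothly from the reference image into the warped image."""
    w = left.shape[1]
    out_w = w + right.shape[1] - overlap
    out = np.zeros((left.shape[0], out_w))
    out[:, :w - overlap] = left[:, :w - overlap]     # region covered only by f1
    out[:, w:] = right[:, overlap:]                  # region covered only by f2
    a1 = np.linspace(1.0, 0.0, overlap)              # reference-image weight
    seam_l = left[:, w - overlap:]
    seam_r = right[:, :overlap]
    out[:, w - overlap:w] = a1 * seam_l + (1.0 - a1) * seam_r
    return out
```

The linear ramp removes the hard intensity jump a direct paste would leave at the seam, which is the "smooth and seamless" property claimed for the fusion step.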
Aiming at the inaccurate edge positioning and poor robustness of the SURF algorithm in aerial image splicing, the invention provides an unmanned aerial vehicle aerial image splicing algorithm with enhanced robustness. First, the original images are processed with bilateral filtering; feature points are then detected and described with the Canny-SURF algorithm and the L-FREAK binary descriptor, strengthening edge details. The obtained feature points are coarsely matched with the KNN algorithm, the matched pairs are purified with the RANSAC algorithm, and the transformation matrix parameters are computed. Finally, the original image is processed with the transformation matrix and the images are fused with a weighted average fusion algorithm, completing the aerial image splicing. Experiments show that the method achieves a good splicing effect, is robust to aerial images with large rotation changes, offers better comprehensive performance than other classical algorithms, and can meet the practical requirements of aerial image splicing.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is an aerial image used in an embodiment of the present invention, wherein (a) (b) (c) is a reference image and (d) (e) (f) is an image to be stitched; in addition, (a) (d), (b) (e) and (c) (f) are three groups of aerial images selected for illumination, scale and rotation change respectively and used for verifying the robustness of the algorithm.
FIG. 3 shows the images obtained by extracting features from the three groups of aerial images of FIG. 2 using the Canny-SURF algorithm and the L-FREAK algorithm according to an embodiment of the present invention.
Fig. 4 is an image obtained by rough feature matching of the aerial image of fig. 3 according to the embodiment of the present invention. (a) The aerial image matching result based on illumination change, (b) the aerial image matching result based on scale change, and (c) the aerial image matching result based on rotation change.
Fig. 5 is a registration image after refinement using the RANSAC algorithm in an embodiment of the present invention. (a) The result of the aerial image registration based on illumination change, (b) the result of the aerial image registration based on scale change, and (c) the result of the aerial image registration based on rotation change.
FIG. 6 shows the panoramic aerial image obtained by stitching with the algorithm of the present invention. (a) The method comprises the steps of (a) an aerial image splicing result based on illumination change, (b) an aerial image splicing result based on scale change, and (c) an aerial image splicing result based on rotation change.
Detailed Description
For a better understanding of the present invention by those skilled in the art, the present invention will be described in further detail below with reference to the accompanying drawings and the following examples.
Referring to fig. 1, the embodiment provides an unmanned aerial vehicle aerial image stitching method for enhancing robustness, including the following steps:
s1: as shown in fig. 2, two aerial images with overlapping regions are input and preprocessed;
s2: processing the aerial images by using an edge-enhanced SURF (Canny-SURF) feature detection algorithm to obtain feature points of the two images;
s3: performing feature description on the detected feature points by using a long-distance FREAK (L-FREAK) algorithm, and extracting image features, as shown in FIG. 3;
s4: matching the extracted features of the aerial image by a K-Nearest Neighbor (KNN) -based algorithm, as shown in fig. 4;
s5: purifying the matched feature point pairs by using a RANdom SAmple Consensus (RANSAC) algorithm, improving the matching accuracy, and calculating transformation matrix model parameters between the reference image and the image to be spliced to obtain a registration image as shown in FIG. 5;
s6: and processing the images to be spliced by using the coordinate transformation matrix, and performing weighted average fusion on the images to be spliced and the reference images in the same coordinate system to finish the aerial image splicing work, as shown in fig. 6.
Specifically, in S1 the aerial image is preprocessed: the aerial images of FIG. 2 are smoothed with a bilateral filter, whose expression is

h(x) = k⁻¹(x) ∬ f(ξ) · c(ξ, x) · s(f(ξ), f(x)) dξ

where k is the normalization factor, x and ξ are respectively the central pixel and a neighborhood pixel, and the weights c(ξ, x) and s(f(ξ), f(x)) decrease with the geometric distance and the gray-level difference between the two pixels, respectively.
In S2, feature point detection is performed on the image by using the edge-enhanced SURF (Canny-SURF) algorithm. The method specifically comprises the following steps:
s2.1: image feature points are detected using the SURF algorithm.
S2.1.1: Construct the Hessian matrix. The Hessian matrix of pixel point l(x, y) at scale σ can be expressed as:

H(l, σ) =
| Lxx(l, σ)   Lxy(l, σ) |
| Lxy(l, σ)   Lyy(l, σ) |

where L(x, σ) is the convolution of the second-order derivative of the Gaussian function g(σ) with the image at the pixel. The SURF algorithm replaces Gaussian filtering with box filtering, finally giving the Hessian matrix determinant:
det(H) = Lxx·Lyy − (ω·Lxy)²
whether the point is a characteristic point can be judged by solving the characteristic value of det (H).
S2.1.2: and constructing a scale space. The scale space is represented by a Gaussian pyramid, and the SURF algorithm gradually increases the size of the filter to construct the Gaussian pyramid on the premise of ensuring that the size of each group of images is not changed.
S2.1.3: Feature point positioning. After the pixel points are processed with the Hessian matrix, each candidate extremum is compared with the 26 pixels in its neighborhood across its own scale layer and the two adjacent layers, preliminarily determining the feature point positions. The feature points are then refined by interpolation, and inaccurately located or unstable points are screened out, leaving the feature points used for registration.
S2.1.4: Main direction assignment. A sector with a central angle of 60 degrees is rotated around the feature point at a fixed step, the horizontal and vertical Haar wavelet responses of the points inside each unit sector are summed, and the direction of the sector with the largest summed response is taken as the main direction of the feature point.
S2.2: To address problems of the SURF algorithm such as strong edge suppression and inaccurate localization, the Canny algorithm is used to detect edges independently.
S2.2.1: Gradient calculation. The gradient direction and magnitude of each pixel are determined from the finite differences Gx and Gy of the first-order partial derivatives in the x and y directions: G = √(Gx² + Gy²), θ = arctan(Gy / Gx).
s2.2.2: and (5) purifying the characteristic points. Firstly, non-maximum suppression is carried out, the amplitude G of the target pixel in the gradient direction theta is compared with the amplitudes of two pixels in the neighborhood, if the amplitude is maximum, the amplitude is retained, and if the amplitude is not maximum, the amplitude is discarded. The image is then segmented using a dual threshold method to remove false edges due to noise and gray scale variations.
S2.2.3: Lagging (hysteresis) edge tracking. In the 3 × 3 neighborhood of each weak-edge pixel, the pixel is retained if a strong-edge point is present and suppressed otherwise; through continued recursive tracking, the retained edges are finally all connected.
S2.3: and (4) combining the feature point sets obtained in the S2.1 and the S2.2 by using a Canny-SURF algorithm, and removing repeated feature points to complete feature detection.
Further, in S3, feature points detected in S2 are described by using a long-distance FREAK (L-FREAK) algorithm, and image features are extracted. The method comprises the following specific steps:
s3.1: and sampling the characteristic points. The sampling mode of the L-FREAK algorithm is to search sampling points in a circular area around the characteristic points in an exponential decreasing mode by taking the characteristic points as the circle center.
S3.2: Construct the feature descriptor. After sampling, intensity comparisons produce a binary string that forms the FREAK descriptor, denoted F:

F = Σ_{0 ≤ α < N} 2^α · T(Pα)

where N is the descriptor length and T(Pα) is defined on the sampling-point pair Pα as T(Pα) = 1 if I(Pα¹) − I(Pα²) > 0 and T(Pα) = 0 otherwise, where I(·) is the gray value of a sampling point after Gaussian blur.
Sampling in the manner of S3.1, each feature point corresponds to N(N − 1)/2 sampling-point pairs, which produces information redundancy and occupies a large amount of memory. A matrix is therefore constructed with one pair per column, and the first 512 columns, ranked by their column means, are taken as the feature descriptor.
S3.3: Determine the main direction based on long-distance feature-point pairs. Borrowing the long-distance point-pair idea of the BRISK algorithm, the L-FREAK algorithm uses only the direction information of 30 long-distance sampling-point pairs to determine the main direction. The calculation of the feature-point gradient O and the angle information θ can be expressed as:

O = (1/M) · Σ_{Po ∈ G} ( I(Po¹) − I(Po²) ) · (Po¹ − Po²) / ‖Po¹ − Po²‖,   θ = arctan(Oy / Ox)

where M is the number of selected point pairs, G is the set of point pairs, and Po carries the coordinate information of a sampling-point pair. FIG. 3 shows the extracted feature points.
S3.4: Coarse feature matching. For the binary descriptors generated in S3.3, the similarity between feature points of the reference image and the image to be spliced is measured by their Hamming distance, giving the coarse matching point pairs shown in FIG. 4.
Further, the feature matching of S4 refines the coarse matches of FIG. 4 with a KNN algorithm: for each feature, the nearest and second-nearest corresponding points in the image to be spliced are found, and the match is retained only when the second-nearest distance exceeds μ times the nearest distance, with the threshold μ = 2.
Further, S5 performs feature point refinement and transformation matrix parameter calculation using the RANSAC algorithm. The specific steps are as follows:
S5.1: The matrix H is the two-dimensional projective transformation matrix from the reference image to the image to be spliced:

H =
| m0  m1  m2 |
| m3  m4  m5 |
| m6  m7  1  |

so the perspective transformation from a feature point (x1, y1) to (x2, y2) is (x2, y2, 1)ᵀ ∝ H·(x1, y1, 1)ᵀ. There are 8 parameters to determine in total, so 4 matching point pairs are needed to solve for them; because mismatched points exist, purification must be performed first.
S5.2: First, 4 pairs of matching points are randomly selected (with no 3 of them collinear) and the transformation matrix H is solved. The positions of the remaining matching points in the image to be spliced are then computed with the obtained H, and d denotes the distance between each projected point and its actual corresponding point. A threshold D is set (D = 1 in this embodiment); a point with d < D is defined as an interior point. These steps are repeated many times, and the H corresponding to the set containing the most interior points is taken as the optimal coordinate transformation matrix, yielding the registration images shown in FIG. 5.
Further, the weighted average fusion method of S6 is specifically as follows:
The fused image f(x, y) can be expressed as:

f(x, y) = f1(x, y) in the region covered only by f1; a1·f1(x, y) + a2·f2(x, y) in the overlap region; f2(x, y) in the region covered only by f2

where a1 and a2 are the weights assigned to the pixel points of the two images (a1 + a2 = 1). As a1 varies gradually from 1 to 0 across the overlap region, the overlap area is fused smoothly and seamlessly, and the final panoramic aerial image is obtained.
The pictures used in the simulation experiments are all actual images shot by the unmanned aerial vehicle at a resolution of 4000 × 3000, as shown in FIG. 2; an overlapping scene exists between the two images of each group, and to test the robustness of the algorithm, the three groups target illumination, rotation and scale changes respectively. FIG. 6 shows the panoramic pictures obtained by stitching with the algorithm of the present invention; the splicing task is completed well, with smooth transitions and no obvious seam.
Compared with aerial image splicing based on the SIFT and SURF algorithms, the robustness comparisons are shown in Tables 1, 2 and 3; the comprehensive performance of the method, including splicing speed and robustness, is superior to that of the other two algorithms.
TABLE 1 characteristic points extraction time comparison(s)
TABLE 2 feature points matching time comparison(s)
TABLE 3 Algorithm robustness comparison (%)
The above description is only a preferred embodiment of the present invention and is not intended to limit it; the scope of the present invention is defined by the appended claims, and all structural changes that can be made using the contents of the description and the drawings of the present invention are intended to be embraced therein.
Claims (5)
1. An unmanned aerial vehicle aerial image splicing method for enhancing robustness is characterized by comprising the following steps:
s1: inputting two aerial images with overlapped areas, and preprocessing the two aerial images;
s2: processing the aerial images by using an edge-enhanced SURF feature detection algorithm, and detecting feature points of the aerial images to obtain feature points of the two aerial images;
s2.1: detecting image feature points by using a SURF algorithm;
s2.1.1: constructing a Hessian matrix; each feature point in the image has a specific Hessian matrix; if I(x, y) is a pixel point of the input image, the Hessian matrix of pixel I(x, y) at scale σ is expressed as:
H(I(x, y), σ) = [ Lxx(x, σ)  Lxy(x, σ) ; Lxy(x, σ)  Lyy(x, σ) ]
wherein the function L(x, σ) is the convolution of the second-order derivative of the Gaussian function g(σ) with the image at the pixel point;
the SURF algorithm replaces these Gaussian derivatives with box filters, and the Hessian matrix determinant finally becomes:
det(H) = LxxLyy - (ωLxy)^2
where ω is a weighting factor (approximately 0.9) that compensates for the box-filter approximation; whether a point is a candidate feature point is judged by evaluating det(H);
s2.1.2: constructing a scale space; under the premise of ensuring that the size of each group of images is not changed, the SURF algorithm gradually increases the size of the filter to construct a Gaussian pyramid;
s2.1.3: positioning the feature points; after the pixels are processed with the Hessian matrix, each obtained extremum is compared with the 26 pixels in its 3 x 3 x 3 scale-space neighborhood to preliminarily determine the positions of the feature points; the feature points are then purified by an interpolation (difference) operation to obtain the feature points used for registration;
s2.1.4: distributing the main direction of the characteristic points; determining the main direction of the characteristic point by counting the Haar wavelet characteristics in the circular area with the characteristic point as the center; specifically, taking a sector with a central angle of 60 degrees as a unit, rotating the sector for one circle at certain intervals, adding Haar wavelet response vectors of points in each unit sector in the vertical and horizontal directions, and taking the maximum as the main direction of a characteristic point;
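The Hessian-determinant response of S2.1.1 to S2.1.3 can be illustrated as follows (a numpy sketch that uses finite differences of a Gaussian-smoothed image in place of SURF's box filters; `sigma` and the weight 0.9 follow the usual SURF convention and are assumptions here):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def smooth(img, sigma):
    # Separable Gaussian smoothing (rows, then columns)
    k = gaussian_kernel(sigma, max(1, int(3 * sigma)))
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, tmp)

def det_hessian(img, sigma=1.2, w=0.9):
    # det(H) = Lxx*Lyy - (w*Lxy)^2, with w balancing the Lxy term
    L = smooth(img.astype(float), sigma)
    Ly, Lx = np.gradient(L)
    Lyy, _ = np.gradient(Ly)
    Lxy, Lxx = np.gradient(Lx)
    return Lxx * Lyy - (w * Lxy) ** 2
```

Candidate feature points are then the local maxima of this response across the scale stack, as described in S2.1.3.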
s2.2: performing edge detection by using a Canny algorithm;
s2.2.1: calculating the gradient; the gradient direction and magnitude of a pixel can be determined from the finite differences Gx and Gy of the first-order partial derivatives in the x and y directions:
G = sqrt(Gx^2 + Gy^2), θ = arctan(Gy / Gx)
s2.2.2: purifying the feature points; first, non-maximum suppression is carried out: the amplitude G of the target pixel is compared along the gradient direction θ with the amplitudes of its two neighboring pixels, and the target pixel is retained only if its amplitude is the maximum, otherwise it is discarded; then the image is segmented with a double-threshold method, removing false edges produced by noise and gray-level changes;
s2.2.3: hysteresis edge tracking; within the 3 x 3 neighborhood of a weak edge pixel, the pixel is retained if a strong edge point is present and suppressed otherwise; through continuous recursive tracking, the edges are finally all connected;
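The Canny steps of S2.2.1 to S2.2.3 can be sketched as follows (a simplified didactic numpy implementation; the gradient operator and thresholds are illustrative, not the patent's):

```python
import numpy as np
from collections import deque

def canny_like(img, low, high):
    # 1) gradients, 2) 4-direction non-maximum suppression,
    # 3) double threshold, 4) hysteresis via flood fill from strong edges
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = (np.rad2deg(np.arctan2(gy, gx)) + 180) % 180
    h, w = mag.shape
    nms = np.zeros_like(mag)
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = ang[i, j]
            if a < 22.5 or a >= 157.5:  n1, n2 = mag[i, j-1], mag[i, j+1]
            elif a < 67.5:              n1, n2 = mag[i-1, j+1], mag[i+1, j-1]
            elif a < 112.5:             n1, n2 = mag[i-1, j], mag[i+1, j]
            else:                       n1, n2 = mag[i-1, j-1], mag[i+1, j+1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                nms[i, j] = mag[i, j]
    strong = nms >= high
    weak = (nms >= low) & ~strong
    edges = strong.copy()
    q = deque(zip(*np.nonzero(strong)))
    while q:  # hysteresis: keep weak pixels connected to strong ones
        i, j = q.popleft()
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if 0 <= ni < h and 0 <= nj < w and weak[ni, nj] and not edges[ni, nj]:
                    edges[ni, nj] = True
                    q.append((ni, nj))
    return edges
```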
s2.3: combining the feature point sets obtained in S2.1 and S2.2 by using a Canny-SURF algorithm, and removing repeated feature points to finish feature detection;
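The duplicate removal when merging the two feature point sets in S2.3 might look like this (a hypothetical helper; the distance tolerance `tol` is an assumed parameter):

```python
def merge_feature_points(surf_pts, canny_pts, tol=1.0):
    # Union of the two detectors' points; a Canny point within `tol`
    # pixels of an already-kept point is treated as a duplicate.
    merged = list(surf_pts)
    for p in canny_pts:
        if all((p[0] - q[0])**2 + (p[1] - q[1])**2 > tol**2 for q in merged):
            merged.append(p)
    return merged
```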
s3: carrying out feature description on the detected feature points by using a long-distance L-FREAK algorithm, and extracting features of the aerial image;
s3.1: sampling characteristic points; the sampling mode of the L-FREAK algorithm is to search sampling points in a circular area around a characteristic point in an exponential decreasing mode by taking the characteristic point as a circle center;
s3.2: constructing the feature descriptor; after sampling is finished, a binary string is obtained by intensity comparison to generate the FREAK descriptor, denoted F:
F = Σ_{0 ≤ α < N} 2^α · T(Pα)
where α is the bit index, N is the desired descriptor length, Pα is a sampling point pair, and T(Pα) is a function of the sampling point pair, defined as follows:
T(Pα) = 1 if I(Pα^r1) > I(Pα^r2), and 0 otherwise
wherein I(Pα^r1) and I(Pα^r2) are the gray values of the two sampling points in Pα;
sampling in the manner of S3.1, each feature point corresponds to a large number of sampling point pairs, which produces information redundancy; a matrix of pair comparisons is therefore constructed, its columns are ranked by their mean values, and the top 512 columns are taken as the feature descriptor;
s3.3: determining the main direction based on long-distance feature point pairs; to improve the robustness of the algorithm to images with large rotations, the L-FREAK algorithm uses the direction information of only 30 long-distance sampling point pairs to determine the main direction; the feature point gradient O and the angle information θ are thus computed as:
O = (1/M) Σ_{Po ∈ G} (I(Po^r1) - I(Po^r2)) · (Po^r1 - Po^r2) / ||Po^r1 - Po^r2||
θ = arctan(Oy, Ox)
where M is the number of selected sampling point pairs, G is the set of pairs, Po is the coordinate information of a sampling point pair, and Po^r1 and Po^r2 are the positions of the two sampling points in Po;
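The main-direction computation of S3.3 can be illustrated with a small stdlib sketch (the sampling points and pairs below are hypothetical; the real L-FREAK uses 30 long-distance pairs around each feature point):

```python
import math

def main_orientation(intensities, positions, pairs):
    # Sum intensity-difference-weighted unit vectors over the selected
    # long-distance pairs, then take the angle of the mean vector O.
    ox = oy = 0.0
    for a, b in pairs:
        dx = positions[a][0] - positions[b][0]
        dy = positions[a][1] - positions[b][1]
        norm = math.hypot(dx, dy)
        w = (intensities[a] - intensities[b]) / norm
        ox += w * dx
        oy += w * dy
    m = len(pairs)
    return math.atan2(oy / m, ox / m)
```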
s3.4: coarse matching of features; for the binary descriptors generated in S3.3, the degree of similarity between feature points of the reference image and the image to be stitched is measured by the Hamming distance, giving a set of coarse matching point pairs;
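The descriptor construction of S3.2 and the Hamming-distance measure of S3.4 can be sketched together (the patch and the sampling point pairs are hypothetical):

```python
import numpy as np

def binary_descriptor(patch, pairs):
    # Bit alpha is 1 iff the gray value at the first sampling point of
    # pair alpha exceeds that at the second (pairs are (y1, x1, y2, x2))
    return np.array([1 if patch[y1, x1] > patch[y2, x2] else 0
                     for (y1, x1, y2, x2) in pairs], dtype=np.uint8)

def hamming(d1, d2):
    # Number of differing bits between two binary descriptors
    return int(np.count_nonzero(d1 != d2))
```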
s4: matching the extracted features of the aerial images based on a K nearest neighbor algorithm;
s5: purifying the matched feature point pairs with a random sample consensus (RANSAC) algorithm to improve the matching accuracy, while calculating the transformation matrix model parameters between the reference image and the image to be stitched;
s6: and processing the images to be spliced by using a coordinate transformation matrix, and performing weighted average fusion on the processed images to be spliced and the reference image in the same coordinate system by adopting a weighted average fusion algorithm to finish the aerial image splicing work.
2. The unmanned aerial vehicle aerial image stitching method for enhancing robustness as claimed in claim 1, wherein in S1 the preprocessing of the image is specifically: smoothing the input aerial image with a bilateral filter, whose expression is:
h(x) = k^{-1}(x) ∫∫ f(ξ) c(ξ, x) s(f(ξ), f(x)) dξ
wherein k represents a normalization factor, x and ξ are respectively the center pixel and a neighborhood pixel, f(ξ) is the gray value of the neighborhood pixel, and the weights c(ξ, x) and s(f(ξ), f(x)) decrease, respectively, with the geometric distance and the gray-level distance between the two pixels.
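The bilateral filter of claim 2 can be sketched with Gaussian closeness and similarity weights (a direct but slow numpy implementation; the Gaussian forms of c and s are the common choice and are assumed here, as are the parameter values):

```python
import numpy as np

def bilateral(img, radius=2, sigma_d=2.0, sigma_r=30.0):
    # c(xi, x): Gaussian in spatial distance; s(f(xi), f(x)): Gaussian in
    # gray-level difference; the combined weights are normalized per pixel.
    img = img.astype(float)
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    c = np.exp(-(ys**2 + xs**2) / (2.0 * sigma_d**2))
    pad = np.pad(img, radius, mode='edge')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            win = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            s = np.exp(-(win - img[i, j])**2 / (2.0 * sigma_r**2))
            wgt = c * s
            out[i, j] = (wgt * win).sum() / wgt.sum()
    return out
```

Because s() nearly vanishes across large gray-level jumps, edges are preserved while flat regions are smoothed.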
3. The unmanned aerial vehicle aerial image stitching method for enhancing robustness as claimed in claim 1, wherein in S4 the feature matching performs fine matching, with a KNN algorithm, on the coarse matching points obtained after the processing of S2 and S3; specifically, let A and B be the feature sets of the reference image and the image to be stitched; for any descriptor Ai in A, find the nearest descriptor Bi and the second-nearest descriptor Bj in B, with their distances denoted d1 and d2; if d2 > μ·d1, then Bi is considered the matching point of Ai; the threshold μ takes the value 2.
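The ratio-test matching of claim 3 can be sketched as follows (a brute-force numpy version with Euclidean distance; for the binary descriptors of S3 the distance would instead be the Hamming distance):

```python
import numpy as np

def knn_ratio_match(desc_a, desc_b, mu=2.0):
    # Accept Ai -> nearest Bi only if the second-nearest distance d2
    # exceeds mu * d1 (equivalently d1/d2 < 1/mu), with mu = 2.
    matches = []
    for i, da in enumerate(desc_a):
        d = np.linalg.norm(desc_b - da, axis=1)
        j1, j2 = np.argsort(d)[:2]
        if d[j2] > mu * d[j1]:
            matches.append((i, int(j1)))
    return matches
```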
4. The unmanned aerial vehicle aerial image splicing method for enhancing robustness as claimed in claim 1, wherein in S5 the matched feature points are purified with a random sample consensus algorithm and the transformation matrix parameters are calculated; the specific steps are as follows:
s5.1: let t1 = (x1, y1, 1) and t2 = (x2, y2, 1) be the homogeneous coordinates of the projections of the same scene point in the reference image and in the image to be stitched respectively; the projections satisfy the relation: t2 = H·t1;
wherein the matrix H is the two-dimensional projective transformation matrix from the reference image to the image to be stitched:
H = [ m0 m1 m2 ; m3 m4 m5 ; m6 m7 1 ]
i.e. the perspective transformation from feature point (x1, y1) to (x2, y2) is:
x2 = (m0·x1 + m1·y1 + m2) / (m6·x1 + m7·y1 + 1)
y2 = (m3·x1 + m4·y1 + m5) / (m6·x1 + m7·y1 + 1)
wherein m0, m1, m3 and m4 describe the scale and rotation of the picture, m2 is the horizontal displacement, m5 is the vertical displacement, and m6 and m7 are the deformations in the horizontal and vertical directions; there are 8 parameters to determine in total, so 4 matching point pairs are needed to solve them; since mismatched points exist, purification must be carried out first;
s5.2: firstly, 4 pairs of matching points are selected at random, any 3 of which must not be collinear, and the transformation matrix H is solved; then the positions of the remaining matching points in the image to be stitched are calculated with the obtained matrix H, and d denotes the distance between each projected point and its actual corresponding point; a threshold D is set, and if d < D the point pair is defined as an inlier; the above steps are repeated many times, and the H corresponding to the set containing the most inliers is taken as the optimal coordinate transformation matrix; the threshold D is generally chosen according to the particular problem and data set.
5. The unmanned aerial vehicle aerial image stitching method for enhancing robustness as claimed in claim 1, wherein the weighted average fusion method used in S6 specifically comprises: suppose f1(x, y) and f2(x, y) respectively represent the reference image and the image to be stitched after the S5 projective transformation; the fused image f(x, y) can then be represented as:
f(x, y) = f1(x, y) for (x, y) in f1 only; a1·f1(x, y) + a2·f2(x, y) for (x, y) in the overlap region; f2(x, y) for (x, y) in f2 only
wherein a1 and a2 are the weights assigned to the pixel points of the two images and satisfy a1 + a2 = 1 (a1, a2 > 0); a1 can be taken as the ratio of the distance from the pixel (x, y) to the overlap boundary to the width of the overlap region; as a1 varies gradually from 1 to 0 across the overlap, the overlap area is fused smoothly and seamlessly, and the final panoramic aerial image is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911338858.8A CN111080529A (en) | 2019-12-23 | 2019-12-23 | Unmanned aerial vehicle aerial image splicing method for enhancing robustness |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111080529A true CN111080529A (en) | 2020-04-28 |
Family
ID=70316904
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911338858.8A Withdrawn CN111080529A (en) | 2019-12-23 | 2019-12-23 | Unmanned aerial vehicle aerial image splicing method for enhancing robustness |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111080529A (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106940876A (en) * | 2017-02-21 | 2017-07-11 | 华东师范大学 | A kind of quick unmanned plane merging algorithm for images based on SURF |
CN109829853A (en) * | 2019-01-18 | 2019-05-31 | 电子科技大学 | A kind of unmanned plane image split-joint method |
Non-Patent Citations (2)
Title |
---|
ALEXANDRE ALAHI ET AL: "FREAK: Fast Retina Keypoint", 2012 IEEE Conference on Computer Vision and Pattern Recognition |
FAN PEIQI: "Research on Frame-type Infrared Image Stitching Technology", China Masters' Theses Full-text Database, Information Science and Technology Series |
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111627007B (en) * | 2020-05-27 | 2022-06-14 | 电子科技大学 | Spacecraft defect detection method based on self-optimization matching network image stitching |
CN111627007A (en) * | 2020-05-27 | 2020-09-04 | 电子科技大学 | Spacecraft defect detection method based on self-optimization matching network image stitching |
CN111882520A (en) * | 2020-06-16 | 2020-11-03 | 歌尔股份有限公司 | Screen defect detection method and device and head-mounted display equipment |
CN111882520B (en) * | 2020-06-16 | 2023-10-17 | 歌尔光学科技有限公司 | Screen defect detection method and device and head-mounted display equipment |
CN111754408A (en) * | 2020-06-29 | 2020-10-09 | 中国矿业大学 | High-real-time image splicing method |
CN111861866A (en) * | 2020-06-30 | 2020-10-30 | 国网电力科学研究院武汉南瑞有限责任公司 | Panoramic reconstruction method for substation equipment inspection image |
CN111915485A (en) * | 2020-07-10 | 2020-11-10 | 浙江理工大学 | Rapid splicing method and system for feature point sparse workpiece images |
CN111968035A (en) * | 2020-08-05 | 2020-11-20 | 成都圭目机器人有限公司 | Image relative rotation angle calculation method based on loss function |
CN111968035B (en) * | 2020-08-05 | 2023-06-20 | 成都圭目机器人有限公司 | Image relative rotation angle calculation method based on loss function |
CN112634130A (en) * | 2020-08-24 | 2021-04-09 | 中国人民解放军陆军工程大学 | Unmanned aerial vehicle aerial image splicing method under Quick-SIFT operator |
CN112037130A (en) * | 2020-08-27 | 2020-12-04 | 江苏提米智能科技有限公司 | Adaptive image splicing and fusing method and device, electronic equipment and storage medium |
CN112037130B (en) * | 2020-08-27 | 2024-03-26 | 江苏提米智能科技有限公司 | Self-adaptive image stitching fusion method and device, electronic equipment and storage medium |
CN112016484A (en) * | 2020-08-31 | 2020-12-01 | 深圳市赛为智能股份有限公司 | Plant disturbance evaluation method and device, computer equipment and storage medium |
CN112016484B (en) * | 2020-08-31 | 2024-04-05 | 深圳市赛为智能股份有限公司 | Plant invasion evaluation method, plant invasion evaluation device, computer equipment and storage medium |
CN112070754A (en) * | 2020-09-11 | 2020-12-11 | 武汉百家云科技有限公司 | Tunnel segment water leakage detection method and device, electronic equipment and medium |
CN112164043A (en) * | 2020-09-23 | 2021-01-01 | 苏州大学 | Method and system for splicing multiple fundus images |
CN112258391A (en) * | 2020-10-12 | 2021-01-22 | 武汉中海庭数据技术有限公司 | Fragmented map splicing method based on road traffic marking |
CN112308779A (en) * | 2020-10-29 | 2021-02-02 | 上海电机学院 | Image splicing method for power transmission line |
CN112288634A (en) * | 2020-10-29 | 2021-01-29 | 江苏理工学院 | Splicing method and device for aerial images of multiple unmanned aerial vehicles |
CN112750080A (en) * | 2021-01-12 | 2021-05-04 | 云南电网有限责任公司电力科学研究院 | Splicing method and device for power line pictures |
CN112926597A (en) * | 2021-03-01 | 2021-06-08 | 天地伟业技术有限公司 | MEMS in-plane dynamic characteristic analysis method based on SURF feature point matching method |
CN113096016A (en) * | 2021-04-12 | 2021-07-09 | 广东省智能机器人研究院 | Low-altitude aerial image splicing method and system |
CN113313692A (en) * | 2021-06-03 | 2021-08-27 | 广西大学 | Automatic banana young plant identification and counting method based on aerial visible light image |
CN113469924A (en) * | 2021-06-18 | 2021-10-01 | 汕头大学 | Rapid image splicing method capable of keeping brightness consistent |
CN113642397A (en) * | 2021-07-09 | 2021-11-12 | 西安理工大学 | Object length measuring method based on mobile phone video |
CN113642397B (en) * | 2021-07-09 | 2024-02-06 | 西安理工大学 | Object length measurement method based on mobile phone video |
CN113723465B (en) * | 2021-08-02 | 2024-04-05 | 哈尔滨工业大学 | Improved feature extraction method and image stitching method based on same |
CN113723465A (en) * | 2021-08-02 | 2021-11-30 | 哈尔滨工业大学 | Improved feature extraction method and image splicing method based on same |
CN113592929A (en) * | 2021-08-04 | 2021-11-02 | 北京优翼科科技有限公司 | Real-time splicing method and system for aerial images of unmanned aerial vehicle |
CN113469297A (en) * | 2021-09-03 | 2021-10-01 | 深圳市海邻科信息技术有限公司 | Image tampering detection method, device, equipment and computer readable storage medium |
CN113469297B (en) * | 2021-09-03 | 2021-12-14 | 深圳市海邻科信息技术有限公司 | Image tampering detection method, device, equipment and computer readable storage medium |
CN114041878A (en) * | 2021-10-19 | 2022-02-15 | 山东建筑大学 | Three-dimensional reconstruction method and system for CT image of bone joint replacement surgical robot |
CN114373153A (en) * | 2022-01-12 | 2022-04-19 | 北京拙河科技有限公司 | Video imaging optimization system and method based on multi-scale array camera |
CN114373153B (en) * | 2022-01-12 | 2022-12-27 | 北京拙河科技有限公司 | Video imaging optimization system and method based on multi-scale array camera |
CN114266703A (en) * | 2022-03-03 | 2022-04-01 | 凯新创达(深圳)科技发展有限公司 | Image splicing method and system |
CN115439424B (en) * | 2022-08-23 | 2023-09-29 | 成都飞机工业(集团)有限责任公司 | Intelligent detection method for aerial video images of unmanned aerial vehicle |
CN115439424A (en) * | 2022-08-23 | 2022-12-06 | 成都飞机工业(集团)有限责任公司 | Intelligent detection method for aerial video image of unmanned aerial vehicle |
CN115358930A (en) * | 2022-10-19 | 2022-11-18 | 成都菁蓉联创科技有限公司 | Real-time image splicing method and target detection method based on multiple unmanned aerial vehicles |
CN115358930B (en) * | 2022-10-19 | 2023-02-03 | 成都菁蓉联创科技有限公司 | Real-time image splicing method and target detection method based on multiple unmanned aerial vehicles |
CN115620181B (en) * | 2022-12-05 | 2023-03-31 | 海豚乐智科技(成都)有限责任公司 | Aerial image real-time splicing method based on mercator coordinate slices |
CN115620181A (en) * | 2022-12-05 | 2023-01-17 | 海豚乐智科技(成都)有限责任公司 | Aerial image real-time splicing method based on mercator coordinate slices |
CN117372893A (en) * | 2023-02-03 | 2024-01-09 | 河海大学 | Flood disaster assessment method based on improved remote sensing image feature matching algorithm |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111080529A (en) | Unmanned aerial vehicle aerial image splicing method for enhancing robustness | |
CN110097093B (en) | Method for accurately matching heterogeneous images | |
CN104200461B (en) | The remote sensing image registration method of block and sift features is selected based on mutual information image | |
CN110992263B (en) | Image stitching method and system | |
JP5703312B2 (en) | Efficient scale space extraction and description of feature points | |
CN106940876A (en) | A kind of quick unmanned plane merging algorithm for images based on SURF | |
CN104809731B (en) | A kind of rotation Scale invariant scene matching method based on gradient binaryzation | |
CN110969669B (en) | Visible light and infrared camera combined calibration method based on mutual information registration | |
Uchiyama et al. | Toward augmenting everything: Detecting and tracking geometrical features on planar objects | |
TW201926244A (en) | Real-time video stitching method | |
CN108229500A (en) | A kind of SIFT Mismatching point scalping methods based on Function Fitting | |
CN110569861A (en) | Image matching positioning method based on point feature and contour feature fusion | |
CN112614167A (en) | Rock slice image alignment method combining single-polarization and orthogonal-polarization images | |
Lee et al. | Accurate registration using adaptive block processing for multispectral images | |
CN107967477A (en) | A kind of improved SIFT feature joint matching process | |
CN114331879A (en) | Visible light and infrared image registration method for equalized second-order gradient histogram descriptor | |
Zhang et al. | Automatic crack inspection for concrete bridge bottom surfaces based on machine vision | |
CN111127353A (en) | High-dynamic image ghost removing method based on block registration and matching | |
Zhao et al. | MOCC: A fast and robust correlation-based method for interest point matching under large scale changes | |
Cai et al. | Feature detection and matching with linear adjustment and adaptive thresholding | |
CN103336964A (en) | SIFT image matching method based on module value difference mirror image invariant property | |
CN106651756B (en) | Image registration method based on SIFT and verification mechanism | |
Hong et al. | Image mosaic based on surf feature matching | |
CN113409369A (en) | Multi-mode remote sensing image registration method based on improved RIFT | |
CN111754402A (en) | Image splicing method based on improved SURF algorithm |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | Application publication date: 20200428 |