CN114897705A - Unmanned aerial vehicle remote sensing image splicing method based on feature optimization - Google Patents
Unmanned aerial vehicle remote sensing image splicing method based on feature optimization
- Publication number
- CN114897705A CN114897705A CN202210721911.8A CN202210721911A CN114897705A CN 114897705 A CN114897705 A CN 114897705A CN 202210721911 A CN202210721911 A CN 202210721911A CN 114897705 A CN114897705 A CN 114897705A
- Authority
- CN
- China
- Prior art keywords
- image
- feature
- points
- matching
- remote sensing
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4038—Image mosaicing, e.g. composing plane images from plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/20—Image enhancement or restoration using local operators
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/70—Denoising; Smoothing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/13—Edge detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
- G06T7/337—Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20112—Image segmentation details
- G06T2207/20164—Salient point detection; Corner detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses an unmanned aerial vehicle remote sensing image splicing method based on feature optimization, which comprises the following steps: acquire two unmanned aerial vehicle remote sensing images with an overlapped area and carry out preprocessing operations such as image filtering and enhancement; extract features from the preprocessed experimental images using Harris corner detection, perform feature selection on the extracted Harris corner features, generate SIFT feature descriptors, and match them using the Euclidean metric as the matching similarity criterion; remove mismatched pairs with the random sample consensus (RANSAC) algorithm to improve matching accuracy; and fuse the overlapped part with a gradual-in gradual-out image fusion algorithm, ensuring a smooth transition in the overlapped area and generating the final stitched image. The method combines the advantages of the SIFT feature extraction algorithm and the Harris corner detection algorithm, provides a new improved feature detection and matching algorithm, and further optimizes the image splicing effect.
Description
Technical Field
The invention belongs to the field of unmanned aerial vehicle remote sensing image information acquisition and application, relates to unmanned aerial vehicle remote sensing and image splicing technologies, and particularly relates to an unmanned aerial vehicle remote sensing image splicing method based on feature optimization.
Background
Remote sensing technology has rapidly gained wide use worldwide since its emergence in the 1960s. Compared with other platforms or systems for acquiring spatial remote sensing information, the unmanned aerial vehicle remote sensing system has the advantages of automation, intelligence and specialization, can acquire spatial remote sensing information rapidly, and offers low cost, low loss and small risk. Its application field has expanded from early uses such as weather, mapping and traffic to the military field and the handling of emergencies. In image information acquisition, traditional equipment such as aerial photography and satellite photography platforms has always been limited by large size, inflexibility and high cost, and unmanned aerial vehicle remote sensing makes up for exactly these shortcomings.
At the beginning of the 21st century, the development of unmanned aerial vehicle remote sensing technology entered a new stage with quite broad prospects. However, unmanned aerial vehicle remote sensing also has factors limiting its development: it is constrained by the flight height of the unmanned aerial vehicle and the viewing angle of the camera. The sequence pictures shot by the unmanned aerial vehicle cover only a small field of view, and sometimes hundreds of pictures must be shot to obtain global geographic image information, which brings great trouble to subsequent information processing. To solve this problem, image stitching technology plays an important role. Image stitching means overlapping two or more sequence images according to their common parts to obtain a new composite image; the stitched image not only facilitates observation of the overall effect of the area but also keeps the detail information of the original images, so that aerial photography information can be recognized and grasped accurately, macroscopically and in time. Therefore, how to achieve high-quality splicing of massive image data has gradually become one of the hot problems in the field of unmanned aerial vehicle remote sensing image splicing.
Feature-based methods are currently the most widely applied in the field of image stitching. Chris Harris and Mike Stephens proposed the famous Harris corner detection algorithm, which extracts corners through an autocorrelation function so that they are invariant to translation and rotation; its registration precision can reach sub-pixel level. M. Brown and David G. Lowe proposed the classic feature point detection algorithm based on the Scale Invariant Feature Transform (SIFT) and realized digital image stitching on its basis. Because of the excellent characteristics of the SIFT operator, the algorithm is invariant to transformations of the image such as rotation, scaling and translation. This algorithm renewed research interest in digital image stitching, and it remains a research hotspot in the field of image registration today. Bay drew on the idea of simplifying approximation in the SIFT algorithm, introduced the integral image and an approximate simplification of the Gaussian second-order differential template, and proposed the SURF (Speeded Up Robust Features) algorithm. Compared with the SIFT algorithm, SURF is about 3 times faster but produces more mismatched points.
Although image stitching technology started relatively late in China, great progress has still been achieved through years of effort and exploration. Wang Rui et al. proposed a semi-automatic image registration algorithm and realized registration of high-precision images. Zhang Zhangfu et al. proposed an integral matching technique based on multi-level image probability relaxation, which mainly accomplishes fast registration of images from different sensors and with different resolutions. Feng Bo et al. of Dalian University of Technology provided an image stitching method based on similar curves; the algorithm optimizes the matching strategy and reduces the amount of calculation. Zhao Zhangyang and Du Limin proposed an image splicing algorithm based on Harris corner detection, adopting a robust transformation estimation technique to improve the robustness of the algorithm. Pan Shu of China University of Geosciences processed images in the IDL language, carried out feature extraction with the SIFT algorithm, and finally performed image splicing by the least-squares method. Chen Cheng et al. spliced unmanned aerial vehicle remote sensing images using the SIFT algorithm and obtained good experimental results. Therefore, combining the SIFT feature extraction algorithm with the Harris corner detection algorithm for research is of great significance.
Disclosure of Invention
Aiming at the problem of image splicing of unmanned aerial vehicle remote sensing images in image acquisition, the invention provides a method for splicing unmanned aerial vehicle remote sensing images based on feature optimization, which makes full use of feature point information of the unmanned aerial vehicle remote sensing images, improves the accuracy and stability of feature matching and further optimizes the image splicing effect.
In order to achieve the purpose, the invention provides the following technical scheme:
an unmanned aerial vehicle remote sensing image splicing method based on feature optimization comprises the following steps:
s1: input two unmanned aerial vehicle remote sensing images of size m×n with an overlapped area, the reference image alpha and the registration image beta;
s2: carrying out preprocessing operations such as image filtering, enhancement and the like on the original unmanned aerial vehicle remote sensing image;
s3: carrying out Harris angular point detection on the preprocessed unmanned aerial vehicle remote sensing image to obtain image characteristic points;
s4: perform feature selection on the obtained image features, search for the optimal features, and eliminate irrelevant or redundant features, so as to reduce the number of features, improve matching precision and reduce running time;
s5: generating a SIFT descriptor with scale invariance by using the image feature points obtained in the S4;
s6: calculating Euclidean distance between two groups of descriptors by taking Euclidean metric as a matching criterion, and matching two descriptors with the closest distance;
s7: removing the part with wrong pairing by using a random sampling consistency algorithm, namely RANSAC algorithm, and improving the matching accuracy;
s8: and performing image fusion on the processed image fusion part by using a gradual-in and gradual-out image fusion algorithm, ensuring that the overlapped area realizes smooth transition, and generating a final splicing effect image.
Further, the specific operations of performing feature selection, finding an optimal feature, and removing irrelevant or redundant features on the obtained image features in step S4 are as follows:
and selecting key points similar to SIFT, and selecting characteristic points with stable properties and containing more information in the image, wherein the points are usually extreme positions. In order to correctly find the feature point at the extreme position, the feature region is divided, the feature point of each region in the image needs to be compared with all its neighboring points, and if it is larger than the surrounding S feature points or smaller than the surrounding S feature points, the point is an extreme point. If the value is the maximum value or the minimum value, it is preferably a local feature point.
Further, in the step S5, the SIFT feature descriptor with scale invariance is generated by using the image feature points obtained in the step S4, and the specific operations are as follows:
The gradient modulus and direction at each preferred feature point are computed from the Gaussian-smoothed image L as

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² )
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )

where m(x, y) represents the modulus of the pixel at that location, θ(x, y) indicates the gradient direction at that location, and the scale of L is consistent with the scale of each preferred feature point. In actual operation, the gradient directions and gradient amplitudes of all pixels within a circle centred on the preferred feature point, with radius 1.5 times the scale of the Gaussian image in which the feature point lies, are counted, and Gaussian weighting with σ equal to 1.5 times that scale is applied. The gradient histogram covers a range of 360 degrees with a total of 36 bins of 10 degrees each, and the direction of the feature point appears as the peak of the histogram. In addition, a feature point does not necessarily have only one direction; it may have multiple directions, so multiple feature descriptors can be generated, but there is only one main direction and the others are auxiliary directions. The SIFT descriptor with scale invariance is constructed from 4 parameters: the two-dimensional position information x and y, the scale, and the dominant direction; the generated SIFT descriptor is finally a 128-dimensional vector.
Further, step S6 uses euclidean metric as a matching criterion to calculate the euclidean distance between two sets of feature descriptors, and matches the two descriptors with the closest distance, which specifically operates as follows:
The Euclidean distance between descriptors is

d(X, Y) = sqrt( Σ_{i=1}^{n} (x_i − y_i)² )

where d(X, Y) represents the Euclidean distance of the descriptors, n represents the descriptor dimension, the coordinates (x_1, …, x_n) represent the position of a descriptor of the reference image alpha, and (y_1, …, y_n) represent the position of a descriptor of the registration image beta. The two descriptors with the smallest Euclidean distance are matched, completing feature matching one by one.
Further, step S7 uses a random sample consensus algorithm, i.e., RANSAC algorithm, to remove the part with pairing errors, so as to improve the matching accuracy, and the specific steps are as follows:
a1: randomly extract 4 pairs of matching points from the experimental data (no 3 of which are collinear) to calculate the parameters of the geometric transformation matrix;
a2: apply the geometric transformation matrix to the matching point pairs; matched points whose point-pair distance is smaller than an empirical threshold are 'inner points', otherwise 'outer points';
a3: scan all matched point pairs; if the number of inner points of the geometric transformation matrix is not less than the threshold d, recalculate the transformation matrix parameters by the least-squares method; otherwise start again from the first step, finishing when the given number of repetitions k is reached;
a4: evaluate the transformation matrices obtained over the iterations according to the inner points and the transformation-matrix parameter error rate, and select the point set with the largest number of inner points obtained after sampling as the final sample set;
a5: recalculate the transformation model with the final inner-point sample set and take it as the final model.
Further, in step S8, the image fusion is performed by using a gradual-in and gradual-out image fusion algorithm to ensure that the overlapped area realizes smooth transition, and a final stitching effect map is generated, which specifically includes the following steps:
the two images are first superimposed spatially, and the superimposed pixels can be represented as
In formula (3)、As a function of the image(s),、representing the occupied weight, the parameter being determined by the overlap ratio of the images. At the overlapping portion of the first and second portions,will gradually decrease from 1 to 0Will gradually increase from 0 to 1, and ensure to realize the effect of the mixed gas in the overlapped partMake a smooth transition to。Andas shown in equation (4), the abscissa of the current pixel is set toThe coordinates of the boundary at both ends of the common part areAndand is and,,。
Compared with the prior art, the invention has the following beneficial effects: the invention provides a novel unmanned aerial vehicle remote sensing image splicing method based on feature optimization, which performs feature extraction and feature selection on the preprocessed experimental images using Harris corner detection, matches features using the Euclidean metric as the matching similarity criterion, removes mismatched pairs with the random sample consensus algorithm, and fuses the images with a gradual-in gradual-out image fusion algorithm. This improves matching precision, real-time performance and stability, avoids the defect that the Harris feature extraction operator is sensitive to scale transformation and lacks scale invariance, and achieves better detection efficiency and matching accuracy than either the Harris corner detection splicing method or the SIFT feature detection splicing method alone.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a flow chart of image stitching according to the present invention;
FIG. 3 is the reference image with an overlapping region of about 20% in the present invention;
FIG. 4 is the image to be spliced with an overlapping region of about 20% in the present invention;
FIG. 5 is a graph of the improved algorithm stitching results of the present invention;
FIG. 6 is a graph of SIFT feature detection stitching results in the present invention;
fig. 7 is a diagram of the results of Harris corner detection stitching in the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to the attached figure 1, the specific implementation steps of the invention are as follows:
step 1: input two unmanned aerial vehicle remote sensing images of size m×n with an overlapped area, the reference image alpha and the registration image beta, and carry out preprocessing operations such as image filtering and enhancement;
(1) First, the two unmanned aerial vehicle remote sensing images are imported as a whole, and the reference image alpha and the registration image beta are determined. Each image matrix has m×n elements, where m and n are the numbers of pixels of the image in the vertical and horizontal directions of the two-dimensional plane, as shown in FIGS. 3 and 4. FIGS. 3 and 4 were obtained with a UX5 HP unmanned aerial vehicle remote sensing system integrated with POS, at a flight height of 100 m, with a spatial resolution of 0.1 m and an image size of 400×600 pixels.
(2) And carrying out Gaussian filtering on the read image data to remove Gaussian noise.
Step 2: carrying out Harris angular point detection on the preprocessed unmanned aerial vehicle remote sensing image to obtain image characteristic points;
harris corner detection utilizes a moving window to calculate a gray level change value in an image, wherein the main process comprises the steps of converting the image into a gray level image, calculating a difference image, smoothing Gaussian, calculating a local extreme value and confirming a corner, so that image feature points are obtained;
and according to the algorithm idea, constructing a mathematical model and calculating the gray difference of the moving window. The gray scale change of the moving window can be expressed by the following formula:
for some small amounts of movement, the formula can be expressed as:
wherein the W function is used as a window function, and the M matrix is a partial derivative matrix.
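The windowed gray-level change above leads to a per-pixel corner response map. A minimal NumPy sketch is given below, using the standard Harris response R = det(M) − k·trace(M)² with k ≈ 0.04 (a conventional value, not specified in the text) and a fixed 3×3 weighted window as w(x, y):

```python
import numpy as np

def harris_response(img, k=0.04):
    """Per-pixel Harris corner response R = det(M) - k * trace(M)**2.

    img is a 2-D float grayscale array; the window function w(x, y) is a
    fixed 3x3 weighted sum approximating a small Gaussian.
    """
    Iy, Ix = np.gradient(img)              # image derivatives I_y, I_x

    def window(a):                         # 3x3 weighted window w(x, y)
        kern = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]], float) / 16.0
        p = np.pad(a, 1)
        out = np.zeros_like(a)
        for di in range(3):
            for dj in range(3):
                out += kern[di, dj] * p[di:di + a.shape[0], dj:dj + a.shape[1]]
        return out

    # Entries of the matrix M, smoothed by the window function
    Sxx, Syy, Sxy = window(Ix * Ix), window(Iy * Iy), window(Ix * Iy)
    det = Sxx * Syy - Sxy * Sxy
    trace = Sxx + Syy
    return det - k * trace * trace
```

A large positive R marks a corner, a strongly negative R an edge, and |R| near zero a flat region, which is why the corner-confirmation step thresholds this response.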
Step 3: carry out feature selection on the obtained image features and search for the optimal features;
Similar to SIFT key point selection, feature points with stable properties that contain more information are chosen from the image, and such points are usually at extreme positions. To correctly find a feature point at an extreme position, the feature region is divided, and the feature point of each region in the image is compared with all of its neighboring points; if it is larger than all S surrounding feature points, or smaller than all of them, the point is an extreme point. Such a maximum or minimum point is taken as a preferred local feature point.
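The extremum test described above can be sketched in NumPy as follows. The neighbourhood size S = 8 (the 3×3 region) is an assumption, since the text leaves S unspecified:

```python
import numpy as np

def select_extreme_points(response):
    """Keep only strict local extrema of a feature-response map.

    A pixel is a preferred feature point when it is larger than all S
    surrounding values or smaller than all of them; here S = 8, the
    3x3 neighbourhood.
    """
    m, n = response.shape
    p = np.pad(response, 1, mode="edge")   # edge padding suppresses border hits
    shifts = [p[di:di + m, dj:dj + n]
              for di in range(3) for dj in range(3) if (di, dj) != (1, 1)]
    stack = np.stack(shifts)               # the 8 neighbours, shape (8, m, n)
    is_max = response > stack.max(axis=0)
    is_min = response < stack.min(axis=0)
    ys, xs = np.nonzero(is_max | is_min)
    return list(zip(ys.tolist(), xs.tolist()))
```

Keeping both maxima and minima matches the text's rule that a point qualifies whether it exceeds all its neighbours or falls below all of them.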
Step 4: generate a SIFT descriptor with scale invariance using the image feature points obtained by Harris corner detection;
The gradient modulus and direction at each preferred feature point are computed from the Gaussian-smoothed image L as

m(x, y) = sqrt( (L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))² ),
θ(x, y) = arctan( (L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)) )    (4)

In formula (4), m(x, y) means the modulus of the pixel at that location, θ(x, y) means the gradient direction at that location, and the scale of L is consistent with the scale of each preferred feature point. In actual operation, the gradient directions and gradient amplitudes of all pixels within a circle centred on the preferred feature point, with radius 1.5 times the scale of the Gaussian image in which the feature point lies, are counted, and Gaussian weighting with σ equal to 1.5 times that scale is applied. The gradient histogram covers 360 degrees with 36 bins of 10 degrees each, and the direction of the feature point appears as the peak of the histogram. In addition, a feature point does not necessarily have only one direction; it may have multiple directions, so multiple feature descriptors can be generated, but there is only one main direction and the others are auxiliary directions. The SIFT descriptor with scale invariance is constructed from 4 parameters: the two-dimensional position information x and y, the scale, and the dominant direction; the generated SIFT descriptor is finally a 128-dimensional vector.
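The orientation assignment described above can be sketched in NumPy. The finite-difference gradient and the 36-bin histogram follow the standard SIFT formulation; the Gaussian weighting of the samples is omitted here for brevity:

```python
import numpy as np

def orientation_histogram(L, cy, cx, radius):
    """36-bin (10 degrees each) gradient-orientation histogram around a
    preferred feature point, each sample weighted by its modulus

        m(x, y) = sqrt((L(x+1,y) - L(x-1,y))**2 + (L(x,y+1) - L(x,y-1))**2)

    L is the Gaussian-smoothed image at the feature point's scale.
    """
    hist = np.zeros(36)
    h, w = L.shape
    for y in range(max(1, cy - radius), min(h - 1, cy + radius + 1)):
        for x in range(max(1, cx - radius), min(w - 1, cx + radius + 1)):
            if (y - cy) ** 2 + (x - cx) ** 2 > radius ** 2:
                continue                          # outside the circle
            dx = L[y, x + 1] - L[y, x - 1]        # horizontal difference
            dy = L[y + 1, x] - L[y - 1, x]        # vertical difference
            theta = np.degrees(np.arctan2(dy, dx)) % 360.0
            hist[int(theta // 10) % 36] += np.hypot(dx, dy)
    return hist

def dominant_direction(hist):
    """Main direction of the feature point: the centre of the peak bin."""
    return int(np.argmax(hist)) * 10 + 5
```

Secondary peaks of the histogram would give the auxiliary directions mentioned in the text; only the single dominant peak is returned here.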
Step 5: calculate the Euclidean distance between the two groups of feature descriptors using the Euclidean metric as the matching criterion, and match the two descriptors with the closest distance;
The Euclidean distance between descriptors is

d(X, Y) = sqrt( Σ_{i=1}^{n} (x_i − y_i)² )    (5)

In formula (5), d(X, Y) represents the Euclidean distance of the descriptors, n represents the descriptor dimension, the coordinates (x_1, …, x_n) represent the position of a descriptor of the reference image alpha, and (y_1, …, y_n) represent the position of a descriptor of the registration image beta. The two descriptors with the smallest Euclidean distance are matched, completing feature matching one by one.
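A brute-force nearest-neighbour matcher implementing this criterion, sketched with NumPy (descriptor arrays of shape (count, dimension) are assumed):

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Nearest-neighbour matching under the Euclidean metric
    d(X, Y) = sqrt(sum_i (x_i - y_i)**2).

    desc_a: descriptors of the reference image alpha, shape (na, n).
    desc_b: descriptors of the registration image beta, shape (nb, n).
    Returns (index_in_a, index_in_b, distance) triples.
    """
    desc_a = np.asarray(desc_a, float)
    desc_b = np.asarray(desc_b, float)
    matches = []
    for i, x in enumerate(desc_a):
        d = np.sqrt(((desc_b - x) ** 2).sum(axis=1))   # distances to all of beta
        j = int(np.argmin(d))                          # closest descriptor
        matches.append((i, j, float(d[j])))
    return matches
```

For 128-dimensional SIFT descriptors this is O(na·nb·n); the mismatches such a simple nearest-neighbour rule produces are exactly what the RANSAC step that follows is meant to remove.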
Step 6: removing the part with pairing error by using a random sampling consistency algorithm, namely RANSAC algorithm, and improving the matching accuracy;
(1) Randomly extract 4 pairs of matching points from the experimental data (no 3 of which are collinear) to calculate the parameters of the geometric transformation matrix;
(2) apply the geometric transformation matrix to the matching point pairs; matched points whose point-pair distance is smaller than an empirical threshold are 'inner points', otherwise 'outer points';
(3) scan all matched point pairs; if the number of inner points of the geometric transformation matrix is not less than the threshold d, recalculate the transformation matrix parameters by the least-squares method; otherwise start again from the first step, finishing when the given number of repetitions k is reached;
(4) evaluate the transformation matrices obtained over the iterations according to the inner points and the transformation-matrix parameter error rate, and select the point set with the largest number of inner points obtained after sampling as the final sample set;
(5) recalculate the transformation model with the final inner-point sample set and take it as the final model.
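The control flow of steps (1)-(5) can be sketched as follows. To keep the example self-contained it estimates a simple 2-D translation rather than the full geometric transformation matrix (an assumed simplification), but the random sampling, the inner-point test against an empirical threshold, and the final least-squares refit mirror the steps above:

```python
import random
import numpy as np

def ransac_translation(pts_a, pts_b, threshold=1.0, iterations=100, seed=0):
    """RANSAC sketch: estimate a 2-D translation from matched point pairs.

    Each iteration samples a candidate model, counts the inner points
    whose residual is below the empirical threshold, and keeps the
    largest inner-point set; the final model is refit by least squares.
    """
    rng = random.Random(seed)
    pts_a = np.asarray(pts_a, float)
    pts_b = np.asarray(pts_b, float)
    best_inliers = np.zeros(len(pts_a), dtype=bool)
    for _ in range(iterations):
        i = rng.randrange(len(pts_a))
        t = pts_b[i] - pts_a[i]                      # candidate model
        resid = np.linalg.norm(pts_a + t - pts_b, axis=1)
        inliers = resid < threshold                  # the 'inner points'
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Final model: least squares (here the mean) over the inner-point set
    t_final = (pts_b[best_inliers] - pts_a[best_inliers]).mean(axis=0)
    return t_final, best_inliers
```

For the homography used in the actual method, the candidate model would instead be fit from the 4 sampled pairs and the residual measured after projective warping; the surrounding loop is unchanged.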
Step 7: fuse the images with the gradual-in gradual-out image fusion algorithm to ensure a smooth transition in the overlapped area, and generate the final splicing effect graph as shown in FIG. 5;
the two images are first superimposed spatially, and the superimposed pixels can be represented as
In the formula (6)、As a function of the image(s),、representing the occupied weight, the parameter being determined by the overlap ratio of the images. At the overlapping portion of the first and second portions,will gradually decrease from 1 to 0Will gradually increase from 0 to 1, and ensure to realize the effect of the mixed gas in the overlapped partMake a smooth transition to。Andfrom equation (7), the abscissa of the current pixel is set toThe coordinates of the boundary at both ends of the common part areAndand is made of,,。
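The weighting of formulas (6) and (7) can be sketched for two aligned grayscale images (a NumPy illustration; xl and xr denote the overlap boundaries):

```python
import numpy as np

def fade_blend(img1, img2, xl, xr):
    """Gradual-in gradual-out fusion of two aligned images.

    Outside the overlap [xl, xr] the output copies img1 (left side) or
    img2 (right side); inside it, f = w1*f1 + w2*f2 with
        w1 = (xr - x) / (xr - xl),  w2 = (x - xl) / (xr - xl),
    so w1 falls from 1 to 0, w2 rises from 0 to 1, and w1 + w2 = 1.
    """
    w = img1.shape[1]
    x = np.arange(w, dtype=float)
    w1 = np.clip((xr - x) / float(xr - xl), 0.0, 1.0)  # clip handles non-overlap
    w2 = 1.0 - w1
    return w1[None, :] * img1 + w2[None, :] * img2
```

In practice img2 would be the registration image after warping by the RANSAC-estimated transformation, so that the two inputs share the same coordinate frame.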
Step 8: evaluate the image splicing effect;
in order to verify the effectiveness and reliability of the method, after the mosaic image of the improved algorithm is realized, image mosaic of the SIFT feature detection operator and the Harris feature detection operator is carried out on the feature extraction operator. The realization effect of the mosaic obtained by different methods is contrasted and analyzed, and the effectiveness and the reliability of the improved algorithm are demonstrated from the subjective aspect and the objective aspect. First, a comparison experiment was designed for the feature extraction capabilities of the SIFT feature detection operator and the Harris feature detection operator, and preliminary comparison was performed, as shown in table 1. To provide a referable evaluation criterion for subjective evaluation, the CCIR500-1 subjective evaluation criterion specified by the international radio counseling council was used, as shown in table 2. In terms of objective evaluation, the reference evaluation criterion is peak signal-to-noise ratio (PSNR), and the calculation steps are as follows:
First, given a reference image I and a stitched result image K, both of size m×n, with i and j denoting the horizontal and vertical pixel coordinates, the mean square error (MSE) is defined as

MSE = (1 / (m·n)) · Σ_{i=0}^{m−1} Σ_{j=0}^{n−1} [I(i, j) − K(i, j)]²

PSNR (dB) is then expressed as

PSNR = 10 · log10(MAX_I² / MSE)

where MAX_I is the maximum possible pixel value of the image, typically 255. If a pixel value is represented by a B-bit binary number, MAX_I = 2^B − 1.
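The MSE/PSNR computation above can be sketched as follows (a minimal NumPy illustration; the function and variable names are not from the patent):

```python
import numpy as np

def psnr(reference, result, max_i=255.0):
    """Peak signal-to-noise ratio (dB) between two m-by-n images."""
    ref = reference.astype(np.float64)
    out = result.astype(np.float64)
    mse = np.mean((ref - out) ** 2)     # mean square error over all pixels
    if mse == 0:
        return float("inf")             # identical images
    return 10.0 * np.log10(max_i ** 2 / mse)
```

A larger PSNR means the stitched image is closer to the reference, which is how Table 3 is read.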
The image stitching results of the single feature extraction operators, the SIFT feature detection operator and the Harris feature detection operator, are shown in Figures 6 and 7. The PSNR of each stitched image relative to the reference image is calculated and compared, thereby measuring the similarity between the stitched image and the original image: the larger the value, the closer the stitched image is to the original image, the smaller the distortion, and the better the stitching result, as shown in Table 3. As can be seen from the specific examples, compared with the other two methods, the method provided by the invention improves the PSNR value; that is, its stitching result is the most similar to the original image, and it achieves a better stitching effect.
TABLE 1 preliminary comparison of SIFT and Harris algorithms
TABLE 2 CCIR500-1 subjective evaluation five-grade criteria
TABLE 3 PSNR comparison
Claims (6)
1. An unmanned aerial vehicle remote sensing image splicing method based on feature optimization is characterized by comprising the following steps:
S1: input two unmanned aerial vehicle remote sensing images of size m×n with an overlapping area: the reference image α and the registration image β;
S2: perform preprocessing such as filtering and enhancement on the original unmanned aerial vehicle remote sensing images;
S3: perform Harris corner detection on the preprocessed unmanned aerial vehicle remote sensing images to obtain image feature points;
S4: perform feature selection on the obtained image features, searching for optimal features and removing irrelevant or redundant features, so as to reduce the number of features, improve matching precision, and reduce running time;
S5: generate SIFT descriptors with scale invariance from the preferred image features of S4;
S6: taking the Euclidean metric as the matching criterion, calculate the Euclidean distance between the two groups of descriptors and match the two descriptors with the closest distance;
S7: remove wrongly paired matches using the random sample consensus (RANSAC) algorithm to improve matching accuracy;
S8: perform image fusion on the overlapping part of the processed images using a gradual-in, gradual-out image fusion algorithm, ensuring a smooth transition in the overlapping area and generating the final stitched image.
2. The unmanned aerial vehicle remote sensing image stitching method based on feature optimization according to claim 1, wherein the specific operations of feature selection, optimal feature search and irrelevant or redundant feature elimination for the obtained image features in the step S4 are as follows:
key points are selected in a manner similar to SIFT: feature points with stable properties that contain more information in the image are chosen, and these points are usually at extreme positions;
in order to correctly find the feature points at extreme positions, the feature areas are divided, and the candidate feature point of each area in the image is compared with all of its adjacent points; if it is larger than all S surrounding points or smaller than all S surrounding points, it is an extreme point;
if the value is a local maximum or minimum, the point is retained as a preferred local feature point.
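The neighbourhood comparison described in claim 2 can be sketched as follows (a minimal NumPy illustration for S = 8 in-plane neighbours; the function name is an assumption, not from the patent):

```python
import numpy as np

def is_local_extremum(img, i, j):
    """Return True if pixel (i, j) is strictly larger or strictly smaller
    than all 8 surrounding pixels, i.e. an extreme point in the sense of
    claim 2. (i, j) must be an interior pixel."""
    center = img[i, j]
    patch = img[i - 1:i + 2, j - 1:j + 2]
    neighbours = np.delete(patch.ravel(), 4)   # drop the centre itself
    return bool(np.all(center > neighbours) or np.all(center < neighbours))
```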
3. The unmanned aerial vehicle remote sensing image stitching method based on feature optimization according to claim 1, wherein in the step S5, SIFT feature descriptors with scale invariance are generated by using the image feature points obtained in the step S4, and the specific operations are as follows:
the gradient magnitude and direction at each preferred feature point are calculated as

m(x, y) = sqrt((L(x+1, y) − L(x−1, y))² + (L(x, y+1) − L(x, y−1))²)

θ(x, y) = arctan((L(x, y+1) − L(x, y−1)) / (L(x+1, y) − L(x−1, y)))

where m(x, y) represents the gradient magnitude of the pixel at that location, θ(x, y) represents the gradient direction at that location, and the scale of L is consistent with the scale of each preferred feature point;
in actual operation, the gradient directions and gradient amplitudes of all pixels are counted within a circle centered on the preferred feature point whose radius is 1.5 times the scale of the Gaussian image in which the feature point lies, with Gaussian weighting of 1.5 times the scale applied;
the gradient histogram covers 360 degrees with 36 bins of 10 degrees each, and the direction of the feature point is taken as the peak of the histogram;
in addition, a feature point does not necessarily have only one direction; it may have multiple directions, so multiple feature descriptors can be generated, but there is only one main direction, and the others are auxiliary directions;
an SIFT descriptor with scale invariance is constructed from 4 parameters: the two-dimensional position x and y, the scale, and the main direction; the finally generated SIFT descriptor is a 128-dimensional vector.
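The gradient magnitude m(x, y) and direction θ(x, y) used above can be computed by central differences, as in the standard SIFT formulation (a minimal NumPy sketch; `L` is assumed to be the Gaussian-smoothed image at the feature point's scale):

```python
import numpy as np

def gradient_mag_dir(L, x, y):
    """Gradient magnitude m(x, y) and direction theta(x, y) (radians)
    of the smoothed image L at interior pixel (x, y), central differences."""
    dx = L[x + 1, y] - L[x - 1, y]
    dy = L[x, y + 1] - L[x, y - 1]
    m = np.hypot(dx, dy)                 # sqrt(dx**2 + dy**2)
    theta = np.arctan2(dy, dx)           # full-circle gradient direction
    return m, theta
```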
4. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization as claimed in claim 1, wherein step S6 calculates the Euclidean distance between two sets of feature descriptors using the Euclidean metric as the matching criterion and matches the two descriptors with the closest distance, the specific operations being as follows:
the Euclidean distance between descriptors is

d(α, β) = sqrt(Σ_{i=1}^{n} (α_i − β_i)²)

where d(α, β) represents the Euclidean distance between the descriptors, n represents the descriptor dimension, α_i represents the i-th component of a descriptor of the reference image α, and β_i represents the i-th component of a descriptor of the registration image β; the two descriptors with the smallest Euclidean distance are matched, completing feature matching pair by pair.
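The nearest-neighbour matching of claim 4 can be sketched as follows (a minimal NumPy illustration; the names `desc_a` and `desc_b` are assumptions, each row being one descriptor, e.g. 128-dimensional):

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """For each descriptor in desc_a, return the index of the closest
    descriptor in desc_b under the Euclidean metric."""
    # pairwise distance matrix: dists[i, j] = ||desc_a[i] - desc_b[j]||
    diff = desc_a[:, None, :] - desc_b[None, :, :]
    dists = np.sqrt(np.sum(diff ** 2, axis=2))
    return np.argmin(dists, axis=1)
```

In practice the wrong pairs this greedy rule produces are the ones RANSAC removes in step S7.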
5. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization as claimed in claim 1, wherein step S7 uses a random sample consensus algorithm, namely RANSAC algorithm, to remove a part with pairing errors, thereby improving the matching accuracy, and the specific steps are as follows:
A1: randomly select 4 pairs of matching points (no 3 of which are collinear) from the experimental data to calculate the parameters of the geometric transformation matrix;
A2: apply the geometric transformation matrix to the matching point pairs; matching points whose point-pair distance is smaller than an empirical threshold are "inliers", otherwise they are "outliers";
A3: scan all matched point pairs completely; if the number of inliers under the geometric transformation matrix is not less than the threshold d, recalculate the transformation matrix parameters using the least-squares method; otherwise start again from the first step, finishing when the given number of repetitions k is reached;
A4: evaluate the transformation matrices obtained over the repetitions according to the inlier ratio and the transformation matrix error, and select the point set with the largest number of inliers obtained after sampling as the final sample set;
a5: and recalculating the transformation model by using the final interior point sample set to be used as a final model.
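Steps A1-A5 can be sketched in Python as follows (a simplified NumPy illustration using a direct-linear-transform homography fit; the function names and the fixed iteration count `k` are illustrative assumptions, not from the patent):

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: homography H mapping src -> dst (4+ pairs)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=np.float64))
    H = vt[-1].reshape(3, 3)             # null-space vector = flattened H
    return H / H[2, 2]

def ransac_homography(src, dst, threshold=3.0, k=200, seed=0):
    """A1-A5: repeatedly fit H from 4 random pairs, count inliers by
    reprojection error, then refit on the largest inlier set."""
    rng = np.random.default_rng(seed)
    n = len(src)
    best_inliers = np.zeros(n, dtype=bool)
    ones = np.ones((n, 1))
    for _ in range(k):                           # A1/A3: up to k repetitions
        idx = rng.choice(n, size=4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        proj = np.hstack([src, ones]) @ H.T
        proj = proj[:, :2] / proj[:, 2:3]        # A2: project and measure error
        err = np.linalg.norm(proj - dst, axis=1)
        inliers = err < threshold
        if inliers.sum() > best_inliers.sum():   # A4: keep largest inlier set
            best_inliers = inliers
    # A5: recompute the model on the final inlier sample set
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```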
6. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization according to claim 1, wherein in the step S8, an image fusion algorithm of gradual in and gradual out is used for image fusion, so as to ensure that a smooth transition is realized in a superposition region, and a final splicing effect map is generated, and the specific steps are as follows:
the two images are first superimposed spatially, and the superimposed pixel can be represented as

f(x, y) = d1 · f1(x, y) + d2 · f2(x, y)    (3)

in formula (3), f1(x, y) and f2(x, y) are the two image functions, and d1 and d2 represent the weights they occupy; these parameters are determined by the overlap ratio of the images;
in the overlapping portion, d1 gradually decreases from 1 to 0 while d2 gradually increases from 0 to 1, ensuring a smooth transition from f1 to f2 in the overlapping region.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210721911.8A CN114897705A (en) | 2022-06-24 | 2022-06-24 | Unmanned aerial vehicle remote sensing image splicing method based on feature optimization |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114897705A true CN114897705A (en) | 2022-08-12 |
Family
ID=82729626
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210721911.8A Pending CN114897705A (en) | 2022-06-24 | 2022-06-24 | Unmanned aerial vehicle remote sensing image splicing method based on feature optimization |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114897705A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115205558A (en) * | 2022-08-16 | 2022-10-18 | 中国测绘科学研究院 | Multi-mode image matching method and device with rotation and scale invariance |
CN115311254A (en) * | 2022-09-13 | 2022-11-08 | 万岩铁路装备(成都)有限责任公司 | Steel rail contour matching method based on Harris-SIFT algorithm |
CN116228539A (en) * | 2023-03-10 | 2023-06-06 | 贵州师范大学 | Unmanned aerial vehicle remote sensing image stitching method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||