CN114897705A - Unmanned aerial vehicle remote sensing image splicing method based on feature optimization - Google Patents

Unmanned aerial vehicle remote sensing image splicing method based on feature optimization

Info

Publication number
CN114897705A
CN114897705A
Authority
CN
China
Prior art keywords
image
feature
points
matching
remote sensing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210721911.8A
Other languages
Chinese (zh)
Inventor
郭交 (Guo Jiao)
程义 (Cheng Yi)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xuzhou Fly Dream Electronic & Technology Co ltd
Original Assignee
Xuzhou Fly Dream Electronic & Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xuzhou Fly Dream Electronic & Technology Co ltd filed Critical Xuzhou Fly Dream Electronic & Technology Co ltd
Priority to CN202210721911.8A
Publication of CN114897705A
Legal status: Pending

Classifications

    • G06T 3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 5/20 — Image enhancement or restoration using local operators
    • G06T 5/70 — Denoising; Smoothing
    • G06T 7/13 — Edge detection
    • G06T 7/337 — Determination of transform parameters for the alignment of images, i.e. image registration, using feature-based methods involving reference images or patches
    • G06V 10/46 — Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform (SIFT) or bags of words (BoW); Salient regional features
    • G06V 10/806 — Fusion, i.e. combining data from various sources at the sensor, preprocessing, feature extraction or classification level, of extracted features
    • G06T 2207/10032 — Satellite or aerial image; Remote sensing
    • G06T 2207/20164 — Salient point detection; Corner detection
    • G06T 2207/20221 — Image fusion; Image merging
    • Y02T 10/40 — Engine management systems

Abstract

The invention discloses an unmanned aerial vehicle (UAV) remote sensing image splicing method based on feature optimization, comprising the following steps: acquire two UAV remote sensing images with an overlapping area and apply preprocessing such as filtering and enhancement; extract features from the preprocessed experimental images with Harris corner detection, perform feature selection on the extracted corner features, generate SIFT feature descriptors, and match them using the Euclidean metric as the similarity criterion; remove wrongly paired matches with the random sample consensus (RANSAC) algorithm to improve the matching accuracy; and fuse the overlapping part with a gradual-in/gradual-out image fusion algorithm so that the overlapping area transitions smoothly, producing the final spliced image. The method combines the advantages of the SIFT feature extraction algorithm and the Harris corner detection algorithm into a new, improved feature detection and matching algorithm that further optimizes the image splicing result.

Description

Unmanned aerial vehicle remote sensing image splicing method based on feature optimization
Technical Field
The invention belongs to the field of unmanned aerial vehicle remote sensing image information acquisition and application, relates to unmanned aerial vehicle remote sensing and image splicing technologies, and particularly relates to an unmanned aerial vehicle remote sensing image splicing method based on feature optimization.
Background
Remote sensing technology has spread rapidly worldwide since its birth in the 1960s. Compared with other platforms and systems for acquiring spatial remote sensing information, a UAV remote sensing system is automated, intelligent and specialized, acquires information quickly, and offers low cost, low loss and small risk. Its applications have expanded from early fields such as meteorology, surveying and mapping, and transportation to the military field and the handling of emergencies. In image acquisition, traditional platforms such as manned aerial photography and satellite photography have always been limited by their large size, inflexibility and high cost; UAV remote sensing makes up for exactly these shortcomings.
At the beginning of the 21st century, UAV remote sensing entered a new stage of development with broad prospects. Its development is nevertheless limited by the flight height of the UAV and the viewing angle of the camera: the sequential pictures taken by a UAV cover a small field of view, and obtaining global geographic image information sometimes requires hundreds of pictures, which greatly complicates subsequent information processing. Image stitching technology plays an important role in solving this problem. Image stitching overlays two or more sequential images according to their common parts to obtain a new composite image; the stitched image not only conveys the overall view of the observed area but also preserves the detail of the original images, so aerial information can be recognized and grasped accurately, macroscopically and in time. How to splice massive image data well has therefore gradually become one of the hot problems in the field of UAV remote sensing image splicing.
Feature-based methods are currently the most widely applied in the field of image stitching. Chris Harris and Mike Stephens proposed the famous Harris corner detection algorithm, which extracts corners through an autocorrelation function; the corners are invariant to translation and rotation, and the registration precision can reach the sub-pixel level. M. Brown and David G. Lowe proposed a classic feature point detection algorithm based on the Scale Invariant Feature Transform (SIFT) and realized digital image stitching on top of it. Thanks to the excellent properties of the SIFT operator, the algorithm is invariant to transformations such as rotation, scaling and translation of the image; it renewed research interest in digital image stitching and remains a hotspot in image registration today. Bay borrowed the idea of simplifying approximations from the SIFT algorithm, introduced the integral image, approximated the Gaussian second-order differential template, and proposed the SURF (Speeded-Up Robust Features) algorithm. Compared with SIFT, SURF is roughly 3 times faster but produces more mismatched points.
Although image stitching research started relatively late in China, great progress has been achieved through recent years of effort and exploration. Wang Rui et al. proposed a semi-automatic image registration algorithm that realizes high-precision registration. Zhang Zhangfu et al. proposed an integral matching technique based on multi-level image probability relaxation, mainly for fast registration of images from different sensors and with different resolutions. Feng Bo of Dalian University of Technology et al. presented an image stitching method based on similarity curves, which optimizes the matching strategy and reduces the amount of computation. Zhao Zhangyang and Du Limin proposed an image stitching algorithm based on Harris corner detection, using a robust transformation estimation technique to improve robustness. Pan Shu of China University of Geosciences processed images in the IDL language, extracted features with the SIFT algorithm, and finally stitched the images by least squares. Chen Cheng et al. spliced UAV remote sensing images with the SIFT algorithm and obtained good experimental results. Combining the SIFT feature extraction algorithm with the Harris corner detection algorithm is therefore of great significance.
Disclosure of Invention
Aiming at the image splicing problem of UAV remote sensing images in image acquisition, the invention provides an unmanned aerial vehicle remote sensing image splicing method based on feature optimization, which makes full use of the feature point information of the UAV remote sensing images, improves the accuracy and stability of feature matching, and further optimizes the image splicing result.
In order to achieve this purpose, the invention provides the following technical scheme:
An unmanned aerial vehicle remote sensing image splicing method based on feature optimization, comprising the following steps:
S1: input two unmanned aerial vehicle remote sensing images of size m×n with an overlapping area, the reference image α and the registration image β;
S2: apply preprocessing such as filtering and enhancement to the original unmanned aerial vehicle remote sensing images;
S3: perform Harris corner detection on the preprocessed unmanned aerial vehicle remote sensing images to obtain image feature points;
S4: perform feature selection on the obtained image features, keeping the optimal features and eliminating irrelevant or redundant ones, so as to reduce the number of features, improve the matching precision and shorten the running time;
S5: generate scale-invariant SIFT descriptors from the image feature points obtained in S4;
S6: using the Euclidean metric as the matching criterion, compute the Euclidean distance between the two groups of descriptors and match the two descriptors with the smallest distance;
S7: remove wrongly paired matches with the random sample consensus (RANSAC) algorithm to improve the matching accuracy;
S8: fuse the processed overlapping part with a gradual-in/gradual-out image fusion algorithm so that the overlapping area transitions smoothly, and generate the final spliced image.
Further, the specific operations of performing feature selection on the obtained image features in step S4, finding the optimal features and removing irrelevant or redundant ones, are as follows:
Similar to SIFT key point selection, feature points that are stable and carry more information are selected from the image; such points are usually extreme positions. To find the feature points at extreme positions correctly, the feature region is divided and the feature point of each region is compared with all of its neighbors: if it is larger than the surrounding S feature points, or smaller than all of them, the point is an extreme point. A point whose value is the maximum or minimum is taken as a preferred local feature point.
Further, in step S5 the scale-invariant SIFT feature descriptors are generated from the image feature points obtained in step S4, specifically as follows:

$$
m(x,y)=\sqrt{\big(L(x+1,y)-L(x-1,y)\big)^{2}+\big(L(x,y+1)-L(x,y-1)\big)^{2}},\qquad
\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}
\tag{1}
$$

where m(x, y) is the gradient magnitude of the pixel at that location, θ(x, y) is the gradient direction there, and the scale of L is consistent with the scale of each preferred feature point. In practice, the gradient directions and magnitudes of all pixels inside a circle centered on the preferred feature point, with radius equal to 1.5 times the scale of the Gaussian image containing the point, are accumulated, weighted by a Gaussian of 1.5 times that scale. The gradient histogram covers 360 degrees in 36 bins of 10 degrees each, and the direction of the feature point appears as the peak of the histogram. A feature point does not necessarily have a single direction; when it has several, several feature descriptors are generated, but there is only one main direction and the others are auxiliary directions. The scale-invariant SIFT descriptor is constructed from 4 parameters: the two-dimensional position x and y, the scale, and the dominant direction; the generated SIFT descriptor is a 128-dimensional vector.
Further, in step S6 the Euclidean metric is used as the matching criterion to compute the Euclidean distance between the two groups of feature descriptors and match the two descriptors with the smallest distance, specifically as follows:

$$
d_{\alpha\beta}=\sqrt{\sum_{i=1}^{n}\left(x_{\alpha i}-x_{\beta i}\right)^{2}}
\tag{2}
$$

where d_{αβ} is the Euclidean distance between two descriptors, n is the descriptor dimension, x_{αi} denotes the i-th component of a descriptor of the reference image α, and x_{βi} the i-th component of a descriptor of the registration image β; the two descriptors with the smallest Euclidean distance are matched, completing feature matching pair by pair.
Further, in step S7 the random sample consensus (RANSAC) algorithm is used to remove wrongly paired matches and improve the matching accuracy, with the following steps:
A1: randomly draw 4 pairs of matching points (of which no 3 pairs are collinear) from the experimental data and compute the parameters of the geometric transformation matrix;
A2: apply the geometric transformation to all matching point pairs; pairs whose point-to-point distance is smaller than an empirical threshold are "inliers", the rest are "outliers";
A3: scan all matched pairs; if the number of inliers of the geometric transformation is not less than a threshold d, re-estimate the transformation matrix parameters by least squares; otherwise start again from the first step, finishing after a given number of repetitions k;
A4: evaluate the candidate transformation matrices by their inlier counts and parameter error rates, and select the point set with the largest number of inliers obtained over the samples as the final sample set;
A5: recompute the transformation model on the final inlier sample set and take it as the final model.
Further, in step S8 a gradual-in/gradual-out image fusion algorithm is used for image fusion, ensuring that the overlapping area transitions smoothly and generating the final spliced image, with the following steps:
The two images are first superimposed spatially; the superimposed pixel can be expressed as

$$
f(x,y)=\begin{cases}
f_{1}(x,y), & (x,y)\in f_{1}\\
w_{1}f_{1}(x,y)+w_{2}f_{2}(x,y), & (x,y)\in f_{1}\cap f_{2}\\
f_{2}(x,y), & (x,y)\in f_{2}
\end{cases}
\tag{3}
$$

$$
w_{1}=\frac{x_{r}-x}{x_{r}-x_{l}},\qquad w_{2}=\frac{x-x_{l}}{x_{r}-x_{l}}
\tag{4}
$$

In equation (3), f1 and f2 are the image functions and w1, w2 are the weights they carry, determined by the overlap ratio of the images. In the overlapping part, w1 gradually decreases from 1 to 0 while w2 gradually increases from 0 to 1, ensuring that f1 transitions smoothly into f2 there. The weights w1 and w2 are given by equation (4), where x is the abscissa of the current pixel and xl and xr are the coordinates of the two boundaries of the common part, with xl ≤ x ≤ xr, w1 + w2 = 1 and 0 ≤ w1, w2 ≤ 1.
Compared with the prior art, the beneficial effects of the invention are as follows: the invention provides a new unmanned aerial vehicle remote sensing image splicing method based on feature optimization, which extracts features from the preprocessed experimental images with Harris corner detection and performs feature selection on them, matches using the Euclidean metric as the similarity criterion, removes wrongly paired matches with the random sample consensus algorithm, and fuses the images with a gradual-in/gradual-out image fusion algorithm. The method improves the matching precision, real-time performance and stability of the algorithm, avoids the drawback that the Harris feature extraction operator is sensitive to scale changes and lacks scale invariance, and offers better detection efficiency and matching accuracy than either the Harris corner detection splicing method or the SIFT feature detection splicing method.
Drawings
FIG. 1 is an overall flow diagram of the present invention;
FIG. 2 is a flow chart of image stitching according to the present invention;
FIG. 3 is the reference image with an overlapping region of about 20% in the present invention;
FIG. 4 is the image to be spliced with an overlapping region of about 20% in the present invention;
FIG. 5 is a graph of the improved algorithm stitching results of the present invention;
FIG. 6 is a graph of SIFT feature detection stitching results in the present invention;
FIG. 7 is a diagram of the Harris corner detection stitching results in the present invention.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to FIG. 1, the specific implementation steps of the invention are as follows:
Step 1: input two unmanned aerial vehicle remote sensing images of size m×n with an overlapping area, the reference image α and the registration image β, and apply preprocessing such as filtering and enhancement;
(1) First, the two UAV remote sensing images are imported and the reference image α and the registration image β are determined. Each image matrix has m×n elements, where m and n are the numbers of pixels along the vertical and horizontal directions of the two-dimensional plane, as shown in FIGS. 3 and 4. FIGS. 3 and 4 were acquired with a UX5HP UAV remote sensing system integrated with POS, at a flight height of 100 m, a spatial resolution of 0.1 m, and an image size of 400×600 pixels.
(2) Gaussian filtering is applied to the read image data to remove Gaussian noise.
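By way of illustration, a minimal Python sketch of this preprocessing step follows; OpenCV is assumed, the file names alpha.jpg and beta.jpg are hypothetical placeholders for the two input images, and the 5×5 kernel is an illustrative choice rather than a value fixed by the method.

```python
import cv2

# Hypothetical file names for the reference image alpha and the registration image beta
alpha = cv2.imread("alpha.jpg")
beta = cv2.imread("beta.jpg")

# Gaussian filtering to remove Gaussian noise; sigma is derived from the
# kernel size when sigmaX = 0
alpha_filtered = cv2.GaussianBlur(alpha, (5, 5), 0)
beta_filtered = cv2.GaussianBlur(beta, (5, 5), 0)
```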
Step 2: carrying out Harris angular point detection on the preprocessed unmanned aerial vehicle remote sensing image to obtain image characteristic points;
Harris corner detection uses a moving window to compute the grey-level change in the image. The main process is: convert the image to greyscale, compute the difference images, apply Gaussian smoothing, compute local extrema, and confirm the corners, thereby obtaining the image feature points.
Following this algorithmic idea, a mathematical model is constructed for the grey-level difference of the moving window, which can be expressed as:
$$
E(u,v)=\sum_{x,y}w(x,y)\,\big[I(x+u,y+v)-I(x,y)\big]^{2}
\tag{1}
$$

For small shifts (u, v), the formula can be approximated as

$$
E(u,v)\approx\begin{bmatrix}u & v\end{bmatrix}M\begin{bmatrix}u\\ v\end{bmatrix}
\tag{2}
$$

$$
M=\sum_{x,y}w(x,y)\begin{bmatrix}I_{x}^{2} & I_{x}I_{y}\\ I_{x}I_{y} & I_{y}^{2}\end{bmatrix}
\tag{3}
$$

where w(x, y) is the window function, I_x and I_y are the partial derivatives of the image, and M is the partial-derivative (structure tensor) matrix.
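A minimal sketch of this corner detection step in Python with OpenCV; the block size, Sobel aperture, k and the relative response threshold are assumed values, not parameters given by the disclosure.

```python
import cv2
import numpy as np

def harris_corners(img_bgr, block_size=2, ksize=3, k=0.04, rel_thresh=0.01):
    """Detect Harris corners: cv2.cornerHarris internally builds the
    matrix M of equation (3) per pixel and evaluates the corner response
    R = det(M) - k * trace(M)^2; strong responses are kept as corners."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    response = cv2.cornerHarris(gray, block_size, ksize, k)
    ys, xs = np.where(response > rel_thresh * response.max())
    return np.stack([xs, ys], axis=1), response
```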
And step 3: carrying out feature selection on the obtained image features, and searching for optimal features;
Similar to SIFT key point selection, feature points that are stable and carry more information are selected from the image; such points are usually extreme positions. To find the feature points at extreme positions correctly, the feature region is divided and the feature point of each region is compared with all of its neighbors: if it is larger than the surrounding S feature points, or smaller than all of them, the point is an extreme point. A point whose value is the maximum or minimum is taken as a preferred local feature point.
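A sketch of this feature selection over the Harris response map, assuming a 3×3 neighbourhood (i.e. S = 8 surrounding points) and using SciPy for the neighbourhood filters.

```python
import numpy as np
from scipy.ndimage import maximum_filter, minimum_filter

def select_extrema(response, rel_thresh=0.01):
    """Keep only points whose response is an extremum of its 3x3
    neighbourhood (larger than its S = 8 neighbours, or smaller than
    all of them) and strong enough in absolute value."""
    is_max = maximum_filter(response, size=3) == response
    is_min = minimum_filter(response, size=3) == response
    strong = np.abs(response) > rel_thresh * np.abs(response).max()
    ys, xs = np.where((is_max | is_min) & strong)
    return np.stack([xs, ys], axis=1)  # preferred feature points (x, y)
```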
Step 4: generate scale-invariant SIFT descriptors from the image feature points obtained by Harris corner detection;

$$
m(x,y)=\sqrt{\big(L(x+1,y)-L(x-1,y)\big)^{2}+\big(L(x,y+1)-L(x,y-1)\big)^{2}},\qquad
\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}
\tag{4}
$$

In equation (4), m(x, y) is the gradient magnitude of the pixel at that location, θ(x, y) is the gradient direction there, and the scale of L is consistent with the scale of each preferred feature point. In practice, the gradient directions and magnitudes of all pixels inside a circle centered on the preferred feature point, with radius equal to 1.5 times the scale of the Gaussian image containing the point, are accumulated, weighted by a Gaussian of 1.5 times that scale. The gradient histogram covers 360 degrees in 36 bins of 10 degrees each, and the direction of the feature point appears as the peak of the histogram. A feature point does not necessarily have a single direction; when it has several, several feature descriptors are generated, but there is only one main direction and the others are auxiliary directions. The scale-invariant SIFT descriptor is constructed from 4 parameters: the two-dimensional position x and y, the scale, and the dominant direction; the generated SIFT descriptor is a 128-dimensional vector.
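A sketch of the orientation assignment of equation (4) in Python; the 1.5-times radius, the Gaussian weighting and the 36-bin histogram follow the text above, while representing L as a NumPy array of the Gaussian-smoothed image is an assumption of this sketch.

```python
import numpy as np

def dominant_orientation(L, x, y, sigma):
    """Build the 36-bin (10 degree) histogram of gradient directions around
    (x, y), weighting each pixel by its gradient magnitude m(x, y) and a
    Gaussian of 1.5*sigma; the histogram peak is the dominant direction."""
    radius = int(round(1.5 * sigma))
    hist = np.zeros(36)
    for j in range(-radius, radius + 1):
        for i in range(-radius, radius + 1):
            if i * i + j * j > radius * radius:
                continue  # keep only pixels inside the circular region
            yy, xx = y + j, x + i
            if not (0 < xx < L.shape[1] - 1 and 0 < yy < L.shape[0] - 1):
                continue
            dx = L[yy, xx + 1] - L[yy, xx - 1]               # horizontal difference
            dy = L[yy + 1, xx] - L[yy - 1, xx]               # vertical difference
            m = np.hypot(dx, dy)                             # magnitude m(x, y)
            theta = np.degrees(np.arctan2(dy, dx)) % 360.0   # direction theta(x, y)
            w = np.exp(-(i * i + j * j) / (2.0 * (1.5 * sigma) ** 2))
            hist[int(theta // 10) % 36] += w * m
    return 10.0 * float(np.argmax(hist))  # dominant direction in degrees
```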
Step 5: using the Euclidean metric as the matching criterion, compute the Euclidean distance between the two groups of feature descriptors and match the two descriptors with the smallest distance;

$$
d_{\alpha\beta}=\sqrt{\sum_{i=1}^{n}\left(x_{\alpha i}-x_{\beta i}\right)^{2}}
\tag{5}
$$

In equation (5), d_{αβ} is the Euclidean distance between two descriptors, n is the descriptor dimension, x_{αi} denotes the i-th component of a descriptor of the reference image α, and x_{βi} the i-th component of a descriptor of the registration image β; the two descriptors with the smallest Euclidean distance are matched, completing feature matching pair by pair.
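A minimal sketch of this nearest-neighbour matching under the Euclidean metric of equation (5), assuming the two groups of descriptors are NumPy arrays of shape (N, 128).

```python
import numpy as np

def match_descriptors(desc_alpha, desc_beta):
    """Pair each descriptor of the reference image alpha with the
    descriptor of the registration image beta at the smallest Euclidean
    distance; returns a list of index pairs (i, j)."""
    diff = desc_alpha[:, None, :] - desc_beta[None, :, :]
    d = np.sqrt((diff ** 2).sum(axis=2))   # pairwise Euclidean distances
    nearest = np.argmin(d, axis=1)         # closest beta descriptor per alpha descriptor
    return list(enumerate(nearest))
```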
Step 6: removing the part with pairing error by using a random sampling consistency algorithm, namely RANSAC algorithm, and improving the matching accuracy;
(1) randomly draw 4 pairs of matching points (of which no 3 pairs are collinear) from the experimental data and compute the parameters of the geometric transformation matrix;
(2) apply the geometric transformation to all matching point pairs; pairs whose point-to-point distance is smaller than an empirical threshold are "inliers", the rest are "outliers";
(3) scan all matched pairs; if the number of inliers of the geometric transformation is not less than a threshold d, re-estimate the transformation matrix parameters by least squares; otherwise start again from the first step, finishing after a given number of repetitions k;
(4) evaluate the candidate transformation matrices by their inlier counts and parameter error rates, and select the point set with the largest number of inliers obtained over the samples as the final sample set;
(5) recompute the transformation model on the final inlier sample set and take it as the final model.
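A sketch of this step using OpenCV's built-in RANSAC homography estimation, which internally performs the sample-test-refit loop described above; the projective model and the 3-pixel inlier threshold are assumptions of this sketch, standing in for the empirical threshold of step (2).

```python
import cv2
import numpy as np

def ransac_filter(pts_beta, pts_alpha, reproj_thresh=3.0):
    """Estimate the geometric transformation (a homography, which needs
    4 point pairs per RANSAC sample) and return the model together with
    the inlier mask; pairs beyond reproj_thresh pixels are outliers.
    Assumes at least 4 matched pairs are supplied."""
    src = np.float32(pts_beta).reshape(-1, 1, 2)   # points in the registration image beta
    dst = np.float32(pts_alpha).reshape(-1, 1, 2)  # points in the reference image alpha
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, reproj_thresh)
    return H, mask.ravel().astype(bool)
```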
And 7: fusing the images by using a gradual-in and gradual-out image fusion algorithm to ensure that the overlapped area realizes smooth transition and generate a final splicing effect graph as shown in the attached figure 5;
the two images are first superimposed spatially, and the superimposed pixels can be represented as
$$
f(x,y)=\begin{cases}
f_{1}(x,y), & (x,y)\in f_{1}\\
w_{1}f_{1}(x,y)+w_{2}f_{2}(x,y), & (x,y)\in f_{1}\cap f_{2}\\
f_{2}(x,y), & (x,y)\in f_{2}
\end{cases}
\tag{6}
$$

$$
w_{1}=\frac{x_{r}-x}{x_{r}-x_{l}},\qquad w_{2}=\frac{x-x_{l}}{x_{r}-x_{l}}
\tag{7}
$$

In equation (6), f1 and f2 are the image functions and w1, w2 are the weights they carry, determined by the overlap ratio of the images. In the overlapping part, w1 gradually decreases from 1 to 0 while w2 gradually increases from 0 to 1, ensuring that f1 transitions smoothly into f2 there. The weights w1 and w2 are given by equation (7), where x is the abscissa of the current pixel and xl and xr are the coordinates of the two boundaries of the common part, with xl ≤ x ≤ xr, w1 + w2 = 1 and 0 ≤ w1, w2 ≤ 1.
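A sketch of the gradual-in/gradual-out blend of equations (6)-(7); it assumes the two colour images have already been warped into a common frame of equal size and that their overlap spans the columns xl..xr.

```python
import numpy as np

def fade_blend(f1, f2, xl, xr):
    """Blend two aligned H x W x 3 images: left of xl only f1 contributes,
    right of xr only f2; inside [xl, xr] the weight w1 falls linearly from
    1 to 0 while w2 = 1 - w1 rises from 0 to 1, as in equation (7)."""
    out = f1.astype(np.float64).copy()
    x = np.arange(xl, xr + 1, dtype=np.float64)
    w1 = (xr - x) / (xr - xl)                        # weight of f1
    w2 = (x - xl) / (xr - xl)                        # weight of f2 (w1 + w2 == 1)
    out[:, xl:xr + 1] = (w1[None, :, None] * f1[:, xl:xr + 1] +
                         w2[None, :, None] * f2[:, xl:xr + 1])
    out[:, xr + 1:] = f2[:, xr + 1:]                 # region covered only by f2
    return np.clip(out, 0, 255).astype(np.uint8)
```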
And 8: evaluating the image splicing effect;
in order to verify the effectiveness and reliability of the method, after the mosaic image of the improved algorithm is realized, image mosaic of the SIFT feature detection operator and the Harris feature detection operator is carried out on the feature extraction operator. The realization effect of the mosaic obtained by different methods is contrasted and analyzed, and the effectiveness and the reliability of the improved algorithm are demonstrated from the subjective aspect and the objective aspect. First, a comparison experiment was designed for the feature extraction capabilities of the SIFT feature detection operator and the Harris feature detection operator, and preliminary comparison was performed, as shown in table 1. To provide a referable evaluation criterion for subjective evaluation, the CCIR500-1 subjective evaluation criterion specified by the international radio counseling council was used, as shown in table 2. In terms of objective evaluation, the reference evaluation criterion is peak signal-to-noise ratio (PSNR), and the calculation steps are as follows:
first, a size is given asm×nThe size of the reference image and the realization image,iandjrepresenting the horizontal and vertical coordinates of the image. Calculating a Mean Square Error (MSE), wherein the MSE is defined as:
Figure RE-DEST_PATH_IMAGE028
(8)
psnr (db) is then expressed as:
Figure RE-DEST_PATH_IMAGE029
(9)
wherein,
Figure RE-DEST_PATH_IMAGE030
which is the maximum pixel value possible for a picture, typically 255. If a pixel value is represented by a B-bit binary number, its value can be represented as
Figure RE-DEST_PATH_IMAGE031
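A minimal sketch of the PSNR computation of equations (8)-(9), assuming 8-bit images supplied as NumPy arrays of identical size.

```python
import numpy as np

def psnr(reference, result, max_val=255.0):
    """Mean square error over the m x n image, then PSNR in dB; a larger
    value means the spliced result is closer to the reference image."""
    mse = np.mean((reference.astype(np.float64) - result.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)
```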
The image splicing results of the two feature extraction operators, namely the SIFT feature detection operator and the Harris feature detection operator, are shown in FIGS. 6 and 7. The PSNR of each spliced image relative to the reference image is computed and compared to measure the similarity between the spliced image and the original: the larger the value, the closer the spliced image is to the original, the smaller the distortion, and the better the splicing result, as shown in Table 3. As the example shows, the method of the invention yields a higher PSNR than the other two methods, i.e., its splicing result is the most similar to the original image and its splicing effect is the best.
TABLE 1 Preliminary comparison of the SIFT and Harris algorithms (table reproduced as an image in the original)
TABLE 2 The CCIR500-1 five-grade subjective evaluation criteria (table reproduced as an image in the original)
TABLE 3 PSNR comparison (table reproduced as an image in the original)

Claims (6)

1. An unmanned aerial vehicle remote sensing image splicing method based on feature optimization, characterized by comprising the following steps:
S1: input two unmanned aerial vehicle remote sensing images of size m×n with an overlapping area, the reference image α and the registration image β;
S2: apply preprocessing such as filtering and enhancement to the original unmanned aerial vehicle remote sensing images;
S3: perform Harris corner detection on the preprocessed unmanned aerial vehicle remote sensing images to obtain image feature points;
S4: perform feature selection on the obtained image features, keeping the optimal features and eliminating irrelevant or redundant ones, so as to reduce the number of features, improve the matching precision and shorten the running time;
S5: generate scale-invariant SIFT descriptors from the preferred image features of S4;
S6: using the Euclidean metric as the matching criterion, compute the Euclidean distance between the two groups of descriptors and match the two descriptors with the smallest distance;
S7: remove wrongly paired matches with the random sample consensus (RANSAC) algorithm to improve the matching accuracy;
S8: fuse the processed overlapping part with a gradual-in/gradual-out image fusion algorithm so that the overlapping area transitions smoothly, and generate the final spliced image.
2. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization according to claim 1, wherein the specific operations of performing feature selection on the obtained image features in step S4, finding the optimal features and removing irrelevant or redundant ones, are as follows:
similar to SIFT key point selection, feature points that are stable and carry more information are selected from the image, such points usually being extreme positions;
to find the feature points at extreme positions correctly, the feature region is divided and the feature point of each region is compared with all of its neighbors: if it is larger than the surrounding S feature points, or smaller than all of them, the point is an extreme point;
a point whose value is the maximum or minimum is taken as a preferred local feature point.
3. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization according to claim 1, wherein in step S5 the scale-invariant SIFT feature descriptors are generated from the image feature points obtained in step S4, specifically as follows:

$$
m(x,y)=\sqrt{\big(L(x+1,y)-L(x-1,y)\big)^{2}+\big(L(x,y+1)-L(x,y-1)\big)^{2}},\qquad
\theta(x,y)=\arctan\frac{L(x,y+1)-L(x,y-1)}{L(x+1,y)-L(x-1,y)}
\tag{1}
$$

where m(x, y) is the gradient magnitude of the pixel at that location, θ(x, y) is the gradient direction there, and the scale of L is consistent with the scale of each preferred feature point;
in practice, the gradient directions and magnitudes of all pixels inside a circle centered on the preferred feature point, with radius equal to 1.5 times the scale of the Gaussian image containing the point, are accumulated, weighted by a Gaussian of 1.5 times that scale;
the gradient histogram covers 360 degrees in 36 bins of 10 degrees each, and the direction of the feature point appears as the peak of the histogram;
a feature point does not necessarily have a single direction; when it has several, several feature descriptors are generated, but there is only one main direction and the others are auxiliary directions;
the scale-invariant SIFT descriptor is constructed from 4 parameters: the two-dimensional position x and y, the scale, and the dominant direction; the generated SIFT descriptor is a 128-dimensional vector.
4. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization according to claim 1, wherein in step S6 the Euclidean metric is used as the matching criterion to compute the Euclidean distance between the two groups of feature descriptors and match the two descriptors with the smallest distance, specifically as follows:

$$
d_{\alpha\beta}=\sqrt{\sum_{i=1}^{n}\left(x_{\alpha i}-x_{\beta i}\right)^{2}}
\tag{2}
$$

where d_{αβ} is the Euclidean distance between two descriptors, n is the descriptor dimension, x_{αi} denotes the i-th component of a descriptor of the reference image α, and x_{βi} the i-th component of a descriptor of the registration image β; the two descriptors with the smallest Euclidean distance are matched, completing feature matching pair by pair.
5. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization according to claim 1, wherein in step S7 the random sample consensus (RANSAC) algorithm is used to remove wrongly paired matches and improve the matching accuracy, with the following steps:
A1: randomly draw 4 pairs of matching points (of which no 3 pairs are collinear) from the experimental data and compute the parameters of the geometric transformation matrix;
A2: apply the geometric transformation to all matching point pairs; pairs whose point-to-point distance is smaller than an empirical threshold are "inliers", the rest are "outliers";
A3: scan all matched pairs; if the number of inliers of the geometric transformation is not less than a threshold d, re-estimate the transformation matrix parameters by least squares; otherwise start again from the first step, finishing after a given number of repetitions k;
A4: evaluate the candidate transformation matrices by their inlier counts and parameter error rates, and select the point set with the largest number of inliers obtained over the samples as the final sample set;
A5: recompute the transformation model on the final inlier sample set and take it as the final model.
6. The unmanned aerial vehicle remote sensing image splicing method based on feature optimization according to claim 1, wherein in step S8 a gradual-in/gradual-out image fusion algorithm is used for image fusion, ensuring that the overlapping area transitions smoothly and generating the final spliced image, with the following steps:
the two images are first superimposed spatially; the superimposed pixel can be expressed as

$$
f(x,y)=\begin{cases}
f_{1}(x,y), & (x,y)\in f_{1}\\
w_{1}f_{1}(x,y)+w_{2}f_{2}(x,y), & (x,y)\in f_{1}\cap f_{2}\\
f_{2}(x,y), & (x,y)\in f_{2}
\end{cases}
\tag{3}
$$

$$
w_{1}=\frac{x_{r}-x}{x_{r}-x_{l}},\qquad w_{2}=\frac{x-x_{l}}{x_{r}-x_{l}}
\tag{4}
$$

in equation (3), f1 and f2 are the image functions and w1, w2 are the weights they carry, determined by the overlap ratio of the images;
in the overlapping part, w1 gradually decreases from 1 to 0 while w2 gradually increases from 0 to 1, ensuring that f1 transitions smoothly into f2 there;
the weights w1 and w2 are given by equation (4), where x is the abscissa of the current pixel and xl and xr are the coordinates of the two boundaries of the common part, with xl ≤ x ≤ xr, w1 + w2 = 1 and 0 ≤ w1, w2 ≤ 1.
CN202210721911.8A 2022-06-24 2022-06-24 Unmanned aerial vehicle remote sensing image splicing method based on feature optimization Pending CN114897705A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210721911.8A CN114897705A (en) 2022-06-24 2022-06-24 Unmanned aerial vehicle remote sensing image splicing method based on feature optimization


Publications (1)

Publication Number Publication Date
CN114897705A true CN114897705A (en) 2022-08-12

Family

ID=82729626

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210721911.8A Pending CN114897705A (en) 2022-06-24 2022-06-24 Unmanned aerial vehicle remote sensing image splicing method based on feature optimization

Country Status (1)

Country Link
CN (1) CN114897705A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115205558A (en) * 2022-08-16 2022-10-18 中国测绘科学研究院 Multi-mode image matching method and device with rotation and scale invariance
CN115311254A (en) * 2022-09-13 2022-11-08 万岩铁路装备(成都)有限责任公司 Steel rail contour matching method based on Harris-SIFT algorithm
CN116228539A (en) * 2023-03-10 2023-06-06 贵州师范大学 Unmanned aerial vehicle remote sensing image stitching method



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination