CN114241022B - Unmanned aerial vehicle image automatic registration method and system - Google Patents

Unmanned aerial vehicle image automatic registration method and system

Info

Publication number
CN114241022B
CN114241022B
Authority
CN
China
Prior art keywords: image, noise, registered, feature, reduced image
Legal status: Active
Application number: CN202210184653.4A
Other languages: Chinese (zh)
Other versions: CN114241022A
Inventor
丁志平
梁治华
刘文达
Current Assignee: Beijing Aisi Times Technology Co ltd
Original Assignee: Beijing Aisi Times Technology Co ltd
Application filed by Beijing Aisi Times Technology Co ltd
Priority to CN202210184653.4A
Publication of CN114241022A
Application granted
Publication of CN114241022B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/70: Denoising; Smoothing
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20021: Dividing image into blocks, subimages or windows
    • G06T 2207/20048: Transform domain processing
    • G06T 2207/20056: Discrete and fast Fourier transform, [DFT, FFT]

Landscapes

  • Engineering & Computer Science
  • Physics & Mathematics
  • General Physics & Mathematics
  • Theoretical Computer Science
  • Computer Vision & Pattern Recognition
  • Image Processing
  • Image Analysis

Abstract

The application relates to the technical field of image processing and provides a method and system for automatic registration of unmanned aerial vehicle images. The method comprises the following steps: denoising a plurality of block images of an image to be registered and a plurality of block images of a reference image, respectively, to obtain a plurality of first noise-reduced images and a plurality of second noise-reduced images, wherein the first noise-reduced images and the second noise-reduced images correspond to each other one by one; screening the feature points in each first noise-reduced image according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image; and, based on a random sampling consistency algorithm, determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image, so as to register the image to be registered. In this way, the accuracy of feature point extraction and feature point matching is improved, and the calculation efficiency is improved.

Description

Unmanned aerial vehicle image automatic registration method and system
Technical Field
The application relates to the technical field of image processing, in particular to an unmanned aerial vehicle image automatic registration method and system.
Background
Image registration refers to the process of matching and aligning two or more images of the same region acquired by the same sensor under different aerial photography conditions or by different sensors. Registration of images acquired by an unmanned aerial vehicle is the basis of unmanned aerial vehicle image applications, and the precision of registration directly influences the application effect of the images. In the prior art, unmanned aerial vehicle image registration usually performs feature extraction on the whole image; affected by landform features and image noise, the registration process suffers from problems such as low accuracy of feature point extraction and low calculation speed.
Therefore, there is a need to provide an improved solution to the above-mentioned deficiencies of the prior art.
Disclosure of Invention
An object of the application is to provide an unmanned aerial vehicle image automatic registration method and system to solve or alleviate the problems existing in the prior art.
In order to achieve the above purpose, the present application provides the following technical solutions:
the application provides an unmanned aerial vehicle image automatic registration method, which comprises the following steps:
respectively denoising a plurality of block images of an image to be registered and a plurality of block images of a reference image to obtain a plurality of first denoising images and a plurality of second denoising images; wherein the first noise-reduced image and the second noise-reduced image correspond one to one;
screening the feature points in each first noise-reduced image according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
Preferably, the denoising the plurality of block images of the image to be registered and the plurality of block images of the reference image respectively to obtain a plurality of first denoised images and a plurality of second denoised images correspondingly includes:
respectively blocking the image to be registered and the reference image to correspondingly obtain a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image;
and respectively conducting guide filtering on the plurality of block images of the image to be registered and the plurality of block images of the reference image through a guide filter, and correspondingly obtaining a plurality of first noise-reduced images and a plurality of second noise-reduced images.
Preferably, the screening the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image includes:
establishing a maximum index map of each of the first noise-reduced images and each of the second noise-reduced images;
obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image;
determining the feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image according to the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all feature points in the corresponding second noise-reduced image;
performing primary screening on the feature points in each first noise-reduced image according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
determining the geographical position vector distance between each feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the corresponding feature point in the second noise-reduced image;
and carrying out secondary screening on the preliminarily screened feature points in each first noise-reduced image according to the geographical position vector distance of each preliminarily screened feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image.
Preferably, the establishing a maximum index map of each of the first noise-reduced images and each of the second noise-reduced images includes: respectively extracting the characteristic points of each first noise-reduced image and each second noise-reduced image based on a phase consistency method;
and respectively establishing a maximum index map of each first noise-reduced image and each second noise-reduced image according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
Preferably, the obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image includes:
respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method;
constructing a region image of the feature point of each first noise-reduced image in the maximum index map of each first noise-reduced image;
partitioning the regional image of the feature point of each first noise-reduced image to obtain a plurality of regional partitioned images;
and determining a feature descriptor of each feature point in each first noise-reduced image according to the distribution histogram of each region block image.
Preferably, the determining, based on a random sampling consistency algorithm, an optimal projection transformation model of the image to be registered according to the feature points screened from each of the first noise-reduced images to register the image to be registered includes:
based on a random sampling consistency algorithm, determining an optimal feature point set of the image to be registered according to the feature points screened in each first noise reduction image;
and determining an optimal projection transformation model of the image to be registered according to the optimal feature point set of the image to be registered based on a random sampling consistency algorithm so as to register the image to be registered.
Preferably, the determining an optimal feature point set of the image to be registered according to the feature points filtered in each first noise-reduced image based on the random sampling consistency algorithm includes:
randomly extracting K groups of first feature point sets from the feature points screened in each of the first noise-reduced images by random sampling, and determining, based on the row-column coordinates of the feature points in each group of first feature point sets, K first parameter matrices corresponding to the K groups of first feature point sets in a first projection transformation model of each first noise-reduced image; wherein K is a positive integer, and each group of the first feature point sets comprises at least four feature point pairs;
calculating a first cost function of each first feature point set in each first noise-reduced image based on the K first projection transformation models corresponding to the K first parameter matrices;
and combining at least four feature points in the first feature point set corresponding to the smallest first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
Preferably, the determining, based on a random sampling consistency algorithm, an optimal projection transformation model of the image to be registered according to the optimal feature point set of the image to be registered to register the image to be registered includes:
based on a random sampling consistency algorithm, randomly extracting S groups of second feature point sets from the optimal feature point set, and determining S second parameter matrixes corresponding to the S groups of second feature point sets in a second projection transformation model of the image to be registered based on row-column coordinates of feature points in each group of second feature point sets; wherein S is a positive integer; each group of the second feature point set comprises at least four feature points;
respectively calculating a second cost function of each second feature point set in the image to be registered based on S second projective transformation models corresponding to S second parameter matrixes;
and determining the second projective transformation model corresponding to the minimum second cost function as the optimal projective transformation model so as to register the image to be registered.
The embodiment of the present application further provides an unmanned aerial vehicle image automatic registration system, including:
the noise reduction unit is configured to perform noise reduction on a plurality of block images of the image to be registered and a plurality of block images of the reference image respectively to obtain a plurality of first noise reduction images and a plurality of second noise reduction images correspondingly; wherein the first noise-reduced image and the second noise-reduced image correspond one to one;
the point screening unit is configured to screen the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and the model registration unit is configured to determine an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
Advantageous effects:
in the method, a plurality of first noise reduction images and a plurality of second noise reduction images are correspondingly obtained by respectively reducing the noise of a plurality of block images of an image to be registered and a plurality of block images of a reference image; the first noise reduction image and the second noise reduction image correspond to each other one by one; then, according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image, screening the feature points in each first noise-reduced image; and finally, based on a random sampling consistency algorithm, determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image so as to register the image to be registered. Therefore, the accuracy of feature point extraction and feature matching is improved, and meanwhile, the precision and the calculation efficiency of the unmanned aerial vehicle image registration method are improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. Wherein:
fig. 1 is a schematic flow diagram of a method for automatic registration of images of a drone provided in accordance with some embodiments of the present application;
fig. 2 is a technical flow diagram of a method for automatic registration of images of a drone provided in accordance with some embodiments of the present application;
fig. 3 is a schematic structural diagram of an unmanned aerial vehicle image automatic registration system according to some embodiments of the present application.
Detailed Description
The present application will be described in detail below through embodiments with reference to the attached drawings. The various examples are provided by way of explanation of the application and are not limiting of the application. In fact, it will be apparent to those skilled in the art that modifications and variations can be made in the present application without departing from the scope or spirit of the application. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. It is therefore intended that the present application cover such modifications and variations as come within the scope of the appended claims and their equivalents.
Exemplary method
Fig. 1 is a schematic flowchart of an automatic unmanned aerial vehicle image registration method according to some embodiments of the present application, and fig. 2 is a technical flowchart of an automatic unmanned aerial vehicle image registration method according to some embodiments of the present application, and as shown in fig. 1 and fig. 2, the automatic unmanned aerial vehicle image registration method includes:
s101, respectively denoising a plurality of block images of an image to be registered and a plurality of block images of a reference image to obtain a plurality of first denoising images and a plurality of second denoising images; and the first noise reduction image and the second noise reduction image correspond to each other one by one.
In the embodiment of the application, the image to be registered is an image obtained by unmanned aerial vehicle aerial photography. Because an unmanned aerial vehicle is small in size and light in weight, it is strongly affected by air currents, and its stability and wind resistance are relatively poor; even when equipped with an autopilot and a stabilizing gyro device, tilting of the flight attitude and jitter are still difficult to avoid. All of these directly affect the acquired images, so that the images obtained by unmanned aerial vehicle aerial photography have deformation of different degrees and cannot be superposed with other image data of the same region, which restricts the range of application of unmanned aerial vehicle aerial images. In order to enable the images obtained by unmanned aerial vehicle aerial photography to be superposed with other existing data of the same area, image registration is needed.
Image registration refers to the process of matching and aligning two or more images from the same region acquired by the same sensor under different aerial conditions or different sensors. The image registration of the image acquired by the unmanned aerial vehicle is the basis of the image application of the unmanned aerial vehicle, and the precision of the image registration directly influences the application effect of the image of the unmanned aerial vehicle.
In the embodiment of the application, the unmanned aerial vehicle image is derived from a real earth surface with an agricultural landscape, such as a rice planting area, a wheat planting area or a corn planting area. An unmanned aerial vehicle image of an agricultural landscape has uniform landform characteristics and considerable noise, both of which interfere with the extraction of feature points during registration and lead to low registration accuracy.
In particular, image registration is the process of aligning an image to be registered to a reference image. The reference image, also referred to as a base map, may be an image acquired by the same sensor under different aerial photography conditions, or an image acquired by a different sensor. For example, the reference image may be an image obtained by the same unmanned aerial vehicle carrying the same sensor (for example, the same camera model) under the same illumination conditions, or an image obtained by a different unmanned aerial vehicle, by the same unmanned aerial vehicle carrying a different sensor, or under different aerial photography conditions; the reference image may also be a high-resolution satellite image.
In a traditional image matching method, an unmanned aerial vehicle image is generally taken as a whole: feature extraction is carried out on the whole unmanned aerial vehicle image, and feature matching is then carried out to complete the registration of the image to be registered and the reference image. For unmanned aerial vehicle images with uniform landform features and heavy noise, extracting features from the whole image easily yields redundant feature points, which causes matching errors, low registration accuracy and long processing time.
Different from the conventional image matching method in which the unmanned aerial vehicle image is taken as a whole, in the embodiment of the application, a plurality of block images of the image to be registered and a plurality of block images of the reference image are subjected to noise reduction respectively to obtain a plurality of first noise reduction images and a plurality of second noise reduction images correspondingly; and the first noise reduction image and the second noise reduction image correspond to each other one by one. Therefore, the image to be registered and the reference image are respectively blocked, and the one-to-one correspondence relationship between the plurality of blocked images of the image to be registered and the plurality of blocked images of the reference image is established, so that the accuracy of feature point extraction is improved, and the calculation efficiency is improved.
In an optional embodiment, the performing noise reduction on a plurality of block images of an image to be registered and a plurality of block images of a reference image respectively to obtain a plurality of first noise-reduced images and a plurality of second noise-reduced images correspondingly includes: respectively blocking the image to be registered and the reference image to correspondingly obtain a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image; and respectively conducting guide filtering on the plurality of block images of the image to be registered and the plurality of block images of the reference image through a guide filter, and correspondingly obtaining a plurality of first noise reduction images and a plurality of second noise reduction images.
In a specific example, the image to be registered and the reference image are respectively blocked, and a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image are obtained correspondingly, which is detailed as follows:
(1) The image to be registered is partitioned according to a preset size to obtain a plurality of block images of the image to be registered. For example, each unmanned aerial vehicle image to be registered is divided into M*N sub-regions, the upper-left boundary point of each sub-region is taken to obtain the upper-left boundary point set corresponding to the image to be registered, recorded as C1, C2, C3, ..., and the row-column coordinates of the i-th upper-left boundary point in the set are calculated and denoted (ri, ci), where i is a positive integer. Then, the geographic coordinates (Xi, Yi) corresponding to each upper-left boundary point in the set are calculated based on the transformation relation between row-column coordinates and geographic coordinates in the image to be registered.
(2) According to the geographic coordinates (Xi, Yi) corresponding to each upper-left boundary point, the row-column coordinates (r'i, c'i) in the reference image are obtained by inverse calculation based on the transformation relation between row-column coordinates and geographic coordinates in the reference image. The points at these row-column coordinates are taken as the upper-left boundary point set of the reference image, recorded as C'1, C'2, C'3, ... . The reference image is then partitioned according to its upper-left boundary point set to obtain a plurality of block images of the reference image, so that there is a one-to-one correspondence between the block images of the image to be registered and the block images of the reference image.
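As a concrete illustration of this blocking scheme, the following Python sketch tiles an image to be registered into M*N blocks and maps each block's upper-left boundary point into the reference image through geographic coordinates; the function names and the GDAL-style affine geotransform convention (without rotation terms) are assumptions for illustration, not part of the patent.

```python
def rowcol_to_geo(r, c, gt):
    """Assumed affine geotransform gt = (X0, dx, 0, Y0, 0, dy), no rotation terms."""
    return gt[0] + c * gt[1], gt[3] + r * gt[5]

def geo_to_rowcol(X, Y, gt):
    """Inverse of the affine geotransform above."""
    return int(round((Y - gt[3]) / gt[5])), int(round((X - gt[0]) / gt[1]))

def block_origins(img_shape, img_gt, ref_gt, M=4, N=4):
    """Upper-left boundary points C_i of the image to be registered and the
    corresponding points C'_i (row, col) in the reference image."""
    H, W = img_shape[:2]
    origins, ref_origins = [], []
    for bi in range(M):
        for bj in range(N):
            r, c = bi * (H // M), bj * (W // N)              # C_i in the image to be registered
            X, Y = rowcol_to_geo(r, c, img_gt)               # geographic coordinates of C_i
            origins.append((r, c))
            ref_origins.append(geo_to_rowcol(X, Y, ref_gt))  # C'_i in the reference image
    return origins, ref_origins
```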
Due to factors such as sensor offset, electromagnetic interference and complex topographic information of the region, redundant or unnecessary interference information usually exists in the original image to be registered and the reference image; this phenomenon is called image noise. Image noise interferes with registration, and therefore, prior to registration, noise reduction is required for the image to be registered and the reference image.
In some optional embodiments, the multiple block images of the image to be registered and the multiple block images of the reference image are respectively subjected to guide filtering through a guide filter, so as to obtain multiple first noise-reduced images and multiple second noise-reduced images correspondingly.
The principle of the guided filter is as follows: a block image of the image to be registered or a block image of the reference image is used as the input image p, and the output image q is defined as the input image p minus a noise part n; by performing this difference operation between the input image and the noise part, the input image is smoothed and the noise is removed. In order for the output image q to both remove noise and conform to a guide image I, the guided filter also defines the output image q in terms of the guide image I, where the guide image I and the input image p are homologous images.
The mathematical model of the guided filter is expressed by formula (1) and formula (2), as follows:

q_j = p_j - n_j    (1)

q_j = a·I_j + b    (2)

where q_j is the value of the j-th pixel of the output image q, p_j is the value of the j-th pixel of the input image p, n_j is the noise component of p_j, I_j is the value of the j-th pixel of the guide image I, and a, b are parameters weighting the input.
From formula (1) and formula (2), the expression of the noise part is derived and expressed by formula (3), as follows:

n_j = p_j - a·I_j - b    (3)

To solve for the parameters a, b, a window is constructed in the input image p centered on the k-th pixel, a cost equation of the form of formula (4) is defined, and the a, b that minimize the value term of the cost equation are taken as the solution of the parameters a, b. Formula (4) is as follows:

E(a_k, b_k) = Σ_{m∈ω_k} [ (a_k·I_m + b_k - p_m)² + ε·a_k² ]    (4)

where E(a_k, b_k) denotes the value term of the cost equation, ω_k denotes the window of the input image p centered on the k-th pixel, m denotes the index of a pixel within ω_k, I_m is the value of the m-th pixel of the guide image within ω_k, p_m is the value of the m-th pixel of the input image within ω_k, a_k and b_k are the solutions of the parameters a, b for the window ω_k, and ε is a penalty parameter that punishes overly large values of a_k.
As can be seen from formula (4), the value term of the cost equation is the sum, over the pixels of the window, of the squared noise part of each pixel of the input image p plus the square of the parameter a weighted by ε. The principle of minimizing the cost term is therefore: find the parameters a, b such that the output image approximates the input image p while the noise part is minimized and the influence of the guide image I on the output image q in formula (2) is kept under control.
Formula (4) is solved based on a linear regression method, giving the expressions of the parameters a_k, b_k at which the value term of the cost equation has its minimum:

a_k = ( (1/|ω|)·Σ_{m∈ω_k} I_m·p_m - μ_k·p̄_k ) / (σ_k² + ε)    (5)

b_k = p̄_k - a_k·μ_k    (6)

where |ω| is the number of pixels in the window ω_k, μ_k and σ_k² are the mean and variance of the guide image I within ω_k, and p̄_k is the mean of the pixel values of the input image p within ω_k.
Images obtained by an unmanned aerial vehicle over an agricultural landscape contain considerable noise, which significantly interferes with the extraction of feature points, and traditional noise reduction methods do not preserve the boundary information needed for the subsequent feature extraction. In the embodiment of the application, based on the guided filter, the block images of the image to be registered are each subjected to guided filtering to obtain a plurality of first noise-reduced images, and the block images of the reference image are each subjected to guided filtering to obtain a plurality of second noise-reduced images.
And S102, screening the feature points in each first noise-reduced image according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image.
In the embodiment of the present application, the screening of feature points in each first noise-reduced image according to a feature vector distance and a geographical location vector distance between each feature point in each first noise-reduced image and all feature points in a corresponding second noise-reduced image includes: establishing a maximum index map of each first noise-reduced image and each second noise-reduced image; obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image; determining a feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image according to the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all feature points in the corresponding second noise-reduced image; according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image, carrying out primary screening on the feature points in each first noise-reduced image; determining the geographical position vector distance between each feature point in each first noise-reduced image and the feature point in the corresponding second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the feature point in the corresponding second noise-reduced image; and carrying out secondary screening on the preliminarily screened feature points in each first noise-reduced image according to the geographical position vector distance between each preliminarily screened feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image.
In some alternative embodiments, establishing the maximum index map for each first noise-reduced image and each second noise-reduced image comprises: respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method; and respectively establishing a maximum index map of each first noise-reduced image and a maximum index map of each second noise-reduced image according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
In the embodiment of the application, the feature points of each first noise reduction image and each second noise reduction image are respectively extracted based on a phase consistency method. Specifically, the principle of the phase consistency method means that in the frequency domain of the images (the image to be matched and the reference image), the frequency components of the edge features (boundary information) of the images are in the same phase, and this concept can be applied to functions of different wavelengths. For example, the fourier decomposition of a square wave consists of a sinusoidal function with a frequency that is an odd multiple of the fundamental frequency, each sinusoidal component having a rising phase at the rising edge of the square wave, the phase having the greatest uniformity at the boundaries of the image, which is reflected in the image as a significantly varying edge. According to the principle of the phase consistency method, the boundary information of the image can be accurately extracted.
Here, each first noise-reduced image and each second noise-reduced image is regarded as a one-dimensional signal F(t), whose Fourier expansion is:

F(t) = Σ_n A_n·cos(n·ω₀·t + φ_n)    (7)

where A_n is the amplitude of the n-th sinusoidal component, ω₀ is the angular frequency, φ_n is the initial phase of the n-th sinusoidal component, and t is the argument of the Fourier transform.
Phase consistency is a measure of the phase similarity of the frequency-domain components (sinusoidal components) of an image, and is expressed by formula (8), as follows:

PC(t) = max_{φ̄(t)} [ Σ_n A_n·cos(φ_n(t) - φ̄(t)) / Σ_n A_n ]    (8)

where φ_n(t) is the phase of the n-th sinusoidal component obtained from the Fourier expansion of the one-dimensional signal F(t), φ̄(t) represents the weighted average of the phases, and A_n is the amplitude of the n-th sinusoidal component.
Here, the Fourier expansion can be simplified to the energy formula (9) in complex form, as follows:

E(t) = sqrt( Re(t)² + Im(t)² )    (9)

where Re(t) represents the real component of the Fourier expansion and Im(t) represents the imaginary component of the Fourier expansion.
Combining formulas (8) and (9), the expression for phase consistency can be written as:

PC(t) = E(t) / Σ_n A_n    (10)
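As an illustration of formulas (7) through (10), the sketch below computes a one-dimensional phase consistency measure by splitting the spectrum into a few bands and taking the ratio of the local energy to the sum of band amplitudes; the band layout and band count are illustrative assumptions.

```python
import numpy as np
from scipy.signal import hilbert

def phase_consistency_1d(f, n_bands=4, eps=1e-8):
    """PC(t) = E(t) / sum_n A_n, with E(t) the magnitude of the summed analytic signals."""
    f = np.asarray(f, dtype=np.float64)
    spec = np.fft.rfft(f)
    edges = np.linspace(1, spec.size, n_bands + 1).astype(int)   # skip the DC bin
    energy = np.zeros(f.size, dtype=complex)
    amp_sum = np.zeros(f.size)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_spec = np.zeros_like(spec)
        band_spec[lo:hi] = spec[lo:hi]
        band = np.fft.irfft(band_spec, n=f.size)
        analytic = hilbert(band)        # A_n(t) * exp(i*phi_n(t)) per sample
        energy += analytic              # vector sum -> local energy E(t), formula (9)
        amp_sum += np.abs(analytic)     # sum of amplitudes A_n(t)
    return np.abs(energy) / (amp_sum + eps)   # formula (10)
```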
Based on the phase consistency method, the edge features and feature points of each first noise-reduced image and each second noise-reduced image are extracted; the method is not affected by local light-and-shade variations in the images, and information such as corners, lines and textures in the first noise-reduced images and second noise-reduced images can be retained.
In some optional embodiments, the maximum index maps of each first noise-reduced image and each second noise-reduced image are respectively established according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
Specifically, each first noise-reduced image and each second noise-reduced image are input into a Log-Gabor filter, and a convolution sequence corresponding to each first noise-reduced image and each second noise-reduced image is established according to the preset number of convolution channels and the preset number of directions. And then, arranging convolution sequences obtained by the Log-Gabor filter, and constructing multi-channel convolution mapping of each first noise-reduced image and each second noise-reduced image so as to obtain multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image under the multi-channel convolution sequences. Finally, a maximum index map (MaxIndexMap, MIM for short) of each first noise-reduced image and each second noise-reduced image is constructed according to the multi-directional amplitude information of the images.
For example, each first noise-reduced image or second noise-reduced image is convolved with a Log-Gabor filter to obtain a multi-channel convolution sequence; the amplitude A_{s,o} at scale (channel) s and direction o is then calculated; finally, the amplitudes at all scales are summed, as expressed by formula (11):

A_o = Σ_{s=1}^{NS} A_{s,o}    (11)

where A_o is the sum of the amplitudes over all scales for direction o, NS is the total number of scales, s is a positive integer with 1 ≤ s ≤ NS, and A_{s,o} is the amplitude at scale s and direction o.
After the amplitudes at all scales are summed, the direction in which the summed amplitude A_o is maximal is taken at each pixel, and this index of the maximum is used as the value of the maximum index map; in this way, Log-Gabor filtering of each first noise-reduced image and each second noise-reduced image yields the maximum index map corresponding to that image.
In a specific scene, the preset number of convolution channels may be 6 and the number of directions may be 6, so that a convolution sequence of 6 channels corresponding to each first noise-reduced image and each second noise-reduced image is obtained; the amplitudes at scales s = 1, ..., 6 and directions o = 1, ..., 6 are then calculated; finally, the amplitudes over the 6 scales are summed, and the index of the direction with the maximum summed amplitude is taken to obtain the maximum index map.
In this way, Log-Gabor filtering of each first noise-reduced image and each second noise-reduced image yields amplitude information at multiple scales, and taking, at each pixel, the index of the maximum of the amplitudes summed over all scales as the value of the maximum index map can well resist the nonlinear radiation difference between the image to be registered and the reference image, avoids the influence of different brightness conditions at the time the image to be registered and the reference image were captured, and effectively improves the image registration effect.
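A short sketch of the maximum index map construction is given below; it assumes the Log-Gabor amplitude responses A_{s,o} have already been computed and stacked into an array of shape (NS, NO, H, W), which is an assumption about the data layout rather than something specified in the text.

```python
import numpy as np

def max_index_map(amps):
    """amps: Log-Gabor amplitude responses A_{s,o}, array of shape (NS, NO, H, W)."""
    summed = amps.sum(axis=0)             # formula (11): A_o for each direction o
    return np.argmax(summed, axis=0) + 1  # direction index with the maximum summed amplitude, 1..NO
```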
In some optional embodiments, a feature descriptor of each feature point in each first noise-reduced image is obtained according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; and obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image.
Specifically, obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image includes: extracting the feature points of each first noise reduction image based on a phase consistency method; constructing a region image of the feature point of each first noise-reduced image in the maximum index map of each first noise-reduced image; partitioning the regional image of the feature point of each first noise-reduced image to obtain a plurality of regional partitioned images; and determining a feature descriptor of each feature point in each first noise-reduced image according to the distribution histogram of each regional block image. And obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image by using the same steps.
Further, extracting feature points of each first noise-reduced image based on a phase consistency method includes: based on a phase consistency method, extracting the edge feature of each first noise reduction image by calculating the phase consistency of each first noise reduction image, then generating the feature point of each first noise reduction image based on a FAST feature point detection algorithm, and finally obtaining the feature descriptor of each feature point in each first noise reduction image according to the feature point of each first noise reduction image. And extracting the characteristic point of each second noise reduction image by adopting the same steps and based on a phase consistency method.
Therefore, the feature points are detected and extracted based on the phase consistency, correct feature points can be extracted under the condition that the brightness span of the original image is large, meanwhile, the local illumination invariance is achieved, namely the feature point extraction result of the image is not affected by the illumination condition during aerial photography, and the accuracy of feature point extraction is further improved.
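A sketch of this detection step is shown below, assuming a two-dimensional phase-consistency map pc_map (values in [0, 1]) has already been computed for a noise-reduced block image; OpenCV's FAST detector is used here as a stand-in for the FAST feature point detection algorithm mentioned above, and the threshold is illustrative.

```python
import cv2
import numpy as np

def detect_feature_points(pc_map, fast_threshold=20):
    """Detect feature points on a phase-consistency edge map."""
    edge_img = np.clip(pc_map * 255.0, 0, 255).astype(np.uint8)
    fast = cv2.FastFeatureDetector_create(threshold=fast_threshold, nonmaxSuppression=True)
    keypoints = fast.detect(edge_img, None)
    return [(int(kp.pt[1]), int(kp.pt[0])) for kp in keypoints]   # (row, col) positions
```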
In an application scenario, obtaining the feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of the first noise-reduced image and the feature points of the first noise-reduced image may include the following steps. Each first noise-reduced image corresponds to a plurality of feature points; a region image is constructed on the maximum index map centered on the position of each feature point, and the size of the constructed region image can be preset, for example a square region of a preset side length. Then, the region image of each feature point of each first noise-reduced image is partitioned, for example each region image is further subdivided into 6*6 region block images, so that a plurality of region block images are obtained. Finally, the feature descriptor of each feature point in each first noise-reduced image is determined from the distribution histogram of each region block image. Here, a distribution histogram is established for each region block image, the statistical result of the distribution histogram of each region block image is expressed in vector form, and the vectors corresponding to the plurality of region block images are then combined to obtain the feature descriptor of each feature point.
In a specific example, a region image is constructed on the maximum index map centered on the position of each feature point as a square region of a preset number of pixels; the region is then divided into 6 equal parts along each of two adjacent sides, obtaining 6*6 = 36 region block images. A distribution histogram is established for each region block image and the histogram statistics are expressed in vector form; finally, a feature vector with 216 dimensions is obtained for each feature point, and this feature vector is the feature descriptor of that feature point.
It can be understood that, with the same processing, the feature descriptor of each feature point in each second noise-reduced image is obtained according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image.
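A sketch of the descriptor construction under the example above follows; the crop side length used here is an illustrative assumption, while the 6*6 block grid and the 216-dimension result follow the text (216 dimensions correspond to 36 blocks with a 6-bin histogram each).

```python
import numpy as np

def mim_descriptor(mim, keypoint, region=72, grid=6, n_orients=6):
    """216-dim descriptor: grid*grid block histograms of MIM direction indices around a keypoint."""
    half = region // 2
    r, c = keypoint
    patch = mim[r - half:r + half, c - half:c + half]   # square region centred on the feature point
    if patch.shape != (region, region):
        return None                                     # too close to the image border
    step = region // grid
    hist_vecs = []
    for i in range(grid):
        for j in range(grid):
            block = patch[i * step:(i + 1) * step, j * step:(j + 1) * step]
            hist, _ = np.histogram(block, bins=n_orients, range=(1, n_orients + 1))
            hist_vecs.append(hist)
    desc = np.concatenate(hist_vecs).astype(np.float64)  # 6 * 6 * 6 = 216 dimensions
    norm = np.linalg.norm(desc)
    return desc / norm if norm > 0 else desc
```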
In an alternative embodiment, the filtering the feature points in each first noise-reduced image includes: determining a feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image according to the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all feature points in the corresponding second noise-reduced image; performing primary screening on the feature points in each first noise-reduced image according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image; determining the geographical position vector distance between each feature point in each first noise-reduced image and the feature point in the corresponding second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the feature point in the corresponding second noise-reduced image; and carrying out secondary screening on the feature points after the primary screening in each first noise-reduction image according to the geographical position vector distance between each feature point after the primary screening in each first noise-reduction image and the feature point in the corresponding second noise-reduction image.
In practical application, for each first noise-reduced image there exists a corresponding second noise-reduced image, and a plurality of feature points are extracted from each noise-reduced image (first noise-reduced image or second noise-reduced image). The feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image is determined according to the feature descriptor of that feature point and the feature descriptors of all feature points in the corresponding second noise-reduced image. For each feature point in the first noise-reduced image, the point with the minimum feature vector distance is taken as its initial match, forming a feature point pair. Then the two smallest feature vector distances, namely the distances to the nearest neighbor and the second-nearest neighbor, are taken, and their ratio is calculated and compared with a preset first threshold; when the ratio of the nearest-neighbor distance to the second-nearest-neighbor distance is greater than the preset first threshold, the matching error of the feature point is considered too large and the feature point is removed, thereby completing the preliminary screening of the feature points in each first noise-reduced image. Here, the preset first threshold is preferably 0.5 to 0.6.
And determining the geographical position vector distance between each feature point in each first noise-reduced image and the feature point in the corresponding second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the feature point in the corresponding second noise-reduced image, comparing the geographical position vector distance with a preset second threshold value, and removing the feature point when the geographical position vector distance is greater than the preset second threshold value, thereby realizing secondary screening of the preliminarily screened feature points in each first noise-reduced image. In the embodiment of the present application, the preset value of the second threshold is related to the feature of the image to be registered, for example, 30 meters may be taken, that is, if the distance between one feature point in the first noise-reduced image and one feature point geographic position vector in the corresponding second noise-reduced image is greater than 30 meters, the feature point is considered to be in a wrong matching state, and the feature point is removed.
In a specific implementation process, firstly, the feature points are primarily screened according to the feature vector distance between each feature point in each first noise-reduced image and the feature point in the corresponding second noise-reduced image, and then, the feature points after primary screening are secondarily screened according to the geographic coordinate of each feature point in each first noise-reduced image and the geographic position vector distance between the feature points in the corresponding second noise-reduced image; the characteristic points can be preliminarily screened according to the geographical position vector distance, and then the characteristic points after preliminary screening are secondarily screened according to the characteristic vector distance, so that the method is not limited.
The feature points are primarily screened according to the feature vector distance, and then the feature points after primary screening are secondarily screened according to the geographical position vector distance, so that the accuracy of feature point matching between the image to be registered and the reference image is improved.
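The two-stage screening can be sketched as follows; the descriptor and geographic-coordinate array layouts are assumptions, while the ratio threshold of about 0.5 to 0.6 and the 30-meter geographic distance threshold follow the text.

```python
import numpy as np

def screen_matches(desc1, desc2, geo1, geo2, ratio_thresh=0.55, geo_thresh=30.0):
    """desc1/desc2: (N1, 216) and (N2, 216) descriptors; geo1/geo2: (N1, 2) and (N2, 2)
    geographic coordinates in metres. Returns index pairs surviving both screenings."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)       # feature vector distances
        nearest, second = np.argsort(dists)[:2]
        # primary screening: nearest / second-nearest distance ratio test
        if dists[nearest] > ratio_thresh * dists[second]:
            continue
        # secondary screening: geographical position vector distance
        if np.linalg.norm(geo1[i] - geo2[nearest]) > geo_thresh:
            continue
        matches.append((i, nearest))
    return matches
```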
Step S103, based on a random sampling consistency (RANSAC) algorithm, determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image, so as to register the image to be registered.
In some optional embodiments, based on a random sampling consistency algorithm, determining an optimal projection transformation model of the image to be registered according to the feature points screened in each first noise-reduced image, so as to register the image to be registered, including: based on a random sampling consistency algorithm, determining an optimal feature point set of the image to be registered according to the feature points screened in each first noise reduction image; and determining an optimal projection transformation model of the image to be registered according to the optimal characteristic point set of the image to be registered based on a random sampling consistency algorithm so as to register the image to be registered.
In the embodiment of the present application, based on a random sampling consistency algorithm, determining an optimal feature point set of an image to be registered according to the feature points screened in each first noise-reduced image, including: a random sampling method is adopted, K groups of first feature point sets are randomly extracted from the feature points screened from each first noise reduction image, and K first parameter matrixes corresponding to the K groups of first feature point sets in the first projection transformation model of each first noise reduction image are determined based on the row-column coordinates of the feature points in each group of first feature point sets; k is a positive integer, and each group of first feature point sets comprises at least four feature points; calculating a first cost function of each first feature point set in each first noise-reduced image based on K first projective transformation models corresponding to K first parameter matrixes; and combining at least four feature points in the first feature point set corresponding to the minimum first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
The principle of the random sampling consistency algorithm is that the optimal coordinate transformation relation is found so that the number of characteristic points meeting the coordinate transformation relation is the largest. In the embodiment of the application, the process of registration based on the random sampling consistency algorithm is to find the coordinate transformation relation between the image to be registered and the reference image, so that the number of the characteristic points meeting the coordinate transformation relation is the largest. Generally, the coordinate transformation relationship is represented by the following transformation model:
s·[x', y', 1]^T = H·[x, y, 1]^T,  H = [[h11, h12, h13], [h21, h22, h23], [h31, h32, h33]]    (12)

where s denotes a scale factor, (x, y) denotes the coordinates of a pixel point in the first noise-reduced image, (x', y') denotes the coordinates of the corresponding pixel point in the second noise-reduced image, and H denotes the transformation parameter matrix of the coordinate transformation relation; h33 = 1 is usually taken to normalize the transformation parameter matrix.
It can be seen from formula (12) that the transformation parameter matrix of the coordinate transformation relation has 8 unknowns, so at least 8 linear equations are required to solve it; one pair of feature points yields two linear equations, and therefore at least 4 feature point pairs are required to solve the transformation parameter matrix.
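With h33 = 1, the 8 unknowns of formula (12) can be obtained from at least 4 non-collinear feature point pairs by stacking two linear equations per pair; the sketch below shows one way to set up and solve that system (the function name and array layout are illustrative).

```python
import numpy as np

def homography_from_pairs(src, dst):
    """Solve the 8 parameters of formula (12) with h33 = 1 from >= 4 point pairs.
    src, dst: arrays of shape (n, 2) holding (x, y) coordinates."""
    A, b = [], []
    for (x, y), (xp, yp) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * xp, -y * xp]); b.append(xp)
        A.append([0, 0, 0, x, y, 1, -x * yp, -y * yp]); b.append(yp)
    h = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)[0]
    return np.append(h, 1.0).reshape(3, 3)   # transformation parameter matrix H
```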
In the embodiment of the present application, based on a random sampling consistency algorithm, local quality constraint is performed on each first noise-reduced image and the second noise-reduced image corresponding to the first noise-reduced image, specifically: and adopting a random sampling method to randomly extract K groups of first characteristic point sets from the characteristic point pairs of each first noise-reduced image and the corresponding second noise-reduced image, wherein each group of first characteristic point sets comprises at least 4 characteristic point pairs, and each selected characteristic point pair is not collinear. And determining K transformation parameter matrixes (first parameter matrixes) corresponding to K groups of first feature point sets in the first projection transformation model of each first noise-reduced image based on row-column coordinates of feature points in each group of first feature point sets, and determining K coordinate transformation relations (first projection transformation models) between the first noise-reduced image and a second noise-reduced image corresponding to the first noise-reduced image according to the transformation parameter matrixes. And respectively substituting all the feature point pairs into the K first projection transformation models, calculating the number and projection errors (first cost functions) of the feature point pairs meeting each first projection transformation model according to a preset third threshold, and taking the first projection transformation model corresponding to the minimum first cost function as a local optimal image transformation model, wherein at least 4 feature points corresponding to the local optimal image transformation model are local optimal feature points of the current first noise reduction image. And combining at least 4 feature points (namely local optimal feature points) in the first feature point set corresponding to the minimum first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
In an optional embodiment, based on a random sampling consistency algorithm, determining an optimal projection transformation model of an image to be registered according to an optimal feature point set of the image to be registered, so as to register the image to be registered, including: based on a random sampling consistency algorithm, randomly extracting S groups of second feature point sets from the optimal feature point set, and determining S second parameter matrixes corresponding to the S groups of second feature point sets in a second projection transformation model of the image to be registered based on row-column coordinates of feature points in each group of second feature point sets; wherein S is a positive integer; each group of the second feature point sets comprises at least four feature points; respectively calculating a second cost function of each second feature point set in the image to be registered based on S second projective transformation models corresponding to the S second parameter matrixes; and determining the second projective transformation model corresponding to the minimum second cost function as the optimal projective transformation model so as to register the image to be registered.
In the embodiment of the present application, the optimal feature point set of the image to be registered is composed of the locally optimal feature points of each first noise-reduced image, and the optimal projection transformation model of the image to be registered is determined from this optimal feature point set based on the random sampling consistency algorithm, specifically: the optimal feature point set of the image to be registered is randomly sampled to obtain S groups of second feature point sets, each containing at least 4 feature points, and from each group of second feature point sets a transformation parameter matrix (second parameter matrix) of the coordinate transformation relationship (second projection transformation model) between the image to be registered and the reference image is determined. All the feature points in the optimal feature point set are then substituted into each second projection transformation model and the projection error (second cost function) of each model is calculated; the second projection transformation model corresponding to the minimum second cost function is taken as the optimal projection transformation model, the feature points satisfying it form the globally optimal feature point set, the global transformation matrix between the image to be registered and the reference image is calculated from this globally optimal feature point set, and the registration is completed.
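A similarly hedged sketch of this global stage over the merged optimal feature point set is given below; again the names, the value of S, and the scoring rule are assumptions rather than the patent's implementation, and solve_homography is the hypothetical helper from the earlier sketch:

```python
import numpy as np

def global_optimal_model(opt_pairs, S=1000, inlier_thresh=3.0, rng=None):
    """Global selection of the optimal projection transformation model over the
    merged optimal feature point set of the image to be registered.

    opt_pairs: array of shape (M, 4) of locally optimal feature point pairs gathered
               from all block images, expressed in full-image coordinates; columns are
               reference-image (x, y) followed by to-be-registered-image (x', y').
    Returns the transformation parameter matrix of the best-scoring model.
    """
    rng = rng if rng is not None else np.random.default_rng()
    opt_pairs = np.asarray(opt_pairs, dtype=float)
    best_score, best_H = None, None
    for _ in range(S):
        # One group of the second feature point set (at least 4 feature points).
        sample = opt_pairs[rng.choice(len(opt_pairs), size=4, replace=False)]
        H = solve_homography(sample[:, :2], sample[:, 2:])       # second parameter matrix
        src = np.column_stack([opt_pairs[:, :2], np.ones(len(opt_pairs))])
        proj = src @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - opt_pairs[:, 2:], axis=1)
        inliers = err < inlier_thresh
        # Second cost function: total projection error of the points satisfying the model;
        # models supported by more points are preferred.
        score = (int(inliers.sum()), -float(err[inliers].sum()))
        if best_score is None or score > best_score:
            best_score, best_H = score, H
    return best_H
```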
In this way, local quality constraint is first applied to the feature points of each first noise-reduced image to obtain its locally optimal feature points, the feature point sets satisfying the local quality constraint in all block images are then combined, and finally global quality constraint is applied to the combined optimal feature point set, which further improves the accuracy of feature point matching.
In summary, in the present application, the plurality of block images of the image to be registered and the plurality of block images of the reference image are noise-reduced respectively to correspondingly obtain a plurality of first noise-reduced images and a plurality of second noise-reduced images, where the first noise-reduced images and the second noise-reduced images correspond one to one; the feature points in each first noise-reduced image are screened according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the feature points in the corresponding second noise-reduced image; and an optimal projection transformation model of the image to be registered is determined, based on the random sampling consistency algorithm, from the screened feature points in each first noise-reduced image, so as to register the image to be registered. The accuracy of feature point extraction and feature matching is thereby improved, and the problems of insufficient precision and low computational efficiency of traditional unmanned aerial vehicle image registration methods are solved.
Exemplary System
Fig. 3 is a schematic structural diagram of an unmanned aerial vehicle image automatic registration system according to some embodiments of the present application. As shown in Fig. 3, the unmanned aerial vehicle image automatic registration system includes:
the noise reduction unit 201 is configured to perform noise reduction on the multiple block images of the image to be registered and the multiple block images of the reference image respectively to correspondingly obtain multiple first noise-reduced images and multiple second noise-reduced images; wherein the first noise-reduced images and the second noise-reduced images correspond one to one;
the point screening unit 202 is configured to screen the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the feature points in the corresponding second noise-reduced image;
the model registration unit 203 is configured to determine an optimal projection transformation model of the image to be registered according to the feature points screened in each of the first noise-reduced images based on a random sampling consistency algorithm, so as to register the image to be registered.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (8)

1. An unmanned aerial vehicle image automatic registration method is characterized by comprising the following steps:
respectively blocking an image to be registered and a reference image, and correspondingly obtaining a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image;
wherein, the image to be registered and the reference image are respectively partitioned into blocks, specifically:
segmenting the image to be registered into M*N sub-areas to obtain a plurality of block images of the image to be registered; taking the upper left boundary point of each block image of the image to be registered to obtain an upper left boundary point set corresponding to the image to be registered, denoted C1, C2, C3, …, wherein the row-column coordinates of the i-th upper left boundary point in the upper left boundary point set are expressed as (r_i, c_i), and i is a positive integer; then, based on the transformation relation between the row-column coordinates and the geographic coordinates in the image to be registered, calculating the geographic coordinates (X_i, Y_i) corresponding to each upper left boundary point in the upper left boundary point set; according to the geographic coordinates (X_i, Y_i) corresponding to each upper left boundary point, and based on the transformation relation between the row-column coordinates and the geographic coordinates in the reference image, reversely calculating the row-column coordinates (r_i', c_i') in the reference image; taking the points corresponding to the row-column coordinates (r_i', c_i') of the reference image as the upper left boundary point set of the reference image, denoted C'1, C'2, C'3, …; partitioning the reference image according to the upper left boundary point set of the reference image to obtain a plurality of block images of the reference image; the plurality of block images of the image to be registered correspond one to one with the plurality of block images of the reference image;
respectively conducting guide filtering on the plurality of block images of the image to be registered and the plurality of block images of the reference image through a guide filter to correspondingly obtain a plurality of first noise reduction images and a plurality of second noise reduction images; wherein the first noise-reduced image and the second noise-reduced image correspond one to one;
screening the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
2. The unmanned aerial vehicle image automatic registration method according to claim 1, wherein the screening the feature points in each first noise-reduced image according to a feature vector distance and a geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image comprises:
establishing a maximum index map of each of the first noise-reduced images and each of the second noise-reduced images;
obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image;
determining a feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image according to the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all the feature points in the corresponding second noise-reduced image;
performing primary screening on the feature points in each first noise-reduced image according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
determining the geographical position vector distance between each feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the corresponding feature point in the second noise-reduced image;
and carrying out secondary screening on the preliminarily screened feature points in each first noise-reduced image according to the geographical position vector distance of each preliminarily screened feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image.
3. The unmanned aerial vehicle image automatic registration method of claim 2, wherein the establishing a maximum index map for each of the first and second noise-reduced images comprises:
respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method;
and respectively establishing a maximum index map of each first noise-reduced image and each second noise-reduced image according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
4. The unmanned aerial vehicle image automatic registration method of claim 3, wherein the obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image comprises:
respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method;
constructing a region image of the feature point of each first noise-reduced image in the maximum index map of each first noise-reduced image;
partitioning the regional image of the feature point of each first noise-reduced image to obtain a plurality of regional partitioned images;
and determining a feature descriptor of each feature point in each first noise-reduced image according to the distribution histogram of each region block image.
5. The unmanned aerial vehicle image automatic registration method according to claim 1, wherein the determining an optimal projection transformation model of the image to be registered according to the filtered feature points in each of the first noise-reduced images based on a random sampling consistency algorithm to register the image to be registered comprises:
based on a random sampling consistency algorithm, determining an optimal feature point set of the image to be registered according to the feature points screened in each first noise reduction image;
and determining an optimal projection transformation model of the image to be registered according to the optimal feature point set of the image to be registered based on a random sampling consistency algorithm so as to register the image to be registered.
6. The unmanned aerial vehicle image automatic registration method of claim 5, wherein the determining an optimal feature point set of the image to be registered according to the feature points filtered in each of the first noise-reduced images based on a random sampling consistency algorithm comprises:
adopting a random sampling method to randomly extract K groups of first feature point sets from the feature points screened in each first noise-reduced image, and determining, according to the row-column coordinates of the feature points in each group of first feature point sets, the K first parameter matrices corresponding to the K groups of first feature point sets in the first projection transformation model of each first noise-reduced image; wherein K is a positive integer; each group of the first feature point sets comprises at least four feature point pairs;
respectively calculating a first cost function of each first feature point set in each first noise-reduced image based on the K first projection transformation models corresponding to the K first parameter matrices;
and combining at least four feature points in the first feature point set corresponding to the smallest first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
7. The unmanned aerial vehicle image automatic registration method of claim 6, wherein the determining an optimal projective transformation model of the image to be registered according to the optimal feature point set of the image to be registered based on a random sampling consistency algorithm to register the image to be registered comprises:
randomly extracting S groups of second feature point sets from the optimal feature point set based on a random sampling consistency algorithm, and determining, according to the row-column coordinates of the feature points in each group of second feature point sets, the S second parameter matrices corresponding to the S groups of second feature point sets in the second projective transformation model of the image to be registered; wherein S is a positive integer; each group of the second feature point sets comprises at least four feature points;
respectively calculating a second cost function of each second feature point set in the image to be registered based on the S second projective transformation models corresponding to the S second parameter matrices;
and determining the second projective transformation model corresponding to the minimum second cost function as the optimal projective transformation model so as to register the image to be registered.
8. An unmanned aerial vehicle image automatic registration system, comprising:
the noise reduction unit is configured to respectively block an image to be registered and a reference image to correspondingly obtain a plurality of block images of the image to be registered and a plurality of block images of the reference image;
wherein, the image to be registered and the reference image are respectively partitioned into blocks, specifically:
segmenting the image to be registered into M*N sub-areas to obtain a plurality of block images of the image to be registered; taking the upper left boundary point of each block image of the image to be registered to obtain an upper left boundary point set corresponding to the image to be registered, denoted C1, C2, C3, …, wherein the row-column coordinates of the i-th upper left boundary point in the upper left boundary point set are expressed as (r_i, c_i), and i is a positive integer; then, based on the transformation relation between the row-column coordinates and the geographic coordinates in the image to be registered, calculating the geographic coordinates (X_i, Y_i) corresponding to each upper left boundary point in the upper left boundary point set; according to the geographic coordinates (X_i, Y_i) corresponding to each upper left boundary point, and based on the transformation relation between the row-column coordinates and the geographic coordinates in the reference image, reversely calculating the row-column coordinates (r_i', c_i') in the reference image; taking the points corresponding to the row-column coordinates (r_i', c_i') of the reference image as the upper left boundary point set of the reference image, denoted C'1, C'2, C'3, …; partitioning the reference image according to the upper left boundary point set of the reference image to obtain a plurality of block images of the reference image; the plurality of block images of the image to be registered correspond one to one with the plurality of block images of the reference image;
respectively denoising a plurality of block images of an image to be registered and a plurality of block images of a reference image through a guide filter to correspondingly obtain a plurality of first denoising images and a plurality of second denoising images; wherein the first noise-reduced image and the second noise-reduced image correspond one to one;
the point screening unit is configured to screen the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and the model registration unit is configured to determine an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
CN202210184653.4A 2022-02-28 2022-02-28 Unmanned aerial vehicle image automatic registration method and system Active CN114241022B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210184653.4A CN114241022B (en) 2022-02-28 2022-02-28 Unmanned aerial vehicle image automatic registration method and system

Publications (2)

Publication Number Publication Date
CN114241022A CN114241022A (en) 2022-03-25
CN114241022B true CN114241022B (en) 2022-06-03

Family

ID=80748254

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210184653.4A Active CN114241022B (en) 2022-02-28 2022-02-28 Unmanned aerial vehicle image automatic registration method and system

Country Status (1)

Country Link
CN (1) CN114241022B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024025487A1 (en) * 2022-07-25 2024-02-01 Ozyegin Universitesi A system for processing images acquired by an air vehicle

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8326088B1 (en) * 2009-05-26 2012-12-04 The United States Of America As Represented By The Secretary Of The Air Force Dynamic image registration
CN103839265A (en) * 2014-02-26 2014-06-04 西安电子科技大学 SAR image registration method based on SIFT and normalized mutual information
CN106960449A (en) * 2017-03-14 2017-07-18 西安电子科技大学 The heterologous method for registering constrained based on multiple features
CN110211058A (en) * 2019-05-15 2019-09-06 南京极目大数据技术有限公司 A kind of data enhancement methods of medical image
CN113409369A (en) * 2021-05-25 2021-09-17 西安电子科技大学 Multi-mode remote sensing image registration method based on improved RIFT
CN113643334A (en) * 2021-07-09 2021-11-12 西安电子科技大学 Different-source remote sensing image registration method based on structural similarity

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279739A (en) * 2015-09-08 2016-01-27 哈尔滨工程大学 Self-adaptive fog-containing digital image defogging method
CN108021857B (en) * 2017-08-21 2021-12-21 哈尔滨工程大学 Building detection method based on unmanned aerial vehicle aerial image sequence depth recovery
CN109101995A (en) * 2018-07-06 2018-12-28 航天星图科技(北京)有限公司 A kind of quick unmanned plane image matching method based on fusion local feature
CN112102379B (en) * 2020-08-28 2022-11-04 电子科技大学 Unmanned aerial vehicle multispectral image registration method
CN113538290A (en) * 2021-07-30 2021-10-22 沭阳翔玮生态农业开发有限公司 Agricultural aerial image processing method and system based on artificial intelligence

Also Published As

Publication number Publication date
CN114241022A (en) 2022-03-25

Similar Documents

Publication Publication Date Title
CN111583110B (en) Splicing method of aerial images
CN104574347B (en) Satellite in orbit image geometry positioning accuracy evaluation method based on multi- source Remote Sensing Data data
CN103247029B (en) A kind of high spectrum image geometrical registration method generated for spliced detector
CN108564532B (en) Large-scale ground distance satellite-borne SAR image mosaic method
CN107203973A (en) A kind of sub-pixel positioning method of three-dimensional laser scanning system center line laser center
CN104899888B (en) A kind of image sub-pixel edge detection method based on Legendre squares
CN107146200B (en) Unmanned aerial vehicle remote sensing image splicing method based on image splicing quality evaluation
GB2557398A (en) Method and system for creating images
CN102855628B (en) Automatic matching method for multisource multi-temporal high-resolution satellite remote sensing image
CN108961286B (en) Unmanned aerial vehicle image segmentation method considering three-dimensional and edge shape characteristics of building
CN106096497B (en) A kind of house vectorization method for polynary remotely-sensed data
CN110084743B (en) Image splicing and positioning method based on multi-flight-zone initial flight path constraint
US11367213B2 (en) Method and apparatus with location estimation
CN112669280B (en) Unmanned aerial vehicle inclination aerial photography right-angle image control point target detection method based on LSD algorithm
CN114241022B (en) Unmanned aerial vehicle image automatic registration method and system
CN107341781A (en) Based on the SAR image correcting methods for improving the matching of phase equalization characteristic vector base map
JP2021086616A (en) Method for extracting effective region of fisheye image based on random sampling consistency
CN115951350A (en) Permanent scatterer point extraction method, device, equipment and medium
CN114897676A (en) Unmanned aerial vehicle remote sensing multispectral image splicing method, device and medium
CN117152272B (en) Viewing angle tracking method, device, equipment and storage medium based on holographic sand table
CN109084675A (en) Center of circle positioning device and method based on Embedded geometrical characteristic in conjunction with Zernike square
Liu et al. Match selection and refinement for highly accurate two-view structure from motion
Zhang et al. An enhanced multi-view vertical line locus matching algorithm of object space ground primitives based on positioning consistency for aerial and space images
Boerner et al. Brute force matching between camera shots and synthetic images from point clouds
CN114565653A (en) Heterogeneous remote sensing image matching method with rotation change and scale difference

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant