CN114241022B - Unmanned aerial vehicle image automatic registration method and system - Google Patents
- Publication number: CN114241022B (application CN202210184653.4A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T7/33 — Determination of transform parameters for the alignment of images (image registration) using feature-based methods
- G06T5/70 — Denoising; Smoothing
- G06T2207/10004 — Still image; Photographic image
- G06T2207/20021 — Dividing image into blocks, subimages or windows
- G06T2207/20056 — Discrete and fast Fourier transform [DFT, FFT]
Abstract
The application relates to the technical field of image processing, and provides an unmanned aerial vehicle image automatic registration method and system. The method comprises the following steps: denoising a plurality of block images of an image to be registered and a plurality of block images of a reference image respectively to obtain a plurality of first noise-reduced images and a plurality of second noise-reduced images, wherein the first noise-reduced images and the second noise-reduced images correspond one to one; screening the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in the first noise-reduced image and all feature points in the corresponding second noise-reduced image; and, based on a random sample consensus (RANSAC) algorithm, determining an optimal projection transformation model of the image to be registered from the screened feature points in each first noise-reduced image, so as to register the image to be registered. The accuracy of feature point extraction and feature point matching is thereby improved, together with the calculation efficiency.
Description
Technical Field
The application relates to the technical field of image processing, in particular to an unmanned aerial vehicle image automatic registration method and system.
Background
Image registration refers to the process of matching and aligning two or more images of the same region acquired by the same sensor under different aerial photography conditions, or by different sensors. Registration of images acquired by an unmanned aerial vehicle is the basis of unmanned aerial vehicle image applications, and its precision directly affects the application effect of the images. In the prior art, unmanned aerial vehicle image registration usually performs feature extraction on the whole image; affected by landform features and image noise, the registration process suffers from problems such as low feature point extraction accuracy and low calculation speed.
Therefore, there is a need to provide an improved solution that addresses the above deficiencies of the prior art.
Disclosure of Invention
An object of the application is to provide an unmanned aerial vehicle image automatic registration method and system to solve or alleviate the problems existing in the prior art.
In order to achieve the above purpose, the present application provides the following technical solutions:
the application provides an unmanned aerial vehicle image automatic registration method, which comprises the following steps:
respectively denoising a plurality of block images of an image to be registered and a plurality of block images of a reference image to obtain a plurality of first denoising images and a plurality of second denoising images; wherein the first noise-reduced image and the second noise-reduced image correspond one to one;
screening the feature points in each first noise-reduced image according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
Preferably, the denoising the plurality of block images of the image to be registered and the plurality of block images of the reference image respectively to obtain a plurality of first denoised images and a plurality of second denoised images correspondingly includes:
respectively blocking the image to be registered and the reference image to correspondingly obtain a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image;
and performing guided filtering on the plurality of block images of the image to be registered and the plurality of block images of the reference image respectively with a guided filter, correspondingly obtaining a plurality of first noise-reduced images and a plurality of second noise-reduced images.
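As a minimal sketch of the blocking step (the tile counts and the equal-size truncation are assumptions; the patent derives tile boundaries from upper-left boundary points), the following splits an image into M*N block images in row-major order:

```python
import numpy as np

def split_into_tiles(img, m, n):
    """Split a 2-D image into m*n equal tiles (row-major order);
    edges are truncated so all tiles share one shape."""
    h, w = img.shape[:2]
    th, tw = h // m, w // n
    return [img[i * th:(i + 1) * th, j * tw:(j + 1) * tw]
            for i in range(m) for j in range(n)]

img = np.arange(64, dtype=float).reshape(8, 8)
tiles = split_into_tiles(img, 2, 2)
print(len(tiles), tiles[0].shape)  # 4 (4, 4)
```

Each tile of the image to be registered would then be denoised and matched only against its corresponding tile of the reference image.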
Preferably, the screening the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image includes:
establishing a maximum index map of each of the first noise-reduced images and each of the second noise-reduced images;
obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image;
determining a feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image, according to the feature descriptor of each feature point in the first noise-reduced image and the feature descriptors of all feature points in the corresponding second noise-reduced image;
performing primary screening on the feature points in each first noise-reduced image according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
determining the geographical position vector distance between each feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the corresponding feature point in the second noise-reduced image;
and carrying out secondary screening on the preliminarily screened feature points in each first noise-reduced image according to the geographical position vector distance of each preliminarily screened feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image.
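The two-stage screening described above can be sketched as follows; the Lowe-style nearest/second-nearest ratio test and both thresholds are illustrative assumptions, since the text does not fix concrete screening rules:

```python
import numpy as np

def screen_feature_points(desc_a, desc_b, geo_a, geo_b, ratio=0.8, geo_max=5.0):
    """Two-stage screening sketch.  Stage 1 (feature vector distance):
    keep a point only if its nearest descriptor match is clearly better
    than the second nearest.  Stage 2 (geographic position vector
    distance): drop survivors whose geographic offset to the matched
    point is too large.  ratio and geo_max are assumed thresholds."""
    kept = []
    for i, d in enumerate(desc_a):
        dist = np.linalg.norm(desc_b - d, axis=1)   # feature vector distances
        order = np.argsort(dist)
        j = int(order[0])
        if len(dist) > 1 and dist[j] > ratio * dist[order[1]]:
            continue                                # stage 1: ambiguous match
        if np.linalg.norm(geo_a[i] - geo_b[j]) > geo_max:
            continue                                # stage 2: geographic outlier
        kept.append((i, j))
    return kept

desc_a = np.array([[0.0, 0.0], [10.0, 10.0]])
desc_b = np.array([[0.0, 0.1], [5.0, 5.0], [10.0, 10.0]])
geo_a = np.array([[0.0, 0.0], [100.0, 100.0]])
geo_b = np.array([[1.0, 1.0], [2.0, 2.0], [0.0, 0.0]])
matches = screen_feature_points(desc_a, desc_b, geo_a, geo_b)
print(matches)  # [(0, 0)] - the second point is a geographic outlier
```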
Preferably, the establishing a maximum index map of each of the first noise-reduced images and each of the second noise-reduced images includes: respectively extracting the characteristic points of each first noise-reduced image and each second noise-reduced image based on a phase consistency method;
and respectively establishing a maximum index map of each first noise-reduced image and each second noise-reduced image according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
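A toy illustration of a maximum index map: each pixel stores the index of the orientation with the largest response amplitude. Simple directional derivatives stand in here for the multi-directional Log-Gabor amplitudes of the text (an assumption made to keep the sketch short):

```python
import numpy as np

def maximum_index_map(img, n_orient=6):
    """For every pixel, record which orientation gives the largest
    response amplitude.  Directional first derivatives stand in for
    the Log-Gabor filter bank used in the text."""
    gy, gx = np.gradient(img.astype(float))
    responses = []
    for o in range(n_orient):
        theta = o * np.pi / n_orient
        # amplitude of the derivative taken along orientation theta
        responses.append(np.abs(np.cos(theta) * gx + np.sin(theta) * gy))
    return np.argmax(np.stack(responses), axis=0) + 1  # 1-based index

ramp = np.tile(np.arange(8.0), (8, 1))  # intensity rises along columns
mim = maximum_index_map(ramp)
print(np.unique(mim))  # [1] -> the horizontal orientation wins everywhere
```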
Preferably, the obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image includes:
respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method;
constructing a region image of the feature point of each first noise-reduced image in the maximum index map of each first noise-reduced image;
partitioning the regional image of the feature point of each first noise-reduced image to obtain a plurality of regional partitioned images;
and determining a feature descriptor of each feature point in each first noise-reduced image according to the distribution histogram of each region block image.
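The descriptor construction above can be sketched as follows, assuming a hypothetical 2x2 block grid over the feature point's region image of the maximum index map:

```python
import numpy as np

def descriptor_from_region(mim_region, grid=2, n_orient=6):
    """Divide the region image (a patch of the maximum index map) into
    grid*grid blocks, histogram the orientation indices in each block,
    and concatenate the histograms into one normalized vector.  The
    grid size is an assumption; the text only names the steps."""
    h, w = mim_region.shape
    bh, bw = h // grid, w // grid
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = mim_region[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            hist, _ = np.histogram(block, bins=n_orient, range=(1, n_orient + 1))
            hists.append(hist)
    desc = np.concatenate(hists).astype(float)
    n = np.linalg.norm(desc)
    return desc / n if n > 0 else desc

region = np.ones((4, 4), dtype=int)  # toy patch: orientation index 1 everywhere
desc = descriptor_from_region(region)
print(desc.shape, desc[0])  # (24,) 0.5
```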
Preferably, the determining, based on a random sampling consistency algorithm, an optimal projection transformation model of the image to be registered according to the feature points screened from each of the first noise-reduced images to register the image to be registered includes:
based on a random sampling consistency algorithm, determining an optimal feature point set of the image to be registered according to the feature points screened in each first noise reduction image;
and determining an optimal projection transformation model of the image to be registered according to the optimal feature point set of the image to be registered based on a random sampling consistency algorithm so as to register the image to be registered.
Preferably, the determining an optimal feature point set of the image to be registered according to the feature points filtered in each first noise-reduced image based on the random sampling consistency algorithm includes:
randomly extracting S groups of first feature point sets from the screened feature points of each first noise-reduced image, and determining, based on the row-column coordinates of the feature points in each group of first feature point sets, S first parameter matrices corresponding to the S groups of first feature point sets in a first projection transformation model of each first noise-reduced image; wherein S is a positive integer, and each group of the first feature point set comprises at least four feature point pairs;
calculating a first cost function of each first feature point set in each first noise-reduced image based on the S first projective transformation models corresponding to the S first parameter matrices;
and combining at least four feature points in the first feature point set corresponding to the smallest first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
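A minimal sketch of this per-image step, under the assumptions that the projection transformation model is a 3x3 homography fitted by the direct linear transform and that the first cost function is the summed reprojection error (the text does not spell out either):

```python
import numpy as np

def fit_projective(src, dst):
    """Direct linear transform: fit a 3x3 projective model (the
    'parameter matrix' of the text) from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def cost(H, src, dst):
    """Assumed cost function: summed reprojection error of all pairs."""
    ph = np.hstack([src, np.ones((len(src), 1))]) @ H.T
    proj = ph[:, :2] / ph[:, 2:3]
    return float(np.linalg.norm(proj - dst, axis=1).sum())

def best_first_set(src, dst, n_sets=20, seed=0):
    """Per-tile RANSAC sketch: draw n_sets random samples of four pairs,
    fit a model to each, keep the sample with the smallest cost
    (n_sets and the fixed seed are assumptions for reproducibility)."""
    rng = np.random.default_rng(seed)
    best_idx, best_cost = None, np.inf
    for _ in range(n_sets):
        idx = rng.choice(len(src), size=4, replace=False)
        c = cost(fit_projective(src[idx], dst[idx]), src, dst)
        if c < best_cost:
            best_idx, best_cost = idx, c
    return best_idx, best_cost

src = np.array([[0, 0], [0, 2], [2, 0], [2, 2], [1, 3], [3, 1], [4, 4], [5, 2]], float)
dst = src + np.array([1.0, 2.0])        # ground truth: a pure shift
idx, c = best_first_set(src, dst)
print(len(idx), c < 1e-6)
```

The winning four pairs from each tile would then be pooled into the optimal feature point set of the whole image.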
Preferably, the determining, based on a random sampling consistency algorithm, an optimal projection transformation model of the image to be registered according to the optimal feature point set of the image to be registered to register the image to be registered includes:
based on a random sampling consistency algorithm, randomly extracting S groups of second feature point sets from the optimal feature point set, and determining S second parameter matrixes corresponding to the S groups of second feature point sets in a second projection transformation model of the image to be registered based on row-column coordinates of feature points in each group of second feature point sets; wherein S is a positive integer; each group of the second feature point set comprises at least four feature points;
respectively calculating a second cost function of each second feature point set in the image to be registered based on S second projective transformation models corresponding to S second parameter matrixes;
and determining the second projective transformation model corresponding to the minimum second cost function as the optimal projective transformation model so as to register the image to be registered.
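Once the minimum-cost model has been selected, registering amounts to warping row-column coordinates of the image to be registered with it. A sketch with a hypothetical optimal model (a pure shift, chosen only for illustration):

```python
import numpy as np

def apply_projective(H, pts):
    """Warp points with a projective model: homogeneous multiplication
    followed by perspective division."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

# Hypothetical optimal model: a pure shift of (+10, -5), for illustration only.
H_opt = np.array([[1.0, 0.0, 10.0],
                  [0.0, 1.0, -5.0],
                  [0.0, 0.0, 1.0]])
pts = np.array([[0.0, 0.0], [3.0, 4.0]])
out = apply_projective(H_opt, pts)
print(out)  # maps (0,0)->(10,-5) and (3,4)->(13,-1)
```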
The embodiment of the present application further provides an unmanned aerial vehicle image automatic registration system, including:
the noise reduction unit is configured to perform noise reduction on a plurality of block images of the image to be registered and a plurality of block images of the reference image respectively to obtain a plurality of first noise reduction images and a plurality of second noise reduction images correspondingly; wherein the first noise-reduced image and the second noise-reduced image correspond one to one;
the point screening unit is configured to screen the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and the model registration unit is configured to determine an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
Beneficial effects:
in the method, a plurality of first noise reduction images and a plurality of second noise reduction images are correspondingly obtained by respectively reducing the noise of a plurality of block images of an image to be registered and a plurality of block images of a reference image; the first noise reduction image and the second noise reduction image correspond to each other one by one; then, according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image, screening the feature points in each first noise-reduced image; and finally, based on a random sampling consistency algorithm, determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image so as to register the image to be registered. Therefore, the accuracy of feature point extraction and feature matching is improved, and meanwhile, the precision and the calculation efficiency of the unmanned aerial vehicle image registration method are improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, illustrate embodiments of the application and, together with the description, serve to explain the application and are not intended to limit the application. Wherein:
fig. 1 is a schematic flow diagram of a method for automatic registration of images of a drone provided in accordance with some embodiments of the present application;
fig. 2 is a technical flow diagram of a method for automatic registration of images of a drone provided in accordance with some embodiments of the present application;
fig. 3 is a schematic structural diagram of an unmanned aerial vehicle image automatic registration system according to some embodiments of the present application.
Detailed Description
The present application will be described in detail below with reference to embodiments and the attached drawings. The various examples are provided by way of explanation of the application and are not limiting of the application. In fact, it will be apparent to those skilled in the art that modifications and variations can be made in the present application without departing from the scope or spirit of the application. For instance, features illustrated or described as part of one embodiment can be used with another embodiment to yield a still further embodiment. It is therefore intended that the present application cover such modifications and variations as come within the scope of the appended claims and their equivalents.
Exemplary method
Fig. 1 is a schematic flowchart of an automatic unmanned aerial vehicle image registration method according to some embodiments of the present application, and fig. 2 is a technical flowchart of an automatic unmanned aerial vehicle image registration method according to some embodiments of the present application, and as shown in fig. 1 and fig. 2, the automatic unmanned aerial vehicle image registration method includes:
s101, respectively denoising a plurality of block images of an image to be registered and a plurality of block images of a reference image to obtain a plurality of first denoising images and a plurality of second denoising images; and the first noise reduction image and the second noise reduction image correspond to each other one by one.
In the embodiment of the application, the image to be registered is an image obtained by unmanned aerial vehicle aerial photography. Because an unmanned aerial vehicle is small and light, it is strongly affected by air currents and has relatively poor stability and wind resistance; even when equipped with an autopilot and a stabilizing gyro device, tilting of the flight attitude and jitter are still hard to avoid. These directly affect the acquired images, causing deformation of varying degrees, so the images cannot be superimposed on other image data of the same region, which limits the range of application of unmanned aerial vehicle aerial images. To allow images obtained by unmanned aerial vehicle aerial photography to be superimposed on other existing data of the same area, image registration is needed.
Image registration refers to the process of matching and aligning two or more images from the same region acquired by the same sensor under different aerial conditions or different sensors. The image registration of the image acquired by the unmanned aerial vehicle is the basis of the image application of the unmanned aerial vehicle, and the precision of the image registration directly influences the application effect of the image of the unmanned aerial vehicle.
In the embodiment of the application, the unmanned aerial vehicle images are taken over a real earth surface with an agricultural landscape, such as a rice planting area, a wheat planting area, or a corn planting area. Unmanned aerial vehicle images of agricultural landscapes have homogeneous landform features and considerable noise, both of which interfere with the extraction of feature points during registration and result in low registration accuracy.
In particular, image registration is the process of aligning an image to be registered to a reference image. The reference image, also referred to as a base map, may be an image acquired by the same sensor under different aerial photography conditions, or an image acquired by a different sensor. For example, the reference image may be captured by the same unmanned aerial vehicle carrying the same sensor (for example, the same camera model) under the same illumination conditions; by a different unmanned aerial vehicle, or by the same unmanned aerial vehicle carrying a different sensor, under different aerial photography conditions; or the reference image may be a high-resolution satellite image.
In a traditional image matching method, the unmanned aerial vehicle image is generally taken as a whole: feature extraction is performed on the entire image and feature matching follows, completing the registration of the image to be registered with the reference image. For unmanned aerial vehicle images with homogeneous landform features and heavy noise, extracting features over the whole image easily yields redundant feature points, which cause matching errors, low registration accuracy, and long processing times.
Different from the conventional image matching method, which takes the unmanned aerial vehicle image as a whole, in the embodiment of the application a plurality of block images of the image to be registered and a plurality of block images of the reference image are denoised respectively, so that a plurality of first noise-reduced images and a plurality of second noise-reduced images are obtained correspondingly, the first noise-reduced images and the second noise-reduced images corresponding one to one. By blocking the image to be registered and the reference image separately and establishing a one-to-one correspondence between their block images, the accuracy of feature point extraction is improved along with the calculation efficiency.
In an optional embodiment, the performing noise reduction on a plurality of block images of an image to be registered and a plurality of block images of a reference image respectively to obtain a plurality of first noise-reduced images and a plurality of second noise-reduced images correspondingly includes: respectively blocking the image to be registered and the reference image to correspondingly obtain a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image; and respectively conducting guide filtering on the plurality of block images of the image to be registered and the plurality of block images of the reference image through a guide filter, and correspondingly obtaining a plurality of first noise reduction images and a plurality of second noise reduction images.
In a specific example, the image to be registered and the reference image are respectively blocked, and a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image are obtained correspondingly, which is detailed as follows:
(1) Partition the image to be registered according to a preset size into M*N sub-regions, obtaining a plurality of block images of the image to be registered. Take the upper-left boundary point of each sub-region to form the upper-left boundary point set of the image to be registered, denoted C1, C2, C3, ..., and compute the row-column coordinates of the i-th upper-left boundary point in the set, where i is a positive integer. Then, based on the transformation relation between row-column coordinates and geographic coordinates in the image to be registered, compute the geographic coordinates corresponding to each upper-left boundary point in the set.
(2) From the geographic coordinates corresponding to each upper-left boundary point, inversely compute the corresponding row-column coordinates in the reference image based on the transformation relation between row-column coordinates and geographic coordinates in the reference image; take the points at these row-column coordinates as the upper-left boundary point set of the reference image, denoted C'1, C'2, C'3, .... Partition the reference image according to this upper-left boundary point set to obtain a plurality of block images of the reference image. The plurality of block images of the image to be registered and the plurality of block images of the reference image are thus in one-to-one correspondence.
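The row-column/geographic coordinate transformations used in steps (1) and (2) can be sketched with an affine geotransform; the GDAL-style 6-tuple layout is an assumption, since the patent does not fix a coordinate format:

```python
import numpy as np

def rowcol_to_geo(gt, row, col):
    """Row/column -> geographic coordinates via an affine geotransform
    (GDAL-style 6-tuple: origin x, x pixel size, row rotation,
    origin y, column rotation, y pixel size)."""
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

def geo_to_rowcol(gt, x, y):
    """Inverse mapping: geographic coordinates -> fractional row/column."""
    a = np.array([[gt[1], gt[2]], [gt[4], gt[5]]], dtype=float)
    col, row = np.linalg.solve(a, [x - gt[0], y - gt[3]])
    return row, col

gt_src = (1000.0, 0.5, 0.0, 2000.0, 0.0, -0.5)   # image to be registered (assumed)
gt_ref = (990.0, 0.25, 0.0, 2010.0, 0.0, -0.25)  # reference image (assumed)
x, y = rowcol_to_geo(gt_src, 100, 200)           # a tile's upper-left corner
row, col = geo_to_rowcol(gt_ref, x, y)
print(row, col)  # 240.0 440.0
```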
Due to factors such as sensor offset, electromagnetic interference, and complex topographic information of the region, redundant or unnecessary interference information usually exists in the original image to be registered and in the reference image; this phenomenon is called image noise. Image noise interferes with registration, and therefore, prior to registration, the image to be registered and the reference image must be denoised.
In some optional embodiments, the multiple block images of the image to be registered and the multiple block images of the reference image are respectively subjected to guide filtering through a guide filter, so as to obtain multiple first noise-reduced images and multiple second noise-reduced images correspondingly.
The principle of the guided filter is as follows. A block image of the image to be registered or a block image of the reference image is used as the input image p. The output image q is defined as the input image p minus a noise part n; by differencing the input image with the noise part, the input image is smoothed and the noise is removed. At the same time, so that the output image q conforms to a guide image I while the noise is removed, the guided filter also defines the output image q as a linear function of the guide image I, where the guide image I and the input image p are homologous images.
The mathematical model of the guided filter is expressed by formula (1) and formula (2), as follows:
q_j = p_j − n_j    (1)
q_j = a·I_j + b    (2)
where q_j is the j-th pixel value of the output image q, p_j is the j-th pixel value of the input image p, n_j is the noise component of p_j, I_j is the j-th pixel value of the guide image I, and a, b are parameters weighting the input.
Deriving from formula (1) and formula (2), the expression of the noise part is obtained as formula (3):
n_j = p_j − a·I_j − b    (3)
To solve for the parameters a, b, a window centered on the k-th pixel of the input image p is taken, a cost equation of the form of formula (4) is defined, and the a, b that minimize the value of the cost equation are taken as the solution of the parameters a, b:
E(a_k, b_k) = Σ_{m∈ω_k} [ (a_k·I_m + b_k − p_m)² + ε·a_k² ]    (4)
in the above formula, the first and second carbon atoms are,a value term representing a cost equation,ω k representing an input imagepTo the middle stagekEach pixel is a pane of the center point,mto representω k The serial number of the middle pixel(s),I m as panesω k Corresponding to the first in the guide imagemThe value of each of the pixels is selected,p m as panesω k Corresponding to the first in the input imagemThe value of each of the pixels is selected,a k 、b k are respectively a parametera、bCorresponding paneω k The result of the solution is obtained and,εis composed ofa k And punishment parameters when the value is overlarge.
As can be seen from equation (4), the value term of the cost equation is the input imagepEach of whichNoise part and parameters of individual pixelsaThe sum of the squares of (1) and (ii) can be understood, the cost equation-based cost term minimization principle is: finding the parameters a, b so as to output the imagepWhile minimizing the noise portion in the image, and guiding the imageIIn outputting the imageqThe influence in (2) is reduced.
Solving the formula (4) based on a linear regression method so as to obtain the parameters when the value term of the cost equation has the minimum solutiona k 、b k Expression:
In the above formulas, p̄_k denotes the mean of the input image p within the window ω_k, |ω| denotes the number of pixels within the window ω_k, and μ_k and σ_k² denote respectively the mean and variance of the guide image I within the window ω_k.
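For illustration, the computation of equations (1) to (5) can be sketched in a few lines. This is a minimal guided-filter sketch assuming grey-scale images stored as 2-D float arrays; the function names, window radius r and penalty eps are illustrative choices, not values fixed by the present application.

```python
import numpy as np

def box_mean(x, r):
    # Mean over (2r+1)x(2r+1) windows via a padded integral image.
    k = 2 * r + 1
    p = np.pad(x, r, mode='edge')
    c = np.pad(p.cumsum(axis=0).cumsum(axis=1), ((1, 0), (1, 0)))
    return (c[k:, k:] - c[:-k, k:] - c[k:, :-k] + c[:-k, :-k]) / (k * k)

def guided_filter(I, p, r=4, eps=1e-2):
    """Output q = a*I + b per window; eps penalises large a as in Eq. (4)."""
    mu_I, mu_p = box_mean(I, r), box_mean(p, r)    # window means
    var_I = box_mean(I * I, r) - mu_I ** 2         # sigma_k^2 of the guide
    cov_Ip = box_mean(I * p, r) - mu_I * mu_p
    a = cov_Ip / (var_I + eps)                     # a_k of Eq. (5)
    b = mu_p - a * mu_I                            # b_k of Eq. (5)
    # Average a and b over all windows covering each pixel, then form q.
    return box_mean(a, r) * I + box_mean(b, r)
```

Taking the image itself as the guide (I = p) gives the self-guided, edge-preserving case commonly used for noise reduction before feature extraction.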
An image obtained by an unmanned aerial vehicle over an agricultural landscape contains considerable noise, which significantly affects the extraction of feature points, and when traditional noise reduction methods are applied to such images they are not beneficial to the subsequent feature extraction aimed at boundary information. In the embodiment of the application, therefore, based on the guided filter, the block images of each image to be registered are respectively subjected to guided filtering to obtain a plurality of first noise-reduced images, and the block images of each reference image are respectively subjected to guided filtering to obtain a plurality of second noise-reduced images.
And S102, screening the feature points in each first noise-reduced image according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image.
In the embodiment of the present application, the screening of feature points in each first noise-reduced image according to a feature vector distance and a geographical location vector distance between each feature point in each first noise-reduced image and all feature points in a corresponding second noise-reduced image includes: establishing a maximum index map of each first noise-reduced image and each second noise-reduced image; obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image; determining a feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image according to the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all feature points in the corresponding second noise-reduced image; according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image, carrying out primary screening on the feature points in each first noise-reduced image; determining the geographical position vector distance between each feature point in each first noise-reduced image and the feature point in the corresponding second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the feature point in the corresponding second noise-reduced image; and carrying out secondary screening on the preliminarily screened feature points in each first noise-reduced image 
according to the geographical position vector distance between each preliminarily screened feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image.
In some alternative embodiments, establishing the maximum index map for each first noise-reduced image and each second noise-reduced image comprises: respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method; and respectively establishing a maximum index map of each first noise-reduced image and a maximum index map of each second noise-reduced image according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
In the embodiment of the application, the feature points of each first noise-reduced image and each second noise-reduced image are respectively extracted based on a phase consistency method. Specifically, the principle of the phase consistency method is that, in the frequency domain of the images (the image to be registered and the reference image), the frequency components at an edge feature (boundary information) of the image are in the same phase, and this notion applies to components of different wavelengths. For example, the Fourier decomposition of a square wave consists of sinusoidal components whose frequencies are odd multiples of the fundamental frequency; each sinusoidal component has a rising phase at the rising edge of the square wave, so the phases have the greatest consistency at the boundaries of the image, which appear in the image as significantly varying edges. According to the principle of the phase consistency method, the boundary information of the image can be accurately extracted.
Here, each first noise-reduced image and each second noise-reduced image is regarded as a one-dimensional signal F(t), whose Fourier expansion is:

$$ F(t) = \sum_n A_n \cos(n\omega t + \varphi_n) $$
In the formula, A_n is the amplitude of the n-th sinusoidal component, ω is the angular frequency, φ_n is the initial phase of the n-th sinusoidal component, and t is the argument of the signal.
Phase consistency is a measure for judging the phase similarity of the frequency-domain components (sinusoidal components) of an image, and is expressed by equation (8), as follows:

$$ PC(t) = \max_{\bar{\varphi}(t)} \frac{\sum_n A_n \cos\left(\varphi_n(t) - \bar{\varphi}(t)\right)}{\sum_n A_n} \tag{8} $$
In the formula, φ_n(t) is the phase of the n-th sinusoidal component derived from the Fourier expansion of the one-dimensional signal F(t), φ̄(t) represents the amplitude-weighted mean phase, and A_n is the amplitude of the n-th sinusoidal component.
Here, the Fourier expansion can be recast as the local energy formula (9) in complex form, as follows:

$$ E(t) = \sqrt{C(t)^2 + S(t)^2}, \qquad C(t) = \sum_n A_n \cos\varphi_n(t), \qquad S(t) = \sum_n A_n \sin\varphi_n(t) \tag{9} $$
In the formula, C(t) represents the real component of the Fourier expansion and S(t) represents the imaginary component of the Fourier expansion.
In conjunction with equations (8) and (9), the expression for phase consistency can be written as:

$$ PC(t) = \frac{E(t)}{\sum_n A_n} $$
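For a one-dimensional signal, the computation above can be sketched in a few lines. This is an illustrative implementation rather than the multi-scale filter bank used in practice: each Fourier component is converted into a local amplitude/phase pair through its analytic form, and a small eps (an added assumption, for numerical stability) guards the division.

```python
import numpy as np

def phase_congruency_1d(signal, eps=1e-8):
    """PC(t) = E(t) / sum_n A_n: ratio of local energy to total amplitude."""
    N = len(signal)
    S = np.fft.fft(signal)
    t = np.arange(N)
    C = np.zeros(N)      # sum of A_n * cos(phi_n(t)) -- real component
    Sm = np.zeros(N)     # sum of A_n * sin(phi_n(t)) -- imaginary component
    A_sum = np.zeros(N)  # sum of amplitudes A_n
    for n in range(1, N // 2):  # positive frequencies only
        # Analytic form of the n-th sinusoidal component.
        z = 2.0 * S[n] * np.exp(2j * np.pi * n * t / N) / N
        C += z.real
        Sm += z.imag
        A_sum += np.abs(z)
    E = np.sqrt(C ** 2 + Sm ** 2)   # local energy, Eq. (9)
    return E / (A_sum + eps)        # phase consistency, largest at edges
```

On a step signal the returned values approach 1 at the step and stay low on the flat plateaus, which is exactly the edge-localisation property exploited for feature point extraction.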
Based on the phase consistency method, the edge features and feature points of each first noise-reduced image and each second noise-reduced image are respectively extracted; the method is not influenced by local changes of light and shade in the images, and information such as corners, lines and textures in the first noise-reduced image/second noise-reduced image can be preserved.
In some optional embodiments, the maximum index maps of each first noise-reduced image and each second noise-reduced image are respectively established according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
Specifically, each first noise-reduced image and each second noise-reduced image are input into a Log-Gabor filter, and a convolution sequence corresponding to each first noise-reduced image and each second noise-reduced image is established according to the preset number of convolution channels and the preset number of directions. And then, arranging convolution sequences obtained by the Log-Gabor filter, and constructing multi-channel convolution mapping of each first noise-reduced image and each second noise-reduced image so as to obtain multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image under the multi-channel convolution sequences. Finally, a maximum index map (MaxIndexMap, MIM for short) of each first noise-reduced image and each second noise-reduced image is constructed according to the multi-directional amplitude information of the images.
For example, each first noise-reduced image/second noise-reduced image is convolved with a Log-Gabor filter to obtain a multi-channel convolution sequence; the amplitude at scale (channel) S and direction O is then calculated; finally, the amplitudes at all scales are summed, as represented by equation (11), which is as follows:

$$ A_O(x, y) = \sum_{S=1}^{N_S} A_{S,O}(x, y) \tag{11} $$
In the formula, A_O(x, y) is the sum of the amplitudes over all scales in direction O, there are N_S scales in total, S is a positive integer with 1 ≤ S ≤ N_S, and A_{S,O}(x, y) is the amplitude at scale S and direction O.
After the amplitude sums over all scales are obtained, the maximum of these sums over the directions is used as the value of the maximum index map, and Log-Gabor filtering of each first noise-reduced image/each second noise-reduced image thus yields the maximum index map corresponding to that image.
In a specific scene, the preset number of convolution channels may be 6 and the number of directions may be 6, so that a 6-channel convolution sequence is obtained for each first noise-reduced image and each second noise-reduced image; the amplitude at each scale S and direction O is then calculated, the amplitudes over the 6 scales are summed, and the maximum after amplitude summation is taken to obtain the maximum index map.
Therefore, Log-Gabor filtering is performed on each first noise-reduced image/each second noise-reduced image to obtain amplitude information at multiple scales, and the maximum of the amplitude sums over all scales is used as the value of the maximum index map. This resists well the nonlinear radiation differences between the image to be registered and the reference image, avoids the influence of the different brightness conditions under which the image to be registered and the reference image were captured, and effectively improves the image registration effect.
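Given the per-scale, per-direction Log-Gabor amplitudes, the maximum index map construction can be sketched as below. The array shape (NS, NO, H, W) is an assumption about how the filter responses are stored; following the usual maximum-index construction, this sketch records the (1-based) direction index at which the scale-summed amplitude of equation (11) is maximal for each pixel.

```python
import numpy as np

def max_index_map(amplitudes):
    """Build a maximum index map (MIM) from Log-Gabor amplitudes.

    amplitudes: array of shape (NS, NO, H, W) -- responses at NS scales
    and NO orientations. Amplitudes are summed over scales (Eq. (11))
    and the direction index of the maximum sum is kept per pixel.
    """
    A_o = amplitudes.sum(axis=0)    # (NO, H, W): per-direction sums
    return A_o.argmax(axis=0) + 1   # 1-based direction index per pixel
```

With 6 scales and 6 directions, as in the specific scene above, each pixel of the map takes a value from 1 to 6.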
In some optional embodiments, a feature descriptor of each feature point in each first noise-reduced image is obtained according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; and obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image.
Specifically, obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image includes: extracting the feature points of each first noise reduction image based on a phase consistency method; constructing a region image of the feature point of each first noise-reduced image in the maximum index map of each first noise-reduced image; partitioning the regional image of the feature point of each first noise-reduced image to obtain a plurality of regional partitioned images; and determining a feature descriptor of each feature point in each first noise-reduced image according to the distribution histogram of each regional block image. And obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image by using the same steps.
Further, extracting feature points of each first noise-reduced image based on a phase consistency method includes: based on a phase consistency method, extracting the edge feature of each first noise reduction image by calculating the phase consistency of each first noise reduction image, then generating the feature point of each first noise reduction image based on a FAST feature point detection algorithm, and finally obtaining the feature descriptor of each feature point in each first noise reduction image according to the feature point of each first noise reduction image. And extracting the characteristic point of each second noise reduction image by adopting the same steps and based on a phase consistency method.
Therefore, the feature points are detected and extracted based on the phase consistency, correct feature points can be extracted under the condition that the brightness span of the original image is large, meanwhile, the local illumination invariance is achieved, namely the feature point extraction result of the image is not affected by the illumination condition during aerial photography, and the accuracy of feature point extraction is further improved.
In an application scenario, obtaining the feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature points of each first noise-reduced image may include the following steps. Each first noise-reduced image corresponds to a plurality of feature points; a region image is constructed on the maximum index map with the position of each feature point as its centre, and the size of the constructed region image can be preset, for example a square region of preset side length. Then the region image of each feature point of each first noise-reduced image is partitioned, for example further subdivided into a regular grid, obtaining a plurality of region block images. Finally, the feature descriptor of each feature point in each first noise-reduced image is determined according to the distribution histogram of each region block image: a distribution histogram is established for each region block image, the statistical result of each histogram is expressed in vector form, and the vectors corresponding to the plurality of region block images are concatenated to obtain the feature descriptor of each feature point.
In a specific example, a region image is constructed on the maximum index map with the position of each feature point as its centre; the region image is a square of preset size in pixels, which is then divided into 6 equal parts along each of two adjacent sides, obtaining 36 region block images. A distribution histogram is established for each region block image, the histogram statistics are expressed in vector form, and the concatenation finally gives each feature point a feature vector of 216 dimensions, which is the feature descriptor of that feature point.
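The descriptor construction just described can be sketched as follows, assuming the maximum index map takes 6 discrete direction values (1 to 6), so that 6 x 6 blocks with 6-bin histograms give the 216-dimensional vector; the L2 normalisation at the end is an added convention, not stated in the text.

```python
import numpy as np

def mim_descriptor(patch, n_blocks=6, n_orient=6):
    """Concatenated block histograms of a square MIM patch (values 1..n_orient)."""
    step = patch.shape[0] // n_blocks
    desc = []
    for bi in range(n_blocks):
        for bj in range(n_blocks):
            block = patch[bi * step:(bi + 1) * step, bj * step:(bj + 1) * step]
            # One n_orient-bin histogram of direction indices per block.
            hist, _ = np.histogram(block, bins=n_orient,
                                   range=(0.5, n_orient + 0.5))
            desc.append(hist)
    v = np.concatenate(desc).astype(float)  # 6 * 6 * 6 = 216 dimensions
    n = np.linalg.norm(v)
    return v / n if n > 0 else v
```

The patch passed in is the square region image cut from the maximum index map around a feature point.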
It can be understood that, with the same processing, the feature descriptor of each feature point in each second noise-reduced image is obtained according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image.
In an alternative embodiment, the filtering the feature points in each first noise-reduced image includes: determining a feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image according to the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all feature points in the corresponding second noise-reduced image; performing primary screening on the feature points in each first noise-reduced image according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image; determining the geographical position vector distance between each feature point in each first noise-reduced image and the feature point in the corresponding second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the feature point in the corresponding second noise-reduced image; and carrying out secondary screening on the feature points after the primary screening in each first noise-reduction image according to the geographical position vector distance between each feature point after the primary screening in each first noise-reduction image and the feature point in the corresponding second noise-reduction image.
In practical application, each first noise-reduced image has a corresponding second noise-reduced image, and a plurality of feature points are extracted from each noise-reduced image (first noise-reduced image/second noise-reduced image). The feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image is determined from the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all feature points in the corresponding second noise-reduced image. According to these feature vector distances, the point with the minimum feature vector distance is taken as the initial match of the feature point, forming a feature point pair. Then, the two smallest feature vector distances, namely the nearest-neighbour distance and the second-nearest-neighbour distance, are taken and their ratio is calculated and compared with a preset first threshold; when the ratio of the nearest-neighbour distance to the second-nearest-neighbour distance is larger than the preset first threshold, the matching error of the feature point is considered too large and the feature point is removed, thereby completing the preliminary screening of the feature points in each first noise-reduced image. Here, the preset first threshold is preferably 0.5 to 0.6.
The geographic position vector distance between each feature point in each first noise-reduced image and the matched feature point in the corresponding second noise-reduced image is then determined from the geographic coordinates of the two feature points and compared with a preset second threshold; when the geographic position vector distance is greater than the preset second threshold, the feature point is removed, thereby achieving the secondary screening of the preliminarily screened feature points in each first noise-reduced image. In the embodiment of the present application, the preset value of the second threshold is related to the characteristics of the image to be registered and may, for example, be taken as 30 metres: if the geographic position vector distance between a feature point in the first noise-reduced image and the matched feature point in the corresponding second noise-reduced image is greater than 30 metres, the match is considered wrong and the feature point is removed.
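The two-stage screening can be sketched as follows. The function name and array layout are illustrative; the ratio threshold 0.6 comes from the preferred range above, and geographic coordinates are assumed to be planar coordinates in metres.

```python
import numpy as np

def screen_matches(desc1, desc2, geo1, geo2, ratio_thresh=0.6, geo_thresh=30.0):
    """Two-stage screening: nearest/second-nearest ratio test, then
    geographic position vector distance.

    desc1/desc2: (N, D)/(M, D) descriptor arrays of the first/second
    noise-reduced image; geo1/geo2: (N, 2)/(M, 2) geographic coordinates.
    Returns index pairs (i in image 1, j in image 2) that survive both stages.
    """
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        nearest, second = order[0], order[1]
        # Stage 1: reject ambiguous matches by the distance ratio.
        if dists[nearest] > ratio_thresh * dists[second]:
            continue
        # Stage 2: reject pairs whose geographic positions disagree.
        if np.linalg.norm(geo1[i] - geo2[nearest]) > geo_thresh:
            continue
        matches.append((i, int(nearest)))
    return matches
```

Swapping the order of the two stages, as the text notes, only changes which test prunes a candidate first.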
In a specific implementation process, the feature points are first preliminarily screened according to the feature vector distance between each feature point in each first noise-reduced image and the feature points in the corresponding second noise-reduced image, and the preliminarily screened feature points are then secondarily screened according to the geographic position vector distance between the geographic coordinates of each feature point in each first noise-reduced image and those of the feature points in the corresponding second noise-reduced image. Alternatively, the feature points may be preliminarily screened by geographic position vector distance and then secondarily screened by feature vector distance, and the present application is not limited in this respect.
The feature points are primarily screened according to the feature vector distance, and then the feature points after primary screening are secondarily screened according to the geographical position vector distance, so that the accuracy of feature point matching between the image to be registered and the reference image is improved.
Step S103, based on a random sample consensus (RANSAC) algorithm, determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image, so as to register the image to be registered.
In some optional embodiments, based on a random sampling consistency algorithm, determining an optimal projection transformation model of the image to be registered according to the feature points screened in each first noise-reduced image, so as to register the image to be registered, including: based on a random sampling consistency algorithm, determining an optimal feature point set of the image to be registered according to the feature points screened in each first noise reduction image; and determining an optimal projection transformation model of the image to be registered according to the optimal characteristic point set of the image to be registered based on a random sampling consistency algorithm so as to register the image to be registered.
In the embodiment of the present application, based on a random sampling consistency algorithm, determining an optimal feature point set of an image to be registered according to the feature points screened in each first noise-reduced image, including: a random sampling method is adopted, K groups of first feature point sets are randomly extracted from the feature points screened from each first noise reduction image, and K first parameter matrixes corresponding to the K groups of first feature point sets in the first projection transformation model of each first noise reduction image are determined based on the row-column coordinates of the feature points in each group of first feature point sets; k is a positive integer, and each group of first feature point sets comprises at least four feature points; calculating a first cost function of each first feature point set in each first noise-reduced image based on K first projective transformation models corresponding to K first parameter matrixes; and combining at least four feature points in the first feature point set corresponding to the minimum first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
The principle of the random sampling consistency algorithm is to find the optimal coordinate transformation relation such that the number of feature points satisfying the coordinate transformation relation is maximal. In the embodiment of the application, the registration process based on the random sampling consistency algorithm is to find the coordinate transformation relation between the image to be registered and the reference image such that the number of feature points satisfying it is maximal. Generally, the coordinate transformation relation is represented by the following transformation model:

$$ s\begin{bmatrix} x' \\ y' \\ 1 \end{bmatrix} = \begin{bmatrix} h_{11} & h_{12} & h_{13} \\ h_{21} & h_{22} & h_{23} \\ h_{31} & h_{32} & h_{33} \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \tag{12} $$
In the formula, s represents a scale factor, (x', y') represents the coordinates of a pixel point in the first noise-reduced image, (x, y) represents the coordinates of the corresponding pixel point in the second noise-reduced image, and the 3 × 3 matrix is the transformation parameter matrix of the coordinate transformation relation; h_33 = 1 is usually taken to normalise the transformation parameter matrix.
It can be seen from equation (12) that the transformation parameter matrix of the coordinate transformation relation has 8 unknowns, so at least 8 linear equations are required for its solution; each feature point pair yields two linear equations, and therefore at least 4 feature point pairs are required to solve the transformation parameter matrix.
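This counting argument can be made concrete: with h33 fixed to 1, the two equations contributed by each of 4 non-collinear point pairs form an 8 × 8 linear system. The sketch below is illustrative; the function names are not from the present application.

```python
import numpy as np

def homography_from_pairs(src, dst):
    """Solve the 8 unknowns of Eq. (12), with h33 = 1, from 4 point pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        # s*u = h11*x + h12*y + h13 and s*v = h21*x + h22*y + h23,
        # with the scale s = h31*x + h32*y + 1 substituted in.
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.asarray(A, float), np.asarray(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def apply_homography(H, pts):
    """Map (x, y) points through Eq. (12), dividing out the scale s."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

If any 3 of the 4 points are collinear the system becomes singular, which is why the embodiment requires the selected feature point pairs not to be collinear.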
In the embodiment of the present application, based on a random sampling consistency algorithm, local quality constraint is performed on each first noise-reduced image and the second noise-reduced image corresponding to the first noise-reduced image, specifically: and adopting a random sampling method to randomly extract K groups of first characteristic point sets from the characteristic point pairs of each first noise-reduced image and the corresponding second noise-reduced image, wherein each group of first characteristic point sets comprises at least 4 characteristic point pairs, and each selected characteristic point pair is not collinear. And determining K transformation parameter matrixes (first parameter matrixes) corresponding to K groups of first feature point sets in the first projection transformation model of each first noise-reduced image based on row-column coordinates of feature points in each group of first feature point sets, and determining K coordinate transformation relations (first projection transformation models) between the first noise-reduced image and a second noise-reduced image corresponding to the first noise-reduced image according to the transformation parameter matrixes. And respectively substituting all the feature point pairs into the K first projection transformation models, calculating the number and projection errors (first cost functions) of the feature point pairs meeting each first projection transformation model according to a preset third threshold, and taking the first projection transformation model corresponding to the minimum first cost function as a local optimal image transformation model, wherein at least 4 feature points corresponding to the local optimal image transformation model are local optimal feature points of the current first noise reduction image. 
And combining at least 4 feature points (namely local optimal feature points) in the first feature point set corresponding to the minimum first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
In an optional embodiment, based on a random sampling consistency algorithm, determining an optimal projection transformation model of an image to be registered according to an optimal feature point set of the image to be registered, so as to register the image to be registered, including: based on a random sampling consistency algorithm, randomly extracting S groups of second feature point sets from the optimal feature point set, and determining S second parameter matrixes corresponding to the S groups of second feature point sets in a second projection transformation model of the image to be registered based on row-column coordinates of feature points in each group of second feature point sets; wherein S is a positive integer; each group of the second feature point sets comprises at least four feature points; respectively calculating a second cost function of each second feature point set in the image to be registered based on S second projective transformation models corresponding to the S second parameter matrixes; and determining the second projective transformation model corresponding to the minimum second cost function as the optimal projective transformation model so as to register the image to be registered.
In the embodiment of the present application, the optimal feature point set of the image to be registered is composed of the local optimal feature points of each first noise-reduced image, and the optimal projection transformation model of the image to be registered is determined from this set based on the random sampling consistency algorithm. Specifically: the optimal feature point set of the image to be registered is randomly sampled to obtain S groups of second feature point sets, each containing at least 4 feature points, and for each group of second feature point sets the transformation parameter matrix (second parameter matrix) corresponding to the coordinate transformation relation (second projective transformation model) between the image to be registered and the reference image is determined. All feature points in the optimal feature point set are substituted into each second projective transformation model and the projection error (second cost function) of each feature point is calculated; the second projective transformation model corresponding to the minimum second cost function is taken as the optimal projective transformation model, the feature points satisfying the optimal projective transformation model constitute the global optimal feature point set, the global transformation matrix between the image to be registered and the reference image is calculated according to this set, and the registration is completed.
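The sample-score-select loop common to the local and global stages can be sketched as follows. This is an illustrative RANSAC sketch under assumed names and defaults (K trials, a 3-pixel error threshold): it restates the 4-pair solver of equation (12) in least-squares form, scores each candidate model by the number of feature points within the threshold, and breaks ties by the smaller summed projection error, mirroring the cost-function selection described above.

```python
import numpy as np

def fit_homography(src, dst):
    # Least-squares solution of Eq. (12) with h33 = 1 (8 unknowns).
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float),
                            rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def project(H, pts):
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, K=200, thresh=3.0, seed=0):
    """Keep the model with the most points within thresh projection error;
    ties are broken by the smaller summed error."""
    rng = np.random.default_rng(seed)
    best, best_n, best_e = None, -1, np.inf
    for _ in range(K):
        idx = rng.choice(len(src), size=4, replace=False)  # minimal sample
        H = fit_homography(src[idx], dst[idx])
        with np.errstate(all='ignore'):  # degenerate samples score poorly
            err = np.linalg.norm(project(H, src) - dst, axis=1)
            inliers = err < thresh
        n, e = int(inliers.sum()), float(err[inliers].sum())
        if n > best_n or (n == best_n and e < best_e):
            best, best_n, best_e = H, n, e
    return best
```

Run once per block for the local quality constraint and once on the merged optimal feature point set for the global constraint.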
Therefore, local quality constraint is carried out on the feature points of each first noise-reduced image to obtain an optimal feature point set of each first noise-reduced image, then feature point sets which are in accordance with the local quality constraint in all the block images are combined, and finally the overall quality constraint is carried out on the combined optimal feature point set, so that the accuracy of feature point matching is further improved.
In summary, in the present application, a plurality of block images of an image to be registered and a plurality of block images of a reference image are denoised respectively to obtain a plurality of first denoised images and a plurality of second denoised images correspondingly; the first noise reduction image and the second noise reduction image correspond to each other one by one; screening the feature points in each first noise-reduced image according to the feature vector distance and the geographical position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image; and determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered. Therefore, the accuracy of feature point extraction and feature matching is improved, and the problems of insufficient precision and low calculation efficiency of the traditional unmanned aerial vehicle image registration method are solved.
Exemplary System
Fig. 3 is a schematic structural diagram of an unmanned aerial vehicle image automatic registration system according to some embodiments of the present application, as shown in fig. 3, the unmanned aerial vehicle image automatic registration system includes:
the noise reduction unit 201 is configured to perform noise reduction on the multiple block images of the image to be registered and the multiple block images of the reference image respectively to obtain multiple first noise reduction images and multiple second noise reduction images correspondingly; wherein the first noise-reduced image and the second noise-reduced image correspond to each other one by one;
a point screening unit 202, configured to screen feature points in each first noise-reduced image according to a feature vector distance and a geographic position vector distance between each feature point in each first noise-reduced image and all feature points in the second noise-reduced image corresponding to the feature point;
the model registration unit 203 is configured to determine an optimal projection transformation model of the image to be registered according to the feature points screened in each of the first noise-reduced images based on a random sampling consistency algorithm, so as to register the image to be registered.
The above description is only a preferred embodiment of the present application and is not intended to limit it; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present application shall fall within its scope of protection.
Claims (8)
1. An unmanned aerial vehicle image automatic registration method is characterized by comprising the following steps:
respectively blocking an image to be registered and a reference image, and correspondingly obtaining a plurality of blocked images of the image to be registered and a plurality of blocked images of the reference image;
wherein, the image to be registered and the reference image are respectively partitioned into blocks, specifically:
segmenting the image to be registered into M*N sub-areas to obtain a plurality of block images of the image to be registered; taking the upper-left boundary point of each block image to obtain the upper-left boundary point set of the image to be registered, denoted C1, C2, C3, ...; the row-column coordinates of the i-th upper-left boundary point in the set are expressed as (x_i, y_i), where i is a positive integer; then calculating the geographic coordinates corresponding to each upper-left boundary point in the set based on the transformation relation between row-column coordinates and geographic coordinates in the image to be registered;
according to the geographic coordinates corresponding to each upper-left boundary point, inversely calculating the corresponding row-column coordinates in the reference image based on the transformation relation between row-column coordinates and geographic coordinates in the reference image; taking the points at those row-column coordinates as the upper-left boundary point set of the reference image, denoted C'1, C'2, C'3, ...; partitioning the reference image according to its upper-left boundary point set to obtain a plurality of block images of the reference image; the plurality of block images of the image to be registered and the plurality of block images of the reference image are in one-to-one correspondence;
respectively performing guided filtering on the plurality of block images of the image to be registered and the plurality of block images of the reference image through a guided filter, to correspondingly obtain a plurality of first noise-reduced images and a plurality of second noise-reduced images; wherein the first noise-reduced images and the second noise-reduced images correspond one to one;
screening the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and determining an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
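The partitioning and coordinate-mapping step of claim 1 can be illustrated with a GDAL-style affine geotransform. This is a hedged sketch only: the geotransform tuples, image size, and M, N values below are invented, and the patent does not specify the form of its row-column/geographic transformation relation:

```python
# Sketch of claim 1's tiling step: split an image into M*N sub-areas, convert
# each upper-left corner to geographic coordinates via an affine geotransform,
# then map those coordinates back into the reference image's row-column grid.
# Geotransform values are illustrative assumptions.

def rowcol_to_geo(gt, row, col):
    # gt = (x_origin, x_pixel_size, row_rot, y_origin, col_rot, -y_pixel_size)
    x = gt[0] + col * gt[1] + row * gt[2]
    y = gt[3] + col * gt[4] + row * gt[5]
    return x, y

def geo_to_rowcol(gt, x, y):
    # Inverse for an axis-aligned geotransform (gt[2] == gt[4] == 0).
    col = (x - gt[0]) / gt[1]
    row = (y - gt[3]) / gt[5]
    return int(round(row)), int(round(col))

def upper_left_points(height, width, M, N):
    # Upper-left corner (row, col) of each of the M*N sub-areas.
    rows = [i * (height // M) for i in range(M)]
    cols = [j * (width // N) for j in range(N)]
    return [(r, c) for r in rows for c in cols]

gt_src = (500000.0, 0.5, 0.0, 4000000.0, 0.0, -0.5)   # image to be registered
gt_ref = (499990.0, 0.5, 0.0, 4000010.0, 0.0, -0.5)   # reference image

pts = upper_left_points(400, 600, M=2, N=3)           # set C1, C2, C3, ...
ref_pts = [geo_to_rowcol(gt_ref, *rowcol_to_geo(gt_src, r, c))
           for r, c in pts]                           # set C'1, C'2, C'3, ...
print(len(pts), ref_pts[0])
```

Because both images share a georeferenced frame, each block of the image to be registered lands on a corresponding block of the reference image, which is what makes the one-to-one block correspondence possible.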
2. The unmanned aerial vehicle image automatic registration method according to claim 1, wherein the screening the feature points in each first noise-reduced image according to a feature vector distance and a geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image comprises:
establishing a maximum index map of each of the first noise-reduced images and each of the second noise-reduced images;
obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image; obtaining a feature descriptor of each feature point in each second noise-reduced image according to the maximum index map of each second noise-reduced image and the feature point of each second noise-reduced image;
determining a feature vector distance between each feature point in each first noise-reduced image and all feature points in the corresponding second noise-reduced image according to the feature descriptor of each feature point in each first noise-reduced image and the feature descriptors of all the feature points in the corresponding second noise-reduced image;
performing primary screening on the feature points in each first noise-reduced image according to the feature vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
determining the geographical position vector distance between each feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image according to the geographical coordinates of each feature point in each first noise-reduced image and the geographical coordinates of the corresponding feature point in the second noise-reduced image;
and carrying out secondary screening on the preliminarily screened feature points in each first noise-reduced image according to the geographical position vector distance of each preliminarily screened feature point in each first noise-reduced image and the corresponding feature point in the second noise-reduced image.
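The two-stage screening of claim 2 can be sketched as follows. The descriptors, geographic coordinates, and both thresholds are synthetic illustrations, not values from the patent:

```python
import numpy as np

# Sketch of claim 2's screening: stage 1 keeps nearest-neighbour matches by
# feature-vector (descriptor) distance; stage 2 rejects pairs whose geographic
# positions disagree. Data and thresholds are illustrative assumptions.

rng = np.random.default_rng(0)
desc_a = rng.random((5, 8))          # descriptors in a first noise-reduced image
desc_b = desc_a + 0.01               # near-identical counterparts in the second
geo_a = np.array([[0.0, 0.0], [1, 1], [2, 2], [3, 3], [4, 4]])
geo_b = geo_a.copy()
geo_b[4] += 50.0                     # one match is geographically implausible

# Stage 1: preliminary screening by feature-vector distance.
d = np.linalg.norm(desc_a[:, None, :] - desc_b[None, :, :], axis=2)
nn = d.argmin(axis=1)
stage1 = [i for i in range(len(desc_a)) if d[i, nn[i]] < 0.5]

# Stage 2: secondary screening by geographic-position vector distance.
geo_dist = np.linalg.norm(geo_a - geo_b[nn], axis=1)
stage2 = [i for i in stage1 if geo_dist[i] < 10.0]
print(stage2)                        # the inconsistent pair is removed
```

The geographic gate is what the block-wise georeferencing buys: a descriptor match whose two points map to distant ground positions can be discarded cheaply.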
3. The unmanned aerial vehicle image automatic registration method of claim 2, wherein the establishing a maximum index map for each of the first and second noise-reduced images comprises:
respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method;
and respectively establishing a maximum index map of each first noise-reduced image and each second noise-reduced image according to the multi-directional amplitude information of each first noise-reduced image and each second noise-reduced image through a Log-Gabor filter.
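The idea of a maximum index map can be sketched without a full Log-Gabor bank. In the sketch below, crude directional differences stand in for the multi-directional Log-Gabor amplitude responses of claim 3; the orientation count and test image are invented:

```python
import numpy as np

# Minimal sketch of a "maximum index map": filter the image in several
# orientations and record, per pixel, the index of the orientation with the
# largest response. RIFT-style methods use multi-scale Log-Gabor amplitudes;
# simple directional differences stand in for them here.

def maximum_index_map(img, n_orient=6):
    h, w = img.shape
    responses = np.zeros((n_orient, h, w))
    for k, theta in enumerate(np.pi * np.arange(n_orient) / n_orient):
        dy, dx = np.sin(theta), np.cos(theta)
        # directional difference as a crude oriented-filter response
        shifted = np.roll(np.roll(img, int(round(dy)), axis=0),
                          int(round(dx)), axis=1)
        responses[k] = np.abs(img - shifted)
    return responses.argmax(axis=0)   # per-pixel winning orientation index

img = np.zeros((8, 8))
img[:, 4] = 1.0                       # vertical edge
mim = maximum_index_map(img)
print(mim.shape)
```

Storing only the winning orientation index, rather than the raw amplitudes, is what makes the subsequent histogram descriptors compact and robust to intensity differences between the two images.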
4. The unmanned aerial vehicle image automatic registration method of claim 3, wherein the obtaining a feature descriptor of each feature point in each first noise-reduced image according to the maximum index map of each first noise-reduced image and the feature point of each first noise-reduced image comprises:
respectively extracting the characteristic points of each first noise reduction image and each second noise reduction image based on a phase consistency method;
constructing a region image of the feature point of each first noise-reduced image in the maximum index map of each first noise-reduced image;
partitioning the regional image of the feature point of each first noise-reduced image to obtain a plurality of regional partitioned images;
and determining a feature descriptor of each feature point in each first noise-reduced image according to the distribution histogram of each region block image.
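The descriptor construction of claim 4 can be sketched as block-wise histograms over a region of the maximum index map. The region size, grid size, orientation count, and feature point location below are illustrative assumptions:

```python
import numpy as np

# Sketch of claim 4: take a square region of the maximum index map around a
# feature point, partition it into blocks, and concatenate the per-block
# distribution histograms of index values. All sizes are illustrative.

def descriptor(mim, point, region=8, grid=2, n_orient=6):
    r, c = point
    patch = mim[r - region // 2: r + region // 2,
                c - region // 2: c + region // 2]
    step = region // grid
    hists = []
    for i in range(grid):
        for j in range(grid):
            block = patch[i * step:(i + 1) * step, j * step:(j + 1) * step]
            hist, _ = np.histogram(block, bins=n_orient, range=(0, n_orient))
            hists.append(hist)
    vec = np.concatenate(hists).astype(float)
    return vec / (np.linalg.norm(vec) + 1e-12)   # unit-normalised descriptor

mim = np.random.default_rng(1).integers(0, 6, size=(32, 32))
vec = descriptor(mim, (16, 16))
print(vec.shape)
```

With a 2x2 grid and 6 orientation indices this yields a 24-dimensional vector; the feature-vector distances used in claim 2 are then plain Euclidean distances between such vectors.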
5. The unmanned aerial vehicle image automatic registration method according to claim 1, wherein the determining an optimal projection transformation model of the image to be registered according to the filtered feature points in each of the first noise-reduced images based on a random sampling consistency algorithm to register the image to be registered comprises:
based on a random sampling consistency algorithm, determining an optimal feature point set of the image to be registered according to the feature points screened in each first noise reduction image;
and determining an optimal projection transformation model of the image to be registered according to the optimal feature point set of the image to be registered based on a random sampling consistency algorithm so as to register the image to be registered.
6. The unmanned aerial vehicle image automatic registration method of claim 5, wherein the determining an optimal feature point set of the image to be registered according to the feature points filtered in each of the first noise-reduced images based on a random sampling consistency algorithm comprises:
adopting a random sampling method, randomly extracting n groups of first feature point sets from the feature points screened in each first noise-reduced image, and determining, according to the row-column coordinates of the feature points in each first feature point set, the n first parameter matrices of the first projective transformation model of each first noise-reduced image corresponding to the n first feature point sets; wherein each group of the first feature point sets comprises at least four feature point pairs;
respectively calculating a first cost function of each first feature point set in each first noise-reduced image based on the first projective transformation models corresponding to the first parameter matrices;
and combining at least four feature points in the first feature point set corresponding to the smallest first cost function in each first noise-reduced image to obtain an optimal feature point set of the image to be registered.
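The sample-fit-score loop of claim 6 can be illustrated as below. This is a hedged sketch: a least-squares affine fit stands in for the patent's projective model, the cost function is a plain reprojection residual norm, and the synthetic points, outlier, and sample count are invented:

```python
import numpy as np

# Illustration of claim 6: sample groups of four point pairs, fit a
# transformation per group, score each group with a reprojection cost over all
# matches, and keep the group with the smallest cost. An affine fit stands in
# for the projective model; all data are synthetic.

def fit_affine(src, dst):
    A = np.hstack([src, np.ones((len(src), 1))])      # (n, 3) design matrix
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)  # (3, 2) parameter matrix
    return params

def cost(params, src, dst):
    A = np.hstack([src, np.ones((len(src), 1))])
    return float(np.linalg.norm(A @ params - dst))    # total residual norm

rng = np.random.default_rng(2)
src = rng.random((20, 2)) * 100
true = np.array([[1.0, 0.1], [-0.1, 1.0], [5.0, -3.0]])
dst = np.hstack([src, np.ones((20, 1))]) @ true
dst[10] += 40.0                                       # one gross outlier match

best = None
for _ in range(30):
    idx = rng.choice(20, size=4, replace=False)
    p = fit_affine(src[idx], dst[idx])
    c = cost(p, src, dst)
    if best is None or c < best[0]:
        best = (c, idx)
print(sorted(best[1].tolist()))
```

A group drawn entirely from consistent matches fits them exactly, so its cost is just the outlier's residual; any group containing the outlier fits a skewed model and scores worse. Keeping the minimum-cost group is exactly how the optimal feature point set excludes bad matches.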
7. The unmanned aerial vehicle image automatic registration method of claim 6, wherein the determining an optimal projective transformation model of the image to be registered according to the optimal feature point set of the image to be registered based on a random sampling consistency algorithm to register the image to be registered comprises:
randomly extracting n groups of second feature point sets from the optimal feature point set based on a random sample consensus algorithm, and determining, according to the row-column coordinates of the feature points in each second feature point set, the n second parameter matrices of the second projective transformation model of the image to be registered corresponding to the n second feature point sets; wherein n is a positive integer, and each group of the second feature point sets comprises at least four feature points;
respectively calculating a second cost function of each second feature point set in the image to be registered based on the second projective transformation models corresponding to the second parameter matrices;
and determining the second projective transformation model corresponding to the minimum second cost function as the optimal projective transformation model so as to register the image to be registered.
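Once the optimal second feature point set is chosen, the projective (homography) model it defines can be solved directly. The sketch below uses the standard 8-unknown DLT system for four correspondences; the four synthetic point pairs are invented and the patent does not prescribe this particular solver:

```python
import numpy as np

# Sketch of solving the optimal projective transformation model from four
# point correspondences via direct linear equations (h22 fixed to 1), then
# warping a point. Correspondences are synthetic.

def fit_homography(src, dst):
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
    b = dst.reshape(-1)
    h = np.linalg.solve(np.array(rows, float), b)     # 8 unknowns, 8 equations
    return np.append(h, 1.0).reshape(3, 3)

def warp(H, pt):
    v = H @ np.array([pt[0], pt[1], 1.0])
    return v[:2] / v[2]                               # homogeneous divide

src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], float)
dst = np.array([[2, 3], [4, 3], [4, 5], [2, 5]], float)  # scale 2, shift (2, 3)
H = fit_homography(src, dst)
print(np.round(warp(H, (0.5, 0.5)), 3))
```

In practice one would refit the model on all inliers of the winning set rather than the minimal four, but the minimal solve shows why at least four feature point pairs are required in the claims: a homography has eight degrees of freedom.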
8. An unmanned aerial vehicle image automatic registration system, comprising:
the noise reduction unit is configured to respectively block an image to be registered and a reference image to correspondingly obtain a plurality of block images of the image to be registered and a plurality of block images of the reference image;
wherein, the image to be registered and the reference image are respectively partitioned into blocks, specifically:
segmenting the image to be registered into M*N sub-areas to obtain a plurality of block images of the image to be registered; taking the upper-left boundary point of each block image to obtain the upper-left boundary point set of the image to be registered, denoted C1, C2, C3, ...; the row-column coordinates of the i-th upper-left boundary point in the set are expressed as (x_i, y_i), where i is a positive integer; then calculating the geographic coordinates corresponding to each upper-left boundary point in the set based on the transformation relation between row-column coordinates and geographic coordinates in the image to be registered;
according to the geographic coordinates corresponding to each upper-left boundary point, inversely calculating the corresponding row-column coordinates in the reference image based on the transformation relation between row-column coordinates and geographic coordinates in the reference image; taking the points at those row-column coordinates as the upper-left boundary point set of the reference image, denoted C'1, C'2, C'3, ...; partitioning the reference image according to its upper-left boundary point set to obtain a plurality of block images of the reference image; the plurality of block images of the image to be registered and the plurality of block images of the reference image are in one-to-one correspondence;
respectively denoising the plurality of block images of the image to be registered and the plurality of block images of the reference image through a guided filter, to correspondingly obtain a plurality of first noise-reduced images and a plurality of second noise-reduced images; wherein the first noise-reduced images and the second noise-reduced images correspond one to one;
the point screening unit is configured to screen the feature points in each first noise-reduced image according to the feature vector distance and the geographic position vector distance between each feature point in each first noise-reduced image and all the corresponding feature points in the second noise-reduced image;
and the model registration unit is configured to determine an optimal projection transformation model of the image to be registered according to the screened feature points in each first noise-reduced image based on a random sampling consistency algorithm so as to register the image to be registered.
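The noise-reduction unit's guided filtering can be sketched with the classic box-filter formulation (self-guided, i.e. guide equals input). The radius, regularisation epsilon, and test image below are illustrative assumptions; the patent does not specify the filter's parameters:

```python
import numpy as np

# Minimal self-guided filter sketch following the box-filter formulation:
# per window, fit output = a*I + b, then average the coefficients.
# Radius and eps are illustrative; a real implementation would use an
# O(1) box filter rather than this naive loop.

def box(img, r):
    # mean filter with edge padding (naive, for clarity)
    k = 2 * r + 1
    p = np.pad(img, r, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = p[i:i + k, j:j + k].mean()
    return out

def guided_filter(I, p, r=2, eps=1e-2):
    mean_I, mean_p = box(I, r), box(p, r)
    cov_Ip = box(I * p, r) - mean_I * mean_p
    var_I = box(I * I, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)            # edge-preserving gain
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)

rng = np.random.default_rng(3)
noisy = np.ones((16, 16)) * 0.5 + rng.normal(0, 0.1, (16, 16))
smooth = guided_filter(noisy, noisy)      # self-guided noise reduction
print(smooth.std() < noisy.std())
```

The edge-preserving property (gain a stays near 1 where local variance is large) is why guided filtering suits registration: it suppresses noise without blurring the structures that the phase-consistency feature extraction relies on.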
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210184653.4A CN114241022B (en) | 2022-02-28 | 2022-02-28 | Unmanned aerial vehicle image automatic registration method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114241022A (en) | 2022-03-25
CN114241022B (en) | 2022-06-03
Family
ID=80748254
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2024025487A1 (en) * | 2022-07-25 | 2024-02-01 | Ozyegin Universitesi | A system for processing images acquired by an air vehicle |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8326088B1 (en) * | 2009-05-26 | 2012-12-04 | The United States Of America As Represented By The Secretary Of The Air Force | Dynamic image registration |
CN103839265A (en) * | 2014-02-26 | 2014-06-04 | 西安电子科技大学 | SAR image registration method based on SIFT and normalized mutual information |
CN106960449A (en) * | 2017-03-14 | 2017-07-18 | 西安电子科技大学 | The heterologous method for registering constrained based on multiple features |
CN110211058A (en) * | 2019-05-15 | 2019-09-06 | 南京极目大数据技术有限公司 | A kind of data enhancement methods of medical image |
CN113409369A (en) * | 2021-05-25 | 2021-09-17 | 西安电子科技大学 | Multi-mode remote sensing image registration method based on improved RIFT |
CN113643334A (en) * | 2021-07-09 | 2021-11-12 | 西安电子科技大学 | Different-source remote sensing image registration method based on structural similarity |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105279739A (en) * | 2015-09-08 | 2016-01-27 | 哈尔滨工程大学 | Self-adaptive fog-containing digital image defogging method |
CN108021857B (en) * | 2017-08-21 | 2021-12-21 | 哈尔滨工程大学 | Building detection method based on unmanned aerial vehicle aerial image sequence depth recovery |
CN109101995A (en) * | 2018-07-06 | 2018-12-28 | 航天星图科技(北京)有限公司 | A kind of quick unmanned plane image matching method based on fusion local feature |
CN112102379B (en) * | 2020-08-28 | 2022-11-04 | 电子科技大学 | Unmanned aerial vehicle multispectral image registration method |
CN113538290A (en) * | 2021-07-30 | 2021-10-22 | 沭阳翔玮生态农业开发有限公司 | Agricultural aerial image processing method and system based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||