CN110111250B - Robust automatic panoramic unmanned aerial vehicle image splicing method and device - Google Patents

Robust automatic panoramic unmanned aerial vehicle image splicing method and device

Info

Publication number
CN110111250B
Authority
CN
China
Prior art keywords
unmanned aerial
aerial vehicle
images
matching
image
Prior art date
Legal status
Active
Application number
CN201910289082.9A
Other languages
Chinese (zh)
Other versions
CN110111250A (en)
Inventor
罗林波
许权
陈珺
龚文平
程卓
罗大鹏
Current Assignee
China University of Geosciences
Original Assignee
China University of Geosciences
Priority date
Filing date
Publication date
Application filed by China University of Geosciences
Priority to CN201910289082.9A
Publication of CN110111250A
Application granted
Publication of CN110111250B
Status: Active
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a robust automatic panoramic unmanned aerial vehicle (UAV) image splicing method and device. Traditional automatic panoramic image splicing methods assume that the camera rotates exactly around its optical center, so that the transformations between source images form a group of special homography matrices. Such ideal data are difficult to obtain in practice; in particular, remote sensing images obtained by a UAV cannot meet the ideal conditions, since the scene may not lie on a plane (it contains parallax) and may even undergo non-rigid change, which leads to poor splicing results. To overcome these challenges, the invention introduces a non-rigid matching algorithm into the splicing system to generate accurate feature matching on remote sensing images, and proposes a new differential transformation and overall adjustment strategy at the registration stage, so that the splicing system is suitable for panoramic UAV image splicing. Experimental results show that the method is superior to the traditional method and some recent methods in terms of visual effect.

Description

Robust automatic panoramic unmanned aerial vehicle image splicing method and device
Technical Field
The invention relates to the field of image processing, in particular to the field of image splicing, and more particularly relates to a robust automatic panoramic unmanned aerial vehicle image splicing method and device.
Background
The remote sensing technology of Unmanned Aerial Vehicles (UAVs) is a low-altitude technology, and is now an important means for information acquisition. The method has the advantages of high acquisition speed, convenient operation, good safety, low investment and the like, and is widely applied to various ground measurement applications. However, due to the low flying height and limited focal length of the camera, the drone image has a small view of the scene, and therefore it is difficult to capture a relatively complete target area. In order to obtain a complete scene of the desired object, the present invention requires combining multiple images of the same object by technical means. The purpose of image stitching is to combine overlapping portions of multiple images to form a panoramic image.
Traditional image splicing comprises feature matching, image matching, overall adjustment, automatic panorama straightening, gain compensation and multiband fusion. Such a system may look perfect, but splicing errors occur when the images contain a large amount of non-ideal data. For example, ghosting and even incorrect matching may occur when stitching UAV images with such a system. Several efforts have been made to reduce these errors: seam-cutting methods optimize the pixel selection between overlapping images to minimize visible seams; Laplacian pyramid fusion and Poisson image fusion can minimize blurring due to misalignment or exposure differences. However, the effect is still unsatisfactory, and the problem can be more severe when the scene filmed by the drone undergoes non-rigid change.
When a conventional stitching system is applied to UAV image stitching, two main challenges arise. On the one hand, the geometric relationship between drone images is often complex due to ground relief, varying imaging viewpoints and low-altitude shooting, so in most existing methods the image pairs cannot be matched exactly using a parametric transformation model (e.g., affine or homography). Image matching is a key prerequisite for image stitching; it aims at geometrically superimposing two images of the same scene. Image matching can be classified as rigid or non-rigid, depending on the type of data given. Conventional stitching matching carries an implicit constraint that the given images are rigid. However, for drone images non-rigid matching is crucial, as these images usually contain local deformations that cannot be resolved by rigid matching. On the other hand, remote sensing images captured by a UAV come from low-altitude shooting, so the scene cannot be approximated by a plane and a certain parallax exists. Conventional stitching systems optimize a single-plane perspective transformation (homography matrix) together with a global adjustment, which leads to ghosting errors. To overcome this limitation, Chin et al. proposed as-projective-as-possible (APAP) stitching, which divides the image into grids and aligns each grid with its own homography; this is suitable for UAV image stitching. However, a prerequisite of this method is that the images are accurately aligned.
Disclosure of Invention
To solve the above-mentioned challenges, the present invention introduces a non-rigid matching algorithm based on motion field interpolation, called Vector Field Consensus (VFC), into the stitching system to generate accurate feature matching of the remotely sensed images. In the global adjustment, the invention proposes a new strategy that improves the original homography-based transformation relation and uses local homographies to perform the global adjustment, which makes the image splicing of the invention robust.
The robust automatic panoramic unmanned aerial vehicle image splicing method mainly comprises a non-rigid matching algorithm, local transformation description, overall adjustment and aerial image splicing.
Non-rigid image matching
The image matching stage is a key step of image stitching. In the matching phase, the ability to minimize registration errors plays an important role in the subsequent steps of the system. The goal of the matching problem here is to align the overlapping areas of the two images pixel by pixel. This requires a pixel-by-pixel dense matching of the overlap region for better stitching. When an unknown complex relationship between images cannot be accurately modeled by a particular model, especially when there are non-rigid variations in the images, it is difficult for the images to achieve accurate pixel-by-pixel registration.
Generally, SIFT feature points are extracted to record the relationship between two images, using sparse feature-point matching to guide dense pixel-by-pixel registration. Estimating the spatial transformation between images with a general linear model, as RANSAC and its variants MLESAC, LO-RANSAC and PROSAC do, can cope with most feature points and seems a good solution. But this family of algorithms relies on a specific geometric parametric model. For images with non-rigid variations such transformations can no longer be applied, since the variations cannot be modeled parametrically.
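To illustrate the parametric-model dependence described above, the following minimal sketch (pure NumPy; all function names are illustrative, not from any library) runs a RANSAC loop that fits a single global 2-D affine model to synthetic matches contaminated with outliers. It works precisely because every inlier obeys one parametric model; points deviating from that model are simply discarded, which is what breaks down under non-rigid deformation.

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine fit: dst ~ src @ A.T + t."""
    n = src.shape[0]
    M = np.hstack([src, np.ones((n, 1))])          # n x 3 design matrix
    sol, *_ = np.linalg.lstsq(M, dst, rcond=None)  # 3 x 2: rows = [A.T; t]
    return sol

def ransac_affine(src, dst, iters=200, thresh=1.0, seed=0):
    """Classic hypothesize-and-verify loop with a minimal 3-point sample."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(src), dtype=bool)
    M = np.hstack([src, np.ones((len(src), 1))])
    for _ in range(iters):
        idx = rng.choice(len(src), size=3, replace=False)
        sol = fit_affine(src[idx], dst[idx])       # exact fit on the sample
        err = np.linalg.norm(M @ sol - dst, axis=1)
        inliers = err < thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers
```

On synthetic data where 80% of the matches follow one affine motion, the loop recovers that motion and rejects the rest; a non-rigid field admits no such single model.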
To realize non-rigid matching, Li and Hu proposed the Identifying Correspondence Function (ICF) algorithm based on a non-parametric model, but its matching accuracy drops sharply when there are many outliers. Another strategy for the matching problem is to estimate the correspondence matrix of the two point sets in combination with parametric or non-parametric geometric constraints. Unlike methods that estimate point correspondences and transformations independently, correspondence-matrix methods jointly estimate the correspondences between the point sets and the transformation. Representative methods of this kind, such as Iterative Closest Point (ICP), Coherent Point Drift (CPD) and Locality Preserving Matching (LPM), have also been proposed for the non-rigid case, but they are generally intolerant of excessive outliers, and such algorithms typically impose penalties on unmatched points for robustness. The point-set matching problem can also be cast as a graph matching problem: these methods construct an affinity matrix between the point sets and then perform spectral analysis to obtain an ordered characterization of the sets. They include Dual Decomposition (DD), Spectral Matching (SM) and Graph Shift (GS). However, their computational complexity is usually very high, so they cannot be applied to large-scale real-time matching tasks. The VFC algorithm proposed by Ma et al. is also based on a non-parametric model. It converts the point-set matching problem into vector field interpolation, generalizes well from sparse to dense matching, and is suitable for UAV image matching.
Local matching
After the image matching relationship is obtained, the transformation aligning the overlapping images is estimated, and the aligned images are then composited onto a common plane. In the conventional stitching method, a homography matrix represents this transformation; the goal is to minimize the alignment error between overlapping pixels through a uniform transformation. Homography is commonly used because it is the most flexible transformation that preserves all straight lines, so the resulting panorama does not suffer much linear distortion. This works very well when the whole scene lies in a plane. However, for the drone images studied here, this ideal condition usually does not hold.
To solve this problem, Liu et al. proposed a content-preserving warp that minimizes registration error while preserving scene rigidity, since homographies do not align the pixels of the overlapping parts well. However, in image stitching there are large rotation and translation differences between views, and the rigidity constraint makes this method's interpolation insufficiently flexible. Gao et al. divide a scene into a background plane and a foreground plane, align the two planes with two homographies, and then combine them to align the images. This is more flexible than a single homography, but still insufficient for complex cases. Lin et al. use a smoothly varying affine transformation to align images; its local deformability and alignment capability are stronger and it can handle parallax. Fundamentally, however, affine regularization suits interpolation but may not be optimal for extrapolation, and an affine model is insufficient to express a perspective transformation. Chin et al. proposed the as-projective-as-possible warp, which performs a global perspective transformation while allowing local non-projective deviations, sampling local transformation models for image stitching. The local models have higher degrees of freedom and more flexibility, handle local transformations better, and reduce the ghosting problem. Subsequent adaptive warps use a global similarity transform to mitigate projection distortion in non-overlapping regions, such as the shape-preserving half-projective warp, which adds constraints combining homographic and similarity transformations to correct the shape of the stitched image and reduce projection distortion. The adaptive as-natural-as-possible warp also uses a global similarity transform to correct the shape, but it can adaptively determine the rotation angle to correct the image shape better.
Integral adjustment
Given a set of overlapping images, the goal is to project all the images onto a common surface. This process inevitably accumulates errors. To achieve a better stitching effect, the present invention must simultaneously optimize these errors. The current approach is to optimize the focal lengths of all views and the relative rotation of the camera pose by global adjustment, and then align the series of images. The global adjustment may be based on the projection of all points in the image while extracting the 3D point coordinates describing the scene structure, the relative motion parameters and the optical parameters of the camera.
Global adjustment is typically used as the last step of feature-based 3D scene reconstruction algorithms. It operates on the 3D structure and the viewing parameters (camera position, orientation, intrinsic calibration and radial distortion). Given an initial estimate, the global adjustment refines both motion and structural parameters by minimizing the projection error between observed and predicted image points, obtaining the best reconstruction under the assumption that the extracted image features contain some noise.
The final goal of global adjustment is to reduce the reprojection error between the observed image points and the predicted image points using a least-squares algorithm. The most successful strategy is the Levenberg-Marquardt algorithm, which is easy to implement and converges quickly from a wide range of initial estimates.
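As a hedged illustration of the Levenberg-Marquardt idea, the sketch below (pure NumPy, numerical Jacobian, simple multiplicative damping schedule; names are illustrative) minimizes a generic sum of squared residuals. Real bundle-adjustment implementations instead exploit the sparse structure of the reprojection problem, but the damping logic is the same: blend between a Gauss-Newton step and gradient descent depending on whether the last step reduced the error.

```python
import numpy as np

def levenberg_marquardt(residual, theta0, iters=100, lam=1e-3):
    """Minimize ||residual(theta)||^2 with a damped Gauss-Newton loop."""
    theta = np.asarray(theta0, dtype=float)

    def jac(th, eps=1e-6):
        r0 = residual(th)
        J = np.empty((r0.size, th.size))
        for j in range(th.size):
            d = np.zeros_like(th)
            d[j] = eps
            J[:, j] = (residual(th + d) - r0) / eps   # forward differences
        return J

    for _ in range(iters):
        r = residual(theta)
        J = jac(theta)
        A = J.T @ J + lam * np.eye(theta.size)        # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.linalg.norm(residual(theta + step)) < np.linalg.norm(r):
            theta = theta + step
            lam *= 0.5    # success: trust the Gauss-Newton direction more
        else:
            lam *= 10.0   # failure: fall back toward gradient descent
    return theta
```

For example, fitting the two parameters of y = a*exp(b*x) to noise-free samples converges to the generating values from a rough initial estimate.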
Aerial image stitching
The subject of the invention is the stitching of unmanned aerial vehicle images. This section introduces related work in aerial image stitching.
Sangho et al. proposed a hierarchical multi-level image stitching algorithm for autonomous navigation of unmanned aerial vehicles. By gradually building a long-term mosaic in flight from a hierarchy of short-term mosaics, the algorithm prevents the accumulation of errors propagating along the frames, and it meets the real-time processing requirements of autonomous navigation. In particular, the system automatically adapts to scene changes in the images to be stitched and automatically selects a more appropriate feature-selection method, which is the key point for autonomous drone navigation; the proposed system is causal and thus suitable for real-time applications. Ghosh et al. proposed a super-resolution stitching system based on the spatial domain together with its evaluation results. The algorithm combines image stitching with super-resolution reconstruction and is both robust and computationally simple; the use of spatial-domain super-resolution makes it practical in real-time applications such as surveillance and remote sensing. Ghosh et al. also proposed indicators for quantitatively evaluating image stitching across multiple scene categories, remedying a gap in this respect.
The robust automatic panoramic UAV image splicing method has the following beneficial effects. The invention provides an image splicing method based on robust feature matching together with a local transformation and overall adjustment strategy built on a new differential idea. Strong non-rigid feature matching makes the results more accurate and reduces the burden on the subsequent steps of the system. When computing the transformation relation between images, the global homography matrix is differentiated into local ones, giving a better result. The results show that the method of the invention produces a more natural panorama, no parallax is visible in the overlapping areas, and the ghosting produced by stitching is significantly reduced. The system is better suited to UAV images and has a wider application range than traditional image splicing methods.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a flow chart of a robust automatic panoramic unmanned aerial vehicle image stitching method of the present invention;
FIG. 2 is a diagram of a set of one-dimensional matched point sets generated by projecting a two-dimensional point cloud onto two one-dimensional image planes;
FIG. 3 is a graph of the stitching results;
FIG. 4 is a qualitative comparison of unmanned aerial vehicle image stitching;
FIG. 5 is a qualitative comparison of unmanned aerial vehicle image stitching;
fig. 6 is a source image and a stitched image obtained using the conventional method and the method of the present invention, respectively.
Detailed Description
For a more clear understanding of the technical features, objects and effects of the present invention, embodiments of the present invention will now be described in detail with reference to the accompanying drawings.
To facilitate an understanding of the invention, the principles of the invention are described below.
Given a set of drone images, the task of the invention is to feed them into a system that produces a stitched panorama. Because of the particularity of UAV images, a traditional stitching system cannot produce good results; the invention therefore modifies the system so that it is robust to UAV images.
1. Non-rigid matching with the vector field consensus algorithm
Using a feature detection method, a set of hypothetical matches for the two images to be registered is inferred:

S = {(x_n, y_n)}_{n=1}^{N}

where x_n and y_n are two-dimensional column vectors representing the spatial positions of the feature points in the two images to be registered, N is the total number of matching pairs in the hypothetical match set, X is the whole input space with dimension D, and Y is the whole output space. The hypothetical match set S contains both false matches and correct matches, where a correct match is determined by the geometric transformation relationship f between the two images to be matched: if (x_n, y_n) is a correct match, then y_n = f(x_n). The object of the invention is to interpolate a vector field

f : X -> Y,  f in H

that fits the correct matching points in the sample set, after which the correct and false matching points can be distinguished. Here H is a reproducing kernel Hilbert space (RKHS).
The present invention assumes that the noise of correct matching points is isotropic white Gaussian noise with standard deviation sigma, and that the false points obey a uniform distribution 1/a, where a is the volume of the output space of the false points. For each sample pair (x_n, y_n) the invention introduces a hidden variable z_n in {0, 1}, where z_n = 1 indicates that the sample point is a correct point and z_n = 0 indicates that it is a false point. Letting the matrices X and Y represent the input and output data respectively, the likelihood function of the mixture model is obtained:

p(Y | X, theta) = prod_{n=1}^{N} [ gamma (2 pi sigma^2)^{-D/2} exp(-||y_n - f(x_n)||^2 / (2 sigma^2)) + (1 - gamma)/a ]

where theta = {f, sigma^2, gamma} are the unknown parameters and gamma is the mixing coefficient specifying the marginal distribution of the hidden variables.
Considering the smoothness constraint, the prior of f has the form:

p(f) ∝ exp(-(lambda/2) ||f||_H^2)

where lambda > 0 controls the strength of the regularization.
Using the Bayesian rule p(theta | X, Y) ∝ p(Y | X, theta) p(f), the present invention estimates a maximum a posteriori solution of theta:

theta* = argmax_theta p(theta | X, Y)
From the optimal solution theta* the vector field f can be obtained directly. The invention adopts the EM algorithm for the solution, which maximizes the expected complete-data log posterior:

Q(theta, theta^old) = -(1/(2 sigma^2)) sum_{n=1}^{N} p_n ||y_n - f(x_n)||^2 - (D/2) ln(sigma^2) sum_{n=1}^{N} p_n + ln(gamma) sum_{n=1}^{N} p_n + ln(1 - gamma) sum_{n=1}^{N} (1 - p_n) - (lambda/2) ||f||_H^2

where p_n = P(z_n = 1 | x_n, y_n, theta^old). The hidden variables z_n can be regarded as the missing data of the mixture model, and Q is maximized with respect to theta. The superscript old denotes values before the update and new denotes values after the update, likewise below.
E step: Let P = diag(p_1, ..., p_N) be a diagonal matrix. The posteriors p_n are obtained by the Bayesian rule:

p_n = gamma exp(-||y_n - f(x_n)||^2 / (2 sigma^2)) / [ gamma exp(-||y_n - f(x_n)||^2 / (2 sigma^2)) + (1 - gamma)(2 pi sigma^2)^{D/2} / a ]

The posterior probability p_n is a soft assignment indicating how well the n-th sample fits the current estimated vector field f.
M step: Considering that P is a diagonal matrix, let V = (f(x_1), ..., f(x_N))^T. Taking the derivatives of Q with respect to sigma^2 and gamma and setting them to zero yields:

sigma^2 = tr((Y - V)^T P (Y - V)) / (D tr(P))
gamma = tr(P) / N

In this step, the invention uses the above formulas to update gamma and sigma^2, and then updates the vector field by solving the regularized weighted least-squares problem

f^new = argmin_{f in H} (1/(2 sigma^2)) sum_{n=1}^{N} p_n ||y_n - f(x_n)||^2 + (lambda/2) ||f||_H^2

to correct the parameter theta^new. For the specific calculation of sigma^2, gamma and f in the M step, see the reference: Ma, J.; Zhao, J.; Tian, J.; Yuille, A.L.; Tu, Z. Robust Point Matching via Vector Field Consensus. IEEE Trans. Image Process. 2014, 23, 1706-1721.
After the EM algorithm converges, the final solution of the vector field f is obtained from the M step.
After the vector field f is determined, matching points can be precisely determined in images containing non-rigid variations.
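A minimal sketch of the E step and the closed-form sigma^2 and gamma updates of the M step is given below (pure NumPy; Y holds the target points and FX the current field evaluations f(x_n); function names are our own, and the regularized update of f itself is omitted, so this is an illustration rather than the full VFC solver).

```python
import numpy as np

def e_step_posteriors(Y, FX, sigma2, gamma, a, D=2):
    """p_n = P(z_n = 1 | x_n, y_n): responsibility that match n is an
    inlier of the current vector field, under the Gaussian-inlier /
    uniform-outlier mixture."""
    sq = np.sum((Y - FX) ** 2, axis=1)
    inlier = gamma * np.exp(-sq / (2 * sigma2)) / (2 * np.pi * sigma2) ** (D / 2)
    outlier = (1 - gamma) / a
    return inlier / (inlier + outlier)

def m_step(Y, FX, p, D=2):
    """Closed-form updates of sigma^2 and gamma given the posteriors p."""
    tr_P = p.sum()
    sigma2 = np.sum(p * np.sum((Y - FX) ** 2, axis=1)) / (D * tr_P)
    gamma = tr_P / len(p)
    return sigma2, gamma
```

With eight exact inliers and two gross outliers, the posteriors separate the two groups sharply and the updated gamma settles near the true inlier ratio.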
2. Local transformation and global adjustment
After the two images are matched, the source image is projected onto the target image through the homography matrix. However, when stitching multiple images, errors accumulate and magnify, especially in multiple overlapping regions. Under a minimum mean-square-error framework, the invention optimizes the projection functions simultaneously to register the panorama, thereby reducing the accumulated error.
When stitching multiple images, the first step is to find a reference plane onto which all images are projected through basic homographic transformations. The invention selects one image from the input images as the reference plane and identifies all overlapping images using the feature matching of the panorama. The image with the most overlap is selected as the reference plane, so that the other images can be projected onto it through homographic transformations. Conventional methods use the Direct Linear Transformation (DLT) method to compute a single homography between images, but drone images cannot be registered by a single homography alone. Inspired by Chin et al., the invention differentiates the images so that each differential element corresponds to a basic homography matrix, which effectively reduces the registration errors caused by a single homography.
Given two images I and I' and their matching points x_i = (x_i, y_i)^T and x'_i = (x'_i, y'_i)^T, i = 1, ..., N, a global transformation relationship describing the two images is:

x~' ~ H x~

where x~ denotes x in homogeneous coordinates, ~ denotes equality up to scale, and H is a 3 x 3 matrix.
In 2D projective transformation, DLT is a basic method for solving the homography matrix. The present invention vectorizes H into a vector h: if the element of H is a_ij, with i the row index and j the column index, then h = (a_11 a_12 a_13 a_21 a_22 a_23 a_31 a_32 a_33)^T, and the same principle converts the vector back into the matrix. Let a_i be the two rows of the coefficient matrix generated by the two matching points. Given an estimate h, ||a_i h|| is the algebraic error. DLT minimizes the sum of squared algebraic errors:

h = argmin_h sum_{i=1}^{N} ||a_i h||^2,  subject to ||h|| = 1

By stacking the a_i vertically into a 2N x 9 matrix A, the problem can be described as:

h = argmin_{||h||=1} ||A h||^2

whose solution is the right singular vector of A associated with its smallest singular value. With the estimated H (reconstructed from h), any pixel x_* in the source image I is transformed to the target image I' in the two-dimensional projective transformation:

x~'_* ~ H x~_*
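The DLT computation can be sketched as follows (NumPy only; the exact arrangement of the two rows a_i follows the standard textbook form, which is an assumption since the patent does not spell the rows out):

```python
import numpy as np

def dlt_rows(p, q):
    """The two rows a_i contributed by one match p -> q."""
    x, y = p
    u, v = q
    return np.array([
        [0, 0, 0, -x, -y, -1, v * x, v * y, v],
        [x, y, 1,  0,  0,  0, -u * x, -u * y, -u],
    ], dtype=float)

def dlt_homography(src, dst):
    """h = argmin ||A h|| with ||h|| = 1: least right singular vector of A."""
    A = np.vstack([dlt_rows(p, q) for p, q in zip(src, dst)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)

def apply_h(H, pts):
    """Map 2-D points through a homography (homogeneous divide)."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]
```

Given at least four matches in general position that exactly satisfy some homography, the estimate reproduces the mapping up to the scale ambiguity of h.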
For unmanned aerial vehicle images, a global homography matrix cannot match the two images well. Here, the present invention introduces local homography matrices H_*: each pixel x_* corresponds to a local homography H_*, which can be estimated as a weighted problem

h_* = argmin_{||h||=1} sum_{i=1}^{N} ||w_*^i a_i h||^2

The invention defines the scalar weights

w_*^i = exp(-||x_* - x_i||^2 / sigma_s^2)

where sigma_s is a scale parameter. The exact solving step is the same as before.
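A sketch of this weighted estimation is given below (NumPy only; helper names are our own, and the row layout of a_i is the standard assumed form). As a sanity check, when the data actually obey a single global homography, the local estimate at any pixel reproduces that global mapping, since reweighting does not change the null space of an exactly consistent system.

```python
import numpy as np

def match_rows(p, q):
    """Two DLT rows a_i for one match p -> q (standard arrangement, assumed)."""
    x, y = p
    u, v = q
    return np.array([[0, 0, 0, -x, -y, -1, v * x, v * y, v],
                     [x, y, 1, 0, 0, 0, -u * x, -u * y, -u]], dtype=float)

def warp(H, pts):
    """Map 2-D points through a homography with the homogeneous divide."""
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def local_homography(star, src, dst, scale=10.0):
    """Weighted DLT: matches near pixel `star` dominate the estimate H_*."""
    w = np.exp(-np.sum((src - star) ** 2, axis=1) / scale ** 2)  # w_*^i
    A = np.vstack([wi * match_rows(p, q) for wi, p, q in zip(w, src, dst)])
    _, _, Vt = np.linalg.svd(A)
    return Vt[-1].reshape(3, 3)
```

In the UAV case the matches do not obey one homography, and the Gaussian weights let each H_* bend toward the motion of its neighborhood.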
When the invention has a series of local homography matrices {H_k}_{k=1}^{K} (K is the total number of pixels in a single picture), each H_k corresponding to a pair of matching points of the two images, the accumulated error in the process of splicing multiple images must be considered. The present invention addresses this problem by treating this step as an overall adjustment in order to minimize the cumulative error. The invention then minimizes the energy function

E({H_k}) = sum_i sum_k xi_ik ||f(p_i, H_k) - x_ik||^2

where {H_k} is the series of parameters and f(p, H) is the projective transformation function:

f(p, H) = ( (r_1 p~) / (r_3 p~), (r_2 p~) / (r_3 p~) )^T

in which r_1, r_2, r_3 are the three row vectors of the matrix H. The invention introduces a parameter xi_ik in {0, 1}: xi_ik = 1 indicates that the corresponding matching point exists, otherwise it does not exist; p denotes the coordinates of a feature point on the reference plane.
When the images exhibit parallax, the traditional homography transformation brings undesirable effects. In this step the invention introduces the idea of differentiation, which handles parallax well and matches the two UAV images more robustly. The subsequent overall adjustment of the invention changes accordingly: whereas the conventional method uses this step to optimize the relative rotation between the superimposed images, the invention only requires blending the average intensities of the aligned images, which also greatly reduces the ghosting in the stitching result.
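The average-intensity blending mentioned here can be sketched as follows (NumPy; the masks mark where each aligned image has valid pixels, and all names are our own — this is a simplification that ignores gain compensation and multiband fusion):

```python
import numpy as np

def blend_average(images, masks):
    """Per-pixel average of aligned images over their valid regions."""
    acc = np.zeros(images[0].shape, dtype=float)   # intensity accumulator
    cnt = np.zeros(images[0].shape, dtype=float)   # how many images cover a pixel
    for img, m in zip(images, masks):
        acc += img * m
        cnt += m
    # average where covered, zero elsewhere (guard against divide-by-zero)
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

In the overlap of two aligned images the result is the mean of the two intensities, which suppresses seams left by small residual misalignments.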
Based on the above principle, the technical solution of the present invention is specifically as follows; refer to fig. 1.
The robust automatic panoramic unmanned aerial vehicle image stitching method comprises the following steps:
s1, acquiring a group of unmanned aerial vehicle images to be spliced, wherein the group of unmanned aerial vehicle images comprises at least two overlapped areas;
s2, carrying out feature point matching on the set of unmanned aerial vehicle images, wherein the matching method between any two unmanned aerial vehicle images is as follows:
s21 obtaining a set of hypothetical matches of the two images to be registered
Figure GDA0002565384020000121
Figure GDA0002565384020000122
xnAnd ynRespectively representing two-dimensional column vectors of the space positions of the feature points in the two images to be registered, wherein N represents the total number of the matching pairs in the supposition matching set, X is the whole input space, the dimension is D, and Y is the whole output space; wherein the hypothetical match sets of the two images
Figure GDA0002565384020000123
Deducing by adopting a characteristic detection method;
s22, executing an EM algorithm, obtaining a final vector field f when the EM algorithm obtains an optimal solution, matching feature points of the two unmanned aerial vehicle images through the final vector field f,
Figure GDA0002565384020000124
Figure GDA0002565384020000125
is the Hilbert space; wherein, the step E and the step M in the EM algorithm are respectively as follows:
E step: update p_n using the Bayesian rule:

p_n = gamma exp(-||y_n - f(x_n)||^2 / (2 sigma^2)) / [ gamma exp(-||y_n - f(x_n)||^2 / (2 sigma^2)) + (1 - gamma)(2 pi sigma^2)^{D/2} / a ]

The posterior probability p_n is a soft assignment indicating how well the n-th sample fits the current estimated vector field f;
M step: update gamma and sigma^2 by the formulas

sigma^2 = tr((Y - V)^T P (Y - V)) / (D tr(P))
gamma = tr(P) / N

where P = diag(p_1, ..., p_N) and V = (f(x_1), ..., f(x_N))^T, and then correct the parameter theta^new by updating the vector field through the regularized weighted least-squares problem

f^new = argmin_{f in H} (1/(2 sigma^2)) sum_{n=1}^{N} p_n ||y_n - f(x_n)||^2 + (lambda/2) ||f||_H^2

Here theta = {f, sigma^2, gamma}, and theta has an initial value when the EM algorithm starts; gamma is the mixing coefficient, the superscript new denotes the value after an update and old the value before it. sigma and a are defined as follows: assuming the noise of the correct matching points is isotropic white Gaussian noise, its standard deviation is sigma; the false points obey the uniform distribution 1/a, where a is the volume of the output space of the false points;
s3, carrying out local transformation on the set of unmanned aerial vehicle images, wherein the local transformation steps between any two unmanned aerial vehicle images are as follows:
s31, calculating scalar weight
Figure GDA0002565384020000131
Figure GDA0002565384020000132
Wherein the subscript i represents the number of the characteristic points, x*Representing a pixel in the image, which is a preset scaling parameter;
s32, according to the followingCalculating the weight vector h by the formula*
Figure GDA0002565384020000133
Wherein, aiTwo rows of a matrix of two matching points;
s33, weighting vector h*Conversion into corresponding matrix H*Wherein H is*The elements of each line are sequentially arranged in a line according to the original sequence to form a weight vector h*,h*Is 1 x 9, matrix H*Is 3 x 3;
S34, repeat steps S31–S33 for all pixels to obtain the set of matrices {H_k^*}, where k indexes the pixel x^*, k = 1, 2, …, K, and K is the total number of pixels in a single picture;
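Steps S31–S34 amount to a moving-DLT scheme: every pixel x^* gets its own homography by solving a weighted direct linear transform over the matched points. The following is a minimal sketch under stated assumptions — the 2×9 blocks a_i are the standard DLT rows, the weighting scale `scale` plays the role of σ_s, and all names are illustrative:

```python
import numpy as np

def dlt_rows(x, y):
    """Standard 2x9 DLT block a_i for one correspondence x -> y
    (both given as inhomogeneous 2-vectors)."""
    u, v = x
    up, vp = y
    return np.array([
        [0, 0, 0, -u, -v, -1,  vp * u,  vp * v,  vp],
        [u, v, 1,  0,  0,  0, -up * u, -up * v, -up],
    ], dtype=float)

def local_homography(x_star, pts_src, pts_dst, scale=20.0):
    """S31-S33: Gaussian scalar weights centred at pixel x*, weighted DLT
    solve for the 1x9 vector h*, then row-major reshape into the 3x3 H*."""
    # S31: scalar weights omega_i^*
    w = np.exp(-np.sum((pts_src - x_star) ** 2, axis=1) / scale ** 2)
    # S32: stack the weighted 2x9 blocks and take the SVD null vector
    A = np.vstack([w[i] * dlt_rows(pts_src[i], pts_dst[i])
                   for i in range(len(pts_src))])
    _, _, Vt = np.linalg.svd(A)
    h_star = Vt[-1]                 # argmin ||A h|| subject to ||h|| = 1
    # S33: reshape the 1x9 weight vector into the 3x3 matrix H*
    H_star = h_star.reshape(3, 3)
    return H_star / H_star[2, 2]
```

In S34 this solve is repeated for every pixel (in practice usually for a coarse grid of cells rather than all K pixels), yielding the family {H_k^*}.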
S4, obtain the optimal solution by minimizing an energy function, adjust the whole set of unmanned aerial vehicle images, and complete the stitching of the panoramic unmanned aerial vehicle image. The energy function is:

    E = Σ_i Σ_k ξ_ik ‖x_ik − f(p_k, H_ik)‖²

where the parameter ξ_ik ∈ {0, 1}, ξ_ik = 1 meaning that the corresponding matching point exists and ξ_ik = 0 that it does not; {p_k, H_ik} is a series of parameters; and f(p, H) is the projective transformation function:

    f(p, H) = ( r₁ᵀp / r₃ᵀp , r₂ᵀp / r₃ᵀp )ᵀ

in the formulas, x_ik denotes the matching point of the k-th feature observed in the i-th image, r₁, r₂, r₃ are the three row vectors of the matrix H^*, p denotes the coordinates of a feature point on the reference plane, and T denotes transposition.
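The projective transformation f(p, H) is the usual homogeneous mapping followed by perspective division. A small sketch (function name illustrative):

```python
import numpy as np

def project(p, H):
    """f(p, H): map homogeneous reference-plane coordinates p = (px, py, 1)^T
    through H and divide by the third component (perspective division)."""
    r1, r2, r3 = H                     # the three row vectors r1, r2, r3 of H
    w = r3 @ p                         # r3^T p, the projective depth
    return np.array([r1 @ p / w, r2 @ p / w])
```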
The invention also discloses a robust automatic panoramic unmanned aerial vehicle image stitching device provided with a computer storage medium; computer-executable instructions stored in the computer storage medium are used to implement the robust automatic panoramic unmanned aerial vehicle image stitching described above.
The proposed method is compared with traditional image stitching methods. In the experiments, the drone images contain non-rigid variations and cannot be regarded as planar. Below, some feature matching results are presented first, followed by the image stitching results.
Removing mismatches on non-rigid images
During remote sensing image stitching the mismatch rate is high, so a non-rigid matching algorithm must be incorporated. By adding the VFC algorithm, the wrong matching points between the unmanned aerial vehicle images can be effectively removed, as is clearly shown in the subsequent stitching result figures.
Global homography versus local transformation
Referring to figs. 2 and 3: fig. 2 projects a two-dimensional point cloud onto two one-dimensional image "planes", producing a one-dimensional analogy of image stitching and a set of one-dimensional matched points {(x_n, y_n)}. Here the two views differ by a rotation and a translation. In the first method, the matching points are transformed by a global homography, which cannot model local deviations of the data; the second method uses local homographies to warp the fitted points and flexibly interpolate the local deviations. Fig. 3 shows the stitching results: the first row contains the two source images, while the second and third rows are the results of global homography stitching and of the local homography stitching of the present invention, respectively.
In the projective transformation, the conventional method (Autostitch) performs the transformation with a single homography matrix. When the images contain some parallax, this transformation produces undesirable effects. To solve these problems, the present invention adopts the idea of differentiation, i.e., splitting the global transformation into local ones, which can tolerate parallax in the images. Fig. 2 gives a simple example illustrating the role of this idea in the projective transformation. Especially when the two images do not lie in one plane (the two views differ by a rotation and translation), local and global transformations behave differently in the stitching. In fig. 2, the global homography preserves the rigidity of the image well but inevitably leaves out some matching points. In contrast, the differentiation approach of the present invention can cover most, or even all, of the matching points; although some rigidity of the image may be sacrificed, the effect is acceptable. The corresponding experiments are shown in fig. 3. It is difficult for the conventional method to align these images strictly: noticeable ghosts appear in the stitched image because a single homography cannot match the objects in the two images. The method of the present invention greatly improves this situation; the image in the third row of the figure is the stitching result of the present invention, in which ghosting is clearly reduced. White boxes highlight the ghosted portions. Comparing the effects of the two strategies, the local homography approach of the invention is clearly better.
The differentiation is robust to local non-rigid transformations of the image and can handle images that do not lie on a single plane. It therefore reduces local projection mismatches (the ghosting errors in image stitching) and improves the performance of the subsequent bundle adjustment. A series of experiments was carried out to verify the effectiveness of the method; it is compared not only with the traditional method (Autostitch) but also with other state-of-the-art methods, including AANAP, SPHP, and parallax-tolerant stitching (PT).
Referring to figs. 4 and 5, which give qualitative comparisons of unmanned aerial vehicle image stitching. In each figure, the first row shows the two images to be stitched, the second and third rows show the stitching results of Autostitch, AANAP, SPHP, and PT, and the results of the present invention are shown in the last row. The white boxes highlight the ghosting effect.
The results are reported in figs. 4 and 5. In fig. 4 the ground surface contains some non-rigid variation, and the conventional method easily produces many mismatches; the many small stones inside the white box form ghosting errors because they are not matched well, whereas the method of the present invention is more robust in this situation. In fig. 5 the images all exhibit a certain parallax. From these results it can be seen that the method of the present invention is more robust for stitching unmanned aerial vehicle images. The ghosting effect is very evident for Autostitch, which is exactly the kind of stitching result the invention seeks to avoid. Other advanced methods mitigate these ghosts to varying degrees, but the method of the invention gives better results than all of them. Since the stitching of a panoramic image is built on the stitching of image pairs, the better pairwise results obtained here relieve the pressure on the subsequent panorama stitching. The results still have some drawbacks: the ghosting of some parts cannot be completely eliminated, which can be further improved in the bundle adjustment phase.
Bundle adjustment and panoramic image stitching
At this stage, the method of the present invention has the same capability as Autostitch and can stitch an arbitrary set of images, which means the source images can be any images of the scene. In Autostitch only one global homography is used for stitching, so the bundle adjustment optimizes the relative rotations and homographies between a set of overlapping images. At this stage the invention only needs to blend the aligned images with the average intensity. The results show that the stitching system of the present invention is more effective than the conventional one, with the ghosting effect greatly reduced; the results are reported in fig. 6, where the upper five images are the source images and the two stitched images below them are the result of the traditional method and the result of the method of the present invention, respectively.
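Blending the aligned images with the average intensity, as described above, can be sketched as follows — a minimal version that averages only where images overlap; the function name and the explicit validity masks are illustrative assumptions:

```python
import numpy as np

def blend_average(images, masks):
    """Average-intensity blending: at each output pixel, average the
    intensities of all aligned images that cover it (masks mark valid
    pixels, 1 where the warped image has data, 0 elsewhere)."""
    acc = np.zeros(images[0].shape, dtype=float)   # intensity accumulator
    cnt = np.zeros(images[0].shape, dtype=float)   # coverage counter
    for img, m in zip(images, masks):
        acc += img * m
        cnt += m
    # divide by the coverage; uncovered pixels stay 0
    return np.where(cnt > 0, acc / np.maximum(cnt, 1), 0.0)
```

Average blending is the simplest choice; seam-aware or multi-band blending could replace it without changing the alignment stages.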
As can be seen from figs. 2 and 6, when panoramic images are stitched by the conventional method (Autostitch), the errors accumulate and grow. The ghosting effect is not very apparent when only two images are stitched, but as more images are added to the system, bundle adjustment alone cannot eliminate the accumulated errors well. In contrast, the method of the present invention eliminates the accumulated error well in the bundle adjustment phase.
The method of the invention sacrifices some rigidity of the stitched image. When two images are stitched the effect is good; when multiple images are stitched the result shows some distortion, as can be seen from the stitching results in fig. 6, but the overall effect is acceptable. For most drone images the stitching results obtained with the system of the present invention are acceptable; only when the parallax of the source images is particularly large does the distortion affect the image quality.
In this research, the invention provides an image stitching method based on robust feature matching together with a local transformation and bundle adjustment strategy built on the new differentiation idea. Robust non-rigid feature matching makes the results more accurate and reduces the pressure on the subsequent steps of the system. When the transformation relation between images is calculated, the global homography matrix is differentiated into local ones, giving a better effect. The results show that the method of the invention produces a more natural panorama, with no visible parallax in the overlapping areas and significantly reduced ghosting from stitching. The system is better suited to unmanned aerial vehicle images and has a wider application range than traditional image stitching methods.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (3)

1. A robust automatic panoramic unmanned aerial vehicle image stitching method is characterized by comprising the following steps:
s1, acquiring a group of unmanned aerial vehicle images to be spliced, wherein the group of unmanned aerial vehicle images comprises at least two overlapped areas;
s2, carrying out feature point matching on the set of unmanned aerial vehicle images, wherein the matching method between any two unmanned aerial vehicle images is as follows:
S21, obtain the set of hypothetical matches S = {(x_n, y_n)}_{n=1}^{N} ⊆ X × Y of the two images to be registered, where x_n and y_n are two-dimensional column vectors representing the spatial positions of the feature points in the two images to be registered, N denotes the total number of matching pairs in the hypothetical match set, X is the whole input space with dimension D, and Y is the whole output space;
S22, execute the EM algorithm; when the EM algorithm reaches its optimal solution, a final vector field f is obtained, and the feature points of the two unmanned aerial vehicle images are matched through the final vector field f, wherein f belongs to the Hilbert space ℋ; the E step and the M step of the EM algorithm are as follows:
E step: update p_n using the Bayes rule:

    p_n = γ exp(−‖y_n − f(x_n)‖² / (2σ²)) / [ γ exp(−‖y_n − f(x_n)‖² / (2σ²)) + (1 − γ)(2πσ²)^{D/2} / a ]

The posterior probability p_n is a soft assignment indicating how well the n-th sample fits the currently estimated vector field f, f(x_n) being the value of f at x_n;
M step: update γ and σ² by

    γ^{new} = (1/N) Σ_{n=1}^{N} p_n,
    (σ²)^{new} = Σ_{n=1}^{N} p_n ‖y_n − f(x_n)‖² / (D Σ_{n=1}^{N} p_n),

and then correct the parameters through

    θ^{new} = argmax_θ Q(θ, θ^{old}),

wherein Q(θ, θ^{old}) is the complete-data log posterior, θ = {f, σ², γ}, and θ is given an initial value before the EM algorithm is executed; γ is the mixing coefficient; the superscript new denotes the value after an update and old the value before it; σ and a are defined as follows: the noise of the correct matching points is assumed to be isotropic Gaussian white noise with standard deviation σ, while the erroneous points obey a uniform distribution 1/a, a being the volume of the output space of the erroneous points;
S3, perform local transformation on the set of unmanned aerial vehicle images; the local transformation between any two unmanned aerial vehicle images proceeds as follows:
S31, calculate the scalar weights

    ω_i^* = exp(−‖x^* − x_i‖² / σ_s²)

where the subscript i indexes the feature points, x^* denotes a pixel in the image, and σ_s is a preset scaling parameter;
S32, calculate the weight vector h^* according to the following formula:

    h^* = argmin_h Σ_i ‖ω_i^* a_i h‖²  subject to  ‖h‖ = 1,

where h is the vector form of the matrix H and a_i is the 2×9 block (two matrix rows) generated by a pair of matching points;
S33, convert the weight vector h^* into the corresponding matrix H^*, wherein the elements of each row of H^*, taken in their original order, form the weight vector h^*; h^* is 1×9 and the matrix H^* is 3×3;
S34, repeat steps S31–S33 for all pixels to obtain the set of matrices {H_k^*}, where k indexes the pixel x^*, k = 1, 2, …, K, and K is the total number of pixels in a single picture;
S4, obtain the optimal solution by minimizing an energy function, adjust the whole set of unmanned aerial vehicle images, and complete the stitching of the panoramic unmanned aerial vehicle image, wherein the energy function is:

    E = Σ_i Σ_k ξ_ik ‖x_ik − f(p_k, H_ik)‖²

wherein the parameter ξ_ik ∈ {0, 1}, ξ_ik = 1 meaning that the corresponding matching point exists and ξ_ik = 0 that it does not; {p_k, H_ik} is a series of parameters; f(p, H) is the projective transformation function:

    f(p, H) = ( r₁ᵀp / r₃ᵀp , r₂ᵀp / r₃ᵀp )ᵀ

in the formulas, (x_ik, x_jk) denotes a pair of matching points corresponding to two images, r₁, r₂, r₃ are the three row vectors of the matrix H^*, p denotes the coordinates of a feature point on the reference plane, and T denotes transposition.
2. The robust automatic panoramic unmanned aerial vehicle image stitching method according to claim 1, wherein in step S21 the hypothetical match set S = {(x_n, y_n)}_{n=1}^{N} of the two images is obtained by a feature detection method.
3. A robust automatic panoramic unmanned aerial vehicle image stitching device, comprising a computer storage medium in which computer-executable instructions are stored, the computer-executable instructions being used to implement the robust automatic panoramic unmanned aerial vehicle image stitching method of any one of claims 1-2.
CN201910289082.9A 2019-04-11 2019-04-11 Robust automatic panoramic unmanned aerial vehicle image splicing method and device Active CN110111250B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910289082.9A CN110111250B (en) 2019-04-11 2019-04-11 Robust automatic panoramic unmanned aerial vehicle image splicing method and device

Publications (2)

Publication Number Publication Date
CN110111250A CN110111250A (en) 2019-08-09
CN110111250B true CN110111250B (en) 2020-10-30

Family

ID=67484073

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910289082.9A Active CN110111250B (en) 2019-04-11 2019-04-11 Robust automatic panoramic unmanned aerial vehicle image splicing method and device

Country Status (1)

Country Link
CN (1) CN110111250B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110728296B (en) * 2019-09-03 2022-04-05 华东师范大学 Two-step random sampling consistency method and system for accelerating feature point matching
CN111062866A (en) * 2019-11-07 2020-04-24 广西科技大学鹿山学院 Transformation matrix-based panoramic image splicing method
CN111260597B (en) * 2020-01-10 2021-12-03 大连理工大学 Parallax image fusion method of multiband stereo camera
CN111598848B (en) * 2020-04-28 2023-03-24 浙江宁海抽水蓄能有限公司 AI-based rolling rock-fill dam construction scene digital reconstruction method
CN112233154B (en) * 2020-11-02 2024-08-30 影石创新科技股份有限公司 Color difference eliminating method, device and equipment for spliced image and readable storage medium
CN112465881B (en) * 2020-11-11 2024-06-04 常州码库数据科技有限公司 Improved robust point registration method and system
CN112365406B (en) * 2021-01-13 2021-06-25 芯视界(北京)科技有限公司 Image processing method, device and readable storage medium
CN114754779B (en) * 2022-04-27 2023-02-14 镁佳(北京)科技有限公司 Positioning and mapping method and device and electronic equipment
CN115115593B (en) * 2022-06-28 2024-09-10 先临三维科技股份有限公司 Scanning processing method and device, electronic equipment and storage medium
CN117094895B (en) * 2023-09-05 2024-03-26 杭州一隅千象科技有限公司 Image panorama stitching method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN102201115A (en) * 2011-04-07 2011-09-28 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos shot by unmanned plane
CN106204507A (en) * 2015-05-28 2016-12-07 长沙维纳斯克信息技术有限公司 A kind of unmanned plane image split-joint method
CN108171791A (en) * 2017-12-27 2018-06-15 清华大学 Dynamic scene real-time three-dimensional method for reconstructing and device based on more depth cameras
CN108765292A (en) * 2018-05-30 2018-11-06 中国人民解放军军事科学院国防科技创新研究院 Image split-joint method based on the fitting of space triangular dough sheet

Family Cites Families (7)

Publication number Priority date Publication date Assignee Title
US8462209B2 (en) * 2009-06-26 2013-06-11 Keyw Corporation Dual-swath imaging system
US9762795B2 (en) * 2013-09-04 2017-09-12 Gyeongil Kweon Method and apparatus for obtaining rectilinear images using rotationally symmetric wide-angle lens
CN107545538B (en) * 2016-06-24 2020-06-02 清华大学深圳研究生院 Panoramic image splicing method and device based on unmanned aerial vehicle
CN107123090A (en) * 2017-04-25 2017-09-01 无锡中科智能农业发展有限责任公司 It is a kind of that farmland panorama system and method are automatically synthesized based on image mosaic technology
CN107808362A (en) * 2017-11-15 2018-03-16 北京工业大学 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN108734657B (en) * 2018-04-26 2022-05-03 重庆邮电大学 Image splicing method with parallax processing capability
CN109389555B (en) * 2018-09-14 2023-03-31 复旦大学 Panoramic image splicing method and device

Non-Patent Citations (1)

Title
Research on Key Technologies of Super-Resolution Reconstruction of Aerial Images; He Linyang; China Doctoral Dissertations Full-text Database, Information Science and Technology Series (Monthly); 2016-08-15 (No. 08); I138-34 *

Also Published As

Publication number Publication date
CN110111250A (en) 2019-08-09


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190809

Assignee: Kunming Baosheng Technology Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023420000122

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230522

Application publication date: 20190809

Assignee: Kunming New World Technology Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023420000121

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230522

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190809

Assignee: Yunnan Changdian Technology Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023980035961

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230530

Application publication date: 20190809

Assignee: New Yunteng Technology Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023420000132

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230530

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190809

Assignee: Yunnan Jiayi Technology Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023980036739

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230621

Application publication date: 20190809

Assignee: Yunnan Quanyan Technology Information Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023980036738

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230620

Application publication date: 20190809

Assignee: Yunnan Beian surveying and mapping Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023980036740

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230620

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190809

Assignee: Yunnan Youfu Information Technology Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023420000224

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230706

EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20190809

Assignee: Yunnan Shenma Technology Co.,Ltd.

Assignor: CHINA University OF GEOSCIENCES (WUHAN CITY)

Contract record no.: X2023420000238

Denomination of invention: A Robust Automatic Panoramic Unmanned Aerial Vehicle Image Mosaic Method and Device

Granted publication date: 20201030

License type: Common License

Record date: 20230712

EE01 Entry into force of recordation of patent licensing contract