CN112862683A - Adjacent image splicing method based on elastic registration and grid optimization - Google Patents

Adjacent image splicing method based on elastic registration and grid optimization

Info

Publication number
CN112862683A
CN112862683A (application CN202110174293.5A)
Authority
CN
China
Prior art keywords
image
scale
point
adjacent
grid
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110174293.5A
Other languages
Chinese (zh)
Other versions
CN112862683B (en)
Inventor
孙长银
耿凡
董璐
葛泉波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202110174293.5A priority Critical patent/CN112862683B/en
Publication of CN112862683A publication Critical patent/CN112862683A/en
Application granted granted Critical
Publication of CN112862683B publication Critical patent/CN112862683B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 — Geometric image transformations in the plane of the image
    • G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/20 — Image enhancement or restoration using local operators
    • G06T5/30 — Erosion or dilatation, e.g. thinning
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 — Image analysis
    • G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/10 — Image acquisition modality
    • G06T2207/10032 — Satellite or aerial image; Remote sensing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an adjacent image splicing method based on elastic registration and grid optimization. First, the SIFT algorithm is used for feature extraction and matching, feature matching inliers are obtained through a sequential RANSAC algorithm, and multi-plane homographies are calculated. Then, the unmanned aerial vehicle images are divided into grids, and adjacent unmanned aerial vehicle images are registered using an elastic-model-based method. Next, four constraint terms are constructed from the grid vertex coordinate set to establish an energy function, and grid optimization is performed by minimizing the energy function to obtain the deformed grid vertices. Finally, a high-resolution unmanned aerial vehicle image splicing result is obtained through the processing steps of triangular texture mapping, the optimal suture line, and a multi-channel fusion algorithm. Experimental results show that, compared with traditional methods, the invention can effectively eliminate splicing ghosting and misalignment, has a certain parallax tolerance, can reduce the distortion produced by multi-image splicing, preserves image shape, and gives a natural appearance.

Description

Adjacent image splicing method based on elastic registration and grid optimization
Technical Field
The invention relates to an adjacent image splicing method based on elastic registration and grid optimization, and belongs to the technical field of intelligent image processing.
Background
At present, with the development of unmanned aerial vehicle (UAV) technology, unmanned aerial vehicles are being applied ever more widely in all kinds of scenarios. Unmanned aerial vehicle remote sensing is low-altitude remote sensing; it has the advantages of rapid image acquisition, accurate positioning and simple operation, and offers higher spatial resolution and lower cost than spaceborne and airborne remote sensing. Because of the limitations of the UAV's flying height, focal length and viewing angle, a single UAV image can hardly reflect the condition of the whole area to be surveyed; in order to enlarge the field of view, multiple aerial images need to be fused into a panoramic image with a wide viewing angle and high ground resolution.
Image stitching is the process of splicing multiple images into a panorama with a wider field of view. Image splicing comprises three stages: feature extraction and matching, image registration, and image synthesis. The traditional image stitching method aligns two images with a global homography, such as an affine or projective transformation; the representative method is AutoStitch. This method assumes that the photographed scene lies in a single plane, or that the images are shot by rotating around the camera's center of projection, i.e., that there is little or no parallax between the input images. Under this assumption, a global homography works well, but misalignment artifacts are likely to occur when the imaging assumption is violated. When this happens, these methods attempt to hide the misaligned regions using post-processing image blending, but when the parallax is large such methods still fail.
In order to overcome the above drawbacks of the traditional global stitching algorithms, the prior art adopts local transformation models, such as smoothly varying affine stitching (SVA), the as-projective-as-possible moving direct linear transformation (APAP) algorithm, the robust elastic warping (REW) algorithm, and so on. These methods are based on grid deformation and can handle scenes with a certain amount of parallax by using local homographies to register the images. On this basis, applying different constraints to the grids achieves different effects, as in the shape-preserving half-projective (SPHP) stitching algorithm, the as-natural-as-possible image stitching (AANAP) algorithm, and the natural image stitching with global similarity (NISwGSP) algorithm; these algorithms apply different restrictions to the grids, such as multi-plane homography and linearity, reducing projection distortion and making the stitched images look more natural. However, for unmanned aerial vehicle image stitching, owing to high resolution, complex terrain and the like, stitching misalignment, terrain distortion and similar defects are prone to occur, and directly applying the prior art cannot achieve the expected effect, so a new image stitching method urgently needs to be designed by those skilled in the art.
Disclosure of Invention
The purpose is as follows: in order to overcome the defects in the prior art, the invention provides an adjacent image splicing method based on elastic registration and grid optimization.
The technical scheme is as follows: in order to solve the technical problems, the technical scheme adopted by the invention is as follows:
a method for splicing adjacent images based on elastic registration and grid optimization comprises the following steps:
the method comprises the steps of down-sampling the original adjacent images to a seam_scale to obtain seam_scale images, and taking the ratio of the seam_scale to the work_scale to obtain the seam_work_aspect, wherein the seam_scale is smaller than the work_scale;
multiplying the deformed mesh vertices by the seam_work_aspect to obtain the new mesh vertex coordinates of the seam_scale images, and performing texture mapping on the seam_scale images by a triangular affine transformation method according to the new mesh vertex coordinates of the seam_scale images to obtain transformed images;
executing an optimal suture line algorithm based on the graph-cut method on the transformed images to obtain the suture line masks of the images on the two sides of the suture line;
sampling the original adjacent images to a composition_scale to obtain composition_scale images, and taking the ratio of the composition_scale to the work_scale to obtain the composition_work_aspect, wherein the composition_scale is larger than the work_scale;
multiplying the deformed grid vertices by the composition_work_aspect to obtain the new grid vertex coordinates of the composition_scale images, and performing texture mapping on the composition_scale images by the triangular affine transformation method according to the new grid vertex coordinates of the composition_scale images to obtain high-resolution transformed images;
and dilating and enlarging the suture line masks to the composition_scale, and executing a multi-channel fusion algorithm at the composition_scale on the high-resolution transformed images and the dilated and enlarged suture line masks to obtain the splicing result image.
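A minimal numeric sketch of the scale bookkeeping in the steps above. The helper names (`rescale_vertices`, `upscale_seam_mask`) and the scale ratios are hypothetical, and the sketch uses nearest-neighbour repetition in place of a real image resize and `scipy.ndimage.binary_dilation` for the mask dilation step:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def rescale_vertices(vertices, aspect):
    """Scale mesh vertex coordinates from work_scale to another scale.

    vertices : (m, 2) array of (x, y) mesh vertices at work_scale.
    aspect   : ratio target_scale / work_scale (e.g. seam_work_aspect).
    """
    return np.asarray(vertices, dtype=float) * aspect

def upscale_seam_mask(mask, factor):
    """Dilate a low-resolution suture-line mask, then enlarge it by
    nearest-neighbour repetition (integer factor assumed)."""
    dilated = binary_dilation(mask, iterations=1)
    return np.kron(dilated, np.ones((factor, factor), dtype=bool))

# toy example: 4 mesh vertices at work_scale and a 3x3 seam mask
verts_work = np.array([[0, 0], [40, 0], [0, 40], [40, 40]])
seam_work_aspect = 0.5      # seam_scale / work_scale (hypothetical)
compose_work_aspect = 2.0   # composition_scale / work_scale (hypothetical)

verts_seam = rescale_vertices(verts_work, seam_work_aspect)
verts_comp = rescale_vertices(verts_work, compose_work_aspect)

mask = np.zeros((3, 3), dtype=bool)
mask[1, 1] = True
big_mask = upscale_seam_mask(mask, 2)
```

The point being illustrated: vertex coordinates scale linearly with the aspect ratio, while masks are dilated before being enlarged so that the blend region safely covers the seam at full resolution.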
Preferably, the step of obtaining the vertices of the deformed mesh is as follows:
optimizing and solving the constructed grid optimization energy function E (V) by using a sparse linear solver to obtain the deformation grid vertex coordinates of each image relative to a reference image, and then obtaining the deformation grid vertex of each image through normalization;
the grid optimization energy function E(V) is calculated as follows:
E(V) = E_a(V) + λ_ls·E_ls(V) + E_gs(V) + λ_l·E_l(V)
(the detailed expressions of the four terms appear as equation images in the original publication)
wherein E_a is the alignment term, E_ls is the local similarity term, E_gs is the global similarity term, E_l is the straight-line preserving term, λ_ls and λ_l are the corresponding weight coefficients, V is the grid vertex coordinate set of all images, N is the number of images, J is the adjacency relation between adjacent images, and M_ij is the grid matching point set;
in the alignment term E_a(V), each mesh vertex v_i or mesh corresponding point v'_i is expressed as the bilinear interpolation of the four vertices of the mesh cell in which it lies;
in the local similarity term E_ls(V) and the global similarity term E_gs(V), E_i is the set of all edges of I_i, e and ê represent an edge of the original image and its deformed edge, S(e) is the similarity transformation undergone by edge e, whose elements are expressed as linear combinations of the vertex variables, s_i and θ_i are the optimal scale and the optimal rotation angle, and w(e) is the weighting function of edge e;
in the straight-line preserving term E_l(V), L_i represents the set of straight lines in image I_i, l_u, l_v, l_k are respectively the start point, the end point and an intermediate sampling point of a straight line, u is the 1-dimensional coordinate in the line's local coordinate system, and the start point, end point and intermediate sampling points of a straight line are each expressed as the bilinear interpolation of the four vertices of the mesh cell in which they lie.
Preferably, the optimal scale s_i and the optimal rotation angle θ_i are obtained as follows:
estimating initial focal length values from the multi-plane homography matrix H, forming the intrinsic matrices K_i, K_j of I_i, I_j respectively, and obtaining the initial estimate of the 3D rotation R_ij between I_i and I_j by the following formula:
R_ij = K_j^-1 · H · K_i
wherein I_i, I_j are two adjacent images and R denotes the 3D rotation matrix;
after initialization, bundle adjustment is performed on the initial values of all K_i and 3D rotations R_ij to obtain the refined focal length f_i and 3D rotation R_i of each adjacent image I_i;
the optimal scale of each adjacent image is calculated as:
s_i = f_1 / f_i
wherein f_1 is the focal length of the reference image;
the straight lines of the images are detected with the LSD detector; the line correspondences between two adjacent images I_i and I_j are obtained through elastic registration, each pair of corresponding line segments uniquely determines a relative rotation angle, and the optimal rotation angle θ_i of each adjacent image is obtained by RANSAC-style voting and screening.
Preferably, the grid matching point set M_ij is obtained as follows:
according to the feature matching inlier set {(p_i, q_i)}_{i=1}^{n} and the multi-plane homography transformation matrix H, the projected point q'_i of the feature target point q_i of I_j onto I_i, and the deviation r_i between the projected point q'_i and the feature source point p_i, are calculated by the following two formulas:
q'_i = H^-1 · q_i   (in homogeneous coordinates)
r_i = (r_i^x, r_i^y)^T = q'_i − p_i
wherein q'_i^x and q'_i^y are the components of the projected point q'_i in the X and Y directions, and r_i^x and r_i^y are the components of the projected-point error r_i in the X and Y directions; i indexes the i-th feature matching inlier;
constructing an energy function for calculating the optimal deformation, so that the deformation g(x, y) in the X direction and the deformation h(x, y) in the Y direction at any pixel coordinate x = (x, y)^T of the image plane I_i can be fitted from the projected-point errors, where x, y denote the coordinates in the X and Y directions. For the deformation g(x, y) in the X direction, the energy function is as follows:
J_λ = J_D + λ·J_S
J_D = Σ_{i=1}^{n} ( g(q'_i) − r_i^x )²
J_S = ∬ ( g_xx² + 2·g_xy² + g_yy² ) dx dy
wherein J_λ is the deformation energy function, J_D is the alignment term, J_S is the smoothing term, and λ is a weight coefficient; g(q'_i) represents the deformation undergone by the projected point q'_i in the X direction, and r_i^x is the component of the projected-point error r_i in the X direction; i indexes the i-th feature matching inlier, and n is the number of feature matching inliers;
similarly, for the deformation h(x, y) in the Y direction, the energy function is as follows:
J_λ = J_D + λ·J_S
J_D = Σ_{i=1}^{n} ( h(q'_i) − r_i^y )²
J_S = ∬ ( h_xx² + 2·h_xy² + h_yy² ) dx dy
wherein J_λ is the deformation energy function, J_D is the alignment term, J_S is the smoothing term, and λ is a weight coefficient; h(q'_i) represents the deformation undergone by the projected point q'_i in the Y direction, and r_i^y is the component of the projected-point error r_i in the Y direction.
According to thin-plate spline theory, by minimizing J_λ the unique analytical solutions of the deformation functions g(x, y) and h(x, y) are obtained:
g(x) = α_1 + α_2·x + α_3·y + Σ_{i=1}^{n} w_i^x·U(d_i),  h(x) = β_1 + β_2·x + β_3·y + Σ_{i=1}^{n} w_i^y·U(d_i),  with kernel U(d) = d²·log d
wherein d_i = ‖x − q'_i‖ denotes the distance between any pixel coordinate x = (x, y)^T and the i-th projected point q'_i, and n is the number of projected points;
the above formulas have 2(n + 3) coefficients to be solved, w^x = (w_1^x, …, w_n^x)^T, w^y = (w_1^y, …, w_n^y)^T, α = (α_1, α_2, α_3)^T, β = (β_1, β_2, β_3)^T, which are obtained by solving the following matrix equation:
[ K + C·I   Q ] [ w^x  w^y ]   [ r^x  r^y ]
[ Q^T       0 ] [ α    β   ] = [ 0    0   ]
wherein K is the n × n kernel matrix with K_ij = U(d_ij), d_ij = ‖q'_j − q'_i‖, C is a weight coefficient, I is the identity matrix; Q = (q'_1, …, q'_n)^T represents the homogeneous projected points, n is the number of projected points, and r^x, r^y are the components of the projected-point errors in the X and Y directions;
after the deformation functions are obtained, for a mesh vertex v_i on I_i, its corresponding point v'_i on I_j is obtained by adding the deformation and then applying the multi-plane homography transformation:
v'_i = H · ( v_i + ( g(v_i), h(v_i) )^T )
if the mesh vertex v_i and its corresponding point v'_i fall in the overlapping region of I_i and I_j, the pair (v_i, v'_i) is collected into the grid matching point set M_ij; in an element (v_i, v'_i) of M_ij, v_i is the mesh vertex and v'_i is its projected point in I_j.
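The thin-plate spline fitting of this step can be sketched numerically. `tps_fit` is a hypothetical helper that assembles and solves the (n+3)×(n+3) system with kernel U(d) = d²·log d; the projected points and the deviation field are synthetic, and the homography is taken as the identity for simplicity:

```python
import numpy as np

def tps_fit(centers, values, C=0.0):
    """Fit a thin-plate spline f with f(centers[i]) ~ values[i].

    Solves [[K + C*I, Q], [Q^T, 0]] [w; a] = [values; 0] with
    U(d) = d^2 log d and Q rows (1, x_i, y_i)."""
    n = len(centers)
    with np.errstate(divide="ignore", invalid="ignore"):
        d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
        K = np.where(d > 0, d * d * np.log(d), 0.0)
    Q = np.hstack([np.ones((n, 1)), centers])
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K + C * np.eye(n)
    A[:n, n:] = Q
    A[n:, :n] = Q.T
    coef = np.linalg.solve(A, np.concatenate([values, np.zeros(3)]))
    w, a = coef[:n], coef[n:]

    def f(pts):
        with np.errstate(divide="ignore", invalid="ignore"):
            dd = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
            U = np.where(dd > 0, dd * dd * np.log(dd), 0.0)
        return U @ w + a[0] + a[1] * pts[:, 0] + a[2] * pts[:, 1]

    return f

# hypothetical projected points q' and deviations r = q' - p (a linear field)
qp = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
r = np.stack([qp[:, 0] * 0.1, qp[:, 1] * -0.1], axis=1)

g = tps_fit(qp, r[:, 0])   # deformation in X
h = tps_fit(qp, r[:, 1])   # deformation in Y

# warp a mesh vertex: v' = H(v + (g(v), h(v))), here with H = identity
v = np.array([[0.5, 0.5]])
v_def = v + np.stack([g(v), h(v)], axis=1)
```

With C = 0 the spline interpolates the deviations exactly; a positive C plays the role of the smoothing weight, trading alignment accuracy for a smoother deformation field.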
Preferably, the feature matching inlier set {(p_i, q_i)}_{i=1}^{n} is obtained as follows:
1-1) for a pair of adjacent images I_i and I_j in the adjacency relation J, detecting the features of each adjacent image by the SIFT algorithm and matching them with the 2NN algorithm to obtain the initial matching point set of each pair of adjacent images;
1-2) estimating the inlier set corresponding to one homography matrix from the initial matching point set by the RANSAC algorithm;
1-3) removing the inlier set corresponding to that homography matrix from the initial matching point set, and estimating the inlier set corresponding to another homography matrix by running the RANSAC algorithm again on the remaining matching points;
1-4) repeating step 1-3) several times until the number of remaining matching points is less than a set threshold;
1-5) merging the inlier sets corresponding to the homography matrices estimated and extracted in each step into the feature matching inlier set {(p_i, q_i)}_{i=1}^{n},
wherein p_i is the i-th feature source point of I_i, q_i is the i-th feature target point of I_j, and n is the number of feature matching inliers.
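The sequence RANSAC idea above can be sketched self-contained in numpy. The helper names are hypothetical, the homography is estimated with a plain DLT rather than a library call, and the two-plane test data are synthetic; a real system would typically use a library estimator such as OpenCV's `findHomography`:

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: homography from >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, iters=500, thresh=1.0, rng=None):
    rng = rng if rng is not None else np.random.default_rng(0)
    best = np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inl = err < thresh
        if inl.sum() > best.sum():
            best = inl
    return best

def sequential_ransac(src, dst, min_remaining=8):
    """Peel off one homography's inliers at a time until too few
    matches remain; return the union of all inlier index sets."""
    remaining = np.arange(len(src))
    kept = []
    while len(remaining) >= min_remaining:
        inl = ransac_homography(src[remaining], dst[remaining])
        if inl.sum() < 4:
            break
        kept.append(remaining[inl])
        remaining = remaining[~inl]
    return np.concatenate(kept) if kept else np.array([], dtype=int)

# synthetic matches lying on two different planes (two homographies):
src1 = np.array([[0, 0], [30, 0], [60, 0], [0, 30], [30, 30],
                 [60, 30], [0, 60], [30, 60], [60, 60], [90, 90]], float)
dst1 = src1 + np.array([10.0, 0.0])      # plane 1: pure translation
src2 = src1 + 100.0
dst2 = 2.0 * src2                        # plane 2: pure scaling
src, dst = np.vstack([src1, src2]), np.vstack([dst1, dst2])
inlier_idx = sequential_ransac(src, dst)
```

Each pass recovers one plane's inliers; their union keeps the matches that a single-homography RANSAC would have sacrificed.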
As a preferred scheme, the multi-plane homography matrix H is obtained as follows:
the multi-plane homography transformation matrix H between adjacent images is calculated from the feature matching inlier set {(p_i, q_i)}_{i=1}^{n} by the direct linear transformation (DLT) method.
Preferably, the grid vertex coordinate set V of all images is acquired as follows:
inputting N adjacent images I_1, I_2, …, I_N, where the number of images N ≥ 2, and acquiring the adjacency relation J between adjacent images;
constructing a global matching graph from the adjacent images and the adjacency relations between them;
down-sampling the adjacent images to the work_scale, and dividing the down-sampled images into grids of a given cell size to obtain the grid vertex coordinates of each image V_i = (v_1, …, v_m),
wherein v_i represents a vertex coordinate of the i-th picture and m is the number of vertices;
the grid vertex coordinate set of all pictures is V = (V_1, …, V_N).
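The grid division can be sketched in a few lines; `divide_grid` is a hypothetical helper, using the 800 × 600 work-scale size and 40 × 40 pixel cells mentioned later in the embodiment:

```python
import numpy as np

def divide_grid(width, height, cell):
    """Divide a (width x height) image into square cells of size `cell`
    and return the (m, 2) array of grid vertex coordinates V_i."""
    xs = np.arange(0, width + 1, cell)
    ys = np.arange(0, height + 1, cell)
    if xs[-1] != width:                 # keep the image border as vertices
        xs = np.append(xs, width)
    if ys[-1] != height:
        ys = np.append(ys, height)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)

# 800 x 600 work-scale image, 40 x 40 pixel cells => 21 x 16 vertices
V_i = divide_grid(800, 600, 40)
```

For these dimensions the grid has 21 × 16 = 336 vertices, so each image contributes 672 unknowns (x and y per vertex) to the later optimization.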
Preferably, the reference image is the first image of a set of contiguous images.
Preferably, the adjacent images are captured by an unmanned aerial vehicle.
Beneficial effects: according to the adjacent image splicing method based on elastic registration and grid optimization, registration of adjacent images is performed with an elastic model based on thin-plate splines, improving the alignment accuracy of the overlapping regions; a grid optimization framework is established by constructing constraint terms such as the alignment term, similarity terms and the straight-line preserving term, maintaining the original shape of the images. Experimental results show that the method is robust and efficient, and is suitable for unmanned aerial vehicle image splicing scenarios.
Drawings
FIG. 1 is a schematic diagram of the overall algorithm framework of the present invention.
Fig. 2 is a schematic diagram of the elastic registration process of the present invention.
FIG. 3 is a schematic representation of the results of comparative experiments of the present invention.
Detailed Description
The present invention will be further described with reference to the following examples.
As shown in fig. 1, a method for joining adjacent images based on elastic registration and mesh optimization mainly includes the following steps:
(1) inputting an image and an adjacency graph;
(2) detecting and matching features;
(3) elastic model registration generates grid matching points;
(4) global similarity estimation;
(5) constructing constraint item grid optimization;
(6) texture mapping and image compositing.
S1, acquiring a group of adjacent images and the adjacent relation between the images, and selecting a reference image. Carrying out down-sampling on the adjacent images, and dividing the grids to obtain a grid vertex coordinate set;
s2, extracting features of each adjacent image by using an SIFT algorithm, matching the features to obtain an initial matching point set, filtering out error matching points of the initial matching point set by using a sequence RANSAC algorithm to obtain a feature matching inner point set, and calculating a multi-plane homography transformation matrix between the adjacent images by using the feature matching inner point set;
s3, registering adjacent images by using an elastic model based on thin plate splines, accurately aligning the adjacent images, and generating a grid matching point set;
s4, estimating the focal length and the 3D rotation of each adjacent image by using the multi-plane homography transformation matrix obtained in the step S2 and the grid matching points obtained in the step S3, and selecting the optimal scale and rotation of each image relative to the reference image;
s5, constructing a grid vertex energy function for the grid vertex coordinate set, wherein the energy function comprises four constraint terms: an alignment term, a local similarity term, a global similarity term and a straight-line preserving term; grid optimization is solved by a sparse linear solver to obtain the deformed grid vertices relative to the reference image;
and S6, obtaining a final splicing result by utilizing the deformed mesh vertex obtained in the step S5 through processing steps of triangular texture mapping, optimal suture line and multi-channel fusion.
Example 1:
in the step (1), N adjacent images I_1, I_2, …, I_N shot by an unmanned aerial vehicle are input, where the number of images N ≥ 2; the adjacency relation J between adjacent images is acquired, which can be obtained from the unmanned aerial vehicle's route planning, and a global matching graph is constructed from the adjacency relations between adjacent images; without loss of generality, I_1 is used as the reference image. Because the resolution of the original unmanned aerial vehicle images is high, the adjacent images are down-sampled to 800 × 600 pixels, the down-sampled images are divided into grids of 40 × 40 pixels, and the grid vertex coordinates of each picture V_i = (v_1, …, v_m) are obtained, wherein v_i represents a vertex coordinate of the i-th picture and m is the number of vertices. The grid vertex coordinate set of all pictures is V = (V_1, …, V_N). The down-scaling ratio of the down-sampled image relative to the original image is the work_scale. During the following steps (1)–(5), the images are operated on at this work_scale.
In the step (2), for a pair of adjacent images I_i and I_j in the adjacency relation J, the features of each adjacent image are detected by the SIFT algorithm and matched by the 2NN algorithm to obtain the initial matching point set of each pair of adjacent images. Because the classical RANSAC algorithm, when filtering out wrong matching points, only computes the homography of one plane in the adjacent images and sacrifices a large number of correct feature matching inliers, the sequence RANSAC algorithm is used to acquire the feature matching inlier set, as follows:
(2-1) estimating the inlier set corresponding to one homography matrix from the initial matching point set by the RANSAC algorithm;
(2-2) removing the inlier set corresponding to that homography matrix from the initial matching point set, and estimating the inlier set corresponding to another homography matrix by running the RANSAC algorithm again on the remaining matching points;
(2-3) repeating step (2-2) several times until the number of remaining matching points is less than a set threshold, which is set to 40;
(2-4) merging the inlier sets corresponding to the homography matrices estimated and extracted in each step into the feature matching inlier set {(p_i, q_i)}_{i=1}^{n},
wherein p_i is the i-th feature source point of I_i, q_i is the i-th feature target point of I_j, and n is the number of feature matching inliers.
From the finally obtained feature matching inlier set {(p_i, q_i)}_{i=1}^{n}, the multi-plane homography transformation matrix H between adjacent images is calculated by the direct linear transformation method.
As shown in FIG. 2, step (3) performs adjacent image registration based on the thin-plate spline elastic model; since the multi-plane homography still cannot align all pixel positions of I_i and I_j, the purpose of this step is to register the two images more accurately using a thin-plate-spline-based elastic model, as follows:
(3-1) according to the feature matching inlier set {(p_i, q_i)}_{i=1}^{n} obtained in step (2) and the multi-plane homography transformation matrix H, the projected point q'_i of the feature target point q_i of I_j onto I_i, and the deviation r_i between the projected point q'_i and the feature source point p_i, are calculated by the following two formulas:
q'_i = H^-1 · q_i   (in homogeneous coordinates)
r_i = (r_i^x, r_i^y)^T = q'_i − p_i
wherein q'_i^x and q'_i^y are the components of the projected point q'_i in the X and Y directions, and r_i^x and r_i^y are the components of the projected-point error r_i in the X and Y directions; i indexes the i-th feature matching inlier.
(3-2) an energy function is constructed for calculating the optimal deformation, so that the deformation g(x, y) in the X direction and the deformation h(x, y) in the Y direction at any pixel coordinate x = (x, y)^T of the image plane I_i can be fitted from the projected-point errors, where x, y denote the coordinates in the X and Y directions. For the deformation g(x, y) in the X direction, the energy function is as follows:
J_λ = J_D + λ·J_S
J_D = Σ_{i=1}^{n} ( g(q'_i) − r_i^x )²
J_S = ∬ ( g_xx² + 2·g_xy² + g_yy² ) dx dy
wherein J_λ is the deformation energy function, J_D is the alignment term, J_S is the smoothing term, and λ is a weight coefficient controlling the degree of smoothing; g(q'_i) represents the deformation undergone by the projected point q'_i in the X direction, and r_i^x is the component of the projected-point error r_i in the X direction; i indexes the i-th feature matching inlier, and n is the number of feature matching inliers.
Similarly, for the deformation h(x, y) in the Y direction, the energy function is as follows:
J_λ = J_D + λ·J_S
J_D = Σ_{i=1}^{n} ( h(q'_i) − r_i^y )²
J_S = ∬ ( h_xx² + 2·h_xy² + h_yy² ) dx dy
wherein J_λ is the deformation energy function, J_D is the alignment term, J_S is the smoothing term, and λ is a weight coefficient controlling the degree of smoothing; h(q'_i) represents the deformation undergone by the projected point q'_i in the Y direction, and r_i^y is the component of the projected-point error r_i in the Y direction.
(3-3) according to thin-plate spline theory, by minimizing J_λ the unique analytical solutions of the deformation functions g(x, y) and h(x, y) are obtained:
g(x) = α_1 + α_2·x + α_3·y + Σ_{i=1}^{n} w_i^x·U(d_i),  h(x) = β_1 + β_2·x + β_3·y + Σ_{i=1}^{n} w_i^y·U(d_i),  with kernel U(d) = d²·log d
wherein d_i = ‖x − q'_i‖ denotes the distance between any pixel coordinate x = (x, y)^T and the i-th projected point q'_i, and n is the number of projected points.
The above formulas have 2(n + 3) coefficients to be solved, w^x = (w_1^x, …, w_n^x)^T, w^y = (w_1^y, …, w_n^y)^T, α = (α_1, α_2, α_3)^T, β = (β_1, β_2, β_3)^T, which can be obtained by solving the following matrix equation:
[ K + C·I   Q ] [ w^x  w^y ]   [ r^x  r^y ]
[ Q^T       0 ] [ α    β   ] = [ 0    0   ]
wherein K is the n × n kernel matrix with K_ij = U(d_ij), d_ij = ‖q'_j − q'_i‖, C is a weight coefficient, I is the identity matrix; Q = (q'_1, …, q'_n)^T represents the homogeneous projected points, n is the number of projected points, and r^x, r^y are the components of the projected-point errors in the X and Y directions.
(3-4) after the deformation functions are obtained, for a mesh vertex v_i on I_i, its corresponding point v'_i on I_j is obtained by adding the deformation and then applying the multi-plane homography transformation:
v'_i = H · ( v_i + ( g(v_i), h(v_i) )^T )
if the mesh vertex v_i and its corresponding point v'_i fall in the overlapping region of I_i and I_j, the pair (v_i, v'_i) is collected as an element of the grid matching point set M_ij; the corresponding point of mesh vertex v_i in I_j is v'_i.
Referring to FIG. 2, (a) shows the feature matching inlier set of I_i and I_j obtained in step (2-4); (b) shows the projected-point errors of I_j on I_i calculated in step (3-1); (c) shows the grid deformation fitted from the projected-point errors in step (3-3); (d) shows the uniformly distributed grid matching points calculated in step (3-4).
In step (4), the best scale and rotation of each image relative to the reference image are selected, since an appropriate scale and rotation can preserve the image shape. The specific steps are as follows:
(4-1) estimating the focal length and 3D rotation. Initial focal length values are estimated from the multi-plane homography matrix H calculated in step (2) to form the intrinsic matrices K_i, K_j of I_i, I_j respectively, and the initial estimate of the 3D rotation R_ij between I_i and I_j is obtained by the following formula:
R_ij = K_j^-1 · H · K_i
wherein R denotes the 3D rotation matrix. This equation can be solved by SVD decomposition. Compared with the initialization of AutoStitch, this formula uses a better initial homography transformation and more evenly distributed grid matching points. After initialization, bundle adjustment is performed on the initial values of all K_i and 3D rotations R_ij to obtain the refined focal length f_i and 3D rotation R_i of each adjacent image I_i.
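The rotation initialization in step (4-1) can be sketched as follows. The direction convention (H mapping I_i into I_j, giving R_ij ≈ K_j^-1 H K_i) is an assumption, and the SVD step projects the product onto the nearest rotation matrix, which is one common reading of "solved by SVD decomposition":

```python
import numpy as np

def rotation_from_homography(H, K_i, K_j):
    """Initial 3D rotation between views: R_ij ~ K_j^-1 H K_i,
    projected onto the nearest rotation matrix via SVD."""
    M = np.linalg.inv(K_j) @ H @ K_i
    U, _, Vt = np.linalg.svd(M)
    R = U @ Vt
    if np.linalg.det(R) < 0:          # enforce det(R) = +1
        U[:, -1] *= -1
        R = U @ Vt
    return R

# synthetic check: build H = K_j R K_i^-1 from a known rotation
f_i, f_j = 1000.0, 1100.0             # hypothetical focal lengths
K_i = np.diag([f_i, f_i, 1.0])
K_j = np.diag([f_j, f_j, 1.0])
a = np.deg2rad(10.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
H = K_j @ R_true @ np.linalg.inv(K_i)
R_est = rotation_from_homography(H, K_i, K_j)
```

On noise-free input the estimate recovers the rotation exactly; with a multi-plane H from real matches the SVD projection supplies a valid rotation for bundle adjustment to refine.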
(4-2) selection of the dimension si. The dimensions of each adjoining image may be set to:
si=f1/fi (9)
wherein f is1Is the focal length of the reference image.
(4-3) Selection of the rotation θi. Straight lines are detected in each image with the LSD detector, and the line correspondences between two adjacent images Ii and Ij are obtained through the elastic registration. Each pair of corresponding line segments uniquely determines a relative rotation angle, and the optimal rotation angle θi of each adjacent image is obtained by voting and screening according to the RANSAC algorithm.
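A hedged sketch of the angle voting described above. The segment representation, the 2° inlier threshold, and the consensus-then-average refinement are assumptions; the patent only states that the angle is chosen by RANSAC-style voting and screening.

```python
import numpy as np

def line_angle(seg):
    """Orientation of a 2D segment ((x1, y1), (x2, y2)) in radians."""
    (x1, y1), (x2, y2) = seg
    return np.arctan2(y2 - y1, x2 - x1)

def vote_rotation(pairs, inlier_thresh=np.radians(2.0)):
    """Each matched line pair proposes a relative rotation; the proposal
    with the largest consensus set wins, and the final angle is the mean
    of its inliers (a simple RANSAC-style vote)."""
    diffs = []
    for seg_a, seg_b in pairs:
        d = line_angle(seg_b) - line_angle(seg_a)
        # a line has no direction: wrap the difference to (-pi/2, pi/2]
        d = (d + np.pi / 2) % np.pi - np.pi / 2
        diffs.append(d)
    diffs = np.asarray(diffs)
    best_inliers = None
    for d in diffs:                                  # each pair is a hypothesis
        inliers = np.abs(diffs - d) < inlier_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return float(diffs[best_inliers].mean())
```

Three consistent pairs outvote a single mismatched pair, so the recovered angle ignores the outlier.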
The mesh vertex energy function in step (5) is expressed by formulas (10) to (14):

E(V)=Ea(V)+λlsEls(V)+Egs(V)+λlEl(V) (10)

Ea(V) = Σ(i,j)∈J Σ(vk,v′k)∈Mij ‖ ṽ(vk) − ṽ(v′k) ‖²   (11)

Els(V) = Σi=1..N Σe∈Ei ‖ ê − S(e)·e ‖²   (12)

Egs(V) = Σi=1..N Σe∈Ei w(e)² [ (c(e) − si cos θi)² + (d(e) − si sin θi)² ]   (13)

El(V) = Σi=1..N Σl∈Li Σk ‖ ṽ(lk) − ( (1 − u) ṽ(lu) + u ṽ(lv) ) ‖²   (14)
where Ea is the alignment term, Els the local similarity term, Egs the global similarity term, and El the straight-line preserving term; λls and λl are weight coefficients, V is the set of mesh vertex coordinates of all images, and N is the number of images;
in the alignment term Ea(V), ṽ(·) denotes, for a mesh vertex vk or its corresponding point v′k, the bilinear interpolation of the four vertices of the mesh cell in which the point lies;
in the local similarity term Els(V) and the global similarity term Egs(V), Ei is the set of all edges of Ii; e and ê denote an edge of the original image and its deformed counterpart; S(e) is the similarity transformation undergone by edge e, and c(e) and d(e), the corresponding elements of S(e), can be expressed as linear combinations of the vertex variables; si and θi are the optimal scale and rotation obtained in step (4); w(e) is a weighting function of edge e;
in the straight-line preserving term El(V), Li denotes the set of straight lines of image Ii; lu, lv and lk are respectively the start point, end point and an intermediate sampling point of a line l; u is the 1-D coordinate of lk in the local coordinate system of the line; ṽ(·) is the bilinear interpolation of the four vertices of the mesh cell containing the sampling point. During line sampling, the grid optimization algorithm selects only lines longer than 60 pixels. Since the lines extracted by LSD may be fragmented, the invention also provides an interactive mode in which the user can manually specify the lines to be protected;
the constructed grid optimization energy function E(V) is minimized with a sparse linear solver to obtain the deformed mesh vertex coordinates of each image relative to the reference image; the deformed mesh vertices of each image are then obtained through normalization.
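Since every term of E(V) above is quadratic in the vertex coordinates, minimizing E(V) reduces to one linear least-squares solve over the stacked, weighted equations. A toy sketch of that reduction (dense `numpy.linalg.lstsq` standing in for the sparse linear solver the patent uses; the block structure is an illustration, not the patented system matrix):

```python
import numpy as np

def solve_quadratic_energy(blocks):
    """Minimise sum_k || w_k * (A_k x - b_k) ||^2 over x.

    blocks: list of (A, b, weight) triples, each a set of equations that
    is linear in the vertex vector x (alignment, similarity, line terms).
    A real implementation assembles A sparsely and calls a sparse
    least-squares solver; dense lstsq keeps the sketch short."""
    A = np.vstack([w * Ak for Ak, bk, w in blocks])
    b = np.concatenate([w * bk for Ak, bk, w in blocks])
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x
```

For example, one block pulling x toward (1, 2) and another pulling x[0] toward 3 with equal weight yields the least-squares compromise x = (2, 2).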
The image operations in steps (1) to (5) above are all performed at a work_scale of 800 × 600 pixels. After the deformed mesh vertex coordinates at this scale are obtained, the following processing steps of step (6) are performed:
(6-1) Down-sample the original adjacent images to seam_scale (seam_scale < work_scale) to obtain seam_scale images, and compute the ratio seam_work_aspect of seam_scale to work_scale. Multiply the deformed mesh vertex result of step (5) by seam_work_aspect to obtain the new mesh vertex coordinates of the seam_scale images, then texture-map the seam_scale images by the triangle affine transformation method according to these new coordinates to obtain the transformed images. Run the graph-cut-based optimal seam algorithm on the transformed images to obtain the seam masks of the images on both sides of each seam;
(6-2) Sample the original adjacent images to composition_scale (composition_scale > work_scale) to obtain composition_scale images, and compute the ratio composition_work_aspect of composition_scale to work_scale. Multiply the deformed mesh vertex result of step (5) by composition_work_aspect to obtain the new mesh vertex coordinates of the composition_scale images, then texture-map the composition_scale images by the triangle affine transformation method according to these new coordinates to obtain high-resolution transformed images. Dilate and upscale the seam masks obtained in step (6-1) to composition_scale, and execute the multi-band fusion algorithm at composition_scale on the high-resolution transformed images and the upscaled seam masks to obtain the final complete, sharp, high-resolution unmanned aerial vehicle stitching result image.
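The scale handling of step (6) — reusing the mesh vertices optimized at work_scale at another scale, and deriving the per-triangle affine transform used for texture mapping — can be sketched as follows. Function names are hypothetical; in practice the per-triangle transform would be applied to pixels with something like OpenCV's warpAffine.

```python
import numpy as np

def rescale_vertices(verts_work, target_scale, work_scale):
    """Reuse mesh vertices optimised at work_scale at another scale
    (seam_scale or composition_scale) by multiplying with the ratio,
    as in steps (6-1)/(6-2)."""
    aspect = target_scale / work_scale       # seam_work_aspect etc.
    return np.asarray(verts_work, dtype=float) * aspect

def triangle_affine(src_tri, dst_tri):
    """2x3 affine matrix mapping one triangle onto another — the
    transform used for per-triangle texture mapping."""
    src = np.asarray(src_tri, dtype=float)
    dst = np.asarray(dst_tri, dtype=float)
    # solve [x y 1] @ M.T = [x' y'] for the 6 affine coefficients
    A = np.hstack([src, np.ones((3, 1))])
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M.T   # rows: x' = m00*x + m01*y + m02, y' = m10*x + m11*y + m12
```

Three exact point pairs determine the affine map; a pure translation of the triangle, for instance, yields an identity linear part with the translation in the last column.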
Results of the adjacent-image stitching method based on elastic registration and grid optimization are shown in FIG. 3, where (a) is the stitching result of the AutoStitch method, (b) is the stitching result of the invention, and (c) is the deformed mesh vertices obtained by the mesh optimization. The beneficial effects are: the images are accurately aligned in the overlapping regions, removing stitching breaks and ghosting; and the images keep a natural shape over the global extent.
Experimental results show that, compared with traditional methods, the invention effectively eliminates stitching ghosting and misalignment, tolerates a certain amount of parallax, reduces the distortion produced by multi-image stitching, preserves image shape, and produces a natural-looking result.
The above description covers only the preferred embodiments of the present invention. It should be noted that those skilled in the art can make various modifications and adaptations without departing from the principles of the invention, and these are also intended to fall within the scope of the invention.

Claims (9)

1. An adjacent image stitching method based on elastic registration and grid optimization is characterized in that: the method comprises the following steps:
down-sampling the original adjacent images to seam_scale to obtain seam_scale images, and computing the ratio seam_work_aspect of seam_scale to work_scale, wherein seam_scale is smaller than work_scale;
multiplying the deformed mesh vertices by seam_work_aspect to obtain new mesh vertex coordinates of the seam_scale images, and texture-mapping the seam_scale images by a triangle affine transformation method according to the new mesh vertex coordinates of the seam_scale images to obtain transformed images;
executing a graph-cut-based optimal seam algorithm on the transformed images to obtain seam masks of the images on both sides of each seam;
sampling the original adjacent images to composition_scale to obtain composition_scale images, and computing the ratio composition_work_aspect of composition_scale to work_scale, wherein composition_scale is larger than work_scale;
multiplying the deformed mesh vertices by composition_work_aspect to obtain new mesh vertex coordinates of the composition_scale images, and texture-mapping the composition_scale images by the triangle affine transformation method according to the new mesh vertex coordinates of the composition_scale images to obtain high-resolution transformed images;
dilating and upscaling the seam masks to composition_scale, and executing a multi-band fusion algorithm at composition_scale on the high-resolution transformed images and the upscaled seam masks to obtain the stitching result image.
2. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 1, wherein: the method for acquiring the deformed mesh vertexes comprises the following steps:
optimizing and solving the constructed grid optimization energy function E (V) by using a sparse linear solver to obtain the deformation grid vertex coordinates of each image relative to a reference image, and then obtaining the deformation grid vertex of each image through normalization;
the grid optimization energy function E(V) is calculated as follows:

E(V)=Ea(V)+λlsEls(V)+Egs(V)+λlEl(V)

Ea(V) = Σ(i,j)∈J Σ(vk,v′k)∈Mij ‖ ṽ(vk) − ṽ(v′k) ‖²

Els(V) = Σi=1..N Σe∈Ei ‖ ê − S(e)·e ‖²

Egs(V) = Σi=1..N Σe∈Ei w(e)² [ (c(e) − si cos θi)² + (d(e) − si sin θi)² ]

El(V) = Σi=1..N Σl∈Li Σk ‖ ṽ(lk) − ( (1 − u) ṽ(lu) + u ṽ(lv) ) ‖²
wherein Ea is the alignment term, Els the local similarity term, Egs the global similarity term, and El the straight-line preserving term; λls and λl are weight coefficients, V is the set of mesh vertex coordinates of all images, N is the number of images, J is the adjacency relation between adjacent images, and Mij is the grid matching point set;
in the alignment term Ea(V), ṽ(·) is, for a mesh vertex vk or a mesh corresponding point v′k, the bilinear interpolation of the four vertices of the mesh cell in which the point lies;
in the local similarity term Els(V) and the global similarity term Egs(V), Ei is the set of all edges of Ii; e and ê represent an edge of the original image and its deformed counterpart; S(e) is the similarity transformation undergone by edge e, and its elements c(e) and d(e) are expressed as linear combinations of the vertex variables; si and θi are the optimal scale and the optimal rotation angle; w(e) is a weighting function of edge e;
in the straight-line preserving term El(V), Li represents the set of straight lines of image Ii; lu, lv and lk are respectively the start point, end point and an intermediate sampling point of a line l; u is the 1-D coordinate of lk in the local coordinate system of the line; ṽ(·) is the bilinear interpolation of the four vertices of the mesh cell in which the line start point, end point or intermediate sampling point lies.
3. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 2, wherein: the optimal scale si and the optimal rotation angle θi are obtained as follows:
estimating initial focal length values from the multi-plane homography matrix H and forming the intrinsic matrices Ki, Kj of Ii, Ij; obtaining an initial estimate of the 3D rotation Rij between Ii and Ij from:

Rij = argminR ‖ R − Kj⁻¹ H Ki ‖

wherein Ii, Ij are two adjacent images and R ranges over 3D rotation matrices;
after initialization, all K's are processediAnd 3D rotation RijObtaining each adjacent image I by performing beam adjustment on initial valuesiRefined focal length fiAnd 3D rotation Ri
The optimal scale calculation formula for each adjacent image is as follows:
si=f1/fi
wherein f1 is the focal length of the reference image;
detecting straight lines in the images with LSD, obtaining the line correspondences between two adjacent images Ii and Ij through elastic registration, uniquely determining a relative rotation angle from each pair of corresponding line segments, and obtaining the optimal rotation angle θi of each adjacent image by voting and screening according to the RANSAC algorithm.
4. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 2, wherein: the grid matching point set Mij is obtained as follows:
according to the feature matching inlier set {(pi, qi)} and the multi-plane homography transformation matrix H, the projected point q′i of Ij's feature target point qi on Ii, and the deviation ri between the projected point q′i and the feature source point pi, are calculated by the following two formulas:

q′i = H⁻¹ · qi (in homogeneous coordinates)

ri = q′i − pi = (ri^x, ri^y)ᵀ
wherein q′i^x and q′i^y are the components of the projected point q′i in the X and Y directions, ri^x and ri^y are the components of the projection error ri in the X and Y directions, and i indexes the feature matching inliers;
constructing an energy function for calculating the optimal deformation, so that the deformation g(x, y) in the X direction and the deformation h(x, y) in the Y direction at an arbitrary pixel coordinate x = (x, y)ᵀ of the image plane Ii can be fitted from the projected point errors, x and y denoting the coordinates in the X and Y directions. For the deformation g(x, y) in the X direction, the energy function is as follows:
Jλ=JD+λJS
JD = Σi=1..n ( g(q′i) − ri^x )²

JS = ∫∫ ( g²xx + 2 g²xy + g²yy ) dx dy
wherein Jλ is the deformation energy function, JD is the alignment term, JS is the smoothing term, and λ is a weight coefficient;
g(q′i) represents the deformation experienced at the projected point q′i in the X direction, ri^x is the component of the projected point error ri in the X direction, i indexes the feature matching inliers, and n is the number of feature matching inliers;
similarly, for a deformation h (x, Y) in the Y direction, the energy function is as follows:
Jλ=JD+λJS
JD = Σi=1..n ( h(q′i) − ri^y )²

JS = ∫∫ ( h²xx + 2 h²xy + h²yy ) dx dy
wherein Jλ is the deformation energy function, JD is the alignment term, JS is the smoothing term, and λ is a weight coefficient;
h(q′i) represents the deformation experienced at the projected point q′i in the Y direction, and ri^y is the component of the projected point error ri in the Y direction.
According to thin-plate spline theory, minimizing Jλ yields unique analytical solutions for the deformation functions g(x, y) and h(x, y):

g(x, y) = α1 + α2 x + α3 y + Σi=1..n wi^x U(di)

h(x, y) = β1 + β2 x + β3 y + Σi=1..n wi^y U(di)

wherein U(d) = d² log d is the thin-plate spline kernel, and di = ‖ x − q′i ‖ represents the distance between an arbitrary pixel coordinate x = (x, y)ᵀ and the ith projected point q′i, n being the number of projected points;
the above formula has 2(n +3) to be solvedCoefficient of solution
Figure FDA00029404224000000410
α=(α1,α2,α3)T、β=(β1,β2,β3)TThe method is obtained by solving the following matrix equation:
Figure FDA0002940422400000051
wherein the content of the first and second substances,
Figure FDA0002940422400000052
dij=||q′j-q′ii, C is a weight coefficient, and I is a unit matrix; q ═ Q'1,...,q′n)TRepresenting homogeneous proxels, n being the number of proxels,
Figure FDA0002940422400000053
and
Figure FDA0002940422400000054
components of the proxel error in the X and Y directions;
after the deformation functions are found, for a mesh vertex vi on Ii, its corresponding point v′i on Ij is obtained by adding the deformation and then applying the multi-plane homography transformation:

v′i = H( vi + (g(vi), h(vi))ᵀ )

if the mesh vertex vi and its corresponding point v′i fall in the overlapping region of Ii and Ij, vi is collected to form the grid matching point set Mij; an element (vi, v′i) of Mij consists of the mesh vertex vi and its projected point v′i on Ij.
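The thin-plate-spline fit of claim 4 can be sketched in numpy as follows, under the standard TPS kernel U(d) = d² log d; the regularisation weight c corresponds to the coefficient C above, and this is an illustrative sketch rather than the patented solver.

```python
import numpy as np

def tps_fit(q, r, c=0.0):
    """Fit TPS coefficients so the deformation at projected points q
    (n, 2) matches the error components r (n,). Returns (w, a) with
    g(x) = a0 + a1*x + a2*y + sum_i w_i * U(|x - q_i|)."""
    q = np.asarray(q, float)
    r = np.asarray(r, float)
    n = len(q)
    d = np.linalg.norm(q[:, None, :] - q[None, :, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)      # TPS kernel U(d)
    P = np.hstack([np.ones((n, 1)), q])                 # homogeneous points Q
    A = np.block([[K + c * np.eye(n), P],
                  [P.T, np.zeros((3, 3))]])             # [[K+CI, Q], [Q^T, 0]]
    b = np.concatenate([r, np.zeros(3)])
    sol = np.linalg.solve(A, b)
    return sol[:n], sol[n:]

def tps_eval(x, q, w, a):
    """Evaluate the fitted deformation at points x (m, 2)."""
    x = np.asarray(x, float)
    q = np.asarray(q, float)
    d = np.linalg.norm(x[:, None, :] - q[None, :, :], axis=2)
    with np.errstate(divide="ignore", invalid="ignore"):
        K = np.where(d > 0, d**2 * np.log(d), 0.0)
    return a[0] + x @ a[1:] + K @ w
```

With c = 0 the spline interpolates the data exactly, which is a quick correctness check; the same fit is run once for the X errors (giving g) and once for the Y errors (giving h).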
5. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 4, wherein: the feature matching inlier set {(pi, qi)}i=1..n is obtained as follows:
1-1) for each pair of adjacent images Ii and Ij in the adjacency relation J, detecting the features of each adjacent image by the SIFT algorithm and matching them by the 2NN algorithm to obtain the initial matching point set of each adjacent image pair;
1-2) estimating an interior point set corresponding to a homography matrix from the initial matching point set by using a RANSAC algorithm;
1-3) extracting an interior point set corresponding to the homography matrix from the initial matching point set, and estimating an interior point set corresponding to another homography matrix by performing RANSAC algorithm on the remaining matching points again;
1-4) repeating the step 1-3) for a plurality of times until the number of the remaining matching points is less than a set threshold value;
1-5) combining the inlier sets corresponding to the homography matrices estimated and extracted in each step into the feature matching inlier set {(pi, qi)}i=1..n, wherein pi is the ith feature source point on Ii, qi is the ith feature target point on Ij, and n is the number of feature matching inliers.
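The multi-homography inlier extraction of steps 1-1) to 1-5) can be sketched as follows. DLT and RANSAC are implemented minimally, and the thresholds are assumptions; a production version would use a library RANSAC (e.g. OpenCV's findHomography) and real SIFT/2NN matches.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: homography from >= 4 point pairs."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    ph = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return ph[:, :2] / ph[:, 2:3]

def ransac_homography(src, dst, thresh=3.0, iters=500):
    """One RANSAC round: best homography and its inlier mask."""
    rng = np.random.default_rng(0)
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    best = None
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inl = err < thresh
        if best is None or inl.sum() > best[1].sum():
            best = (H, inl)
    return best

def multi_homography_inliers(src, dst, min_remaining=8):
    """Steps 1-2)..1-5): repeatedly run RANSAC, pull out each plane's
    inlier set, and merge all inlier sets (multi-plane scenes)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    keep = np.zeros(len(src), dtype=bool)
    remaining = np.arange(len(src))
    while len(remaining) >= min_remaining:
        H, inl = ransac_homography(src[remaining], dst[remaining])
        if inl.sum() < 4:
            break
        keep[remaining[inl]] = True
        remaining = remaining[~inl]
    return keep
```

On a noise-free single-plane example with two gross outliers, the merged mask keeps the plane's matches and rejects the outliers.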
6. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 4, wherein: the multi-plane homography matrix H is obtained by the following steps:
using the feature matching inlier set {(pi, qi)}, the multi-plane homography transformation matrix H between adjacent images is calculated by the direct linear transformation method.
7. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 1, wherein: the set V of mesh vertex coordinates of all images is acquired as follows:
inputting N adjacent images I1, I2, ..., IN, the number of images N being greater than or equal to 2, and acquiring the adjacency relation J between adjacent images;
constructing a global matching graph from the adjacent images and the adjacency relations between them;
down-sampling the adjacent images to work_scale, dividing the down-sampled images into grids of a given pixel size, and obtaining the mesh vertex coordinates Vi = (v1, ..., vm) of each image, wherein vk represents the kth mesh vertex coordinate and m is the number of vertices;
the set of mesh vertex coordinates of all images is V = (V1, ..., VN).
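The grid construction of claim 7 can be sketched as below; the 40-pixel cell size is an assumption standing in for the "given pixel size" of the claim.

```python
import numpy as np

def mesh_vertices(width, height, cell=40):
    """Vertex coordinates of a regular grid over a (width, height)
    image, with roughly `cell` pixels per mesh cell."""
    xs = np.linspace(0, width, int(np.ceil(width / cell)) + 1)
    ys = np.linspace(0, height, int(np.ceil(height / cell)) + 1)
    gx, gy = np.meshgrid(xs, ys)
    return np.stack([gx.ravel(), gy.ravel()], axis=1)   # (m, 2) vertices
```

The per-image arrays produced this way play the role of Vi; stacking them over all N images gives V = (V1, ..., VN).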
8. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 1, wherein: the reference picture is the first picture of a set of contiguous pictures.
9. The adjacent image stitching method based on elastic registration and mesh optimization as claimed in claim 1, wherein: the adjacent images are obtained by unmanned aerial vehicle shooting.
CN202110174293.5A 2021-02-07 2021-02-07 Adjacent image splicing method based on elastic registration and grid optimization Active CN112862683B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110174293.5A CN112862683B (en) 2021-02-07 2021-02-07 Adjacent image splicing method based on elastic registration and grid optimization


Publications (2)

Publication Number Publication Date
CN112862683A true CN112862683A (en) 2021-05-28
CN112862683B CN112862683B (en) 2022-12-06

Family

ID=75989340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110174293.5A Active CN112862683B (en) 2021-02-07 2021-02-07 Adjacent image splicing method based on elastic registration and grid optimization

Country Status (1)

Country Link
CN (1) CN112862683B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506216A (en) * 2021-06-24 2021-10-15 煤炭科学研究总院 Rapid suture line optimization method for panoramic image splicing
CN114387153A (en) * 2021-12-13 2022-04-22 复旦大学 Visual field expanding method for intubation robot
CN114913064A (en) * 2022-03-15 2022-08-16 天津理工大学 Large parallax image splicing method and device based on structure keeping and many-to-many matching
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107734268A (en) * 2017-09-18 2018-02-23 北京航空航天大学 A kind of structure-preserved wide baseline video joining method
CN107730528A (en) * 2017-10-28 2018-02-23 天津大学 A kind of interactive image segmentation and fusion method based on grabcut algorithms
CN109961398A (en) * 2019-02-18 2019-07-02 鲁能新能源(集团)有限公司 Fan blade image segmentation and grid optimization joining method
CN110136090A (en) * 2019-04-11 2019-08-16 中国地质大学(武汉) The robust elastic model unmanned plane image split-joint method of registration is kept with part
CN110428367A (en) * 2019-07-26 2019-11-08 北京小龙潜行科技有限公司 A kind of image split-joint method and device
CN110781903A (en) * 2019-10-12 2020-02-11 中国地质大学(武汉) Unmanned aerial vehicle image splicing method based on grid optimization and global similarity constraint
CN111915484A (en) * 2020-07-06 2020-11-10 天津大学 Reference image guiding super-resolution method based on dense matching and self-adaptive fusion
CN112308775A (en) * 2020-09-23 2021-02-02 中国石油大学(华东) Underwater image splicing method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
杨晨晓等: "基于单应性矩阵和内容保护变形的图像拼接", 《计算机应用研究》 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113506216A (en) * 2021-06-24 2021-10-15 煤炭科学研究总院 Rapid suture line optimization method for panoramic image splicing
CN113506216B (en) * 2021-06-24 2024-03-12 煤炭科学研究总院 Rapid suture line optimizing method for panoramic image stitching
CN114387153A (en) * 2021-12-13 2022-04-22 复旦大学 Visual field expanding method for intubation robot
CN114913064A (en) * 2022-03-15 2022-08-16 天津理工大学 Large parallax image splicing method and device based on structure keeping and many-to-many matching
CN114913064B (en) * 2022-03-15 2024-07-02 天津理工大学 Large parallax image splicing method and device based on structure maintenance and many-to-many matching
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115393196B (en) * 2022-10-25 2023-03-24 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging

Also Published As

Publication number Publication date
CN112862683B (en) 2022-12-06

Similar Documents

Publication Publication Date Title
CN112862683B (en) Adjacent image splicing method based on elastic registration and grid optimization
CN110211043B (en) Registration method based on grid optimization for panoramic image stitching
CN110648398B (en) Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
US11350073B2 (en) Disparity image stitching and visualization method based on multiple pairs of binocular cameras
US9740950B1 (en) Method and system for automatic registration of images
CN106447601B (en) Unmanned aerial vehicle remote sensing image splicing method based on projection-similarity transformation
CN104732482B (en) A kind of multi-resolution image joining method based on control point
CN108171791B (en) Dynamic scene real-time three-dimensional reconstruction method and device based on multi-depth camera
KR101175097B1 (en) Panorama image generating method
CN107767339B (en) Binocular stereo image splicing method
US20240169674A1 (en) Indoor scene virtual roaming method based on reflection decomposition
CN110111250B (en) Robust automatic panoramic unmanned aerial vehicle image splicing method and device
CN106485751B (en) Unmanned aerial vehicle photographic imaging and data processing method and system applied to foundation pile detection
CN103106688A (en) Indoor three-dimensional scene rebuilding method based on double-layer rectification method
CN110246161B (en) Method for seamless splicing of 360-degree panoramic images
Peña-Villasenín et al. 3-D modeling of historic façades using SFM photogrammetry metric documentation of different building types of a historic center
Li et al. A study on automatic UAV image mosaic method for paroxysmal disaster
CN104616247B (en) A kind of method for map splicing of being taken photo by plane based on super-pixel SIFT
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
CN108470324A (en) A kind of binocular stereo image joining method of robust
CN110717936B (en) Image stitching method based on camera attitude estimation
CN110781903A (en) Unmanned aerial vehicle image splicing method based on grid optimization and global similarity constraint
Pathak et al. Dense 3D reconstruction from two spherical images via optical flow-based equirectangular epipolar rectification
CN109472752A (en) More exposure emerging systems based on Aerial Images
Wan et al. Drone image stitching using local mesh-based bundle adjustment and shape-preserving transform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant