CN113313659A - High-precision image splicing method under multi-machine cooperative constraint


Info

Publication number
CN113313659A
CN113313659A
Authority
CN
China
Prior art keywords
image
point
splicing
points
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110450860.5A
Other languages
Chinese (zh)
Other versions
CN113313659B (en
Inventor
席建祥
杨小冈
卢瑞涛
谢学立
陈彤
郭杨
王乐
刘祉祎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Rocket Force University of Engineering of PLA
Original Assignee
Rocket Force University of Engineering of PLA
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Rocket Force University of Engineering of PLA filed Critical Rocket Force University of Engineering of PLA
Priority to CN202110450860.5A priority Critical patent/CN113313659B/en
Publication of CN113313659A publication Critical patent/CN113313659A/en
Application granted granted Critical
Publication of CN113313659B publication Critical patent/CN113313659B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/70 Denoising; Smoothing
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00 Road transport of goods or passengers
    • Y02T10/10 Internal combustion engine [ICE] based vehicles
    • Y02T10/40 Engine management systems

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a high-precision image stitching method under multi-machine cooperative constraint. Starting from multi-machine cooperative control, a multi-scale, multi-view image stitching model is built from the spatial pose information of multiple unmanned aerial vehicles. To address the ghosting and pixel breakage produced by a single homography matrix, an adaptive homography matrix method is proposed to improve the stitching result. To address the blurring of the common overlapping region and the color inconsistency that remain after stitching, a weighted smoothing algorithm is adopted: weight assignment produces a smooth transition across the overlapping images and effectively removes the color differences near the stitching seam. Algorithm performance is verified on a self-collected aerial data set, and the experimental results show that the proposed method stitches well, markedly improves registration accuracy, and meets the high real-time and high-precision requirements for stitching multi-scale, multi-view aerial images during multi-machine collaborative inspection flight.

Description

High-precision image splicing method under multi-machine cooperative constraint
Technical Field
The invention relates to the technical field of image data processing, in particular to a high-precision image splicing method under multi-machine cooperative constraint.
Background
Image stitching is the process of combining multiple images of the same scene with overlapping regions into a single complete image with a wide field of view and high resolution. Commonly used image registration methods include gray-scale-based, transform-domain-based and feature-based methods. Feature-based matching is the most widely applied and includes HOG (Histogram of Oriented Gradients) features, the Scale-Invariant Feature Transform (SIFT) algorithm, the Speeded-Up Robust Features (SURF) algorithm, and improved algorithms such as ORB (Oriented FAST and Rotated BRIEF). HOG features are invariant and robust to geometric and photometric deformation of the image, but their real-time performance is poor, they are sensitive to noise, and they handle occlusion poorly. The SIFT algorithm is robust and reliable, but its computational cost is large and cannot meet the real-time requirement of stitching. The SURF algorithm matches quickly and runs efficiently, but its scale and rotation invariance is not ideal. The ORB algorithm is fast, its running time is far shorter than that of SIFT and SURF, its real-time performance is good, and it is invariant to noise and to perspective transformation. In addition, patent CN105869120A discloses a real-time optimization method for image stitching that narrows the search range of feature points by locking the overlapping region and thereby increases stitching speed; the method focuses on real-time performance, but the stitching quality is limited. Patent CN108921780A discloses a fast stitching method for UAV aerial image sequences with good stability and little redundant information, but image detail is easily lost, which hinders the generation of a high-precision stitched result.
Because a single camera has a limited viewing angle and the environment is complex, it is difficult to obtain a high-resolution, comprehensive wide-area result image in real time. Using multiple UAVs carrying multi-mode sensors to patrol a mission area cooperatively can greatly improve patrol efficiency, but two problems remain in UAV patrol: first, information sharing and communication among the UAVs, i.e. transmitting large volumes of complex data within a very short time while avoiding information asynchrony; second, post-processing of the images captured by the different UAVs, i.e. separating the image information acquired by each vehicle, where differences in flight height, relative position and attitude cause the aerial images acquired by the sensors to differ in scale and viewing angle. Existing UAV aerial-image stitching methods perform well when the images lie in the same plane, but in multi-UAV collaborative shooting images of several scales and viewing angles frequently intersect and overlap, and existing algorithms cannot meet the high real-time and high-precision requirements at the same time.
To solve these problems, how to reasonably exploit a multi-machine cooperation strategy to stitch multi-scale, multi-view aerial images has become an urgent issue.
Disclosure of Invention
In view of the above problems, the invention provides a high-precision image splicing method under multi-machine cooperative constraint, which addresses the low splicing efficiency and poor quality caused by the large volume and complex variety of data in the cooperative inspection process of multiple unmanned aerial vehicles.
The core idea of the invention is as follows:
first, from the perspective of multi-machine cooperative control, aerial images are acquired in real time and a fast stitching scheme is selected that combines stitching across unmanned aerial vehicles with stitching across the sequence frames of a single vehicle; then, using the ORB feature extraction algorithm and fully accounting for the different image transformation relations of different scales and viewing angles under multi-camera shooting, an adaptive homography matrix is constructed to complete high-precision registration and stitching of the images; finally, a weighted smoothing algorithm is used to achieve a smooth transition across the overlapping images, eliminating visual defects such as breaks and obvious seams in the stitched wide-field image and thereby improving the stitching quality.
The technical solution for realizing the purpose of the invention is as follows:
a high-precision image splicing method under multi-machine cooperative constraint is characterized by comprising the following steps:
step 1: acquiring ground images in real time through multiple sensors, and constructing a multi-machine cooperation real-time image imaging model by combining position, posture and angle information among multiple unmanned aerial vehicles, so as to determine an actual imaging area;
step 2: extracting a feature point set of the original image in the actual imaging area by the ORB (Oriented FAST and Rotated BRIEF) method, performing coarse image matching, purifying the coarse matching result with the random sample consensus algorithm, and removing the mismatched points;
step 3: constructing an adaptive homography matrix, mapping the purified images to the same coordinate system for preliminary splicing, then performing time-sequence correction on each image sequence, and then performing color correction with a weighted smoothing algorithm;
step 4: outputting the result image after the image splicing is finished.
Further, the specific operation steps of step 1 include:
step 11: resolving the acquired ground image data to obtain the motion parameters of the unmanned aerial vehicle, and realizing the flight control of the airframe;
step 12: obtaining the UAV attitude angles (pitch angle θ, roll angle φ, yaw angle ψ) and information such as coordinates and flying height using the inertial navigation units, GPS and barometers carried on the multiple UAVs;
step 13: respectively establishing a ground coordinate system, a body coordinate system, a camera coordinate system and an image coordinate system;
step 14: combining the UAV attitude angles, establishing the coordinate transformation relation p_g = L_b·p_b between the body coordinate system and the ground coordinate system, and building the multi-machine cooperative real-time image imaging model.
Further, the specific operation steps of step 2 include:
step 21: selecting any pixel point S in the original image, drawing a circle of radius 3 pixels with S as the centre, detecting the 16 pixel points falling on the circle, recording as h the number of consecutive pixel points among the 16 points on the neighbourhood circle whose gray value satisfies formula (2), and judging whether h is greater than a preset threshold ε_d; if so, S is judged to be a feature point, the gray-value condition satisfied by a pixel being

N = |I(x) − I(s)| > ε_d   (2)

wherein I(x) is the gray value of any point on the circumference, I(s) is the gray value of the circle centre, ε_d is the gray-difference threshold, and N denotes the gray difference;
step 22: directly detecting the gray values of the four pixel positions of each pixel point S in the vertical direction of the circle, I(1) and I(9), and in the horizontal direction, I(5) and I(13), and counting the number M of the four position points I(t) whose gray difference from the selected point is greater than ε_d:

M = size(I ∈ {|I(t) − I(s)| > ε_d}), ε_d > 0   (3);

if formula (3) is satisfied with M ≥ 3, S is judged to be a feature point; otherwise the point is excluded directly;
step 23: calculating the Hamming distance d between any feature point S selected from one image and every feature point V in the other images, sorting the obtained distances d, selecting the point with the smallest distance as the matching point, and establishing coarse matching point pairs to form a feature point set;
step 24: and screening the obtained feature point set by adopting a random sampling consistency method, and finally removing mismatching points to obtain a purified matching feature point set.
Further, the specific operation steps of step 24 include:
step 241: selecting Q points from the obtained feature point set, and calculating, according to the set registration line model, the set P*_2 of points on the second image onto which all feature points P_1 of the first image are mapped, the mapping relation being

P*_2^m = f(P_1^m), m = 1, 2, …, Q;
step 242: calculating the Euclidean distance between each point of P*_2 and the corresponding point of the second-image feature point set P_2, setting the threshold to δ, and counting the number N_T of feature points in P*_2 whose Euclidean distance is less than δ:

N_T = size(P*_2^m ∈ {||P_2^m − P*_2^m||_2 < δ}), δ > 0;
Step 243: re-randomly selecting Q points, repeating the K steps 241-242, and recording N timesTUntil the iteration is finished;
step 244: selecting the registration model for which N = max(N_Tk) (k = 1, 2, …, K) as the final fitting result, thereby rejecting the mismatched points.
Further, the specific operation steps of step 3 include:
step 31: using the moving DLT method, fitting the adaptive homography matrix linearly from the coordinate values of the purified groups of matched target point pairs;
step 32: performing time-sequence correction on the multi-UAV aerial image sequence using the image centre position after homography transformation; the time-sequence moving parameter flag_o of the corresponding image takes the values

flag_o = 1 if P_ox > L;  flag_o = 0 if 0 ≤ P_ox ≤ L;  flag_o = −1 if P_ox < 0   (5)

wherein P_ox is the x coordinate of the centre point of the o-th homography-transformed image and L is the width of the reference image;
step 33: traversing each frame of the image sequence and correcting it according to the value of flag_o: when flag_o = 1, the next frame of the image to be spliced is used for splicing; when flag_o = −1, the previous frame of the image to be spliced is used for splicing; when flag_o = 0, the corrected image sequence is output;
step 34: and recalculating new pixel values of the overlapped area by using the linear gradient of the image, and performing color correction.
Further, the specific operation steps of step 31 include:
step 311: letting the feature matching point sets of the two images be X*' and X* respectively, and constructing the adaptive homography transformation relation X*' = H*X*;
Step 312: changing the constructed adaptive homography conversion relation into an implicit form, and linearizing into: x*'×H*X*When the value is equal to 0, the following can be obtained:
Figure BDA0003038332330000062
step 313: denoting the weighted linearized matrix as A*, whose rows are the blocks a_i above multiplied by the weights w_i*, A*·h* = 0 can be obtained; performing singular value decomposition on A* gives the right singular vector of the adaptive homography matrix H*:

h* = argmin_h Σ_i ||w_i*·a_i·h||²,  subject to ||h|| = 1   (4)

wherein the weight w_i* = exp(−||x* − x_i||² / σ²) indicates that pixel data closer to the control point x* is more important, and the variance σ² measures how far the measurement point deviates from the control point;
step 314: from the obtained right singular vector h*, the adaptive homography matrix H* can be reconstructed.
Further, the formula for calculating the new pixel value of the overlap region in step 34 is:
I(x, y) = I_l(x_l, y_l)·(1 − α) + I_r(x_r, y_r)·α   (6),

wherein α (a normalized weight between 0 and 1) represents the distance from the abscissa of the pixel point to the abscissa of the left boundary of the overlapping region, (x_l, y_l) are the left-image coordinates before splicing, (x_r, y_r) are the right-image coordinates before splicing, I_l(x_l, y_l) is the left-image pixel before splicing, I_r(x_r, y_r) is the right-image pixel before splicing, (x_lmax, y_lmax) is the maximum of the left-image coordinates before splicing, (x, y) are the spliced-image coordinates, and I(x, y) is the spliced-image pixel.
Compared with the prior art, the method has the following beneficial effects:
firstly, aiming at the transformation problem of images with different scales and different visual angles under the complex and changeable environment and the cooperation of multiple machines, the invention provides a self-adaptive homography matrix method, which basically eliminates the phenomena of double images, pixel fracture and the like existing in the traditional homography matrix.
Secondly, the invention also adopts a weighted smoothing algorithm, realizes the smooth transition of the overlapped part images through weight distribution, and effectively solves the problems of fuzzy overlapped areas and inconsistent colors after splicing by combining a linear gradual change thought.
In conclusion, the feature registration accuracy of the stitched image obtained by the proposed method is markedly improved, meeting the high-precision requirement of image stitching.
Drawings
FIG. 1 is a flow chart of a stitching algorithm of the present invention;
FIGS. 2(a) - (c) are schematic diagrams of the transformation of each coordinate system;
FIG. 3 is a schematic diagram of feature point matching of four scene aerial photography source images;
FIG. 4 is two different sets of aerial source images;
FIG. 5 is a splicing experimental graph of two groups of different aerial photography source images;
fig. 6 is a multi-scale multi-view image stitching experimental diagram under four scenes.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present invention, the following further describes the technical solution of the present invention with reference to the drawings and the embodiments.
The invention provides a high-precision image splicing optimization method under multi-machine cooperative constraint, which comprises the following steps of:
step 1: acquiring images in real time by using multiple sensors, and establishing a multi-machine cooperation real-time image imaging model by combining position, posture and angle information among multiple unmanned aerial vehicles;
the multi-sensor real-time acquisition ground image, solve image data and handle and obtain unmanned aerial vehicle motion parameter, realize the flight control of organism. The attitude angles (pitch angle theta, roll angle phi and yaw angle) of the unmanned aerial vehicles are obtained by combining inertial navigation units (IMUs), GPS and barometers carried on the unmanned aerial vehicles
Figure BDA0003038332330000081
) Coordinates and fly height. A multi-machine collaborative imaging model is established based on pose information acquired by each module carried by the unmanned aerial vehicle, so that feature information of a patrol area can be conveniently acquired. The method comprises the following specific steps:
(1) establishing a coordinate system
First, a ground coordinate system p_g = (x_g, y_g, z_g)^T is established, assuming the ground is a plane on which the ground points and obstacles all lie. The coordinates (x_g, y_g, z_g) are the components of a ground point in the x, y and z directions, with the axes oriented to satisfy the right-hand rule;
secondly, a body coordinate system p_b = (x_b, y_b, z_b)^T is established with the centre of mass of the airframe as the origin, where the x_b axis points to the nose, the y_b axis points to the right of the fuselage, and the z_b axis points below the airframe;
thirdly, a camera coordinate system p_ci = (x_ci, y_ci, z_ci)^T is established. Assuming each UAV i carries one camera, which may be mounted in any position, the ground coordinates are chosen to describe the camera position. The relation between the camera coordinate system and the ground coordinate system is described by a rotation matrix R_ci and a translation vector t_ci as p_g = R_ci·p_ci + t_ci; the transformation between the ground coordinate system and the camera coordinate system is shown in FIG. 2(a), and the transformation between the body coordinate system and the ground coordinate system is shown in FIG. 2(b);
finally, an image coordinate system p_Ii = (x_I, y_I)^T is established. The image coordinate system is a two-dimensional coordinate system defined on the real-time image, with the upper-left corner of the image as the origin, the x_I axis horizontal and pointing to the right, and the y_I axis perpendicular to x_I and pointing downward. The coordinate transformation between the image coordinate system and the camera coordinate system is p_Ii = R_Ii·p_ci, where the transformation matrix R_Ii relates the camera coordinate system to the image coordinate system and depends on the camera parameters, as shown in FIG. 2(c).
(2) Establishing an imaging model
When the airborne camera images the ground, the imaging area can be regarded as the intersection of a rectangular pyramid (the camera's viewing frustum) with the ground plane, and the actual imaging area can be determined from the linear equations of the four edges of each UAV's frustum. Combining the UAV attitude angles, the coordinate transformation between the body coordinate system and the ground coordinate system is established as p_g = L_b·p_b. The transformation matrix L_b about the three axes x, y and z is determined by the UAV attitude angles θ, φ and ψ and is given by formula (1):

[formula (1): the 3 x 3 attitude rotation matrix L_b expressed in terms of θ, φ and ψ; rendered as an image in the original text]

If there are n image coordinates p_Ii (i = 1, 2, …, n) captured by the UAVs that correspond to the same ground point p_g, i.e. p_g lies within the imaging range of n UAVs, the corresponding images can be fused in the later stage.
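For illustration only, the following Python sketch evaluates the body-to-ground mapping p_g = L_b·p_b once the attitude angles are known. The rotation order (yaw-pitch-roll about z, y, x) and the optional offset t are assumptions made for the sketch; the patent only states that L_b is determined by θ, φ and ψ.

```python
# Illustrative sketch (not part of the claimed method): body-to-ground transform p_g = L_b p_b.
# Assumption: a conventional yaw-pitch-roll (Z-Y-X) rotation order.
import numpy as np

def attitude_rotation(theta, phi, psi):
    """Rotation matrix L_b built from pitch theta, roll phi and yaw psi (radians)."""
    rx = np.array([[1, 0, 0],
                   [0, np.cos(phi), -np.sin(phi)],
                   [0, np.sin(phi),  np.cos(phi)]])
    ry = np.array([[ np.cos(theta), 0, np.sin(theta)],
                   [0, 1, 0],
                   [-np.sin(theta), 0, np.cos(theta)]])
    rz = np.array([[np.cos(psi), -np.sin(psi), 0],
                   [np.sin(psi),  np.cos(psi), 0],
                   [0, 0, 1]])
    return rz @ ry @ rx

def body_to_ground(p_b, theta, phi, psi, t=np.zeros(3)):
    """Map a point from the body frame to the ground frame: p_g = L_b p_b (+ optional offset t)."""
    return attitude_rotation(theta, phi, psi) @ np.asarray(p_b, dtype=float) + t
```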
Step 2: and (3) extracting gray value information of the continuous pixel points by adopting an ORB method, further judging image characteristic points, and performing characteristic matching and purification on the extracted characteristic points. The method comprises the following specific steps:
(1) feature extraction
The ORB method judges image feature points from the gray values of consecutive pixels: the gray value of any pixel S in the image is compared with the gray values of the points in its circular neighbourhood, and the feature point set of the source image is extracted. The specific process is as follows:
Firstly, global detection of the candidate points: select any pixel point S in the image, draw a circle of radius 3 pixels centred on S, and consider the 16 pixels on its circumference. Let h be the number of consecutive pixels among these 16 whose gray value satisfies formula (2), and judge whether h exceeds a preset threshold, normally set to 12; if h > 12, S is judged to be a feature point. The gray-value condition satisfied by a pixel is

N = |I(x) − I(s)| > ε_d   (2)

wherein I(x) is the gray value of any point on the circumference, I(s) is the gray value of the circle centre, ε_d is the gray-difference threshold, and N denotes the gray difference.
Secondly, optimized detection of the candidate points: the ORB optimized detection is used to accelerate feature-point extraction and improve detection efficiency. For each pixel point S, directly examine the gray values of the four circle pixels in the vertical direction, I(1) and I(9), and in the horizontal direction, I(5) and I(13), and count the number M of these four positions I(t) (t = 1, 5, 9, 13) whose gray difference from the selected point exceeds ε_d:

M = size(I ∈ {|I(t) − I(s)| > ε_d}), ε_d > 0   (3),

if S satisfies formula (3) with M ≥ 3, S is judged to be a feature point; otherwise the point is excluded directly. In this way the feature point set of each original image is screened out, facilitating the subsequent feature matching.
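As an illustrative sketch of the candidate tests in formulas (2) and (3), the fragment below first applies the quick check on positions I(1), I(5), I(9), I(13) and then the full check on the 16-pixel circle. The circle offsets and the wrap-around handling of the contiguity count follow the standard FAST-16 layout, which is an assumption of the sketch, as are the default values of eps_d and h_min.

```python
# Illustrative sketch of formulas (2)-(3); img is a 2-D grayscale array and (x, y) is
# assumed to lie at least 3 pixels away from the image border.
CIRCLE = [(0, -3), (1, -3), (2, -2), (3, -1), (3, 0), (3, 1), (2, 2), (1, 3),
          (0, 3), (-1, 3), (-2, 2), (-3, 1), (-3, 0), (-3, -1), (-2, -2), (-1, -3)]

def is_feature_point(img, x, y, eps_d=20, h_min=12):
    s = int(img[y, x])
    ring = [int(img[y + dy, x + dx]) for dx, dy in CIRCLE]
    # quick test on the vertical/horizontal positions I(1), I(5), I(9), I(13)
    quick = [ring[0], ring[4], ring[8], ring[12]]
    M = sum(abs(t - s) > eps_d for t in quick)
    if M < 3:
        return False                      # excluded directly, as in formula (3)
    # full test: longest run of consecutive circle pixels with |I(x) - I(s)| > eps_d
    diff = [abs(t - s) > eps_d for t in ring] * 2   # duplicate to handle wrap-around
    h = run = 0
    for d in diff:
        run = run + 1 if d else 0
        h = max(h, run)
    return h > h_min
```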
(2) Feature matching and purification
Firstly, feature matching is performed on the extracted feature points: the Hamming distance d between features is computed and used as the evaluation criterion of feature-matching similarity. The Hamming distance between any feature point S in one image and each feature point V in the other images is

d(S, V) = Σ_{i* = 0}^{n−1} (S_{i*} ⊕ V_{i*}),

wherein i* = 0, 1, …, n−1, S and V are both n-bit binary descriptors, and ⊕ denotes the exclusive-or operation. The obtained distances are sorted, the closest point is selected as the matching point, and coarse matching point pairs are established. Since the matching result contains many mismatches, the mismatched points must be filtered out by a screening mechanism.
Secondly, feature purification is performed on the matched feature point set: the feature point set is screened with the Random Sample Consensus (RANSAC) method. Q points are selected and, according to the assumed registration line model, the set P*_2 of points on the second image to which all feature points P_1 of the first image are mapped is computed, satisfying the mapping relation

P*_2^m = f(P_1^m), m = 1, 2, …, Q;

the Euclidean distance between each point of P*_2 and the corresponding point of the second-image feature point set P_2 is computed and, with threshold δ, the number N_T of feature points in P*_2 whose Euclidean distance is less than δ is counted:

N_T = size(P*_2^m ∈ {||P_2^m − P*_2^m||_2 < δ}), δ > 0;

Q points are then randomly selected again and the above operation is repeated K times, recording N_T each time, until the iterations are finished;

finally, the registration model with N = max(N_Tk) (k = 1, 2, …, K) is selected as the final fitting result, and the mismatched points are rejected.
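The purification loop described above can be sketched as follows. A homography is used here as the fitted mapping f purely for illustration, since the patent speaks of an assumed registration model; the values of Q, K and δ are likewise illustrative.

```python
# Illustrative sketch of the K-iteration purification loop (steps of the RANSAC screening).
import numpy as np
import cv2

def ransac_purify(pts1, pts2, Q=4, K=1000, delta=3.0):
    pts1, pts2 = np.float64(pts1), np.float64(pts2)
    best_inliers, best_count = None, -1
    for _ in range(K):
        idx = np.random.choice(len(pts1), Q, replace=False)
        f, _ = cv2.findHomography(pts1[idx], pts2[idx], 0)   # fit the model on Q samples
        if f is None:
            continue
        proj = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), f).reshape(-1, 2)
        dist = np.linalg.norm(proj - pts2, axis=1)
        inliers = dist < delta                               # Euclidean residual below delta
        if inliers.sum() > best_count:
            best_count, best_inliers = inliers.sum(), inliers
    return pts1[best_inliers], pts2[best_inliers]            # purified matching point sets
```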
Step 3: constructing an adaptive homography matrix, mapping the images to the same coordinate system for splicing, performing time-sequence correction on each image sequence, and performing color correction with a weighted smoothing algorithm to finish the image splicing. The specific steps are as follows:
(1) adaptive homography transformation
Calculating homography matrixes among aerial images, mapping the images to the same coordinate system according to the homography matrixes, and splicing the images into a rough image;
and then the moving DLT (Direct Linear Transform) method is used to fit the homography matrix linearly from the matched target points. The fitting process is as follows: let the feature matching point sets of the two images be X and X', and construct the homography relation X' = HX, i.e.:

(x' y' 1)^T = H·(x y 1)^T,

wherein (x' y' 1)^T are the coordinates of a feature point in X', (x y 1)^T are the coordinates of the corresponding feature point in X, and H is the 3 x 3 homography matrix

H = [h1 h2 h3; h4 h5 h6; h7 h8 h9].

Let the rows of H be denoted r_j (j = 1, 2, 3) and stack them into the 9-dimensional vector h = (r_1, r_2, r_3)^T. Writing the relation in implicit form and linearizing it as x' × Hx = 0 gives, for each matched pair,

a_i·h = 0,  with  a_i = [0^T  −x^T  y'·x^T; x^T  0^T  −x'·x^T],  x = (x, y, 1)^T.

Denote the matrix obtained by stacking these blocks as A; then Ah = 0 is obtained and h is solved. Only the first two rows of each block are independent, so they are taken and stacked into A ∈ R^(2N×9); SVD (Singular Value Decomposition) is performed on A, the right singular vector corresponding to the smallest singular value of A is taken as h, and H is reconstructed from h;
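A minimal NumPy sketch of this DLT construction is given below: the 2x9 rows a_i are stacked into A ∈ R^(2N×9), h is taken as the right singular vector of A associated with the smallest singular value, and H is obtained by reshaping h into a 3x3 matrix.

```python
# Illustrative sketch of the standard DLT estimate of a homography.
import numpy as np

def dlt_homography(X, Xp):
    """X, Xp: (N, 2) arrays of matched points (x, y) and (x', y')."""
    rows = []
    for (x, y), (xp, yp) in zip(X, Xp):
        p = np.array([x, y, 1.0])
        rows.append(np.concatenate([np.zeros(3), -p, yp * p]))
        rows.append(np.concatenate([p, np.zeros(3), -xp * p]))
    A = np.array(rows)                       # A in R^(2N x 9)
    _, _, Vt = np.linalg.svd(A)
    h = Vt[-1]                               # right singular vector of the smallest singular value
    H = h.reshape(3, 3)
    return H / H[2, 2]                       # normalise so that the last entry is 1
```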
Finally, based on the construction process of the homography matrix H, a position-dependent adaptive homography matrix H* is designed that satisfies the transformation relation X*' = H*X* and carries weights adapted to the data. This removes the limitation that a traditional single homography is valid only for views of the same plane, and overcomes the ghosting and registration errors caused by a single homography transformation. The specific operation steps are as follows:
Step 1: let the feature matching point sets of the two images be X*' and X* respectively, and construct the adaptive homography transformation relation X*' = H*X*;
Step 2: write the constructed adaptive homography relation in implicit form and linearize it as X*' × H*X* = 0, which for each matched pair x* = (x*, y*, 1)^T with match (x*', y*') gives

[0^T  −x*^T  y*'·x*^T; x*^T  0^T  −x*'·x*^T]·h* = 0,

where h* is the 9-dimensional vector formed from the rows of H*;
Step 3: denote the weighted linearized matrix as A*, whose rows are the blocks a_i above multiplied by the weights w_i*; then A*·h* = 0 can be obtained, and performing singular value decomposition on A* gives the right singular vector of the adaptive homography matrix H*:

h* = argmin_h Σ_i ||w_i*·a_i·h||²,  subject to ||h|| = 1   (4)

wherein the weight w_i* = exp(−||x* − x_i||² / σ²) indicates that pixel data closer to the control point x* is more important, and the variance σ² measures how far the measurement point deviates from the control point;
Step 4: from the vector h* obtained by formula (4), the adaptive homography matrix H* is reconstructed.
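A minimal sketch of the position-dependent estimate of formula (4) follows: the DLT rows are re-weighted by w_i* = exp(−||x* − x_i||²/σ²) around a control point x*, and H* is recovered from the smallest right singular vector of the weighted matrix. The values of σ and the lower clamp γ on the weights are illustrative assumptions, not values fixed by the invention.

```python
# Illustrative sketch of the weighted (moving DLT) estimate in formula (4).
import numpy as np

def build_dlt_rows(X, Xp):
    """Same 2x9 rows a_i as in the DLT sketch above, factored into a helper."""
    rows = []
    for (x, y), (xp, yp) in zip(X, Xp):
        p = np.array([x, y, 1.0])
        rows.append(np.concatenate([np.zeros(3), -p, yp * p]))
        rows.append(np.concatenate([p, np.zeros(3), -xp * p]))
    return np.array(rows)

def adaptive_homography(X, Xp, x_star, sigma=12.0, gamma=0.01):
    A = build_dlt_rows(X, Xp)
    w = np.exp(-np.sum((np.asarray(X, float) - x_star) ** 2, axis=1) / sigma ** 2)
    w = np.maximum(w, gamma)              # clamp (assumption) so distant points never vanish
    W = np.repeat(w, 2)                   # each correspondence owns two rows of A
    _, _, Vt = np.linalg.svd(W[:, None] * A)
    H_star = Vt[-1].reshape(3, 3)
    return H_star / H_star[2, 2]
```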
(2) Timing correction
The invention performs time-sequence correction on the multi-UAV aerial image sequence using the image centre position after homography transformation. The correction is governed by the time-sequence moving parameter flag_o of the corresponding image, whose value is given by formula (5):

flag_o = 1 if P_ox > L;  flag_o = 0 if 0 ≤ P_ox ≤ L;  flag_o = −1 if P_ox < 0   (5)

wherein P_ox is the x coordinate of the centre point of the o-th homography-transformed image and L is the width of the reference image. Each frame of the image sequence is traversed and corrected according to flag_o until flag_o = 0, at which point the corrected image sequence is output.
(3) Color correction
Aiming at the problems of blurring, ghosting and color inconsistency of the same overlapped area after splicing, the linear gradual change of the image is adopted to recalculate a new pixel value of the overlapped area, and the splicing effect is optimized through color correction.
Assuming α represents the distance from the abscissa of the pixel point to the abscissa of the left boundary of the overlapping region (normalized to lie between 0 and 1), the gray value of a pixel of the stitched image is given by

I(x, y) = I_l(x_l, y_l)·(1 − α) + I_r(x_r, y_r)·α   (6),

wherein (x_l, y_l) are the left-image coordinates before splicing, (x_r, y_r) are the right-image coordinates before splicing, I_l(x_l, y_l) is the left-image pixel before splicing, I_r(x_r, y_r) is the right-image pixel before splicing, (x_lmax, y_lmax) is the maximum of the left-image coordinates before splicing, (x, y) are the spliced-image coordinates, and I(x, y) is the spliced-image pixel.
By the processing, a good splicing effect can be realized, and a well spliced image is obtained.
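A minimal sketch of the linear-gradient blending of formula (6) is given below, assuming both images have already been warped onto the common stitched canvas and that the overlap spans the columns [x_left, x_right]; these variable names are illustrative.

```python
# Illustrative sketch of the weighted-smoothing (linear gradient) blend of formula (6).
import numpy as np

def blend_overlap(left_warp, right_warp, x_left, x_right):
    """left_warp, right_warp: images warped onto the same canvas (zeros where undefined)."""
    out = np.where(right_warp > 0, right_warp, left_warp).astype(np.float64)
    width = max(x_right - x_left, 1)
    for x in range(x_left, x_right):
        alpha = (x - x_left) / width                 # 0 at the left boundary, 1 at the right
        col = left_warp[:, x] * (1.0 - alpha) + right_warp[:, x] * alpha
        both = (left_warp[:, x] > 0) & (right_warp[:, x] > 0)   # blend only where both exist
        out[:, x][both] = col[both]
    return out.astype(np.uint8)
```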
Examples
The verification of the invention is completed in a multi-unmanned aerial vehicle cooperative target detection and identification system with the registration number of 2020SR 1088587.
1. Experimental Environment
CPU:Intel Xeon E5-1650 v4;
RAM:32GB;
GPU: NVIDIA TITAN X; Windows 10; Visual Studio 2015 + Anaconda 3.5 + Python 3.6.
2. Procedure of experiment
The experiments use aerial images collected by the multi-UAV flight platforms during patrol flight trials to verify the performance of the proposed stitching algorithm, and comprise two parts:
(1) splicing experiment for two groups of different aerial photography source images
According to the steps of the splicing method provided by the invention, two groups of aerial source images from different sources are effectively spliced as shown in figure 4, the splicing position is smooth, and the splicing result is shown in figure 5. As can be seen from the figure, the method effectively expands the visual angle range of the image, retains the respective image characteristics of the two source images, and effectively displays the key information of pits, people, lamp boards and the like which are shielded by the big tree in the source images.
(2) Multi-scale and multi-view image splicing experiment under four scenes
Four experimental scenes are constructed by using a plurality of unmanned aerial vehicles to patrol scenes of key roads in a complex area, the matching schematic diagram of feature points of the aerial source images in the four scenes is shown in figure 3, the image splicing is carried out in each scene according to each step of the splicing method provided by the invention, and the splicing result is shown in figure 6. As can be seen from the figure, the method has high splicing precision and no image distortion.
3. Evaluation of splicing results
In order to fully verify the splicing method provided by the invention, the splicing effect is objectively and comprehensively evaluated by adopting evaluation parameters such as RMSE (Root Mean Square Error), Dice distance, Hausdorff distance and the like.
(1) RMSE evaluation
The Root Mean Square Error (RMSE) is the square root of the mean squared deviation between the positions of the registered feature points of the image to be registered and the corresponding pixel positions of the transformed reference image:

RMSE = sqrt( (1/n)·Σ_{q=1}^{n} (Δx_q² + Δy_q²) ),

wherein Δx_q and Δy_q are the errors of the q-th registered feature point in the x and y directions, and n is the number of registered feature points.
(2) Dice distance evaluation
The Dice distance represents the degree to which two sets contain each other:

Dice = 2·TP / (|GT| + |PR|),

wherein TP is the number of correctly matched point pairs, and GT and PR are the feature point sets of the target image and the transformed image, respectively.
(3) Hausdorff distance evaluation
The Hausdorff distance is used to calculate the match of the two sets of points:
H(A,B)=max(h(A,B),h(B,A)),
wherein h(A, B) = max_{a∈A} min_{b∈B} ||a − b||_2 and h(B, A) = max_{b∈B} min_{a∈A} ||b − a||_2.
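For reference, the three evaluation measures can be sketched as follows; the function and variable names are illustrative.

```python
# Illustrative sketches of the RMSE, Dice distance and Hausdorff distance used below.
import numpy as np

def rmse(dx, dy):
    """dx, dy: per-point registration errors in the x and y directions."""
    return np.sqrt(np.mean(np.asarray(dx) ** 2 + np.asarray(dy) ** 2))

def dice(tp, n_gt, n_pr):
    """tp: number of correctly matched pairs; n_gt, n_pr: sizes of the two feature sets."""
    return 2.0 * tp / (n_gt + n_pr)

def hausdorff(A, B):
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)   # pairwise distances
    h_ab = d.min(axis=1).max()       # h(A, B)
    h_ba = d.min(axis=0).max()       # h(B, A)
    return max(h_ab, h_ba)
```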
Using these three evaluation indexes, the proposed method is compared with SURF- and SIFT-based image registration algorithms; the comparison results are shown in Table 1:
TABLE 1 comparison of quantization indices for different methods
[Table 1: quantitative comparison of the proposed method with the SURF- and SIFT-based methods on the RMSE, Dice and Hausdorff indexes; the table is rendered as images in the original text.]
As Table 1 shows, the proposed method outperforms the SURF and SIFT algorithms on all three evaluation criteria (RMSE, Dice and Hausdorff), its running time is short, and the feature-point matching error is markedly reduced. The method copes with the complex and changeable experimental environment and meets the high real-time and high-precision requirements for stitching multi-scale, multi-view aerial images during multi-machine collaborative inspection flight.
Those not described in detail in this specification are within the skill of the art. Although the present invention has been described in detail with reference to the foregoing embodiments, it will be apparent to those skilled in the art that various changes in the embodiments and modifications of the invention can be made, and equivalents of some features of the invention can be substituted, and any changes, equivalents, improvements and the like, which fall within the spirit and principle of the invention, are intended to be included within the scope of the invention.

Claims (7)

1. A high-precision image splicing method under multi-machine cooperative constraint is characterized by comprising the following steps:
step 1: acquiring ground images in real time through multiple sensors, and constructing a multi-machine cooperation real-time image imaging model by combining position, posture and angle information among multiple unmanned aerial vehicles, so as to determine an actual imaging area;
step 2: extracting a feature point set of the original image in the actual imaging area by the ORB (Oriented FAST and Rotated BRIEF) method, performing coarse image matching, purifying the coarse matching result with the random sample consensus algorithm, and removing the mismatched points;
step 3: constructing an adaptive homography matrix, mapping the purified images to the same coordinate system for preliminary splicing, then performing time-sequence correction on each image sequence, and then performing color correction with a weighted smoothing algorithm;
step 4: outputting the result image after the image splicing is finished.
2. The method for splicing high-precision images under multi-machine cooperative constraint according to claim 1, wherein the specific operation steps of step 1 comprise:
step 11: resolving the acquired ground image data to obtain the motion parameters of the unmanned aerial vehicle, and realizing the flight control of the airframe;
step 12: obtaining the UAV attitude angles (pitch angle θ, roll angle φ, yaw angle ψ) and information such as coordinates and flying height using the inertial navigation units, GPS and barometers carried on the multiple UAVs;
step 13: respectively establishing a ground coordinate system, a body coordinate system, a camera coordinate system and an image coordinate system;
step 14: combining the UAV attitude angles, establishing the coordinate transformation relation p_g = L_b·p_b between the body coordinate system and the ground coordinate system, and building the multi-machine cooperative real-time image imaging model.
3. The method for splicing images with high precision under the multi-machine cooperative constraint as claimed in claim 1, wherein the specific operation steps of step 2 include:
step 21: selecting any pixel point S in the original image, drawing a circle of radius 3 pixels with S as the centre, detecting the 16 pixel points falling on the circle, recording as h the number of consecutive pixel points among the 16 points on the neighbourhood circle whose gray value satisfies formula (2), and judging whether h is greater than a preset threshold ε_d; if so, S is judged to be a feature point, the gray-value condition satisfied by a pixel being

N = |I(x) − I(s)| > ε_d   (2)

wherein I(x) is the gray value of any point on the circumference, I(s) is the gray value of the circle centre, ε_d is the gray-difference threshold, and N denotes the gray difference;
step 22: directly detecting the gray values of the four pixel positions of each pixel point S in the vertical direction of the circle, I(1) and I(9), and in the horizontal direction, I(5) and I(13), and counting the number M of the four position points I(t) whose gray difference from the selected point is greater than ε_d:

M = size(I ∈ {|I(t) − I(s)| > ε_d}), ε_d > 0   (3);

if formula (3) is satisfied with M ≥ 3, S is judged to be a feature point; otherwise the point is excluded directly;
step 23: calculating Hamming distances d between any one feature point S selected from one image and all feature points V in other images, sequencing the obtained distances d, selecting a point with the closest distance as a matching point, and establishing a rough matching point pair to form a feature point set;
step 24: and screening the obtained feature point set by adopting a random sampling consistency method, and finally removing mismatching points to obtain a purified matching feature point set.
4. The method for splicing images with high precision under the multi-machine cooperative constraint as claimed in claim 3, wherein the specific operation steps of step 24 include:
step 241: selecting Q points from the obtained feature point set, and calculating, according to the set registration line model, the set P*_2 of points on the second image onto which all feature points P_1 of the first image are mapped, the mapping relation being P*_2^m = f(P_1^m), m = 1, 2, …, Q;
step 242: calculating the Euclidean distance between each point of P*_2 and the corresponding point of the second-image feature point set P_2, setting the threshold to δ, and counting the number N_T of feature points in P*_2 whose Euclidean distance is less than δ: N_T = size(P*_2^m ∈ {||P_2^m − P*_2^m||_2 < δ}), δ > 0;
Step 243: re-randomly selecting Q points, repeating the K steps 241-242, and recording N timesTUntil the iteration is finished;
step 244: selecting the registration model for which N = max(N_Tk) (k = 1, 2, …, K) as the final fitting result, thereby rejecting the mismatched points.
5. The method for splicing images with high precision under the multi-machine cooperative constraint as claimed in claim 1, wherein the specific operation steps of step 3 include:
step 31: using the moving DLT method, fitting the adaptive homography matrix linearly from the coordinate values of the purified groups of matched target point pairs;
step 32: performing time-sequence correction on the multi-UAV aerial image sequence using the image centre position after homography transformation; the time-sequence moving parameter flag_o of the corresponding image takes the values

flag_o = 1 if P_ox > L;  flag_o = 0 if 0 ≤ P_ox ≤ L;  flag_o = −1 if P_ox < 0   (5)

wherein P_ox is the x coordinate of the centre point of the o-th homography-transformed image and L is the width of the reference image;
step 33: traversing each frame of the image sequence and correcting it according to the value of flag_o: when flag_o = 1, the next frame of the image to be spliced is used for splicing; when flag_o = −1, the previous frame of the image to be spliced is used for splicing; when flag_o = 0, the corrected image sequence is output;
step 34: and recalculating new pixel values of the overlapped area by using the linear gradient of the image, and performing color correction.
6. The method for splicing images with high precision under the multi-machine cooperative constraint as claimed in claim 5, wherein the specific operation steps of step 31 include:
step 311: letting the feature matching point sets of the two images be X*' and X* respectively, and constructing the adaptive homography transformation relation X*' = H*X*;
Step 312: changing the constructed adaptive homography conversion relation into an implicit form, and linearizing into: x*'×H*X*When the value is equal to 0, the following can be obtained:
Figure FDA0003038332320000042
step 313: denoting the weighted linearized matrix as A*, whose rows are the blocks a_i above multiplied by the weights w_i*, A*·h* = 0 can be obtained; performing singular value decomposition on A* gives the right singular vector of the adaptive homography matrix H*:

h* = argmin_h Σ_i ||w_i*·a_i·h||²,  subject to ||h|| = 1   (4)

wherein the weight w_i* = exp(−||x* − x_i||² / σ²) indicates that pixel data closer to the control point x* is more important, and the variance σ² measures how far the measurement point deviates from the control point;
step 314: from the obtained right singular vector h*, the adaptive homography matrix H* can be reconstructed.
7. The method for stitching high-precision images under multi-machine cooperative constraint as recited in claim 5, wherein the formula for calculating the new pixel value of the overlapped region in step 34 is:
I(x, y) = I_l(x_l, y_l)·(1 − α) + I_r(x_r, y_r)·α   (6),

wherein α (a normalized weight between 0 and 1) represents the distance from the abscissa of the pixel point to the abscissa of the left boundary of the overlapping region, (x_l, y_l) are the left-image coordinates before splicing, (x_r, y_r) are the right-image coordinates before splicing, I_l(x_l, y_l) is the left-image pixel before splicing, I_r(x_r, y_r) is the right-image pixel before splicing, (x_lmax, y_lmax) is the maximum of the left-image coordinates before splicing, (x, y) are the spliced-image coordinates, and I(x, y) is the spliced-image pixel.
CN202110450860.5A 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint Active CN113313659B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110450860.5A CN113313659B (en) 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110450860.5A CN113313659B (en) 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint

Publications (2)

Publication Number Publication Date
CN113313659A true CN113313659A (en) 2021-08-27
CN113313659B CN113313659B (en) 2024-01-26

Family

ID=77371027

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110450860.5A Active CN113313659B (en) 2021-04-25 2021-04-25 High-precision image stitching method under multi-machine cooperative constraint

Country Status (1)

Country Link
CN (1) CN113313659B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130208997A1 (en) * 2010-11-02 2013-08-15 Zte Corporation Method and Apparatus for Combining Panoramic Image
WO2014023231A1 (en) * 2012-08-07 2014-02-13 泰邦泰平科技(北京)有限公司 Wide-view-field ultrahigh-resolution optical imaging system and method
US20190385288A1 (en) * 2015-09-17 2019-12-19 Michael Edwin Stewart Methods and Apparatus for Enhancing Optical Images and Parametric Databases
CN112435163A (en) * 2020-11-18 2021-03-02 大连理工大学 Unmanned aerial vehicle aerial image splicing method based on linear feature protection and grid optimization

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
张永梅; 张晨希; 郭莎: "Research on a color image stitching method based on SIFT features", Computer Measurement & Control, no. 08 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114154352A (en) * 2022-01-18 2022-03-08 中国科学院长春光学精密机械与物理研究所 Method and device for designing topological structure of aerial imaging multi-actuator cooperative control
CN114154352B (en) * 2022-01-18 2024-05-17 中国科学院长春光学精密机械与物理研究所 Topology structure design method and device for cooperative control of aviation imaging multiple actuators
CN114820737A (en) * 2022-05-18 2022-07-29 浙江圣海亚诺信息技术有限责任公司 Remote sensing image registration method based on structural features
CN114820737B (en) * 2022-05-18 2024-05-07 浙江圣海亚诺信息技术有限责任公司 Remote sensing image registration method based on structural features
CN116681590A (en) * 2023-06-07 2023-09-01 中交广州航道局有限公司 Quick splicing method for aerial images of unmanned aerial vehicle
CN116681590B (en) * 2023-06-07 2024-03-12 中交广州航道局有限公司 Quick splicing method for aerial images of unmanned aerial vehicle
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching

Also Published As

Publication number Publication date
CN113313659B (en) 2024-01-26

Similar Documents

Publication Publication Date Title
CN108648240B (en) Non-overlapping view field camera attitude calibration method based on point cloud feature map registration
CN110569704B (en) Multi-strategy self-adaptive lane line detection method based on stereoscopic vision
CN113313659B (en) High-precision image stitching method under multi-machine cooperative constraint
CN107063228B (en) Target attitude calculation method based on binocular vision
CN104484648B (en) Robot variable visual angle obstacle detection method based on outline identification
CN114936971A (en) Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area
CN113532311A (en) Point cloud splicing method, device, equipment and storage equipment
WO2018145291A1 (en) System and method for real-time location tracking of drone
CN111507901B (en) Aerial image splicing and positioning method based on aerial GPS and scale invariant constraint
CN112734841B (en) Method for realizing positioning by using wheel type odometer-IMU and monocular camera
CN110288656A (en) A kind of object localization method based on monocular cam
CN103822616A (en) Remote-sensing image matching method with combination of characteristic segmentation with topographic inequality constraint
CN113850126A (en) Target detection and three-dimensional positioning method and system based on unmanned aerial vehicle
CN113222820B (en) Pose information-assisted aerial remote sensing image stitching method
CN109272574A (en) Linear array rotary scanning camera imaging model building method and scaling method based on projective transformation
CN107677274A (en) Unmanned plane independent landing navigation information real-time resolving method based on binocular vision
CN110021039A (en) The multi-angle of view material object surface point cloud data initial registration method of sequence image constraint
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN111060006A (en) Viewpoint planning method based on three-dimensional model
CN113506342B (en) SLAM omni-directional loop correction method based on multi-camera panoramic vision
CN111967337A (en) Pipeline line change detection method based on deep learning and unmanned aerial vehicle images
CN114022560A (en) Calibration method and related device and equipment
Eichhardt et al. Affine correspondences between central cameras for rapid relative pose estimation
CN113624231A (en) Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
CN116740288B (en) Three-dimensional reconstruction method integrating laser radar and oblique photography

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant