CN113706389A - Image splicing method based on POS correction - Google Patents


Publication number: CN113706389A
Authority: CN (China)
Prior art keywords: image, points, point, solving, registered
Legal status: Granted
Application number: CN202111163419.5A
Other languages: Chinese (zh)
Other versions: CN113706389B (en)
Inventor
耿虎军
熊恒斌
胡炎
高峰
闫玉巧
仇梓峰
杨福琛
张泽勇
李方用
Current Assignee: CETC 54 Research Institute
Original Assignee: CETC 54 Research Institute
Application filed by CETC 54 Research Institute
Priority to CN202111163419.5A
Publication of CN113706389A
Application granted; publication of CN113706389B
Legal status: Active

Classifications

    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T17/05 Geographic models
    • G06T5/70 Denoising; Smoothing
    • G06T5/80 Geometric correction
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • Y02T10/40 Engine management systems


Abstract

The invention discloses an image splicing method based on POS correction, relating to the field of image processing. The method first extracts and matches feature points, computes the rotation and translation components of the image to be registered from the geometric relationship of the matched feature point pairs, accumulates these components onto the panoramic canvas, and applies a rigid transformation to the image to be registered. It then solves the geographic coordinates of the image centre point with a rigorous geometric imaging model and records the row and column numbers of the image on the panoramic canvas. Finally, it removes noise points by filtering the geographic coordinates of the image centre points along the flight route, and uniformly selects control points to construct a GIS map. The invention greatly increases stitching speed while eliminating the systematic overall deviation caused by the random choice of the reference image.

Description

Image splicing method based on POS correction
Technical Field
The invention relates to the field of image processing, and in particular to an image splicing method based on POS correction, applicable to image stitching and GIS map generation from unmanned aerial vehicle video.
Background
Existing image stitching algorithms fall mainly into the following three categories; although each has its strengths in particular application scenarios, each also has notable shortcomings.
1. Feature-based image stitching, which mainly comprises feature extraction, feature description and matching, RANSAC (random sample consensus) rejection of mismatches, geometric transformation, and image fusion. This approach works for short image sequences, but long sequences suffer from an obvious error-accumulation effect and low stitching efficiency; it also depends heavily on the number of image feature points, so the stitching process is easily interrupted in regions with sparse or no features; and the stitching result contains no geographic coordinate information.
2. POS-data-based image stitching, which obtains longitude, latitude, altitude and the three attitude angles from the UAV's POS system, solves the geographic coordinate of each pixel from the collinearity condition equation, and stitches the images by geographic position. However, because the POS data have low accuracy, adjacent images in the resulting panorama show obvious misalignment and the errors are large.
3. Some research combines the POS data with the original images to obtain orthoimages, then solves the geometric projective transformation between the reference image and the image to be registered through feature extraction and matching, and finally re-corrects the orthoimages by fine coordinate adjustment before stitching by geographic position. This approach has two main problems: (1) the error of the orthoimage used as the reference is random; coordinate fine adjustment removes the relative error between adjacent images, but a systematic overall deviation from the reference image remains; (2) each image to be registered undergoes geometric correction twice, which greatly increases resource usage and slows stitching.
Disclosure of Invention
The present invention aims to provide an image splicing method based on POS correction that avoids the problems of the background methods described above. The invention is robust, stitches quickly, and has small error.
The technical scheme adopted by the invention is as follows:
an image splicing method based on POS correction comprises the following steps:
(1) extracting and matching the characteristic points, calculating rotation and translation components of the image to be registered according to the geometric relationship of the purified characteristic point pairs, accumulating the rotation and translation components to the panoramic canvas, and then performing rigid transformation on the image to be registered;
(2) solving the geographical coordinates of the central point of the image by using a strict geometric imaging model, and recording the row and column numbers of the image on the panoramic canvas;
(3) and removing noise points by Savitzky-Golay filtering according to the geographic coordinates of the central points of the images of the air route, and then uniformly selecting control points to construct a GIS map.
Further, the specific mode of the step (1) is as follows:
(101) selecting two frames separated by a frame interval as the reference image and the image to be registered, denoted I1 and I2 respectively; the image width and height are denoted W and H, respectively;
(102) extracting the feature points of I1 and I2 with GPU acceleration and computing their feature description vectors;
(103) refining the feature point matching pairs with a RANSAC algorithm based on graph cut optimization, eliminating mismatches;
(104) in a normal scene, when the number of refined feature point pairs meets a threshold T, solving the rotation and translation components from I2 to I1 based on the feature point pairs:
the refined feature point pairs of I1 and I2 are denoted Pi (i = 1, 2, 3, ..., n) and Pj (j = 1, 2, 3, ..., n); two points are traversed and selected from each set, Pi1 and Pi2 from I1 and Pj1 and Pj2 from I2, so there are C(n, 2) selection combinations; the vectors they form are denoted v1 = Pi1Pi2 and v2 = Pj1Pj2; the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1 are then obtained from these vectors (the equations appear only as formula images in the source and are not reproduced here);
wherein .x denotes taking the x coordinate of a point and .y taking the y coordinate;
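As a hedged illustration of step (104) (the patent's own equations for θ, Δx, Δy appear only as formula images, so this is not a reproduction of them), a 2-D rigid transform between matched point sets can be estimated in closed form; a least-squares variant over all refined pairs is sketched here instead of the patent's pairwise-vector combinations:

```python
import numpy as np

def rigid_from_matches(p_ref, p_mov):
    """Estimate theta, (dx, dy) such that R(theta) @ p_mov + t ~ p_ref.

    p_ref, p_mov: (n, 2) arrays of refined matched feature points
    (the roles of I1 and I2 in the text). A least-squares sketch,
    not the patent's exact formulas.
    """
    c_ref, c_mov = p_ref.mean(axis=0), p_mov.mean(axis=0)
    a, b = p_mov - c_mov, p_ref - c_ref          # centred point sets
    # angle that best rotates a onto b: atan2(sum of cross, sum of dot)
    theta = np.arctan2(np.sum(a[:, 0] * b[:, 1] - a[:, 1] * b[:, 0]),
                       np.sum(a[:, 0] * b[:, 0] + a[:, 1] * b[:, 1]))
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    dx, dy = c_ref - R @ c_mov                   # translation components
    return theta, dx, dy
```

Applying the returned θ, Δx, Δy to I2 realizes the rigid transformation onto the panoramic canvas described in step (1).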
(105) in the special scene of sparse or absent feature points, when the number of refined feature point pairs does not meet the threshold T, solving the rotation and translation components from I2 to I1 based on geographic coordinates:
using the rigorous geometric imaging model, separately solve the geographic coordinates of the four vertices of I1 and I2, denoted PL1, PL2, PL3, PL4 and PR1, PR2, PR3, PR4, and of their centre points, denoted PL and PR; then, using the positional relationship between I1 and I2 with I1 as the reference, solve the position distribution of I2 on the panoramic canvas;
the conversion coefficient s from geographic coordinates to image row/column coordinates, and then the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1, are computed from these geographic coordinates (the equations appear only as formula images in the source); here .lon denotes taking the longitude of a point and .lat taking the latitude.
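A minimal sketch of this geographic-coordinate fallback, under stated assumptions (the patent's equations for s, θ, Δx, Δy are only formula images; here the scale s is taken as the pixel diagonal over the geographic diagonal of the reference frame, the rotation as the bearing difference of corresponding top edges, and the translation as the scaled centre offset, all working in a local planar approximation):

```python
import math

def geo_fallback(corners_ref, corners_mov, center_ref, center_mov, W, H):
    """corners_*: four planar (x, y) vertex coordinates of I1 / I2 in
    order top-left, top-right, bottom-left, bottom-right; center_*:
    the centre points PL / PR. Returns (s, theta, dx, dy) under the
    assumptions stated in the lead-in (illustrative, not the patent's
    exact formulas)."""
    # scale: pixel diagonal of the frame over its geographic diagonal
    gx = corners_ref[3][0] - corners_ref[0][0]
    gy = corners_ref[3][1] - corners_ref[0][1]
    s = math.hypot(W, H) / math.hypot(gx, gy)

    def bearing(p, q):                       # direction of an edge
        return math.atan2(q[1] - p[1], q[0] - p[0])

    # rotation: bearing difference of the two top edges
    theta = bearing(corners_mov[0], corners_mov[1]) - \
            bearing(corners_ref[0], corners_ref[1])
    # translation: scaled offset of the frame centres
    dx = s * (center_mov[0] - center_ref[0])
    dy = s * (center_mov[1] - center_ref[1])
    return s, theta, dx, dy
```

For two identical frames this returns zero rotation and translation, which is the expected degenerate case.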
Further, the specific mode of the step (2) is as follows:
(201) the POS data acquired by the unmanned aerial vehicle serve as the exterior orientation elements of the sensor at the imaging moment, and the camera calibration parameters and focal length serve as the interior orientation elements; the POS data comprise longitude, latitude, altitude, heading angle, pitch angle and roll angle;
(202) extracting the ground elevation from AW3D DEM product data with 30 m resolution at the given longitude and latitude, and subtracting this ground elevation from the altitude to obtain the height above ground H;
(203) solving, with the rigorous geometric imaging model, the geographic coordinates of the ground points corresponding to the four image vertices in the ground rectangular coordinate system; the geographic coordinate of the image centre point is obtained by linear interpolation of the four vertex coordinates, and its row and column numbers on the panoramic canvas are obtained by accumulating the translation components Δx, Δy of every previously registered frame I2 relative to the initial reference image.
Further, the specific mode of the step (3) is as follows:
(301) comparing the geographic coordinates Pi (i = 1, 2, ..., N) of the image centre points before and after Savitzky-Golay filtering, and adding to a candidate point set those whose (lon, lat) change is smaller than a threshold, where lon is longitude and lat is latitude;
(302) uniformly selecting, by position distribution, 4 points from the candidate point set as control points for geometric correction, thereby constructing the GIS panoramic map with geographic coordinate information.
Compared with the prior art, the invention has the following beneficial effects:
1. The method obtains a stitched image with geographic coordinates, adapts to complex scenes, imposes no special requirements on image content, and stitches regions with sparse or no feature points without interruption.
2. The invention requires no geometric correction of the images; for each frame only the coordinates of a few points need to be computed analytically, saving the large amount of time otherwise spent on resampling.
3. The method uniformly selects, by position distribution, pixel points lying in a small range around the Savitzky-Golay fitted curve as control points; the precision error from the POS system is first reduced and then spread across the long image sequence, greatly reducing the geolocation error.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention.
FIG. 2 is a schematic of solving the rotation and translation components from I2 to I1 based on feature point matching pairs.
FIG. 3 is a schematic view of the rigorous geometric imaging model.
FIG. 4 is a schematic of solving the rotation and translation components from I2 to I1 based on geographic coordinates.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
An image splicing method based on POS correction comprises the following steps:
(1) extracting and matching the characteristic points, calculating rotation and translation components of the image to be registered according to the geometric relationship of the purified characteristic point pairs, and accumulating the components to the panoramic canvas to perform rigid transformation on the image to be registered;
(2) solving the geographical coordinates of the central point of the image by using a strict geometric imaging model, and recording the row and column numbers of the image on the panoramic canvas;
(3) removing noise points by utilizing the geographical coordinates of the central points of the images of the air route through Savitzky-Golay filtering, and uniformly selecting control points to construct a GIS map;
In the stitching process, payload rotation easily increases the error of the geographic coordinate solution; Savitzky-Golay filtering removes the noise points whose centre-point geographic coordinates deviate markedly from the fitted curve (the geographic coordinate trend of the image centre points along the route), effectively avoiding scene errors;
the step (1) specifically comprises the following steps:
(101) selecting two frames separated by a certain frame interval as the reference image and the image to be registered, denoted I1 and I2 respectively; the image width and height are denoted W and H;
(102) first, extracting the feature points of I1 and I2 with GPU acceleration and computing their feature description vectors;
(103) then, refining the feature point matching pairs with the GC-RANSAC (graph-cut-optimized RANSAC) algorithm, eliminating mismatches;
(104) in a normal scene, when the number of refined feature point pairs meets the threshold T (T = 20 is recommended), solving the rotation and translation components from I2 to I1 based on the feature point pairs:
the refined feature point pairs of I1 and I2 are denoted Pi (i = 1, 2, 3, ..., n) and Pj (j = 1, 2, 3, ..., n); two points are traversed and selected from each set, Pi1 and Pi2 from I1 and Pj1 and Pj2 from I2, so there are C(n, 2) selection combinations; the vectors they form are denoted v1 = Pi1Pi2 and v2 = Pj1Pj2; the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1 are then obtained from these vectors (equations shown only as formula images in the source);
(105) in a special scene (sparse or absent feature points), when the number of refined feature point pairs does not meet the threshold T, solving the rotation and translation components from I2 to I1 based on geographic coordinates:
using the rigorous geometric imaging model, separately solve the geographic coordinates of the four vertices of I1 and I2, denoted PL1, PL2, PL3, PL4 and PR1, PR2, PR3, PR4, and of their centre points, denoted PL and PR; then, using the positional relationship between I1 and I2 with I1 as the reference, solve the distribution of I2 on the panoramic canvas.
The conversion coefficient s from geographic coordinates to image row/column coordinates, and then the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1, are computed from these geographic coordinates (equations shown only as formula images in the source).
the step (2) specifically comprises the following steps:
(201) the POS data acquired by the unmanned aerial vehicle (longitude, latitude, altitude, heading angle, pitch angle and roll angle) serve as the exterior orientation elements of the sensor at the imaging moment, and the camera calibration parameters and focal length serve as the interior orientation elements;
(202) extracting the ground elevation from AW3D DEM product data with 30 m resolution at the given longitude and latitude, and subtracting it from the altitude to obtain the height above ground H;
(203) solving, with the rigorous geometric imaging model, the geographic coordinates of the ground points corresponding to the four image vertices in the ground rectangular coordinate system; the geographic coordinate of the image centre point is obtained by linear interpolation of the four vertex coordinates, and its row and column numbers on the panoramic canvas are obtained by accumulating the translation components Δx, Δy of every previously registered frame I2 relative to the initial reference image.
The step (3) specifically comprises the following steps:
(301) comparing the geographic coordinates Pi (i = 1, 2, ..., N) of the image centre points before and after Savitzky-Golay filtering, and adding the points with small (lon, lat) change to a candidate point set;
(302) and uniformly selecting 4 points in the candidate point set as control points to participate in geometric correction according to the position distribution, thereby constructing the GIS panoramic map with geographic coordinate information.
The following is a more specific example:
referring to fig. 1 to 4, an image stitching method based on POS correction includes the following steps:
(1) extracting and matching the characteristic points, calculating the rotation and translation components of the image to be registered according to the geometric relationship of the purified characteristic point pairs, and accumulating the components to the panoramic canvas to perform rigid transformation on the image to be registered:
Two frames separated by a certain frame interval are selected as the reference image and the image to be registered, denoted I1 and I2 respectively; the image width and height are denoted W and H.
First, the feature points of I1 and I2 are extracted with GPU acceleration and their feature description vectors are computed; then the feature point matching pairs are refined with the GC-RANSAC (graph-cut-optimized RANSAC) algorithm to eliminate mismatches.
1. In a normal scene, when the number of refined feature point pairs meets the threshold T (T = 20 is recommended), the rotation and translation components from I2 to I1 are solved from the feature point pairs:
the refined feature point pairs of I1 and I2 are denoted Pi (i = 1, 2, 3, ..., n) and Pj (j = 1, 2, 3, ..., n); two points are traversed and selected from each set, Pi1 and Pi2 from I1 and Pj1 and Pj2 from I2, so there are C(n, 2) selection combinations; the vectors they form are denoted v1 = Pi1Pi2 and v2 = Pj1Pj2; the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1 are then obtained from these vectors (equations shown only as formula images in the source).
2. In a special scene (sparse or absent feature points), when the number of refined feature point pairs does not meet the threshold T, the rotation and translation components from I2 to I1 are solved from geographic coordinates:
using the rigorous geometric imaging model, the geographic coordinates of the four vertices of I1 and I2 are solved separately, denoted PL1, PL2, PL3, PL4 and PR1, PR2, PR3, PR4, along with their centre points PL and PR; then, using the positional relationship between I1 and I2 with I1 as the reference, the distribution of I2 on the panoramic canvas is solved.
The conversion coefficient s from geographic coordinates to image row/column coordinates, and then the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1, are computed from these geographic coordinates (equations shown only as formula images in the source).
(2) solving the geographic coordinates of the central point of the image by using a strict geometric imaging model, and recording the row and column numbers of the image on the panoramic canvas:
The POS data acquired by the unmanned aerial vehicle (longitude, latitude, altitude, heading angle, pitch angle and roll angle) serve as the exterior orientation elements of the sensor at the imaging moment, and the camera calibration parameters and focal length serve as the interior orientation elements;
the ground elevation is extracted from AW3D DEM product data with 30 m resolution at the given longitude and latitude, and subtracted from the altitude to obtain the height above ground H;
According to the collinearity condition equation of the rigorous geometric imaging model, the ground point A and the image point a satisfy equation (1) (shown only as a formula image in the source):
where A(X, Y, Z) is the coordinate of the ground point in the ground rectangular coordinate system, a(x, y) is the coordinate of the corresponding image point in the image plane coordinate system, S(Xs, Ys, Zs) is the (longitude, latitude, height) coordinate of the unmanned aerial vehicle in the ground rectangular coordinate system, and oi, pi and qi are the direction cosines between the image plane coordinate system and the ground rectangular coordinate system along the X, Y, Z directions, given by the rotation matrix of equation (2) (also a formula image) built from the heading angle, pitch angle and roll angle. Given the coordinates of the unmanned aerial vehicle and the known interior and exterior orientation elements of the sensor at the imaging moment, equation (1) links the pixel coordinates to the coordinates of the corresponding ground point in the ground rectangular coordinate system.
From equations (1) and (2), the ground coordinates (X, Y, Z) of the four image vertices are solved; the closed-form expressions for X and Y (equations (3) and (4)) appear only as formula images in the source, with the intermediate terms:

a = (f*p1 + x*p3) (5)
b = (f*p2 + y*p3) (6)
c = (f*q1 + x*q3) (7)
d = (f*q2 + y*q3) (8)
e = (f*o1 + x*o3) (9)
k = (f*o2 + y*o3) (10)

where f is the focal length and Z is the ground elevation value extracted from the AW3D DEM.
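A hedged sketch of this forward projection (the patent's equations (1)-(4) appear only as formula images, and its exact Euler-angle convention is unknown; a Z-Y-X heading/pitch/roll order is assumed here for illustration): a ray from the projection centre through the image point is intersected with the horizontal plane at the DEM elevation.

```python
import numpy as np

def rot_matrix(heading, pitch, roll):
    """Assumed Z (heading), Y (pitch), X (roll) rotation order;
    the patent's equation (2) may use a different convention."""
    ch, sh = np.cos(heading), np.sin(heading)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    Rz = np.array([[ch, -sh, 0], [sh, ch, 0], [0, 0, 1]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return Rz @ Ry @ Rx

def pixel_to_ground(x, y, f, S, R, Z_ground):
    """Intersect the ray S + t * R @ (x, y, -f) with the plane
    Z = Z_ground, where S is the projection centre and f the focal
    length. Returns the ground (X, Y)."""
    d = R @ np.array([x, y, -f], dtype=float)
    t = (Z_ground - S[2]) / d[2]          # ray parameter at the plane
    return S[0] + t * d[0], S[1] + t * d[1]
```

In the nadir case (all angles zero) the projection reduces to a simple similar-triangles scaling by the height above ground, which is a quick sanity check on the geometry.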
The geographic coordinate of the image centre point is solved by linear interpolation of the four vertex coordinates, and its row and column numbers on the panoramic canvas are obtained by accumulating the translation components Δx, Δy of every previously registered frame I2 relative to the initial reference image.
(3) Removing noise points by Savitzky-Golay filtering according to the geographic coordinates of the central points of the images of the air route, and uniformly selecting control points to construct a GIS map:
The Savitzky-Golay filtering is expressed by equation (11) (a formula image in the source), which in its standard form reads s*_j = (1/N) Σ_{i=-r..r} w_i s_{j+i}, where s is the original data value, s* the smoothed value, w_i the smoothing coefficient of the i-th datum in the sliding window, r half the window width, and the window width N = 2r + 1 is odd; j indexes the original data. Empirically, the window size is set to frames/8, where frames is the number of POS records acquired by the unmanned aerial vehicle so far. The coefficients w_i are obtained by fitting the k-th order polynomial of equation (12) (also a formula image), with undetermined coefficients b0, b1, ..., bk, and minimizing the error by least squares.
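The window-wise least-squares polynomial fit described above can be sketched as follows (a direct per-window polynomial fit rather than precomputed w_i coefficients; the two are equivalent at interior points):

```python
import numpy as np

def savgol_smooth(values, window, order=2):
    """Savitzky-Golay smoothing: fit a k-th order polynomial to each
    sliding window by least squares and take its value at the window
    centre. window = 2r + 1 must be odd; endpoints are left
    unsmoothed in this sketch."""
    values = np.asarray(values, dtype=float)
    r = window // 2
    out = values.copy()
    t = np.arange(-r, r + 1)                  # window abscissa
    for j in range(r, len(values) - r):
        coeffs = np.polyfit(t, values[j - r:j + r + 1], order)
        out[j] = np.polyval(coeffs, 0.0)      # fitted value at centre
    return out
```

A quadratic signal passes through unchanged (order-2 fits reproduce quadratics exactly), while an isolated spike, such as a noisy centre-point coordinate, is strongly attenuated, which is exactly the noise-removal behaviour the text relies on.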
The load rotation in the splicing process is easy to increase the error of the geographic coordinate solution, and each I is removed through Savitzky-Golay filtering2The noise point with larger deviation of the geographical coordinate of the central point from the fitting curve (the geographical coordinate trend of the central point of each image of the air route) can effectively avoid scene errors;
selecting geographical coordinates P of the center point of the image before and after Savitzky-Golay filteringi,i=1,2,...,NAnd (lon, lat) pixels with small changes are added into the candidate point set, and 4 points in the candidate point set are uniformly selected as control points according to the position distribution to participate in geometric correction, so that the GIS panoramic map with the geographic coordinate information is constructed.
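A minimal sketch of the final geometric correction using the selected control points (the patent does not specify the transform model; a least-squares affine mapping from canvas pixel coordinates to (lon, lat) is assumed here for illustration):

```python
import numpy as np

def affine_from_controls(pixels, geos):
    """Fit (lon, lat) ~ [col, row, 1] @ M from >= 3 control points.

    pixels: (n, 2) canvas coordinates (col, row);
    geos:   (n, 2) geographic coordinates (lon, lat).
    Returns the 3x2 affine matrix M (least squares).
    """
    A = np.hstack([np.asarray(pixels, float),
                   np.ones((len(pixels), 1))])
    M, *_ = np.linalg.lstsq(A, np.asarray(geos, float), rcond=None)
    return M

def apply_affine(M, pixel):
    """Map one canvas pixel (col, row) to (lon, lat)."""
    col, row = pixel
    return np.array([col, row, 1.0]) @ M
```

With 4 well-distributed control points the fit is overdetermined, so residual POS noise in any single point is averaged out rather than propagated into the whole map.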
In short, the method avoids the per-frame geometric correction required by traditional POS-based image stitching, greatly increases stitching speed, and eliminates the systematic overall deviation caused by the random choice of the reference image.
It should be noted that the above examples merely illustrate the inventive concept of this patent. Those skilled in the art may make various modifications, additions or substitutions to the described embodiments without departing from the spirit of the invention or exceeding the scope of the appended claims.

Claims (4)

1. An image splicing method based on POS correction is characterized by comprising the following steps:
(1) extracting and matching the characteristic points, calculating rotation and translation components of the image to be registered according to the geometric relationship of the purified characteristic point pairs, accumulating the rotation and translation components to the panoramic canvas, and then performing rigid transformation on the image to be registered;
(2) solving the geographical coordinates of the central point of the image by using a strict geometric imaging model, and recording the row and column numbers of the image on the panoramic canvas;
(3) and removing noise points by Savitzky-Golay filtering according to the geographic coordinates of the central points of the images of the air route, and then uniformly selecting control points to construct a GIS map.
2. The image stitching method based on POS correction according to claim 1, wherein the specific mode of the step (1) is as follows:
(101) selecting two frames separated by a frame interval as the reference image and the image to be registered, denoted I1 and I2 respectively; the image width and height are denoted W and H, respectively;
(102) extracting the feature points of I1 and I2 with GPU acceleration and computing their feature description vectors;
(103) refining the feature point matching pairs with a RANSAC algorithm based on graph cut optimization, eliminating mismatches;
(104) in a normal scene, when the number of refined feature point pairs meets a threshold T, solving the rotation and translation components from I2 to I1 based on the feature point pairs:
the refined feature point pairs of I1 and I2 are denoted Pi (i = 1, 2, 3, ..., n) and Pj (j = 1, 2, 3, ..., n); two points, Pi1 and Pi2 and Pj1 and Pj2, are respectively traversed and selected from them, and the vectors they form are denoted v1 = Pi1Pi2 and v2 = Pj1Pj2; the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1 are then obtained from these vectors (equations shown only as formula images in the source);
wherein .x denotes taking the x coordinate of a point and .y taking the y coordinate;
(105) in the special scene of sparse feature points or feature-free areas, when the number of purified feature point pairs does not meet the threshold T, solving the rotation and translation components from I2 to I1 based on geographic coordinates:

separately solving, with the rigorous geometric imaging model, the geographic coordinates of the four vertices of I1 and I2: PL1, PL2, PL3, PL4 and PR1, PR2, PR3, PR4; the geographic coordinates of the image center points of I1 and I2 are PL and PR; the positional relationship between I1 and I2 is then used, with I1 as the reference, to solve the position distribution of I2 on the panoramic canvas;

the conversion coefficient s from geographic coordinates to image row-and-column coordinates is:

s = W / sqrt((PL2.lon − PL1.lon)² + (PL2.lat − PL1.lat)²)

the rotation angle component θ and the translation components Δx, Δy from the image to be registered I2 to the reference image I1 are:

θ1 = arctan((PL2.lat − PL1.lat) / (PL2.lon − PL1.lon))

θ2 = arctan((PR2.lat − PR1.lat) / (PR2.lon − PR1.lon))

θ = θ1 − θ2

Δlon = PR.lon − PL.lon

Δlat = PR.lat − PL.lat

Δx = s·Δlon

Δy = −s·Δlat

wherein .lon represents taking the longitude of a point and .lat represents taking the latitude.
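The geographic fallback of step (105) can be sketched as below. The vertex layout (PL1/PR1 top-left, PL2/PR2 top-right), the scale definition, and the sign of the y translation are assumptions made for illustration, not quoted from the claim:

```python
import math

def geo_offset(PL1, PL2, PR1, PR2, PL, PR, W):
    """Sketch of step (105): derive the geo-to-pixel conversion
    coefficient s, the rotation angle theta, and the pixel translation
    (dx, dy) from geographic coordinates.  Points are (lon, lat) pairs;
    PL1/PL2 and PR1/PR2 are assumed top-left/top-right vertices of I1
    and I2, PL and PR their centre points, W the image width in pixels."""
    # Conversion coefficient: pixels per geographic unit along the top edge.
    s = W / math.hypot(PL2[0] - PL1[0], PL2[1] - PL1[1])
    # Rotation: orientation difference of the two top edges.
    theta = (math.atan2(PL2[1] - PL1[1], PL2[0] - PL1[0])
             - math.atan2(PR2[1] - PR1[1], PR2[0] - PR1[0]))
    # Centre-point offset scaled to pixels (image y grows downward,
    # latitude grows upward -- hence the sign flip, an assumed convention).
    dx = s * (PR[0] - PL[0])
    dy = -s * (PR[1] - PL[1])
    return s, theta, dx, dy

# Axis-aligned reference image, 1 geographic unit wide, 1000 px wide.
s, theta, dx, dy = geo_offset(
    PL1=(0.0, 1.0), PL2=(1.0, 1.0), PR1=(0.1, 0.9), PR2=(1.1, 0.9),
    PL=(0.5, 0.5), PR=(0.6, 0.4), W=1000)
```

Because this branch needs no image content at all, it keeps the mosaic growing over water, sand, or other texture-poor scenes where feature matching fails.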
3. The image stitching method based on POS correction according to claim 1, wherein the specific way of step (2) is as follows:
(201) the POS data acquired by the unmanned aerial vehicle are used as the exterior orientation elements of the sensor at the imaging moment, and the camera calibration parameters and the focal length are used as the interior orientation elements; the POS data comprise longitude, latitude, altitude, heading angle, pitch angle and roll angle;
(202) extracting an elevation value from the 30 m resolution AW3D DEM product data according to the longitude and latitude, and subtracting the ground elevation value from the altitude to obtain the height above ground H;
(203) solving, according to the rigorous geometric imaging model, the geographic coordinates of the ground points corresponding to the four vertices of the image in the ground rectangular coordinate system; solving the geographic coordinate of the image center point by linear interpolation of the four vertex coordinates; and solving the row and column numbers of the image center point on the panoramic canvas: for each frame I2 to be registered, the translation components Δx and Δy relative to the initial reference image are obtained by accumulation.
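For flat ground at the height H of step (202), the rigorous geometric imaging model of step (203) reduces to intersecting each pixel's imaging ray with the ground plane. A generic collinearity-style sketch (symbols are standard photogrammetric notation, not the patent's):

```python
import numpy as np

def pixel_to_ground(x, y, f, Xs, Ys, Zs, R, Z_ground=0.0):
    """Intersect the imaging ray of pixel (x, y) -- image-plane
    coordinates relative to the principal point -- with a flat ground
    plane at height Z_ground, given the exterior orientation: camera
    position (Xs, Ys, Zs) and the 3x3 rotation matrix R built from
    heading/pitch/roll."""
    ray = R @ np.array([x, y, -f])       # ray direction in the ground frame
    scale = (Z_ground - Zs) / ray[2]     # stretch factor to reach the plane
    return Xs + scale * ray[0], Ys + scale * ray[1]

# Nadir-looking camera 500 m above flat ground, 50 mm focal length:
# a point 10 mm off-centre on the sensor maps 100 m from the ground nadir.
X, Y = pixel_to_ground(0.01, 0.0, 0.05, Xs=0.0, Ys=0.0, Zs=500.0,
                       R=np.eye(3), Z_ground=0.0)
```

Applying this to the four image corners and interpolating gives the centre-point geographic coordinate used on the panoramic canvas.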
4. The image stitching method based on POS correction according to claim 1, wherein the specific way of step (3) is as follows:
(301) comparing the geographic coordinates Pi (i = 1, 2, ..., N) of the image center points before and after Savitzky-Golay filtering, and adding the points whose (lon, lat) variation is smaller than a threshold to a candidate point set, wherein lon is longitude and lat is latitude;
(302) uniformly selecting 4 points from the candidate point set as control points according to their position distribution to participate in geometric correction, thereby constructing the GIS panoramic map with geographic coordinate information.
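The stability test of step (301) can be illustrated with a minimal numpy-only Savitzky-Golay smoother (a stand-in for scipy.signal.savgol_filter; the window size, polynomial order, and threshold value here are illustrative choices, not the patent's):

```python
import numpy as np

def savgol_smooth(y, window=5, order=2):
    """Minimal Savitzky-Golay smoother: least-squares-fit a low-order
    polynomial in each sliding window and evaluate it at the centre."""
    half = window // 2
    ypad = np.pad(y, half, mode="edge")
    x = np.arange(window) - half
    out = np.empty(len(y))
    for i in range(len(y)):
        coeff = np.polyfit(x, ypad[i:i + window], order)
        out[i] = np.polyval(coeff, 0.0)
    return out

def select_control_points(centers, threshold):
    """Sketch of step (301): keep the centre points whose (lon, lat)
    moves less than `threshold` under smoothing; jumpy points (e.g.
    noisy POS fixes) are rejected as control-point candidates."""
    lon_s = savgol_smooth(centers[:, 0])
    lat_s = savgol_smooth(centers[:, 1])
    delta = np.hypot(centers[:, 0] - lon_s, centers[:, 1] - lat_s)
    return np.nonzero(delta < threshold)[0]

# Centre points drifting smoothly east, with one spurious jump at index 5.
centers = np.stack([np.linspace(0.0, 1.0, 11), np.zeros(11)], axis=1)
centers[5, 1] = 0.5
candidates = select_control_points(centers, threshold=0.1)
```

Points that barely move under the filter lie on the smooth flight trajectory, so step (302) can safely pick its 4 well-distributed control points from this set.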
CN202111163419.5A 2021-09-30 2021-09-30 Image splicing method based on POS correction Active CN113706389B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111163419.5A CN113706389B (en) 2021-09-30 2021-09-30 Image splicing method based on POS correction

Publications (2)

Publication Number Publication Date
CN113706389A true CN113706389A (en) 2021-11-26
CN113706389B CN113706389B (en) 2023-03-28

Family

ID=78662581

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111163419.5A Active CN113706389B (en) 2021-09-30 2021-09-30 Image splicing method based on POS correction

Country Status (1)

Country Link
CN (1) CN113706389B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103345737A (en) * 2013-06-04 2013-10-09 北京航空航天大学 UAV high resolution image geometric correction method based on error compensation
CN107808362A (en) * 2017-11-15 2018-03-16 北京工业大学 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN110310248A (en) * 2019-08-27 2019-10-08 成都数之联科技有限公司 A kind of real-time joining method of unmanned aerial vehicle remote sensing images and system
CN111951201A (en) * 2019-05-16 2020-11-17 杭州海康机器人技术有限公司 Unmanned aerial vehicle aerial image splicing method and device and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
QIU Fenpeng: "Research on Fast Stitching Methods for UAV Aerial Images Based on Local Features", China Master's Theses Full-text Database, Information Science and Technology *

Similar Documents

Publication Publication Date Title
CN106127697B (en) EO-1 hyperion geometric correction method is imaged in unmanned aerial vehicle onboard
CN110648398B (en) Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
CN111583110B (en) Splicing method of aerial images
US8698875B2 (en) Estimation of panoramic camera orientation relative to a vehicle coordinate frame
US8259994B1 (en) Using image and laser constraints to obtain consistent and improved pose estimates in vehicle pose databases
Zhou et al. A two-step approach for the correction of rolling shutter distortion in UAV photogrammetry
JP2007248216A (en) Ortho-correction apparatus and method for synthetic aperture radar image
CN109325913B (en) Unmanned aerial vehicle image splicing method and device
CN113514829B (en) InSAR-oriented initial DSM area network adjustment method
CN108171732B (en) Detector lunar landing absolute positioning method based on multi-source image fusion
CN110555813B (en) Rapid geometric correction method and system for remote sensing image of unmanned aerial vehicle
CN113793270A (en) Aerial image geometric correction method based on unmanned aerial vehicle attitude information
CN112767245B (en) System and method for map splicing construction based on real-time video images of multiple unmanned aerial vehicles
CN110853140A (en) DEM (digital elevation model) -assisted optical video satellite image stabilization method
CN114241064B (en) Real-time geometric calibration method for internal and external orientation elements of remote sensing satellite
CN115631094A (en) Unmanned aerial vehicle real-time image splicing method based on spherical correction
CN110223233B (en) Unmanned aerial vehicle aerial photography image building method based on image splicing
CN110660099B (en) Rational function model fitting method for remote sensing image processing based on neural network
CN107941241B (en) Resolution board for aerial photogrammetry quality evaluation and use method thereof
CN112182967B (en) Automatic photovoltaic module modeling method based on thermal imaging instrument
CN108109118B (en) Aerial image geometric correction method without control points
CN113345032A (en) Wide-angle camera large-distortion image based initial image construction method and system
CN107516291B (en) Night scene image ortho-rectification processing method
CN113706389B (en) Image splicing method based on POS correction
Zhao et al. Digital Elevation Model‐Assisted Aerial Triangulation Method On An Unmanned Aerial Vehicle Sweeping Camera System

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant