CN113191946A - Aviation three-step area array image splicing method

Aviation three-step area array image splicing method

Info

Publication number
CN113191946A
CN113191946A
Authority
CN
China
Prior art keywords
image
coordinates
point
points
feature
Prior art date
Legal status
Granted
Application number
CN202110230020.8A
Other languages
Chinese (zh)
Other versions
CN113191946B (en)
Inventor
孙文邦
岳广
李铜哨
张星铭
白新伟
吴迪
杨帅
Current Assignee
PLA AIR FORCE AVIATION UNIVERSITY
Original Assignee
PLA AIR FORCE AVIATION UNIVERSITY
Priority date
Filing date
Publication date
Application filed by PLA AIR FORCE AVIATION UNIVERSITY
Priority to CN202110230020.8A
Publication of CN113191946A
Application granted
Publication of CN113191946B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T5/80 Geometric correction
    • G06T2200/00 Indexing scheme for image data processing or generation, in general
    • G06T2200/32 Indexing scheme involving image mosaicing
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10032 Satellite or aerial image; Remote sensing

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)

Abstract

An aviation three-step area array image splicing method, belonging to the technical field of image processing. The invention aims to determine the shooting position coordinates of each image, complete rough stitching of the three-step images, and then refine the coordinates to complete fine stitching. The method stitches the images by combining geographic information, then fine-tunes the coordinates by combining feature matching: features are detected and matched, and the matching pairs are used to fine-tune the coordinates. The algorithm achieves seamless stitching of three-step images, solves a case that traditional algorithms handle poorly, and produces a stitching result clearly superior to stitching that relies on geographic position information alone.

Description

Aviation three-step area array image splicing method
Technical Field
The invention belongs to the technical field of image processing.
Background
Aerial remote sensing is now widely applied in fields such as agriculture, military reconnaissance and disaster monitoring, but it is limited by flight height and camera field of view: a single remote sensing image covers only a small area. To grasp and understand a large target area as a whole, imaging over multiple strips or multiple passes is generally needed, after which the many remote sensing images are stitched into a single wide-field image.
Additional image strips introduce certain problems for stitching. As the number of strips increases, the overlap relationships between images multiply and the image matching relationships become complicated. Solutions to multi-strip image stitching can mainly be classified into methods based on feature matching and methods based on POS (Position and Orientation System) data.
Among feature-point-based stitching methods, one work constructs a multi-strip stitching model and analyzes the influence of pitch and roll on the overlap region, but it considers few factors and is not well suited to aerial images. Shi et al. discuss the difficulties of marine surveying and mapping and identify the limiting factors, but aerial remote sensing cannot reach surveying-and-mapping accuracy, so the applicability is limited. Among methods that use airborne POS data, Han Chao completed sequence image stitching by combining POS data with feature matching, but that method only handles a single strip. Xu Qihui completed multi-strip image stitching by combining POS data with feature matching, but the registration relationship between strips was not considered in the experiment, so the stitching quality is poor. Ruizhe Shao proposed calculating the position of the next UAV remote sensing image from the position and attitude parameters, determining the inter-strip overlap region, and using matching pairs inside that region to locate matches quickly and accurately.
When these methods are used to process the multi-strip images obtained by three-step imaging, most of them fail to solve the three-step stitching problem well: because they consider few influencing factors, the stitched multi-strip images show misalignment, and the overall stitching effect is not ideal.
Disclosure of Invention
The invention aims to determine the shooting position coordinates of each image, complete rough stitching of the three-step images, and then refine the coordinates to complete fine stitching.
The method comprises the following steps:
S1, image stitching combined with geographic information: to complete the rough image stitching combined with geographic information, find the position coordinates of the image nadir point and the ground nadir point;
1. Determine the image nadir coordinates: the image nadir C is the intersection of the image plane with the plumb line through the photographing center S perpendicular to the ground plane; it lies in the image plane and is expressed as image point coordinates; when C lies on the image, its coordinates can be obtained by recording the extreme edge point of each image; when C is not on the image, a new image containing C must be constructed and its coordinates recorded accordingly;
2. Determine the ground nadir position coordinates:
the arc length between latitudes along the meridian and the arc length between longitudes along the parallel are obtained with formulas (1) and (2) below, respectively,
S_M = M_m·ΔB (1)
S_B = ΔL·N_0·cos B_0 (2)
where ΔB = B − B_0; M_m is the meridian radius of curvature at B_m; B_m is the mean of B and B_0; e′ is the second eccentricity of the ellipsoid, which enters through M_m; ΔL = L − L_0; and N_0 is the radius of curvature of the prime vertical circle;
using the arc-length distances corresponding to the longitude and latitude differences, calculate the Euclidean distance as the relative distance between two ground nadir points; once the heading direction is determined, the relative direction between the two nadir points is also determined, so the position and distance relationship between ground nadir points can be calculated from the longitude and latitude information;
S2, coordinate fine adjustment combined with feature matching
perform feature detection and matching between image overlap regions and eliminate the mismatched pairs; then select a coordinate fine-tuning strategy according to the characteristics of three-step imaging and use the correctly matched feature points to fine-tune the coordinates, correcting the errors introduced by the POS data;
S3, feature detection and matching
extract SIFT feature points in the overlap region; after feature detection, match the feature points and remove the mismatched pairs;
S4, coordinate fine adjustment using matching pairs
after the mismatched pairs are eliminated, record the position coordinates of each feature point in the current state; through the nesting of the image nadir and the ground nadir, nest the feature point coordinates of each image onto the ground point coordinates and record them as coordinates in the new image; the coordinate values requiring fine adjustment are then obtained with formula (3)
(Δr_l, Δc_l) = (r_ml1 − r_l1, c_ml1 − c_l1)
(Δr_r, Δc_r) = (r_mr1 − r_r1, c_mr1 − c_r1) (3)
where (Δr_l, Δc_l) is the offset between the center-strip and left-strip matching points, and (Δr_r, Δc_r) is the offset between the center-strip and right-strip matching points;
S5, the fine-tuning values so obtained are not a single value; to guarantee their precision, screen the multiple fine-tuning values obtained above: count the cumulative frequency of each fine-tuning value within a threshold range and take the value with the maximum cumulative frequency as the final fine-tuning value; after the in-period image coordinate fine adjustment is completed, perform the inter-strip image coordinate fine adjustment, taking the middle period image as the reference image and fine-tuning the period images on the upper and lower sides, which completes the fine image stitching.
The invention provides a new three-step image stitching method addressing the currently poor stitching quality of three-step images. Experimental results show that the algorithm achieves seamless stitching of three-step images, solves a case that traditional algorithms handle poorly, and produces a result clearly superior to stitching that relies on geographic position information alone.
Drawings
FIG. 1 is a diagram of the position relationship between the image nadir and the ground nadir;
FIG. 2 is a diagram of the relationship between the geometrically corrected image and the coordinate systems when the image nadir lies on the image;
FIG. 3 is a diagram of the imaging geometry when the camera installation inclination is too large;
FIG. 4 is a diagram of the relationship between the geometrically corrected image and the coordinate systems when the image nadir is not on the image;
FIG. 5 is a diagram of corresponding matching points of the center and left strips;
FIG. 6 is a diagram of the coordinate fine-tuning strategy;
FIG. 7 is a diagram of the flight path drawn from the relative distances between ground nadir points calculated with formulas (1) and (2);
FIG. 8 is a diagram of the image nadir positions;
FIG. 9 is the image stitched from geographic position coordinates;
FIG. 10 is the finely stitched image;
FIG. 11 is a comparison of the results before and after coordinate fine-tuning.
Detailed Description
The longitude and latitude information in the POS data is used to determine the shooting position coordinates of each image, which completes the rough stitching of the three-step images. Because the longitude and latitude data obtained from the differential GPS system are not highly accurate, the rough stitching completed with these data shows obvious errors, so the coordinates are further fine-tuned to complete the fine stitching.
1. Image stitching combined with geographic information
The differential GPS system records the position of an aerial image at the instant of exposure, expressed as longitude and latitude coordinates. As shown in fig. 1, at the moment of imaging, the longitude and latitude recorded by the differential GPS give the position of the ground nadir (point D). The ground nadir is the point on the ground plane directly below the photographing center S at the instant of exposure. To complete the rough image stitching combined with geographic information, the image nadir and ground nadir position coordinates are found and the two points are nested under a uniform ground resolution, which completes the stitching.
1.1 Image nadir coordinate determination
In fig. 1, point C is the image nadir: the intersection of the image plane with the plumb line through the photographing center S perpendicular to the ground plane. The image nadir lies in the image plane and is represented as image point coordinates. Geometric correction changes the image nadir coordinates, so to determine its position, the nadir coordinates must be re-located in the corrected image.
Fig. 2 shows the relationship between the geometrically corrected image and the coordinate systems.
In fig. 2, O-cr is the digital image display coordinate system and C-XY is the digital image operation coordinate system; because the image is rotated to the true heading during geometric correction, the X axis points to true north and the Y axis, perpendicular to it, points right. In the figure, C is the image nadir. After geometric correction is completed and the coordinates of the four image vertices are recorded, it can be seen from fig. 2 that the coordinates of point O relative to the operation coordinate system correspond to the coordinates of point C relative to the display coordinate system. The image nadir coordinates can therefore be obtained by recording the extreme edge point of each image.
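For illustration, a minimal numpy sketch of locating the image nadir in the corrected image, assuming the geometric correction reduces to a pure rotation to the true heading; the function name, the rotation-only model and the (row, column) conventions are assumptions for this sketch, not taken from the patent:

```python
import numpy as np

def nadir_in_corrected(angle_deg, h, w, nadir_rc):
    # Rotation applied by the geometric correction (assumed to be a pure
    # rotation to the true heading; a simplification of the patent's step).
    a = np.deg2rad(angle_deg)
    R = np.array([[np.cos(a), -np.sin(a)],
                  [np.sin(a),  np.cos(a)]])
    corners = np.array([[0, 0], [0, w - 1], [h - 1, 0], [h - 1, w - 1]], float)
    rot = corners @ R.T
    origin = rot.min(axis=0)          # extreme edge point = display-origin offset
    nadir = np.asarray(nadir_rc, float) @ R.T - origin
    # If either coordinate falls outside [0, h) x [0, w), the nadir lies in
    # the image plane but off the image, and a larger canvas must be built.
    return nadir
```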
However, under the three-step imaging mode, the installation inclination of the left and right cameras determines whether the image nadir falls on the image. As shown in fig. 3, when the camera installation inclination is too large, the image nadir lies in the image plane but not on the image.
In this case, the relationship between the geometrically corrected image and the coordinate system is as shown in fig. 4. Solving for the coordinates of the nadir C is the same as above, but whether point C lies on the image must be considered when constructing the digital image display coordinate system: when C is not on the image, a new image containing C must be constructed and its coordinates recorded accordingly, so that they can be used later.
1.2 Ground nadir position coordinate determination
Point D in fig. 1 is the ground nadir, which the POS system records as longitude and latitude coordinates. In image stitching, image positions can be determined by calculating the distance and direction information between images.
The positional relationship between points is determined from the longitude and latitude coordinates recorded by the POS system. First, the arc lengths corresponding to the latitude and longitude differences are calculated with formulas (1) and (2) below.
S_M = M_m·ΔB (1)
S_B = ΔL·N_0·cos B_0 (2)
where ΔB = B − B_0; M_m is the meridian radius of curvature at B_m; B_m is the mean of B and B_0; e′ is the second eccentricity of the ellipsoid, which enters through M_m; ΔL = L − L_0; and N_0 is the radius of curvature of the prime vertical circle.
Formula (1) gives the arc length between latitudes along the meridian: when the meridian arc is short, with a latitude difference ΔB at its two ends below 20′, treating the arc as circular keeps the accuracy within 0.001 m, and the radius of curvature is taken as the meridian radius of curvature at the mean latitude of the two ends. Similarly, formula (2) gives the arc length between longitudes along the parallel.
Using the arc-length distances corresponding to the longitude and latitude differences, the Euclidean distance is calculated as the relative distance between two ground nadir points. Once the heading direction is determined, the relative direction between the two nadir points is also determined. The position and distance relationship between ground nadir points can therefore be calculated from the longitude and latitude information.
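As a concrete illustration, a short Python sketch of formulas (1) and (2) plus the Euclidean distance, assuming the WGS-84 ellipsoid (the patent does not name the ellipsoid); the function and variable names are illustrative:

```python
import math

A = 6378137.0                      # WGS-84 semi-major axis (assumed ellipsoid)
E2 = 0.00669437999014              # WGS-84 first eccentricity squared

def nadir_offset_m(lat0, lon0, lat1, lon1):
    """(north, east) offset in metres between two ground nadir points,
    per formulas (1) and (2): S_M = M_m*dB and S_B = dL*N0*cos(B0)."""
    b0, b1 = math.radians(lat0), math.radians(lat1)
    dB, dL = b1 - b0, math.radians(lon1 - lon0)
    bm = 0.5 * (b0 + b1)                              # mean latitude B_m
    Mm = A * (1.0 - E2) / (1.0 - E2 * math.sin(bm) ** 2) ** 1.5  # meridian radius at B_m
    N0 = A / math.sqrt(1.0 - E2 * math.sin(b0) ** 2)  # prime vertical radius at B_0
    s_north = Mm * dB                                 # formula (1)
    s_east = dL * N0 * math.cos(b0)                   # formula (2)
    return s_north, s_east

# Euclidean relative distance between two nadirs:
# d = math.hypot(*nadir_offset_m(43.88, 125.35, 43.89, 125.36))
```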
Nesting the found image nadir coordinates with the corresponding ground nadir coordinates completes the image stitching based on geographic information.
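Putting the two together, a hedged sketch of this nesting step: metre offsets between ground nadirs are converted to pixels with the ground sample distance, and each corrected image is pasted so that its image nadir lands on its track point. The canvas layout, the overwrite policy and all names here are assumptions for illustration:

```python
import numpy as np

def rough_stitch(images, img_nadirs, ground_offsets_m, gsd):
    """Paste each corrected image on a shared canvas so its image nadir
    coincides with its ground nadir track point. `ground_offsets_m` holds
    (north, east) offsets in metres from formulas (1)-(2); `gsd` is the
    ground sample distance in m/px. Image rows grow southward, so a
    northward offset maps to a negative row offset."""
    tops = []
    for (north, east), (nr, nc) in zip(ground_offsets_m, img_nadirs):
        tr = round(-north / gsd) - int(nr)   # top-left row of this image
        tc = round(east / gsd) - int(nc)     # top-left column
        tops.append((tr, tc))
    r0 = min(t[0] for t in tops)
    c0 = min(t[1] for t in tops)
    H = max(t[0] - r0 + im.shape[0] for t, im in zip(tops, images))
    W = max(t[1] - c0 + im.shape[1] for t, im in zip(tops, images))
    canvas = np.zeros((H, W), dtype=images[0].dtype)
    for (tr, tc), im in zip(tops, images):
        r, c = tr - r0, tc - c0
        canvas[r:r + im.shape[0], c:c + im.shape[1]] = im  # later images overwrite overlaps
    return canvas
```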
2. Coordinate fine tuning in conjunction with feature matching
When image stitching relies only on the position coordinates, the required precision of the data recorded by the POS system is very high. When processing image data from a POS system of low accuracy, further fine stitching is still required.
This problem can be solved with feature matching. Feature detection and matching are performed between image overlap regions and the mismatched pairs are eliminated; a coordinate fine-tuning strategy is then selected according to the characteristics of three-step imaging, and the correctly matched feature points are used to fine-tune the coordinates and correct the errors introduced by the POS data.
2.1 Feature detection and matching
SIFT feature points are invariant to image translation, rotation and scaling, and are robust to illumination changes, noise and affine distortion. Because SIFT features are distinctive, images can be matched from them with high accuracy. To improve the efficiency of the SIFT algorithm, feature points are extracted only in the overlap region. After feature detection, the feature points are matched and the mismatched pairs are eliminated.
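A sketch of this step with OpenCV, assuming the inputs are grayscale overlap regions already cropped from two adjacent images; the Lowe ratio test and RANSAC outlier rejection are common choices standing in for the mismatch-elimination rule, which the patent does not specify:

```python
import cv2
import numpy as np

def match_overlap(overlap_a, overlap_b, ratio=0.75):
    # Detect SIFT features only inside the cropped overlap regions,
    # which is how the patent limits the search for efficiency.
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(overlap_a, None)
    kp_b, des_b = sift.detectAndCompute(overlap_b, None)
    raw = cv2.BFMatcher(cv2.NORM_L2).knnMatch(des_a, des_b, k=2)
    good = []
    for pair in raw:                       # Lowe ratio test (assumed rejection rule)
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    pts_a = np.float32([kp_a[m.queryIdx].pt for m in good])
    pts_b = np.float32([kp_b[m.trainIdx].pt for m in good])
    # Drop remaining mismatches with a RANSAC model fit (also assumed).
    _, mask = cv2.findHomography(pts_a, pts_b, cv2.RANSAC, 5.0)
    keep = mask.ravel().astype(bool)
    return pts_a[keep], pts_b[keep]
```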
2.2 Coordinate fine-tuning with matching pairs
After the mismatched pairs are eliminated, the position coordinates of each feature point in the current state are recorded. Through the nesting of the image nadir and the ground nadir, the feature point coordinates of each image are nested onto the ground point coordinates and recorded as coordinates in the new image, as shown in fig. 5. (r_ml1, c_ml1) and (r_l1, c_l1) are the coordinates of a pair of corresponding matching points of the center and left strips, respectively; (r_mr1, c_mr1) and (r_r1, c_r1) are those of the center and right strips. To reduce the effect of error accumulation, the center strip image is selected as the reference image, and the distances from the left and right images to the middle image are calculated with formula (3), giving the coordinate values that need fine adjustment.
(Δr_l, Δc_l) = (r_ml1 − r_l1, c_ml1 − c_l1)
(Δr_r, Δc_r) = (r_mr1 − r_r1, c_mr1 − c_r1) (3)
where (Δr_l, Δc_l) is the offset between the center-strip and left-strip matching points and, similarly, (Δr_r, Δc_r) is the offset between the center-strip and right-strip matching points.
Since the feature matching pair is not unique, the calculated fine-tuning value is not a single value. To guarantee the precision of the fine-tuning values, the obtained values are screened: the cumulative frequency of each fine-tuning value is counted within a certain threshold range, and the value with the maximum cumulative frequency is taken as the final fine-tuning value. The threshold was set to 10 in the experiments.
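A small sketch of this screening step: every per-pair offset from formula (3) is a candidate, its cumulative frequency is the number of offsets agreeing with it within the threshold, and the most frequent candidate wins. The Chebyshev agreement test and all names are assumptions:

```python
import numpy as np

def final_trim(offsets, threshold=10):
    # offsets: one (dr, dc) per matching pair, from formula (3).
    offsets = np.asarray(offsets, dtype=float)
    # A candidate's cumulative frequency = number of offsets agreeing
    # with it within the threshold (Chebyshev agreement is assumed here).
    agree = (np.abs(offsets[:, None, :] - offsets[None, :, :]) <= threshold).all(axis=2)
    votes = agree.sum(axis=1)
    return offsets[votes.argmax()]         # fine-tuning value with max frequency

# e.g. left-strip trim: final_trim([(12, -3), (11, -4), (13, -3), (40, 2)])
```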
As shown in fig. 6, fine adjustment of the image coordinates within a single period is completed by fine-tuning the coordinates of the left and right images against the center strip. Repeating this step completes the coordinate fine adjustment for all periods.
After the in-period image coordinate fine adjustment is completed, the inter-strip image coordinate fine adjustment is performed. Although all three images overlap and the feature matching relationships differ between strips, the center strip was already chosen as the reference image during the period fine-tuning. Therefore, during the inter-strip fine adjustment, to reduce the effect of error accumulation, the middle period image is used as the reference image and the period images above and below it are fine-tuned, which completes the fine image stitching.
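The ordering of the two fine-tuning passes might be sketched as below, reusing the matching and voting helpers from the earlier sketches; the data layout (one dict per period holding left, center and right strip images) is purely illustrative:

```python
def fine_stitch_order(periods, match_fn, trim_fn):
    # Pass 1: in-period trims, left and right strips against the center.
    for p in periods:
        for side in ('left', 'right'):
            p[side + '_trim'] = trim_fn(match_fn(p['center'], p[side]))
    # Pass 2: inter-strip trims, every other period against the middle
    # period, which serves as the reference to limit error accumulation.
    ref = periods[len(periods) // 2]
    for p in periods:
        if p is not ref:
            p['period_trim'] = trim_fn(match_fn(ref['center'], p['center']))
    return periods
```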
Experiments and analyses
The relative distances between ground nadir points are calculated with formulas (1) and (2), the relative positions between points are calibrated, and the flight path is drawn, as shown in fig. 7. When drawing the flight track, the resolution of the ground track map is kept consistent with the resolution of the geometrically corrected images, which guarantees correct nesting of the subsequent images.
In fig. 7, the straight line indicates the rough flight path and the white triangles indicate the ground position corresponding to the capture of each image. Judging by the flight direction, ground positions 1 to 9 were photographed in sequence from the lower right corner to the upper left corner. Following the method of section 1.1, the image nadir coordinates are recorded in the corrected images, as shown in fig. 8, where the white dots indicate the image nadir coordinates. Because the three-step imaging mode is used and the left and right installation inclinations are large, the image nadir coordinates do not fall on the images.
Nesting the image nadir coordinates of fig. 8 with the corresponding track points of fig. 7 produces the image stitched from the geographic position coordinates, with the result shown in fig. 9. In fig. 9, the approximate position of each image is basically correct, but because the data accuracy is low, a large amount of misalignment appears in the stitching. The misalignment is obvious along roads and around distinct building targets, and the visual effect is poor, so further fine stitching is needed.
Feature points are extracted with the SIFT algorithm and matched; after the mismatched pairs are rejected, the position coordinates of each matching pair are recorded. The coordinate fine-tuning values between images are then computed from the coordinate relations of the matching pairs with formula (3). Fine-tuning the period images first and then the strip images yields the finely stitched image, shown in fig. 10.
Comparing the details, as shown in fig. 11, after the coordinate fine adjustment there is no obvious stitching misalignment between images, and distinct objects such as roads and buildings show no obvious deformation. The fine stitching is clearly better than the rough stitching, and the visual effect is better.

Claims (1)

1. An aviation three-step area array image splicing method, characterized by comprising the following steps:
S1, image stitching combined with geographic information: to complete the rough image stitching combined with geographic information, find the position coordinates of the image nadir point and the ground nadir point;
1. Determine the image nadir coordinates: the image nadir C is the intersection of the image plane with the plumb line through the photographing center S perpendicular to the ground plane; it lies in the image plane and is expressed as image point coordinates; when C lies on the image, its coordinates can be obtained by recording the extreme edge point of each image; when C is not on the image, a new image containing C must be constructed and its coordinates recorded accordingly;
2. Determine the ground nadir position coordinates:
the arc length between latitudes along the meridian and the arc length between longitudes along the parallel are obtained with formulas (1) and (2) below, respectively,
S_M = M_m·ΔB (1)
S_B = ΔL·N_0·cos B_0 (2)
where ΔB = B − B_0; M_m is the meridian radius of curvature at B_m; B_m is the mean of B and B_0; e′ is the second eccentricity of the ellipsoid, which enters through M_m; ΔL = L − L_0; and N_0 is the radius of curvature of the prime vertical circle;
using the arc-length distances corresponding to the longitude and latitude differences, calculate the Euclidean distance as the relative distance between two ground nadir points; once the heading direction is determined, the relative direction between the two nadir points is also determined, so the position and distance relationship between ground nadir points can be calculated from the longitude and latitude information;
S2, coordinate fine adjustment combined with feature matching
perform feature detection and matching between image overlap regions and eliminate the mismatched pairs; then select a coordinate fine-tuning strategy according to the characteristics of three-step imaging and use the correctly matched feature points to fine-tune the coordinates, correcting the errors introduced by the POS data;
S3, feature detection and matching
extract SIFT feature points in the overlap region; after feature detection, match the feature points and remove the mismatched pairs;
S4, coordinate fine adjustment using matching pairs
after the mismatched pairs are eliminated, record the position coordinates of each feature point in the current state; through the nesting of the image nadir and the ground nadir, nest the feature point coordinates of each image onto the ground point coordinates and record them as coordinates in the new image; the coordinate values requiring fine adjustment are then obtained with formula (3)
(Δr_l, Δc_l) = (r_ml1 − r_l1, c_ml1 − c_l1)
(Δr_r, Δc_r) = (r_mr1 − r_r1, c_mr1 − c_r1) (3)
where (Δr_l, Δc_l) is the offset between the center-strip and left-strip matching points, and (Δr_r, Δc_r) is the offset between the center-strip and right-strip matching points;
S5, the fine-tuning values so obtained are not a single value; to guarantee their precision, screen the multiple fine-tuning values obtained above: count the cumulative frequency of each fine-tuning value within a threshold range and take the value with the maximum cumulative frequency as the final fine-tuning value; after the in-period image coordinate fine adjustment is completed, perform the inter-strip image coordinate fine adjustment, taking the middle period image as the reference image and fine-tuning the period images on the upper and lower sides, which completes the fine image stitching.
CN202110230020.8A, filed 2021-03-02, priority 2021-03-02: Aerial three-step area array image splicing method (Active; granted as CN113191946B)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202110230020.8A | 2021-03-02 | 2021-03-02 | Aerial three-step area array image splicing method

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202110230020.8A | 2021-03-02 | 2021-03-02 | Aerial three-step area array image splicing method

Publications (2)

Publication Number | Publication Date
CN113191946A | 2021-07-30
CN113191946B | 2022-12-27

Family

ID: 76973061

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN202110230020.8A (Active; CN113191946B) | Aerial three-step area array image splicing method | 2021-03-02 | 2021-03-02

Country Status (1)

Country Link
CN (1) CN113191946B (en)



Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8818101B1 (en) * 2012-01-03 2014-08-26 Google Inc. Apparatus and method for feature matching in distorted images
US20190082103A1 (en) * 2017-09-11 2019-03-14 Qualcomm Incorporated Systems and methods for image stitching
CN107808362A (en) * 2017-11-15 2018-03-16 北京工业大学 A kind of image split-joint method combined based on unmanned plane POS information with image SURF features
CN111583110A (en) * 2020-04-24 2020-08-25 华南理工大学 Splicing method of aerial images
CN111815765A (en) * 2020-07-21 2020-10-23 西北工业大学 Heterogeneous data fusion-based image three-dimensional reconstruction method

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JI XIAOYUE et al.: "Real-time panorama stitching method for UAV sensor images based on the feature matching validity prediction of grey relational analysis", 2018 15th International Conference on Control, Automation, Robotics and Vision (ICARCV) *
RUIZHE SHAO et al.: "Fast Anchor Point Matching for Emergency UAV Image Stitching Using Position and Pose Information", Sensors *
刘津 (Liu Jin): "Research on key technologies of a rapid aerial image processing system", China Master's Theses Full-text Database *
岳广 (Yue Guang) et al.: "Cumulative error elimination method for aerial area array image stitching", Infrared and Laser Engineering *
郑晖 (Zheng Hui) et al.: "Fast stitching method for oblique images from UAV video streams", Science Technology and Engineering *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115115593A (en) * 2022-06-28 2022-09-27 先临三维科技股份有限公司 Scanning processing method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN113191946B (en) 2022-12-27

Similar Documents

Publication Publication Date Title
US10634500B2 (en) Aircraft and obstacle avoidance method and system thereof
CN110926474B (en) Satellite/vision/laser combined urban canyon environment UAV positioning and navigation method
US7844077B2 (en) Location measuring device and method
CN108759823B (en) Low-speed automatic driving vehicle positioning and deviation rectifying method on designated road based on image matching
CN110859044A (en) Integrated sensor calibration in natural scenes
CN104732482A (en) Multi-resolution image stitching method based on control points
US9495747B2 (en) Registration of SAR images by mutual information
CN110569861A (en) Image matching positioning method based on point feature and contour feature fusion
CN115272077B (en) Image stitching method and system based on vision fusion
CN113191946B (en) Aerial three-step area array image splicing method
KR100671504B1 (en) Method for correcting of aerial photograph image using multi photograph image
Gressin et al. Trajectory-based registration of 3d lidar point clouds acquired with a mobile mapping system
CN113221883B (en) Unmanned aerial vehicle flight navigation route real-time correction method
CN109358315B (en) Auxiliary target indirect positioning method and system
Oh et al. Automatic georeferencing of aerial images using stereo high-resolution satellite images
CN109003451B (en) Intersection OD matrix estimation method based on low-altitude unmanned aerial vehicle
US10859377B2 (en) Method for improving position information associated with a collection of images
CN116310901A (en) Debris flow material source dynamic migration identification method based on low-altitude remote sensing
Jende et al. Fully automatic feature-based registration of mobile mapping and aerial nadir images for enabling the adjustment of mobile platform locations in GNSS-denied urban environments
Belaroussi et al. Vehicle attitude estimation in adverse weather conditions using a camera, a GPS and a 3D road map
CN113160070B (en) Aviation three-step area array image geometric correction method
CN115294190B (en) Method for measuring landslide volume for municipal engineering based on unmanned aerial vehicle
RU2768219C1 (en) Method of identifying reference points on space images of area in absence of georeferencing parameters
Kwoh et al. AUTOMATIC RELATIVE REGISTRATION OF SPOT5 IMAGERY FOR COLOR MERGING
CN111928843B (en) Star sensor-based medium and long distance target autonomous detection and tracking method

Legal Events

Code | Title
PB01 | Publication
SE01 | Entry into force of request for substantive examination
GR01 | Patent grant