CN110084743B - Image splicing and positioning method based on multi-flight-zone initial flight path constraint - Google Patents
Image splicing and positioning method based on multi-flight-zone initial flight path constraint
- Publication number
- CN110084743B (application CN201910071912.0A)
- Authority
- CN
- China
- Prior art keywords
- image
- frame
- splicing
- flight
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformation in the plane of the image
- G06T3/40—Scaling the whole image or part thereof
- G06T3/4038—Scaling the whole image or part thereof for image mosaicing, i.e. plane images composed of plane sub-images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention belongs to the field of computer image processing and surveying and mapping, and relates to an image splicing and positioning method based on the starting-track constraint of multiple flight bands, aimed at the problem that two or even more flight bands frequently cross and overlap each other in multi-band splicing. For each flight band, a straight line is fitted to the initial flight path by linear regression, and the subsequent stitched images of that band are corrected against the fitted line. This reduces the accumulated error and ties each subsequent stitched image to the initial one, avoiding band intersection, bifurcation and similar phenomena. As long as each single band is a straight line, the method adapts to more complex flight paths rather than being limited to rectangular ones, and improves both the image stitching quality between flight bands and the positioning accuracy of the panorama.
Description
Technical Field
The invention relates to the field of computer image processing and mapping, in particular to an image splicing and positioning method based on starting track constraint of multiple flight zones, which is mainly used for improving the splicing quality of images between flight zones and the positioning accuracy of a panoramic image.
Background
UAVs (Unmanned Aerial Vehicles) are simple to operate, quick to respond, flexible in flight and low in cost, and are widely applied in surveying and mapping fields such as disaster relief, military reconnaissance, marine monitoring and environmental protection. Most of these applications ultimately require a panoramic view of the flight operations area together with GPS information for arbitrary points in that view.
At present, UAV aerial images are stitched mainly by feature-based methods. These work well within a single flight band, but when multiple bands are stitched, two or even more bands often intersect or overlap each other. Traditional UAV image positioning derives the GPS coordinates of other points in a frame from the GPS coordinates of the frame center, using the ground resolution and scale; however, the resolution and scale computed for each frame carry errors, and these errors accumulate gradually over the course of the stitching.
How to improve the stitching quality of images between flight bands and the positioning accuracy of the panorama is therefore a difficult problem in urgent need of a solution.
Disclosure of Invention
Aiming at the above technical problems, the invention provides an image splicing and positioning method based on the multi-flight-band initial track constraint, which improves the image stitching quality between flight bands and the positioning accuracy of the panorama.
In order to achieve the purpose, the invention adopts the technical scheme that:
the image splicing and positioning method based on the starting track constraint of the multi-flight-band comprises the following steps:
step 1. Splicing aerial zone images
1-1, completing the splicing of the first K frame images in the flight band according to the following method:
step 1-1-1, image preprocessing: graying each frame of received video image;
step 1-1-2, image feature extraction: detecting image features of each frame of image by using a SURF operator to obtain a feature point set, and calculating the feature points by using a BRISK feature descriptor to generate a feature description vector;
step 1-1-3, image feature matching: BF matching is carried out on the feature description vectors of the (k-1)-th frame and the k-th frame image obtained in step 1-1-2 to obtain an initial matching result, where k = 1, 2, 3, ..., K; then, abnormal matching values are eliminated from the initial matching result through the RANSAC algorithm to obtain the best matching point pair set; finally, according to the best matching point pair set, the perspective transformation homography matrix H_{k-1} of the (k-1)-th frame and the k-th frame is calculated by the least-squares method;
Step 1-1-4, image splicing: calculating the homography matrix of the k-th frame relative to the 1st frame:

H_k = H_{k-1} * H_{k-2} * ... * H_0;

the four corner points of the k-th frame image are transformed into new coordinates through the homography matrix H_k and updated to the corresponding position of the panorama, and the pixel center point of the stitched image is stored into the point set P_1;
Step 1-2. Splicing images from frame K+1 to frame K_1 in the 1st flight band
After the splicing of the first K frames of images in step 1-1 is completed, a straight line L is fitted according to the elements of the point set P_1;
for the k-th frame image, k = K+1, K+2, K+3, ..., K_1, where K_1 is the total number of video frames of the flight band;
the homography matrix H_{k-1} of the k-th frame image is obtained by the same processing as steps 1-1-1 to 1-1-3; according to the homography matrix H_{k-1}, the center point A(x_a, y_a) of the k-th frame image is calculated; point A is projected onto the straight line L to obtain the projection point B(x_b, y_b); the offset matrix of the k-th frame image is further obtained: B_k = [1, 0, x_b - x_a; 0, 1, y_b - y_a];
the update H'_{k-1} = H_{k-1} * B_k is applied, and the cumulative homography matrix of the k-th frame relative to the 1st frame is further obtained:

H_k = H'_{k-1} * H'_{k-2} * ... * H'_{K+1} * H'_K * H_{K-1} * ... * H_0;
the four corner points of the k-th frame image are transformed into new coordinates through the homography matrix H_k and updated to the corresponding position of the panorama;
step 2, splicing the 1 st hovering part: completing splicing of all images at the hovering position by adopting the same processing process in the step 1-1;
step 3, repeating steps 1 to 2 to complete the 2nd flight band, the 2nd hovering part, the 3rd flight band, the 3rd hovering part and so on in sequence, until the last flight band is spliced, completing the splicing of the panoramic image;
step 4, positioning any point of the panorama by GPS
Step 4-1, respectively storing UTM and pixel coordinates of pixel center points of all spliced frames in the panoramic image into a UTM point set and a pixel center point set, and solving a mapping function relation of the UTM point set and the pixel center point set by using perspective transformation;
step 4-2, for the pixel coordinates of any point in the panoramic image, the corresponding UTM coordinates are calculated according to the mapping function relation of step 4-1, realizing the positioning.
Further, in step 4-1, if the mapping function relation cannot be solved, points are removed from the point sets one at a time, each chosen with equal probability, until the mapping function relation can be solved.
The invention has the beneficial effects that:
the invention provides an image splicing and positioning method based on multi-flight-zone initial flight path constraint, which is characterized in that according to each flight-zone initial flight path, linear regression is used to determine a fitting straight line, and according to the fitting straight line, a spliced image behind the flight zone is corrected, so that accumulated errors are reduced, and a subsequent spliced image and the initial spliced image generate a dependency relationship, thereby avoiding the phenomena of flight-zone intersection or bifurcation and the like, and simultaneously being suitable for more complicated flight paths.
Drawings
FIG. 1 is a schematic diagram of an offset correction vector according to the present invention.
FIG. 2 is a diagram illustrating the result of the initial track constraint over three flight bands in an embodiment of the present invention.
Detailed Description
The invention is explained in further detail below with reference to the figures and examples; for the convenience of describing the present invention, the terms involved are first explained as necessary:
SURF: SURF (Speeded Up Robust Features) is a robust image recognition and description algorithm that inherits from and develops the SIFT algorithm. SURF feature extraction includes constructing the Hessian matrix, generating candidate feature points, constructing the scale space, locating the feature points, assigning the main directions of the feature points, generating feature point descriptors, and matching feature points; through these steps the SURF feature matching point pairs of adjacent images are obtained. However, the SURF algorithm is relatively time-consuming in its feature description and feature matching stages and struggles to meet strong real-time requirements; the invention therefore adopts the SURF operator only to detect the feature points of the image and determine their main directions.
BRISK: binary Robust exchangeable keys provide a feature extraction algorithm and a Binary feature description operator, when images with large blur are matched, the BRISK algorithm is most excellent in a plurality of algorithms, but the algorithm feature detection operator is a FAST operator, and the fineness and accuracy of feature points extracted by the operator are not as good as those of SIFT and S URF operators; in consideration of the detection speed and the robustness of fuzzy splicing, the invention adopts a BRISK operator as a feature descriptor.
BF: the BF (Brute Force) algorithm is a common pattern matching algorithm. Its idea is to compare the first character of the target string S with the first character of the pattern string T; if they are equal, the second characters of S and T are compared in turn; if not, the second character of S is compared with the first character of T, continuing in this way until the final matching result is obtained. Brute-force feature matching likewise compares every descriptor of one image against every descriptor of the other.
RANSAC: random Sample Consensus is an algorithm for calculating mathematical model parameters of data according to a set of Sample data sets containing abnormal data to obtain valid Sample data.
Homography matrix: in computer vision, the homography of a plane is defined as the projection mapping of one plane to another, and the homography matrix is the mapping matrix describing the mapping relationship.
UTM coordinates: UTM (Universal Transverse Mercator) coordinates are a planar rectangular coordinate system widely used in topographic maps, as a reference grid for satellite images and natural-resource databases, and in other applications requiring precise positioning. Because the stitched result is a two-dimensional planar image, the original GPS coordinates must be converted into UTM coordinates before they can be used.
The embodiment provides an image splicing and positioning method based on multi-flight-zone initial flight path constraint, which can improve the splicing quality of images between flight zones and the positioning accuracy of a panoramic image, and comprises five steps of image preprocessing, image feature extraction, image feature matching, flight path constraint and GPS positioning of any point of the panoramic image; the method comprises the following specific steps:
step 1. Splicing the 1 st flight band image
1-1, completing the splicing of the first K frame images in the 1st flight band according to the following method:
step 1-1-1 image preprocessing: graying each frame of video image
Graying: carrying out weighted average on the RGB components of the color image by corresponding weights; in this embodiment, a more reasonable grayscale image can be obtained by performing weighted average on the RGB three components according to the following formula:
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j);
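A minimal numpy sketch of this weighting (illustrative only; the patent does not prescribe an implementation, and library converters such as OpenCV's use slightly different ITU-R weights):

```python
import numpy as np

def to_gray(rgb):
    """Weighted-average graying: f(i,j) = 0.30*R + 0.59*G + 0.11*B."""
    weights = np.array([0.30, 0.59, 0.11])
    return rgb.astype(np.float64) @ weights

frame = np.zeros((2, 2, 3), dtype=np.uint8)
frame[0, 0] = [255, 255, 255]          # a white pixel
gray = to_gray(frame)
print(gray[0, 0])                       # 255.0, since the weights sum to 1.00
```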
step 1-1-2 image feature extraction
SURF adopts an approximate Hessian matrix to detect characteristic points, and integral images are used for convolution operation, so that the operation is reduced, and the characteristic extraction speed is improved; the BRISK binary descriptor directly generates a binary bit string through simple intensity comparison of pixel points around the feature points, the calculation of the similar distance between the feature points is simple and effective, and the occupied memory is small; the invention adopts SURF to detect the feature points and adopts BRISK to calculate the feature descriptors; the specific process is as follows:
a) Constructing the Hessian matrix and constructing the scale space: let X = (x, y) be a point on the image; the Hessian matrix H(X, σ) at scale σ is defined as:

H(X, σ) = [ L_xx(X, σ), L_xy(X, σ); L_xy(X, σ), L_yy(X, σ) ]

where L_xx is the convolution of the second-order Gaussian derivative with the image at point X, L_xy and L_yy have analogous meanings, and σ is the spatial scale; when the discriminant of the Hessian matrix reaches a local maximum, the position of a key point is located;
b) Detecting the feature points: in the obtained scale space, each pixel point processed by the Hessian matrix is compared with the 26 points in its neighborhood across the two-dimensional image space and the scale space to preliminarily locate key points; key points with weak energy and wrongly located key points are then filtered out, screening out the final stable feature points;
c) Calculating the BRISK feature descriptor: the stable feature points are passed through the BRISK descriptor algorithm to obtain the corresponding BRISK binary feature descriptors (feature description vectors);
step 1-1-3, matching image features, and calculating a matching result to obtain a homography matrix:
a) The BRISK descriptor is a binary bit string consisting of 1s and 0s, so matching can be performed at high speed using the Hamming distance (an XOR operation), which is outstandingly efficient; BF matching is carried out on the feature description vectors of the (k-1)-th frame and the k-th frame image obtained in step 1-1-2 to obtain an initial matching result, where k = 1, 2, 3, ..., K;
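Because the BRISK descriptor is a binary string, brute-force matching reduces to an XOR plus a popcount. A minimal numpy sketch (descriptor bytes and sizes below are illustrative, not from the patent):

```python
import numpy as np

# one-byte popcount lookup table
POPCOUNT = np.array([bin(i).count("1") for i in range(256)], dtype=np.uint8)

def hamming_bf_match(desc_a, desc_b):
    """Brute-force match binary descriptors (rows of uint8 bytes) by Hamming
    distance: XOR the bytes, count the set bits, take the nearest neighbour."""
    x = desc_a[:, None, :] ^ desc_b[None, :, :]      # pairwise XOR
    dist = POPCOUNT[x].sum(axis=2, dtype=np.int64)   # pairwise Hamming distance
    return dist.argmin(axis=1), dist.min(axis=1)

a = np.array([[0b1010, 0], [0b1111, 0]], dtype=np.uint8)
b = np.array([[0b1110, 0], [0b1010, 0]], dtype=np.uint8)
idx, d = hamming_bf_match(a, b)
print(idx.tolist(), d.tolist())   # [1, 0] [0, 1]: a[0] equals b[1]; a[1] is 1 bit from b[0]
```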
b) The RANSAC algorithm has strong fault-tolerant capability and robustness on noise points and mismatching points, and can better eliminate mismatching point pairs; rejecting abnormal matching values from the initial matching result in a) through a RANSAC algorithm to obtain a stable and high-precision optimal matching point pair set;
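The consensus loop behind RANSAC can be sketched with a deliberately simple model, a pure 2-D translation instead of the homography actually fitted in this method (all numbers below are illustrative):

```python
import numpy as np

def ransac_translation(src, dst, iters=200, thresh=1.0, seed=0):
    """Toy RANSAC: repeatedly fit a model from a minimal random sample
    (here one point pair -> a translation), count inliers within `thresh`,
    and keep the model with the largest consensus set."""
    rng = np.random.default_rng(seed)
    best_t, best_mask = None, np.zeros(len(src), dtype=bool)
    for _ in range(iters):
        i = rng.integers(len(src))                        # minimal sample
        t = dst[i] - src[i]                               # candidate model
        mask = np.linalg.norm(src + t - dst, axis=1) < thresh
        if mask.sum() > best_mask.sum():
            best_t, best_mask = t, mask
    return best_t, best_mask

rng = np.random.default_rng(1)
src = rng.uniform(0, 100, size=(30, 2))
dst = src + [5.0, -3.0]                  # true translation
dst[::10] += 40.0                        # inject 3 gross mismatches (outliers)
t, mask = ransac_translation(src, dst)
print(t, int(mask.sum()))                # recovers the translation with 27 inliers
```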
c) According to the best matching point pair set, the perspective transformation homography matrix H_{k-1} of the (k-1)-th frame and the k-th frame is calculated by the least-squares method;
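A least-squares homography from matched pairs can be sketched via the standard DLT formulation (a generic sketch under common conventions; the patent does not give its exact solver):

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: stack two linear constraints per matched pair (src -> dst),
    solve A h = 0 in the least-squares sense via SVD, normalise h33 = 1."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

# sanity check: recover a known translation from 4+ pairs
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [2, 3]], dtype=float)
dst = src + [10.0, 20.0]
H = fit_homography(src, dst)
print(np.round(H, 6))   # ~ [[1, 0, 10], [0, 1, 20], [0, 0, 1]]
```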
Step 1-1-4, image splicing;
calculating a homography matrix of the kth frame relative to the 1 st frame: h k =H k-1 *H k-2 *...*H 0 (ii) a Map the k frameThe four corners of the image are passed through a homography matrix H k The new coordinates are converted to be updated to the corresponding position of the panorama, and the pixel center point of the spliced image is stored to a point set P 1 ;
Step 1-2. Splicing images from frame K+1 to frame K_1 in the 1st flight band
After the image splicing of the first K frames is completed according to step 1-1, a straight line L_1 is fitted according to the elements of the point set P_1;
for the k-th frame image, k = K+1, K+2, K+3, ..., K_1, where K_1 is the total number of video frames of the 1st flight band;
the homography matrix H_{k-1} of the k-th frame image is obtained by the same processing as steps 1-1-1 to 1-1-3; according to the homography matrix H_{k-1}, the center point of the k-th frame image, namely point A with coordinates (x_a, y_a), is calculated, as shown in FIG. 1; point A is projected onto the straight line L_1 to obtain the projection point, namely point B with coordinates (x_b, y_b), as shown in FIG. 1; the offset matrix of the k-th frame image is further obtained: B_k = [1, 0, x_b - x_a; 0, 1, y_b - y_a]; and the update H'_{k-1} = H_{k-1} * B_k is applied;
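The band-start constraint itself, fitting the track line by linear regression over the stored frame centers, projecting the current center A onto it to get B, and building the translation B_k, can be sketched as follows (a sketch that assumes the fitted line is non-vertical, as the `np.polyfit` parameterisation requires):

```python
import numpy as np

def offset_matrix(center_a, track_centers):
    """Fit the start-track line by linear regression (y = m x + c) over the
    stored frame centers, orthogonally project A = (x_a, y_a) onto it to get
    B = (x_b, y_b), and return B_k = [[1,0,x_b-x_a],[0,1,y_b-y_a],[0,0,1]]."""
    m, c = np.polyfit(track_centers[:, 0], track_centers[:, 1], deg=1)
    xa, ya = center_a
    xb = (xa + m * (ya - c)) / (1.0 + m * m)   # foot of the perpendicular
    yb = m * xb + c
    B = np.eye(3)
    B[0, 2], B[1, 2] = xb - xa, yb - ya
    return B

centers = np.array([[0, 0], [1, 1], [2, 2]], dtype=float)  # track along y = x
B = offset_matrix((2.0, 0.0), centers)
print(B[:2, 2])   # [-1.  1.]: A = (2, 0) is pulled to its projection (1, 1)
```

The corrected homography would then be formed as H_{k-1} @ B, matching the update H'_{k-1} = H_{k-1} * B_k described in this step.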
The cumulative homography matrix of the k-th frame relative to the 1st frame is further obtained:

H_k = H'_{k-1} * H'_{k-2} * ... * H'_{K+1} * H'_K * H_{K-1} * ... * H_0;
the four corner points of the k-th frame image are transformed into new coordinates through the homography matrix H_k and updated to the corresponding position of the panorama;
step 2. Splicing the 1 st hovering part
After all the frame images of the 1st flight band are spliced in step 1, the cumulative homography matrix obtained is:

H_{A1} = H'_{K_1-1} * H'_{K_1-2} * ... * H'_{K+1} * H'_K * H_{K-1} * ... * H_0;
completing splicing of all images at the hovering position by adopting the same processing process in the step 1-1;
step 3, repeating steps 1 to 2 to complete the 2nd flight band, the 2nd hovering part, the 3rd flight band, the 3rd hovering part and so on in sequence, until the last flight band is spliced, completing the splicing of the panoramic image;
step 4, positioning any point of the panorama by GPS
After splicing is completed, in order to improve positioning accuracy, the spatial geometric process of imaging is not modeled; instead, the image deformation is simulated mathematically in a direct manner. The specific steps are as follows:
Step 4-1, after splicing is completed, the UTM coordinates and the pixel coordinates of the pixel center points of all stitched frames in the panorama are stored into a UTM point set and a pixel center point set respectively, and the mapping function relation between the two sets is solved by perspective transformation; considering the possible empty-matrix problem, if the mapping function relation cannot be solved, points are removed from the point sets one at a time, each chosen with equal probability, until the relation can be solved;
step 4-2, for the pixel coordinates of any point in the panoramic image, the corresponding UTM coordinates are calculated according to the mapping function relation of step 4-1, realizing the positioning.
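Step 4's mapping can be sketched as a least-squares perspective fit from pixel centers to UTM coordinates (the coordinates below are toy values relative to a local UTM origin; the patent does not specify its solver):

```python
import numpy as np

def fit_perspective(pix, utm):
    """Solve the 8-parameter perspective mapping pixel -> UTM (h33 fixed to 1)
    in the least-squares sense; needs at least 4 non-degenerate pairs."""
    A, b = [], []
    for (x, y), (u, v) in zip(pix, utm):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(b, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def pixel_to_utm(H, x, y):
    """Map one pixel point through the fitted perspective transform."""
    u, v, w = H @ [x, y, 1.0]
    return u / w, v / w

# frame-center pixels and their UTM offsets from a local origin (toy values)
pix = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
utm = np.array([[0, 0], [50, 0], [50, 50], [0, 50]], dtype=float)
H = fit_perspective(pix, utm)
print(pixel_to_utm(H, 50, 50))   # ~ (25.0, 25.0): a half-resolution scaling
```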
In the embodiment, the result of constraining the initial tracks of three flight bands by the method is shown in FIG. 2. It can be seen from the figure that the rut marks match closely and the left-right error of the stitching is extremely small, so an accurate stitched image can be obtained according to the method, laying a good foundation for stitching the UAV flight images and for geodetic positioning.
While the invention has been described with reference to specific embodiments, any feature disclosed in this specification may, unless expressly stated otherwise, be replaced by an alternative feature serving the same, an equivalent, or a similar purpose; all of the disclosed features, or all of the method or process steps, may be combined in any combination, except for mutually exclusive features and/or steps.
Claims (2)
1. The image splicing and positioning method based on the starting track constraint of the multiple flight zones comprises the following steps:
step 1. Splicing aerial belt images
1-1, completing the splicing of the first K frame images in the flight band according to the following method:
step 1-1-1 image preprocessing: graying each frame of received video image;
step 1-1-2, image feature extraction: detecting image features of each frame of image by using a SURF operator to obtain a feature point set, and calculating the feature points by using a BRISK feature descriptor to generate a feature description vector;
step 1-1-3, image feature matching: BF matching is carried out on the feature description vectors of the (k-1)-th frame and the k-th frame image obtained in step 1-1-2 to obtain an initial matching result, where k = 1, 2, 3, ..., K; then, abnormal matching values are eliminated from the initial matching result through the RANSAC algorithm to obtain the best matching point pair set; finally, according to the best matching point pair set, the perspective transformation homography matrix H_{k-1} of the (k-1)-th frame and the k-th frame is calculated by the least-squares method;
Step 1-1-4, image splicing: calculating the homography matrix of the k-th frame relative to the 1st frame:

H_k = H_{k-1} * H_{k-2} * ... * H_0;

the four corner points of the k-th frame image are transformed into new coordinates through the homography matrix H_k and updated to the corresponding position of the panorama, and the pixel center point of the stitched image is stored into the point set P_1;
Step 1-2. Splicing images from frame K+1 to frame K_1 in the 1st flight band
After the image splicing of the first K frames in step 1-1 is completed, a straight line L is fitted according to the elements of the point set P_1;
for the k-th frame image, k = K+1, K+2, K+3, ..., K_1, where K_1 is the total number of video frames of the flight band;
the homography matrix H_{k-1} of the k-th frame image is obtained by the same processing as steps 1-1-1 to 1-1-3; according to the homography matrix H_{k-1}, the center point A(x_a, y_a) of the k-th frame image is calculated; point A is projected onto the straight line L to obtain the projection point B(x_b, y_b); the offset matrix of the k-th frame image is further obtained: B_k = [1, 0, x_b - x_a; 0, 1, y_b - y_a];
the update H'_{k-1} = H_{k-1} * B_k is applied, and the cumulative homography matrix of the k-th frame relative to the 1st frame is further obtained:

H_k = H'_{k-1} * H'_{k-2} * ... * H'_{K+1} * H'_K * H_{K-1} * ... * H_0;
the four corner points of the k-th frame image are transformed into new coordinates through the homography matrix H_k and updated to the corresponding position of the panorama;
step 2, splicing the 1 st hovering part: completing splicing of all images at the hovering position by adopting the same processing process in the step 1-1;
step 3, repeating steps 1 to 2 to complete the 2nd flight band, the 2nd hovering part, the 3rd flight band, the 3rd hovering part and so on in sequence, until the last flight band is spliced, completing the splicing of the panoramic image;
step 4, positioning any point of the panorama by GPS
Step 4-1, respectively storing UTM and pixel coordinates of pixel center points of all spliced frames in the panoramic image into a UTM point set and a pixel center point set, and solving a mapping function relation of the UTM point set and the pixel center point set by using perspective transformation;
step 4-2, for the pixel coordinates of any point in the panoramic image, the corresponding UTM coordinates are calculated according to the mapping function relation of step 4-1, realizing the positioning.
2. The image splicing and positioning method based on the multi-flight-band starting track constraint according to claim 1, wherein in step 4-1, if the mapping function relation cannot be solved, points are removed from the point sets one at a time with equal probability until the relation can be solved.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910071912.0A CN110084743B (en) | 2019-01-25 | 2019-01-25 | Image splicing and positioning method based on multi-flight-zone initial flight path constraint |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910071912.0A CN110084743B (en) | 2019-01-25 | 2019-01-25 | Image splicing and positioning method based on multi-flight-zone initial flight path constraint |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110084743A CN110084743A (en) | 2019-08-02 |
CN110084743B true CN110084743B (en) | 2023-04-14 |
Family
ID=67413012
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910071912.0A Active CN110084743B (en) | 2019-01-25 | 2019-01-25 | Image splicing and positioning method based on multi-flight-zone initial flight path constraint |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110084743B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111461986B (en) * | 2020-04-01 | 2023-11-03 | 深圳市科卫泰实业发展有限公司 | Night real-time two-dimensional image stitching method for unmanned aerial vehicle |
CN111461013B (en) * | 2020-04-01 | 2023-11-03 | 深圳市科卫泰实业发展有限公司 | Unmanned aerial vehicle-based real-time fire scene situation awareness method |
CN111507901B (en) * | 2020-04-15 | 2023-08-15 | 中国电子科技集团公司第五十四研究所 | Aerial image splicing and positioning method based on aerial GPS and scale invariant constraint |
CN112884767B (en) * | 2021-03-26 | 2022-04-26 | 长鑫存储技术有限公司 | Image fitting method |
Citations (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0987873A2 (en) * | 1998-09-15 | 2000-03-22 | Hewlett-Packard Company | Navigation system for handheld scanner |
CN101442619A (en) * | 2008-12-25 | 2009-05-27 | 武汉大学 | Method for splicing non-control point image |
CN101777129A (en) * | 2009-11-25 | 2010-07-14 | 中国科学院自动化研究所 | Image matching method based on feature detection |
CN102201115A (en) * | 2011-04-07 | 2011-09-28 | 湖南天幕智能科技有限公司 | Real-time panoramic image stitching method of aerial videos shot by unmanned plane |
JP2012242321A (en) * | 2011-05-23 | 2012-12-10 | Topcon Corp | Aerial photograph imaging method and aerial photograph imaging device |
CN102967859A (en) * | 2012-11-14 | 2013-03-13 | 电子科技大学 | Forward-looking scanning radar imaging method |
CN103020934A (en) * | 2012-12-12 | 2013-04-03 | 武汉大学 | Seamless automatic image splicing method resistant to subtitle interference |
CN104156968A (en) * | 2014-08-19 | 2014-11-19 | 山东临沂烟草有限公司 | Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method |
CN104574347A (en) * | 2013-10-24 | 2015-04-29 | 南京理工大学 | On-orbit satellite image geometric positioning accuracy evaluation method on basis of multi-source remote sensing data |
WO2015075700A1 (en) * | 2013-11-25 | 2015-05-28 | First Resource Management Group Inc. | Apparatus for and method of forest-inventory management |
GB201620652D0 (en) * | 2016-12-05 | 2017-01-18 | Gaist Solutions Ltd | Method and system for creating images |
CN106485655A (en) * | 2015-09-01 | 2017-03-08 | 张长隆 | A kind of taken photo by plane map generation system and method based on quadrotor |
CN106934795A (en) * | 2017-01-23 | 2017-07-07 | 陕西师范大学 | The automatic testing method and Forecasting Methodology of a kind of glue into concrete beam cracks |
CN107016646A (en) * | 2017-04-12 | 2017-08-04 | 长沙全度影像科技有限公司 | One kind approaches projective transformation image split-joint method based on improved |
CN107274336A (en) * | 2017-06-14 | 2017-10-20 | 电子科技大学 | A kind of Panorama Mosaic method for vehicle environment |
CN107808362A (en) * | 2017-11-15 | 2018-03-16 | 北京工业大学 | A kind of image split-joint method combined based on unmanned plane POS information with image SURF features |
CN108596982A (en) * | 2018-04-24 | 2018-09-28 | 深圳市航盛电子股份有限公司 | A kind of easy vehicle-mounted multi-view camera viewing system scaling method and device |
CN108921847A (en) * | 2018-08-08 | 2018-11-30 | 长沙理工大学 | Bridge floor detection method based on machine vision |
CN109087244A (en) * | 2018-07-26 | 2018-12-25 | 贵州火星探索科技有限公司 | A kind of Panorama Mosaic method, intelligent terminal and storage medium |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7097311B2 (en) * | 2003-04-19 | 2006-08-29 | University Of Kentucky Research Foundation | Super-resolution overlay in multi-projector displays |
US20100194851A1 (en) * | 2009-02-03 | 2010-08-05 | Aricent Inc. | Panorama image stitching |
US9330484B2 (en) * | 2012-08-02 | 2016-05-03 | Here Global B.V. | Plane panorama location correction in three-dimensional mapping |
JP6236259B2 (en) * | 2013-09-06 | 2017-11-22 | 株式会社東芝 | Image processing apparatus, image processing method, and image processing program |
US9824486B2 (en) * | 2013-12-16 | 2017-11-21 | Futurewei Technologies, Inc. | High resolution free-view interpolation of planar structure |
KR102328020B1 (en) * | 2015-04-03 | 2021-11-17 | 한국전자통신연구원 | System and method for displaying panorama image using single look-up table |
US10339390B2 (en) * | 2016-02-23 | 2019-07-02 | Semiconductor Components Industries, Llc | Methods and apparatus for an imaging system |
US10227119B2 (en) * | 2016-04-15 | 2019-03-12 | Lockheed Martin Corporation | Passive underwater odometry using a video camera |
-
2019
- 2019-01-25 CN CN201910071912.0A patent/CN110084743B/en active Active
Non-Patent Citations (2)
Title |
---|
Multi-viewpoint video stitching in a panoramic-view parking assistance system; Lu Guanming et al.; Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition); 2016-06-29 (No. 03); full text * |
Panoramic image stitching method based on constrained approximating projection transformation; Yu Qingzhou et al.; Journal of Jimei University (Natural Science Edition); 2015-11-25 (No. 06); full text * |
Also Published As
Publication number | Publication date |
---|---|
CN110084743A (en) | 2019-08-02 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110084743B (en) | Image splicing and positioning method based on multi-flight-zone initial flight path constraint | |
CN111028277B (en) | SAR and optical remote sensing image registration method based on pseudo-twin convolution neural network | |
CN103822616B (en) | Remote sensing image matching method combining graph segmentation with topographic relief constraints | |
CN111507901B (en) | Aerial image splicing and positioning method based on aerial GPS and scale invariant constraint | |
Jiang et al. | Multiscale locality and rank preservation for robust feature matching of remote sensing images | |
CN110966991A (en) | Single unmanned aerial vehicle image positioning method without control point | |
US8666170B2 (en) | Computer system and method of matching for images and graphs | |
CN110992263B (en) | Image stitching method and system | |
Li et al. | RIFT: Multi-modal image matching based on radiation-invariant feature transform | |
CN106919944A (en) | Rapid wide-angle image recognition method based on the ORB algorithm | |
CN107610097A (en) | Instrument localization method, device and terminal device | |
Perdigoto et al. | Calibration of mirror position and extrinsic parameters in axial non-central catadioptric systems | |
CN114255197A (en) | Infrared and visible light image self-adaptive fusion alignment method and system | |
CN110929782A (en) | River channel abnormity detection method based on orthophoto map comparison | |
Huang et al. | SAR and optical images registration using shape context | |
Bellavia et al. | Image orientation with a hybrid pipeline robust to rotations and wide-baselines | |
Hu et al. | Multiscale structural feature transform for multi-modal image matching | |
JP3863014B2 (en) | Object detection apparatus and method | |
CN114358133A (en) | Method for detecting looped frames based on semantic-assisted binocular vision SLAM | |
CN115456870A (en) | Multi-image splicing method based on external parameter estimation | |
Mehrdad et al. | Toward real time UAVs' image mosaicking | |
CN114565653A (en) | Heterogeneous remote sensing image matching method with rotation change and scale difference | |
Brink | Stereo vision for simultaneous localization and mapping | |
CN109919998B (en) | Satellite attitude determination method and device and terminal equipment | |
CN114078245A (en) | Image processing method and image processing device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||