CN113012084A - Unmanned aerial vehicle image real-time stitching method and device, and terminal device
- Publication number: CN113012084A
- Application number: CN202110239571.0A
- Authority: CN (China)
- Prior art keywords: image; matching point pair; stitching
- Legal status: Pending (the listed status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G06T5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T2207/20221: Image fusion; image merging
- G06T2207/20224: Image subtraction
Abstract
The embodiment of the invention discloses a real-time unmanned aerial vehicle image stitching method, device and terminal device. The method comprises the following steps: receiving an image transmitted by the unmanned aerial vehicle to obtain an image to be stitched; matching the image to be stitched against the processed images based on extracted feature points to obtain an image pair and prescreened matching point pairs; eliminating erroneous prescreened matching points based on the fundamental matrix and the homography matrix to obtain the remaining matching point pairs; obtaining the attitude information of the image to be stitched and the three-dimensional space coordinates of the remaining matching point pairs through free network adjustment, and constructing a triangulation network to form a ground elevation model; performing orthorectification on the image to be stitched according to the ground elevation model to form a corrected image; and stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image. In this way the error accumulation produced during stitching by existing methods is eliminated, and the quality of real-time image stitching is improved.
Description
Technical Field
The invention relates to the field of image stitching, and in particular to a real-time unmanned aerial vehicle image stitching method, device and terminal device.
Background
In recent years, unmanned aerial vehicles (UAVs) have been widely used in surveying, emergency response, disaster relief and similar fields thanks to their efficiency, flexibility and low cost. UAV aerial photogrammetry greatly reduces the work cycle, manpower and financial investment of traditional aerial photogrammetry, which makes it especially valuable in emergency and disaster-relief scenarios. A UAV capturing aerial images in real time can transmit the image data back to the ground station as it flies, where an aerial-data processing method stitches the received images dynamically in real time. In the prior art, image stitching is performed directly with the acquired POS (position and orientation system) data: the images themselves are not processed, and whether working from the original images or their central projections, only rotation and scaling are applied, which introduces many errors into the stitching process.
Disclosure of Invention
In order to overcome the error accumulation produced during stitching in the prior art, and to improve the quality of existing real-time image stitching, one purpose of the application is to provide an unmanned aerial vehicle image real-time stitching method, comprising the following steps:
receiving an image transmitted by the unmanned aerial vehicle to obtain an image to be stitched;
extracting the feature points of the image to be stitched and the feature points of the processed images, and matching the image to be stitched with the processed images based on the feature points to obtain an image pair and prescreened matching point pairs;
calculating a fundamental matrix from the prescreened matching point pairs, calculating the residual values of all prescreened matching point pairs from the fundamental matrix, and removing the prescreened matching points whose residuals exceed a preset residual threshold to obtain first matching point pairs;
calculating a homography matrix from the first matching point pairs, transforming the first matching point pairs with the homography matrix, calculating coordinate differences, and removing the first matching point pairs whose differences exceed a preset coordinate-difference threshold to obtain second matching point pairs;
performing free network adjustment on the image pair and the second matching point pairs to obtain the attitude information of the unmanned aerial vehicle when it captured the image to be stitched and the three-dimensional space coordinates of the second matching point pairs;
constructing a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model;
performing orthorectification on the image to be stitched according to the ground elevation model to form a corrected image;
and stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image.
In an optional implementation, the step of receiving the image transmitted by the unmanned aerial vehicle to obtain the image to be stitched includes:
receiving images and POS data transmitted by the unmanned aerial vehicle, numbering the images in transmission order, and obtaining the POS data and image number of each image;
when the image number is smaller than or equal to a preset image-number threshold, processing the image corresponding to that number to generate a processed image;
and when the image number is larger than the preset image-number threshold, taking the image corresponding to that number as the image to be stitched.
In an optional implementation, the step of extracting the feature points of the image to be stitched and of the processed images, and matching them based on the feature points to obtain an image pair and prescreened matching point pairs, includes:
calculating the Euclidean distance between the image to be stitched and each processed image from the POS data of the image to be stitched, extracting a preset number of processed images in order of increasing distance, and forming image pairs from the image to be stitched and the extracted processed images;
searching for and matching feature points between the image to be stitched and the extracted processed images to obtain initial matching point pairs;
and calculating the reprojection error of the initial matching point pairs, and eliminating those whose reprojection error exceeds a preset reprojection-error threshold to obtain the prescreened matching point pairs.
In an optional implementation, the step of constructing a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model includes:
constructing an elevation histogram from the three-dimensional space coordinates of the second matching point pairs, and identifying the histogram ranges in which the number of matching points falls below a preset count threshold;
deleting the matching points in those ranges, constructing a triangulation network from the three-dimensional space coordinates of the remaining matching points, and combining the triangulation network with the attitude information to form the ground elevation model.
In an optional implementation, the step of stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image includes:
calculating the pixel differences between the corrected image and the matching processed image over their overlapping area to obtain a difference image;
searching the difference image for a stitching seam;
and performing morphological processing on the difference image and blurring the stitching seam to obtain the stitched image.
In an optional implementation, after the step of stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image, the method further includes:
slicing the stitched image into image tiles;
and adding the image tiles to map data.
In an optional implementation, before the step of stitching the corrected image with the processed image corresponding to the image pair, the method includes:
performing color adjustment on the corrected image, taking the first image transmitted by the unmanned aerial vehicle as the standard;
the stitching step then comprises:
stitching the color-adjusted corrected image with the corresponding processed image of the image pair to obtain the stitched image.
Another purpose of the application is to provide an unmanned aerial vehicle image real-time stitching apparatus, comprising the following modules:
a receiving module, configured to receive the image transmitted by the unmanned aerial vehicle to obtain an image to be stitched;
a matching module, configured to extract the feature points of the image to be stitched and of the processed images, and to match them based on the feature points to obtain an image pair and prescreened matching point pairs;
a residual-elimination module, configured to calculate a fundamental matrix from the prescreened matching point pairs, calculate the residual values of all prescreened matching point pairs from the fundamental matrix, and eliminate the matching points exceeding a preset residual threshold to obtain first matching point pairs;
a coordinate-difference elimination module, configured to calculate a homography matrix from the first matching point pairs, transform the first matching point pairs with the homography matrix, calculate coordinate differences, and eliminate the first matching point pairs exceeding a preset coordinate-difference threshold to obtain second matching point pairs;
a free network adjustment module, configured to perform free network adjustment on the image pair and the second matching point pairs to obtain the attitude information of the unmanned aerial vehicle when capturing the image to be stitched and the three-dimensional space coordinates of the second matching point pairs;
a modeling module, configured to construct a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model;
a correction module, configured to perform orthorectification on the image to be stitched according to the ground elevation model to form a corrected image;
and a stitching module, configured to stitch the corrected image with the processed image corresponding to the image pair to obtain a stitched image.
Another purpose of the application is to provide a terminal device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the unmanned aerial vehicle image real-time stitching method provided by the application.
Another purpose of the application is to provide a readable storage medium storing a computer program which, when executed, implements the steps of the unmanned aerial vehicle image real-time stitching method provided by the application.
Compared with the prior art, the application has the following beneficial effects:
The embodiments of the application provide a real-time unmanned aerial vehicle image stitching method, apparatus, terminal device and readable storage medium. The method receives images transmitted by the unmanned aerial vehicle to obtain images to be stitched; matches each image to be stitched against the processed images using extracted feature points to obtain image pairs and prescreened matching point pairs; eliminates erroneous matches using the fundamental matrix and the homography matrix to obtain the remaining matching point pairs; obtains the attitude information of the image to be stitched and the three-dimensional coordinates of the remaining matching point pairs through free network adjustment, and constructs a triangulation network to form a ground elevation model; orthorectifies the image to be stitched against this model to form a corrected image; and stitches the corrected image with the processed image of the image pair to obtain the stitched image. In this way the error accumulation of existing stitching methods is eliminated and the quality of real-time stitching is improved.
Drawings
In order to illustrate the technical solution of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and should therefore not be regarded as limiting its scope. A person skilled in the art can derive other related drawings from these without inventive effort. Like components are numbered alike across the figures.
Fig. 1 is a flowchart of a method for real-time stitching of unmanned aerial vehicle images according to an embodiment of the present application;
Fig. 2 is a sub-flowchart of step 110 of the method;
Fig. 3 is a sub-flowchart of step 120 of the method;
Fig. 4 is a sub-flowchart of step 160 of the method;
Fig. 5 is a block diagram of an apparatus for real-time stitching of unmanned aerial vehicle images according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. The components of the embodiments of the present application, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present application, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
In the description of the present application, it is noted that the terms "first", "second", "third", and the like are used merely for distinguishing between descriptions and are not intended to indicate or imply relative importance.
In the description of the present application, it should also be noted that, unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which the embodiments of the present invention belong. Terms such as those defined in commonly used dictionaries should be interpreted consistently with their meaning in the context of the relevant art, and should not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Referring to fig. 1, fig. 1 is a flowchart of the real-time unmanned aerial vehicle image stitching method according to this embodiment. Each step of the method is described in detail below.
S110, receiving the image transmitted by the unmanned aerial vehicle to obtain an image to be stitched.
In this embodiment, the user controls the unmanned aerial vehicle to perform aerial imaging, capturing images in real time. The unmanned aerial vehicle transmits the captured images back to the terminal device in real time, and the terminal device receives them to obtain the images to be stitched for subsequent processing.
Referring to fig. 2, in an optional implementation, the step of receiving the image transmitted by the drone to obtain the image to be stitched includes:
S111, receiving the images and POS data transmitted by the unmanned aerial vehicle, numbering the images in transmission order, and obtaining the POS data and image number of each image.
During flight operations, each unmanned aerial vehicle image usually carries corresponding POS data, which simplifies the subsequent processing. POS data mainly consists of GPS (Global Positioning System) data and IMU (Inertial Measurement Unit) data, i.e. the exterior orientation elements used in oblique photogrammetry. The GPS data, generally denoted X, Y, Z, gives the geographic position of the aircraft at the moment of exposure; the IMU data mainly comprises the heading angle, pitch angle and roll angle.
S112, when the image number is smaller than or equal to a preset image-number threshold, processing the image corresponding to that number to generate a processed image; when the image number is larger than the preset image-number threshold, taking the image corresponding to that number as the image to be stitched.
The first received image is numbered 001, the second 002, and so on. The preset image-number threshold is T (T >= 6); if T = 6 is taken, then images from number 007 onward are all treated as images to be stitched.
In this embodiment, all processed images related to the current image are found automatically, and adjustment calculation is performed on all of them, so that the obtained image positions and attitudes are more accurate and the stitching result is more precise. While the processed images are generated, an initial ground elevation model and initial image-slice data can also be generated in advance.
S120, extracting the feature points of the image to be stitched and the feature points of the processed images, and matching the image to be stitched with the processed images based on the feature points to obtain an image pair and prescreened matching point pairs.
Referring to fig. 3, in a possible implementation, in order to reduce matching errors between the image to be stitched and the processed images, this step includes:
S121: calculating the Euclidean distance between the image to be stitched and each processed image from the POS data of the image to be stitched, extracting a preset number of processed images in order of increasing distance, and forming image pairs from the image to be stitched and the extracted processed images.
In this embodiment, the number of extracted processed images is N (5 < N < 30).
S122: searching for and matching feature points between the image to be stitched and the extracted processed images to obtain initial matching point pairs.
In this embodiment, a bidirectional feature-point search based on the KD-Tree (k-dimensional tree) algorithm is performed between the image to be stitched and the extracted processed images, and the found feature points are matched to obtain the initial matching point pairs.
A KD-Tree is a tree data structure that stores instance points in k-dimensional space so that they can be retrieved rapidly.
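A minimal two-way matching sketch. The patent names only the KD-Tree search, so the float descriptors (e.g. SIFT) and the Lowe ratio test here are assumptions:

```python
import cv2

def bidirectional_match(desc_a, desc_b, ratio=0.8):
    """Keep a match only when the a->b and b->a KD-Tree searches agree."""
    flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=4),  # 1 = KD-Tree index
                                  dict(checks=64))
    def good(knn):
        return [p[0] for p in knn
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
    ab = good(flann.knnMatch(desc_a, desc_b, k=2))
    ba = good(flann.knnMatch(desc_b, desc_a, k=2))
    reverse = {(m.trainIdx, m.queryIdx) for m in ba}
    return [m for m in ab if (m.queryIdx, m.trainIdx) in reverse]
```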
S123: calculating the reprojection error of the initial matching point pairs, and eliminating the pairs whose reprojection error exceeds a preset reprojection-error threshold to obtain the prescreened matching point pairs.
Here, the reprojection error is the error between the projected point (theoretical value) and the measured point on the image.
In this embodiment, the fundamental matrix F is estimated with the 8-point method under RANSAC (Random Sample Consensus), after which the nonlinear least-squares solution of the LM (Levenberg-Marquardt) algorithm is applied; matching point pairs whose reprojection error exceeds the preset threshold are removed, and only image pairs retaining more than X matching points are kept, where X may take the value 16.
The fundamental matrix F is a 3 x 3 matrix that expresses the correspondence between the pixels of a stereo pair.
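A compact sketch of this prescreening using OpenCV's RANSAC fundamental-matrix estimator; the LM refinement is omitted and the pixel threshold is illustrative:

```python
import cv2

def prescreen(pts_a, pts_b, err_thresh=2.0, min_pairs=16):
    """8-point RANSAC fit of F; keep the image pair only if more than
    min_pairs matches survive the error threshold (X = 16 in the text)."""
    F, mask = cv2.findFundamentalMat(pts_a, pts_b, cv2.FM_RANSAC,
                                     ransacReprojThreshold=err_thresh,
                                     confidence=0.99)
    if F is None:
        return None, None
    keep = mask.ravel().astype(bool)
    return (F, keep) if keep.sum() > min_pairs else (None, None)
```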
S130, calculating a fundamental matrix from the prescreened matching point pairs, calculating the residual values of all prescreened matching point pairs from it, and eliminating the prescreened matching points exceeding a preset residual threshold to obtain the first matching point pairs.
In this embodiment, the RANSAC algorithm with the 8-point method is used: n matching point pairs are randomly selected from all matching point pairs (n may take the value 8), and a fundamental matrix F is estimated by least squares;
the residual values of all matching point pairs are then calculated from the estimated F, and the pairs exceeding a preset residual threshold are removed;
the estimation of F, the residual calculation and the elimination are repeated while checking the condition: number of correctly matched pairs / total number of matched pairs > 95%;
the iteration stops when the condition is met, and otherwise continues up to a maximum number of iterations N (N < 2048);
after the iteration, the retained matching point pairs are the final ones; the final fundamental matrix F is recomputed from them, the residuals of all matching points are recomputed from this final F, and after elimination against the preset residual threshold the first matching point pairs are retained.
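A sketch of that loop. The patent does not pin down the residual, so the point-to-epipolar-line distance is used here, and fit_F stands for a hypothetical least-squares 8-point fit supplied by the caller:

```python
import numpy as np

def epipolar_residuals(F, pts_a, pts_b):
    """Distance from each point in image b to the epipolar line of its partner."""
    a = np.c_[pts_a, np.ones(len(pts_a))]    # homogeneous coordinates
    b = np.c_[pts_b, np.ones(len(pts_b))]
    lines = a @ F.T                          # epipolar lines in image b
    return np.abs((lines * b).sum(axis=1)) / np.hypot(lines[:, 0], lines[:, 1])

def iterative_elimination(pts_a, pts_b, fit_F, thresh, max_iter=2048, ratio=0.95):
    """Re-fit F, recompute residuals and drop outliers until the inlier
    share exceeds 95% or the iteration cap is reached (S130)."""
    keep = np.ones(len(pts_a), bool)
    F = None
    for _ in range(max_iter):
        F = fit_F(pts_a[keep], pts_b[keep])
        good = keep & (epipolar_residuals(F, pts_a, pts_b) < thresh)
        if good.sum() == 0 or good.sum() / keep.sum() > ratio:
            return F, good
        keep = good
    return F, keep
```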
S140, calculating a homography matrix from the first matching point pairs, transforming the first matching point pairs with it, calculating the coordinate differences, and eliminating the first matching point pairs exceeding a preset coordinate-difference threshold to obtain the second matching point pairs.
A homography describes the positional mapping of an object between the world coordinate system and the pixel coordinate system; the corresponding transformation matrix is called the homography matrix.
In this embodiment, the homography matrix is calculated by least squares; the matching point pairs are transformed with it, and the coordinate differences of the transformed pairs are calculated; a coordinate-difference threshold is preset, points satisfying it are kept as inliers and the rest are eliminated; the homography matrix is then recalculated by least squares from the retained inliers.
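A sketch of S140 with OpenCV, where method 0 requests the plain least-squares fit over all points:

```python
import cv2
import numpy as np

def homography_screen(pts_a, pts_b, diff_thresh=3.0):
    """Fit H by least squares, keep pairs whose transformed coordinates
    land within diff_thresh px of their partners, then re-fit on them."""
    a = pts_a.reshape(-1, 1, 2).astype(np.float32)
    H, _ = cv2.findHomography(pts_a, pts_b, 0)
    proj = cv2.perspectiveTransform(a, H).reshape(-1, 2)
    inliers = np.linalg.norm(proj - pts_b, axis=1) < diff_thresh
    H, _ = cv2.findHomography(pts_a[inliers], pts_b[inliers], 0)
    return H, inliers
```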
S150, performing free network adjustment on the image pair and the second matching point pairs to obtain the attitude information of the unmanned aerial vehicle when capturing the image to be stitched and the three-dimensional space coordinates of the second matching point pairs.
In this embodiment, free network adjustment of the image pairs and matching point pairs yields the position and attitude of the unmanned aerial vehicle at capture time. That is, using the collinearity equation principle, bundle (beam method) adjustment sets up the observation equations A X = L, and the unknowns (position and attitude information) are solved by least squares.
Since the free network adjustment is driven by the matching point pairs, continued gross-error detection and elimination during adjustment yields a more accurate result; the operations of S130 and S140 may therefore be repeated several times during the execution of S150.
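The following sketch shows only the linear-algebra core of one such iteration; a full bundle adjustment would assemble A and L from the partial derivatives of the collinearity equations at the current unknowns:

```python
import numpy as np

def adjustment_step(A, L):
    """One Gauss-Newton step: solve the linearised observation equations
    A X = L for the corrections X via the normal equations."""
    return np.linalg.solve(A.T @ A, A.T @ L)

# Iterate: relinearise A and L at the updated unknowns and repeat until
# the corrections in X become negligible.
```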
S160, constructing a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model.
Referring to fig. 4, in a possible implementation, since the ground elevation model is used to correct the image to be stitched, a more accurate model is needed; this step includes:
S161, constructing an elevation histogram from the three-dimensional space coordinates of the second matching point pairs, and identifying the histogram ranges whose matching-point count falls below a preset threshold;
S162, deleting the matching points in those ranges, constructing a triangulation network from the three-dimensional space coordinates of the remaining points, and combining the network with the attitude information to form the ground elevation model.
S170, performing orthorectification on the image to be stitched according to the ground elevation model to form a corrected image.
In this embodiment, the image is orthorectified block by block in parallel. The correction resolution is derived from the aerial image resolution by the formulas
1/m = f/H,  L = m * l,
where m is the photographic scale denominator, f is the focal length of the unmanned aerial vehicle camera when capturing the image (mm), H is the relative flying height of the unmanned aerial vehicle when capturing the image (m), l is the pixel size of the camera (mm), and L is the ground resolution of the original image. The correction resolution value is set greater (coarser) than the resolution of the original image.
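A worked version of that relation (the formula line itself is reconstructed from the variable definitions above, so treat it as an interpretation rather than a quotation):

```python
def ground_resolution(f_mm, H_m, pixel_mm):
    """L = m * l with scale denominator m = H / f, units aligned to mm.
    Example: f = 35 mm, H = 500 m, pixel 0.0044 mm -> about 0.063 m."""
    m = (H_m * 1000.0) / f_mm      # photographic scale denominator, 1/m = f/H
    return m * pixel_mm / 1000.0   # ground resolution in metres
```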
The correction uses the inverse (indirect) method. Because image reading and writing must remain serial, it is the computation that is parallelized: during correction the image is processed in blocks, each block is orthorectified, and the blocks are handled in parallel using OpenMP across multiple CPUs.
OpenMP is a set of compiler directives for multiprocessor programming on shared-memory parallel systems that provides a high-level abstraction of the parallel algorithm.
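A rough Python analogue of the block-parallel scheme, with a thread pool standing in for OpenMP (OpenCV releases the GIL inside remap, so threads do run on multiple CPUs); the per-block resampling maps are assumed to be precomputed from the DEM:

```python
import cv2
from concurrent.futures import ThreadPoolExecutor

def rectify_block(job):
    """Inverse method for one output block: map_x/map_y give, for every
    output cell, its source-image coordinates (float32 arrays)."""
    src, map_x, map_y = job
    return cv2.remap(src, map_x, map_y, cv2.INTER_LINEAR)

def rectify_parallel(jobs, workers=8):
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(rectify_block, jobs))
```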
S180, stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image.
In a possible implementation, this step includes:
calculating the pixel differences between the corrected image and the matching processed image over their overlapping area to obtain a difference image;
searching the difference image for a stitching seam;
and performing morphological processing on the difference image and blurring the stitching seam to obtain the stitched image.
Because the corrected image and the processed image overlap, the fully overlapping part is cropped out, the images within the overlap are differenced (a subtraction operation), and the stitching seam is searched for on the difference image.
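A toy sketch of the whole step over a pre-cropped overlap; the per-row minimum stands in for a proper minimum-cost seam search, and the kernel sizes are illustrative:

```python
import cv2
import numpy as np

def blend_overlap(corr, done):
    """Difference image, a simple seam, then morphology plus Gaussian
    feathering so the stitch line is blurred away (S180)."""
    diff = cv2.absdiff(corr, done)
    if diff.ndim == 3:
        diff = diff.sum(axis=2)                    # per-pixel difference
    seam_col = diff.argmin(axis=1)                 # seam column per row
    side = np.arange(diff.shape[1])[None, :] < seam_col[:, None]
    k = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    m = cv2.morphologyEx(side.astype(np.uint8) * 255, cv2.MORPH_CLOSE, k)
    w = cv2.GaussianBlur(m, (21, 21), 0).astype(np.float32) / 255.0
    if corr.ndim == 3:
        w = w[..., None]
    return (w * corr + (1.0 - w) * done).astype(corr.dtype)
```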
In an optional implementation, after the stitching step the method further includes:
slicing the stitched image into image tiles;
and adding the image tiles to map data.
In this embodiment, the stitching result can be presented in real time during data acquisition. The real-time stitched images carry accurate geographic coordinates, enabling an accurate map service over the whole area.
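A minimal tiling sketch; a real map service would also key tiles by zoom level and georeference them, which is omitted here:

```python
def slice_tiles(mosaic, tile=256):
    """Cut the stitched result into fixed-size tiles keyed by grid position."""
    h, w = mosaic.shape[:2]
    return {(x // tile, y // tile): mosaic[y:y + tile, x:x + tile]
            for y in range(0, h, tile) for x in range(0, w, tile)}
```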
In an optional implementation, before the stitching step the method includes:
performing color adjustment on the corrected image, taking the first image transmitted by the unmanned aerial vehicle as the standard;
the stitching step then comprises:
stitching the color-adjusted corrected image with the corresponding processed image of the image pair to obtain the stitched image.
In this embodiment, the images to be stitched undergo Wallis-based color adjustment:
g_c(x, y) = g(x, y) * r1 + r0
r1 = (c * s_f) / (c * s_g + s_f / c),  r0 = b * m_f + (1 - b - r1) * m_g
where r1 and r0 are the multiplicative and additive coefficients respectively; m_f is the target value of the image mean; s_f is the target value of the image variance; c in [0, 1] is the image contrast expansion constant; b in [0, 1] is the image brightness coefficient; m_g is the mean of the current image; and s_g is the variance of the current image.
In real-time processing, the color adjustment need not be restricted to one fixed point in the pipeline between image correction and stitching. Adjusting the images to be stitched in real time keeps the overall tone consistent; in addition, the tone of the images to be stitched can be adjusted against a standard sample.
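A direct implementation of the formulas above, taking m_f and s_f from the reference (first) image and reading s as a standard deviation, the usual interpretation of the Wallis filter:

```python
import numpy as np

def wallis_adjust(img, m_f, s_f, c=0.8, b=0.9):
    """Pull the image mean/std (m_g, s_g) towards the targets (m_f, s_f)."""
    g = img.astype(np.float64)
    m_g, s_g = g.mean(), g.std()
    r1 = (c * s_f) / (c * s_g + s_f / c)      # multiplicative coefficient
    r0 = b * m_f + (1.0 - b - r1) * m_g       # additive coefficient
    return np.clip(g * r1 + r0, 0, 255).astype(np.uint8)
```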
Referring to fig. 5, fig. 5 is a block diagram of the unmanned aerial vehicle image real-time stitching apparatus of this embodiment. Each module is described in detail below.
The receiving module 210 is configured to receive the image transmitted by the unmanned aerial vehicle to obtain an image to be stitched.
The matching module 220 is configured to extract the feature points of the image to be stitched and of the processed images, and to match them based on the feature points to obtain an image pair and prescreened matching point pairs.
The residual-elimination module 230 is configured to calculate a fundamental matrix from the prescreened matching point pairs, calculate the residual values of all prescreened matching point pairs from it, and eliminate the matching points exceeding a preset residual threshold to obtain the first matching point pairs.
The coordinate-difference elimination module 240 is configured to calculate a homography matrix from the first matching point pairs, transform the first matching point pairs with it, calculate the coordinate differences, and eliminate the first matching point pairs exceeding a preset coordinate-difference threshold to obtain the second matching point pairs.
The free network adjustment module 250 is configured to perform free network adjustment on the image pair and the second matching point pairs to obtain the attitude information of the unmanned aerial vehicle when capturing the image to be stitched and the three-dimensional space coordinates of the second matching point pairs.
The modeling module 260 is configured to construct a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model.
The correction module 270 is configured to perform orthorectification on the image to be stitched according to the ground elevation model to form a corrected image.
The stitching module 280 is configured to stitch the corrected image with the processed image corresponding to the image pair to obtain a stitched image.
This embodiment provides a terminal device comprising a processor and a memory, the memory storing a computer program which, when executed by the processor, implements the steps of the unmanned aerial vehicle image real-time stitching method provided by the embodiments of the application.
This embodiment provides a readable storage medium storing a computer program which, when executed, implements the steps of the unmanned aerial vehicle image real-time stitching method provided by the embodiments of the application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative and, for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, each functional module or unit in each embodiment of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the essence of the technical solution of the present invention, or the part of it that contributes beyond the prior art, can be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a smart phone, a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The above description covers only specific embodiments of the present invention, but the scope of the present invention is not limited thereto. Any changes or substitutions that a person skilled in the art could readily conceive within the technical scope of the present invention shall be covered by its protection scope.
Claims (10)
1. An unmanned aerial vehicle image real-time stitching method, characterized by comprising the following steps:
receiving an image transmitted by the unmanned aerial vehicle to obtain an image to be stitched;
extracting the feature points of the image to be stitched and the feature points of the processed images, and matching the image to be stitched with the processed images based on the feature points to obtain an image pair and prescreened matching point pairs;
calculating a fundamental matrix from the prescreened matching point pairs, calculating the residual values of all prescreened matching point pairs from the fundamental matrix, and removing the prescreened matching points exceeding a preset residual threshold to obtain first matching point pairs;
calculating a homography matrix from the first matching point pairs, transforming the first matching point pairs with the homography matrix, calculating coordinate differences, and removing the first matching point pairs exceeding a preset coordinate-difference threshold to obtain second matching point pairs;
performing free network adjustment on the image pair and the second matching point pairs to obtain the attitude information of the unmanned aerial vehicle when it captured the image to be stitched and the three-dimensional space coordinates of the second matching point pairs;
constructing a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model;
performing orthorectification on the image to be stitched according to the ground elevation model to form a corrected image;
and stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image.
2. The method according to claim 1, wherein the step of receiving the image transmitted by the drone to obtain the image to be stitched comprises:
receiving images and POS data transmitted by the unmanned aerial vehicle, numbering the images in transmission order, and obtaining the POS data and image number of each image;
when the image number is smaller than or equal to a preset image-number threshold, processing the image corresponding to that number to generate a processed image;
and when the image number is larger than the preset image-number threshold, taking the image corresponding to that number as the image to be stitched.
3. The method according to claim 1, wherein the step of extracting the feature points of the image to be stitched and of the processed images, and matching them based on the feature points to obtain an image pair and prescreened matching point pairs, comprises:
calculating the Euclidean distance between the image to be stitched and each processed image from the POS data of the image to be stitched, extracting a preset number of processed images in order of increasing distance, and forming image pairs from the image to be stitched and the extracted processed images;
searching for and matching feature points between the image to be stitched and the extracted processed images to obtain initial matching point pairs;
and calculating the reprojection error of the initial matching point pairs, and eliminating those whose reprojection error exceeds a preset reprojection-error threshold to obtain the prescreened matching point pairs.
4. The method of claim 1, wherein the step of constructing a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model comprises:
constructing an elevation histogram from the three-dimensional space coordinates of the second matching point pairs, and identifying the histogram ranges in which the number of matching points falls below a preset count threshold;
deleting the matching points in those ranges, constructing a triangulation network from the three-dimensional space coordinates of the remaining matching points, and combining the triangulation network with the attitude information to form the ground elevation model.
5. The method of claim 1, wherein the step of stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image comprises:
calculating the pixel differences between the corrected image and the matching processed image over their overlapping area to obtain a difference image;
searching the difference image for a stitching seam;
and performing morphological processing on the difference image and blurring the stitching seam to obtain the stitched image.
6. The method of claim 5, wherein after the step of stitching the corrected image with the processed image corresponding to the image pair to obtain a stitched image, the method further comprises:
slicing the stitched image into image tiles;
and adding the image tiles to map data.
7. The method of claim 1, wherein before the step of stitching the corrected image with the processed image corresponding to the image pair to obtain the stitched image, the method comprises:
performing color adjustment on the corrected image, taking the first image transmitted by the unmanned aerial vehicle as the standard;
and wherein the stitching step comprises:
stitching the color-adjusted corrected image with the corresponding processed image of the image pair to obtain the stitched image.
8. An unmanned aerial vehicle image real-time stitching apparatus, characterized by comprising the following modules:
a receiving module, configured to receive the image transmitted by the unmanned aerial vehicle to obtain an image to be stitched;
a matching module, configured to extract the feature points of the image to be stitched and of the processed images, and to match them based on the feature points to obtain an image pair and prescreened matching point pairs;
a residual-elimination module, configured to calculate a fundamental matrix from the prescreened matching point pairs, calculate the residual values of all prescreened matching point pairs from the fundamental matrix, and eliminate the matching points exceeding a preset residual threshold to obtain first matching point pairs;
a coordinate-difference elimination module, configured to calculate a homography matrix from the first matching point pairs, transform the first matching point pairs with the homography matrix, calculate coordinate differences, and eliminate the first matching point pairs exceeding a preset coordinate-difference threshold to obtain second matching point pairs;
a free network adjustment module, configured to perform free network adjustment on the image pair and the second matching point pairs to obtain the attitude information of the unmanned aerial vehicle when capturing the image to be stitched and the three-dimensional space coordinates of the second matching point pairs;
a modeling module, configured to construct a triangulation network from the attitude information and the three-dimensional space coordinates of the second matching point pairs to form a ground elevation model;
a correction module, configured to perform orthorectification on the image to be stitched according to the ground elevation model to form a corrected image;
and a stitching module, configured to stitch the corrected image with the processed image corresponding to the image pair to obtain a stitched image.
9. A terminal device, comprising a processor and a memory, the memory storing a computer program, wherein the computer program, when executed by the processor, implements the steps of the unmanned aerial vehicle image real-time stitching method according to any one of claims 1 to 7.
10. A readable storage medium storing a computer program, wherein the computer program, when executed, implements the steps of the unmanned aerial vehicle image real-time stitching method according to any one of claims 1 to 7.
Priority Applications (1)
- Application CN202110239571.0A (published as CN113012084A): priority date 2021-03-04, filing date 2021-03-04, title: Unmanned aerial vehicle image real-time stitching method and device and terminal device
Publications (1)
- CN113012084A, published 2021-06-22
Family ID: 76405047
Family Applications (1)
- CN202110239571.0A, filed 2021-03-04 with the CN patent office, publication CN113012084A, status Pending
Cited By (3)
- CN114201633A (priority 2022-02-17, published 2022-03-18, 四川腾盾科技有限公司): Self-adaptive satellite image generation method for unmanned aerial vehicle visual positioning
- CN115063551A (priority 2022-08-18, published 2022-09-16, 北京山维科技股份有限公司): Method and device for generating slice orthoimage based on oblique photography three-dimensional model
- CN115063551B (published 2022-11-22): granted publication of CN115063551A
Patent Citations (2)
- CN103927731A (priority 2014-05-05, published 2014-07-16, 武汉大学): Low-altitude remote sensing image rapid and automatic stitching method without POS assistance
- CN112288637A (priority 2020-11-19, published 2021-01-29, 埃洛克航空科技(北京)有限公司): Unmanned aerial vehicle aerial image rapid stitching device and rapid stitching method
Non-Patent Citations (2)
- Sun Shiyou et al., Big Maps: Big-Data Theory and Practice in Surveying, Mapping and Geographic Information (大地图 测绘地理信息大数据理论与实践), China Environment Publishing House, pp. 53-54
- Jia Yinjiang, Research on Key Technologies of UAV Remote Sensing Image Stitching (无人机遥感图像拼接关键技术研究), China Doctoral Dissertations Full-text Database, Information Science and Technology series, 15 August 2016, pp. 2-5
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- RJ01: Rejection of invention patent application after publication (application publication date: 2021-06-22)