CN112288637A - Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method

Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method

Info

Publication number
CN112288637A
CN112288637A (application CN202011316356.8A)
Authority
CN
China
Prior art keywords
image
module
splicing
unmanned aerial
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011316356.8A
Other languages
Chinese (zh)
Inventor
方伯军
张齐鹏
胡勤伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Airlook Aviation Technology Beijing Co ltd
Original Assignee
Airlook Aviation Technology Beijing Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Airlook Aviation Technology Beijing Co ltd filed Critical Airlook Aviation Technology Beijing Co ltd
Priority to CN202011316356.8A
Publication of CN112288637A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/40 - Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038 - Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10032 - Satellite or aerial image; Remote sensing
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20021 - Dividing image into blocks, subimages or windows

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a device and a method for rapidly splicing aerial images of an unmanned aerial vehicle. The device comprises a preprocessing module, a display processing module and a post-processing module. The preprocessing module receives video data and POS data shot by the unmanned aerial vehicle, preprocesses them, and sends them to the display processing module; the display processing module displays the preprocessed video data and POS data on a spherical model in real time in preparation for splicing; and the post-processing module splices the images displayed in real time and corrects and fuses the overlap areas. Compared with traditional offline three-dimensional modeling, the rapid splicing device and method do not need hour-level processing time and achieve a real-time effect. Traditional splicing works directly on the images, so the spliced result suffers from misalignment and distortion; the orthophoto obtained by this method shows no misalignment or obvious distortion, reaching an accuracy close to that of three-dimensional reconstruction.

Description

Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method
Technical Field
The invention relates to the technical field of image processing, in particular to a device and a method for quickly splicing aerial images of an unmanned aerial vehicle.
Background
With the development of science and technology, unmanned aerial vehicle (UAV) aerial photography is applied ever more widely. UAV aerial photography is efficient, flexible, fast and low-cost, and the digital cameras and video cameras mounted on the aircraft can acquire high-resolution images. Its application fields are broad, including agriculture, forestry, electric power, land resources and city planning. The oblique photography technique in UAV aerial photography is a high technology developed over the last decade in the international photogrammetry field: one vertical camera and four oblique cameras are carried on the UAV, images are acquired synchronously from five viewing angles, and rich high-resolution textures of building roofs and side views are obtained. A three-dimensional model is generated offline from the acquired images, and a spliced digital orthophoto map is then derived. Digital Orthophoto Map is abbreviated DOM. A DOM is a plan map with kilometre grids, outline (inside and outside) decorations and annotations, obtained by performing pixel-by-pixel radiation correction, differential correction and mosaicking of scanned digital aerial photographs or remote-sensing images using a Digital Elevation Model (DEM), and cutting the generated image data to a specified map extent.
The existing oblique photography technique suffers from high hardware requirements, high latency and a long three-dimensional model output period, and cannot meet the real-time requirements of scenarios such as disaster prevention.
Disclosure of Invention
The invention aims to provide a device and a method for rapidly splicing aerial images of an unmanned aerial vehicle, which solve the problems of high image-splicing latency and long splicing-result output periods in the prior art.
The purpose of the invention is achieved by the following technical scheme:
In a first aspect, the invention provides an unmanned aerial vehicle aerial image fast splicing device, which comprises a preprocessing module, a display processing module and a post-processing module. The preprocessing module receives the video data and POS data shot by the unmanned aerial vehicle, preprocesses them, and sends the preprocessed video data and POS data to the display processing module; the display processing module displays the preprocessed video data and POS data on a spherical model in real time in preparation for splicing; and the post-processing module splices the images displayed in real time and corrects and fuses the overlap areas.
Further, the preprocessing module comprises:
the sampling module, used for sampling the UAV video data and POS data at regular intervals, scaling each sampled single-frame image to a set scale and then storing it;
the feature extraction module, used for extracting features from the single-frame image;
the spatial matching module, used for matching the k frames closest to the current frame, using GPS information and a kd-tree, to form image matching pairs;
the feature matching module, used for performing feature matching with the feature points between images and filtering out mismatches;
and the point cloud generation module, used for generating tracks from the matching relations among feature points, triangulating the generated tracks to produce new three-dimensional space points, and performing error adjustment on those points.
Further, the display processing module comprises:
the first 2D triangular mesh generation module, used for triangulating the feature points with the result of the feature matching module to generate a first 2D triangular mesh;
the 3D triangular mesh generation module, used for triangulating the three-dimensional space points to generate a 3D triangular mesh;
the second 2D triangular mesh generation module, used for removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
the image segmentation module, used for segmenting a single-frame image into a plurality of image blocks with the first 2D triangular mesh;
the DSM generation module, which generates a DSM elevation map from the 3D triangular mesh;
and the DOM generation module, which generates the digital orthophoto map (DOM) and the image's four-corner-point information from the second 2D triangular mesh and the image segmentation result.
Further, the post-processing module comprises:
the DSM splicing module, used for directly splicing the several DSMs generated by the DSM generation module, using the four-corner-point information, to form a complete DSM image;
the complete-DSM processing module, used for performing smooth gradation processing on the overlap areas of the complete DSM image;
the DOM splicing module, used for directly splicing the several DOMs generated by the DOM generation module, using the four-corner-point information, to form a complete DOM image;
and the complete-DOM processing module, used for performing smooth gradation processing on the overlap areas of the complete DOM image.
In a second aspect, the invention provides a method for quickly splicing aerial images of an unmanned aerial vehicle, which comprises the following steps:
step S1, receiving video data and POS data and preprocessing;
step S2, displaying the preprocessed video data and POS data in real time;
and step S3, splicing the images displayed in real time, and correcting and fusing the spliced overlapped areas.
Further, the step of preprocessing in step S1 includes:
s101, sampling video data and POS data of the unmanned aerial vehicle at regular time, zooming a sampled single-frame image to a set scale and storing the zoomed single-frame image;
step S102, extracting the characteristics of the single-frame image;
step S103, matching k frame images with the nearest current frame image time to form an image matching pair;
step S104, performing feature matching by using feature points among the images and filtering mismatching;
and S105, generating a track by using the matching relation among the characteristic points, performing triangulation on the generated track to generate a new three-dimensional space point, and performing error adjustment on the three-dimensional space point.
Further, the step S2 includes:
step S201, using the feature matching result of step S104, triangulating the feature points used in feature matching to generate a first 2D triangular mesh;
step S202, cutting a single-frame image into a plurality of image blocks using the first 2D triangular mesh;
step S203, triangulating the three-dimensional space points generated in step S105, using the first 2D triangular mesh, to generate a 3D triangular mesh;
step S204, removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
step S205, generating the DSM;
and step S206, generating the DOM and the image's four-corner-point information and sending them to the spherical model for real-time display.
Further, the step S3 includes:
step S301, directly splicing the several DSMs generated in step S205, using the four-corner-point information, to form a complete DSM image;
step S302, performing smooth gradation processing on the overlap areas of the complete DSM image;
step S303, directly splicing the several DOMs generated in step S206, using the four-corner-point information, to form a complete DOM image;
and step S304, performing smooth gradation processing on the overlap areas of the complete DOM image.
Further, an ORB method is adopted for feature extraction.
Compared with traditional offline three-dimensional modeling, the rapid splicing device and method for UAV aerial images do not need hour-level processing time and achieve a real-time effect. Traditional splicing works directly on the images, so the spliced result suffers from misalignment and distortion; the orthophoto obtained by this method shows no misalignment or obvious distortion, reaching an accuracy close to that of three-dimensional reconstruction.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this application, serve to provide a further understanding of the application and to enable other features, objects, and advantages of the application to be more apparent. The drawings and their description illustrate the embodiments of the invention and do not limit it. In the drawings:
FIG. 1 is a schematic structural diagram of the rapid splicing device for aerial images of an unmanned aerial vehicle of the invention;
FIG. 2 is a step diagram of the fast splicing method of aerial images of the unmanned aerial vehicle of the invention.
Detailed Description
In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only partial embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It should be noted that the terms "first", "second" and the like in the description and claims of this application and in the drawings are used to distinguish between similar elements and not necessarily to describe a particular sequence or chronological order. It should be understood that data so used may be interchanged under appropriate circumstances, so that the embodiments of the application described herein can be implemented in orders other than those illustrated or described here. Furthermore, the terms "comprises", "comprising" and "having", and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such a process, method, article or apparatus.
In this application, the terms "upper", "lower", "left", "right", "front", "rear", "top", "bottom", "inner", "outer", "middle", "vertical", "horizontal", "lateral", "longitudinal", and the like indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings. These terms are used primarily to better describe the present application and its embodiments, and are not used to limit the indicated devices, elements or components to a particular orientation or to be constructed and operated in a particular orientation.
Moreover, some of the above terms may be used to indicate other meanings besides the orientation or positional relationship, for example, the term "on" may also be used to indicate some kind of attachment or connection relationship in some cases. The specific meaning of these terms in this application will be understood by those of ordinary skill in the art as appropriate.
In addition, the term "plurality" means two or more.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the embodiments with reference to the attached drawings.
The invention discloses a rapid splicing device for UAV aerial images, shown in figure 1. The device comprises a preprocessing module, a display processing module and a post-processing module. The preprocessing module receives the video data and POS data shot by the camera carried on the UAV, preprocesses them, and sends them to the display processing module. The display processing module displays the preprocessed video data and POS data on the spherical model in real time in preparation for splicing. The post-processing module splices the images displayed in real time and corrects and fuses the overlap areas.
When the UAV is in flight operation, the images it obtains usually carry accompanying POS data, which makes subsequent processing more convenient. The POS data mainly comprise GPS data and IMU data, i.e. the exterior orientation elements in oblique photogrammetry: latitude, longitude, elevation, heading angle (Phi), pitch angle (Omega) and roll angle (Kappa). Latitude, longitude and elevation are the GPS data, generally denoted X, Y, Z, and represent the geographic position of the aircraft at the exposure point during flight. The IMU data mainly contain the heading angle (Phi), the angle between the longitudinal axis of the aircraft and geographic north; the pitch angle (Omega), the angle between the ground and the vector parallel to the fuselage axis pointing ahead of the aircraft; and the roll angle (Kappa), the angle between the zb axis of the body coordinate system and the vertical plane through the body's xb axis, positive when the body rolls to the right and negative otherwise.
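The patent does not define a data layout for the POS stream; as a minimal illustrative sketch (all field names are assumptions, not from the patent), one record per sampled frame might be modeled as:

```python
from dataclasses import dataclass

@dataclass
class PosRecord:
    """One POS record accompanying a sampled frame (illustrative names)."""
    longitude: float  # GPS X, degrees
    latitude: float   # GPS Y, degrees
    elevation: float  # GPS Z, metres
    heading: float    # heading angle (Phi in the text), degrees
    pitch: float      # pitch angle (Omega in the text), degrees
    roll: float       # roll angle (Kappa in the text), degrees
```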
Further, in a preferred embodiment of the present application, the preprocessing module includes:
and the sampling module (new frame) is mainly used for receiving the video data and POS data of the unmanned aerial vehicle, carrying out timing sampling, and storing the sampled single-frame image after zooming to a set scale. And simultaneously converting the format of the POS data.
The feature extraction module (feature extract) mainly extracts features from the single-frame image. To meet the low-latency requirement, the faster ORB method is used for feature extraction, and the extracted feature points are rasterized so that they are distributed as uniformly as possible. ORB is short for Oriented FAST and Rotated BRIEF and can be used to quickly create feature vectors for key points in an image, which can then be used to identify objects in the image. FAST is the feature detection algorithm and BRIEF the descriptor (vector creation) algorithm. ORB first looks for special regions in the image, called keypoints: small areas that stand out, such as corners, where pixel values change sharply from light to dark. ORB then computes a feature vector for each keypoint. The feature vector created by the ORB algorithm contains only 1s and 0s and is called a binary feature vector; the order of the 1s and 0s varies with the particular keypoint and the pixel area around it. The vector represents the intensity pattern around the keypoint, so multiple feature vectors can identify a larger region, or even a particular object in the image. ORB is extremely fast and, to some extent, insensitive to noise and to image transformations such as rotation and scaling.
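The patent names ORB but publishes no code; a minimal OpenCV sketch of the extraction step (function name and parameter values are assumptions) could look like:

```python
import cv2

def extract_orb_features(gray_image, n_features=2000):
    """Detect ORB keypoints and compute their binary descriptors."""
    orb = cv2.ORB_create(nfeatures=n_features)
    # keypoints: corner-like regions; descriptors: 256-bit binary
    # vectors packed as 32 uint8 values per keypoint.
    keypoints, descriptors = orb.detectAndCompute(gray_image, None)
    return keypoints, descriptors
```

The rasterization of keypoints for uniform coverage is not shown; one common approach is to bucket keypoints into grid cells and keep the strongest response per cell.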
The spatial matching module (spatial match) mainly uses GPS information and a kd-tree to select the k frames nearest to the current frame, forming image matching pairs that are passed on to the feature matching stage.
The feature matching module (feature match) mainly performs feature matching with the feature points between images and filters out mismatches.
The point cloud generation module (pointcloud) mainly generates tracks from the matching relations between feature points and triangulates the generated tracks to produce new three-dimensional space points (point3D). The point3D set is error-adjusted using the bundle adjustment method (bundle adjust). A point cloud is the massive set of points that expresses the spatial distribution and surface characteristics of a target under a common spatial reference system; once the spatial coordinates of each sampled point on the object surface are obtained, the resulting point set is called a point cloud. Bundle adjustment jointly optimizes the camera motion matrices and the three-dimensional structure in projective space; its greatest strength is that it handles missing data and provides a true maximum-likelihood estimate.
Further, in a preferred embodiment of the present application, the display processing module includes:
the first 2D triangular mesh generation module (2D mesh1) is mainly used for triangulating the feature points used by feature matching by using the matching result of the feature matching module to generate a 2D triangular mesh.
And the 3D triangular Mesh generation module (3D Mesh) is mainly used for triangulating the point3D points generated by the current frame by using the results of pointclosed and 2D Mesh1 to generate the 3D triangular Mesh.
The second 2D triangular Mesh generation module (2D Mesh2) is mainly configured to remove the dimension of the elevation of the 3D triangular Mesh generated by the 3D Mesh, and only retain the dimension information of x and y to generate a new 2D triangular Mesh.
The Image partitioning module (Image Patch) mainly uses a first 2D triangular mesh to partition a single frame Image into Image blocks. Each image block is a grid size.
The DSM generation module (Local DSM) mainly uses a 3D triangular mesh to generate a DSM high-level diagram, pixel points do not have corresponding elevations, and interpolation calculation is carried out in the corresponding triangular mesh. DSM: the Digital Surface Model is a ground elevation Model including the heights of Surface buildings, bridges, trees and the like.
The DOM generation module (Local Ortho Mosaic) mainly uses the results of the 2D Mesh2 and the Image Patch to generate digital orthophoto map DOM and Image four-origin information, and sends the information to the spherical model for real-time display.
Further, in a preferred embodiment of the present application, the post-processing module includes:
and the DSM splicing module is used for directly splicing the plurality of DSMs generated by the DSM generating module by using the four-to-point information to form a complete DSM image.
The complete DSM processing module has an overlapping area because of the direct splicing of a plurality of DSMs. And (4) the color difference of the overlapping area has sudden change, after two adjacent DSMs are downsampled by using a Gaussian pyramid, the two DSMs are weighted and superposed by using a smoothly gradually changed weight in different frequency bands.
And the DOM splicing module is used for directly splicing the multiple DOMs generated by the DOM generating module by using the four-to-point information to form a complete DOM image.
The complete DOM processing module has an overlapping area due to the fact that a plurality of DOMs are directly spliced. And (4) the color difference of the overlapping area has sudden change, after two adjacent DOM are downsampled by using a Gaussian pyramid, the two DOM are weighted and superposed by using a smoothly gradually changed weight in different frequency bands.
The invention discloses a rapid splicing method for aerial images of an unmanned aerial vehicle, which comprises the following steps:
and step S1, the preprocessing module receives the video data and the POS data and preprocesses the video data and the POS data.
Video data and POS data are shot by the camera carried by the unmanned aerial vehicle.
Further, in a preferred embodiment of the present application, the process of preprocessing the video data and the POS data by the preprocessing module includes:
and S101, sampling video data and POS data of the unmanned aerial vehicle at regular time, zooming the sampled single-frame image to a set scale, and storing the zoomed single-frame image.
The POS data must be format-converted here: some POS data are stored in the WGS84 coordinate system, some in the CGCS2000 coordinate system, and the column order of longitude and latitude also differs between sources, so some data preprocessing is required to ensure uniform input data.
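As an illustration of this preprocessing, the two named coordinate systems can be unified with an open-source library such as pyproj (the library choice is an assumption; the patent names none):

```python
from pyproj import Transformer

# WGS84 geographic (EPSG:4326) -> CGCS2000 geographic (EPSG:4490).
# always_xy=True pins the axis order to (lon, lat), so source files
# whose longitude/latitude columns are swapped can be normalised first.
to_cgcs2000 = Transformer.from_crs("EPSG:4326", "EPSG:4490", always_xy=True)

lon, lat = to_cgcs2000.transform(116.397, 39.909)  # sample point near Beijing
```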
And step S102, extracting features from the single-frame image.
To meet the low-latency requirement, the faster ORB method is used for feature extraction, and the extracted feature points are rasterized so that they are distributed as uniformly as possible.
And step S103, matching the k frames closest in time to the current frame to form image matching pairs.
Using the GPS information provided by the UAV and a kd-tree, the k frames nearest to the current frame are matched into image matching pairs and passed to the next, feature matching, stage. Here k is the number of images that share an overlap area with the current image; it is not a fixed value, it is constrained to k ≥ 2, and it is adjusted dynamically according to the density of the acquired data.
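A compact sketch of this neighbour search with SciPy's kd-tree (an assumption; the text specifies only "GPS information and a kd tree"):

```python
import numpy as np
from scipy.spatial import cKDTree

def match_candidates(frame_positions, current_position, k=4):
    """Return the indices of the k frames whose GPS positions lie
    nearest the current frame; these become its matching pairs.
    Per the text, k >= 2 and is tuned to the data density.
    Assumes at least k frames have been stored."""
    tree = cKDTree(np.asarray(frame_positions))  # shape (N, 2): x/y or lon/lat
    _, idx = tree.query(np.asarray(current_position), k=k)
    return np.atleast_1d(idx)
```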
And step S104, performing feature matching using the feature points between images and filtering out mismatches.
Feature extraction uses open-source algorithms common in the industry, such as SIFT, SURF and ORB, configured according to the scene. For example, in regions with weak features, such as mountains, grassland or water surfaces, a more robust feature extraction algorithm such as SIFT may be used, while in regions with strong features, such as cities, faster algorithms such as ORB and SURF may be used.
Feature matching is likewise computed with open-source algorithms and must correspond to the feature extraction method used: SIFT and SURF descriptors are floating-point vectors, so the similarity of two feature points can be compared with the Euclidean distance, whereas a binary descriptor such as ORB's uses the Hamming distance to compute the similarity between two feature points.
Mismatch filtering also uses open-source algorithms. When the aerial image covers areas of high flatness, such as grassland or water surfaces, a homography matrix is estimated from the feature matching result with the RANSAC algorithm and mismatches are filtered by minimum reprojection error. When the aerial image covers areas of low flatness, such as residential areas, an essential matrix or fundamental matrix is estimated from the feature matching result with RANSAC and mismatches are filtered by the Sampson distance.
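A hedged OpenCV sketch of the flat-scene branch just described: Hamming-distance matching for binary ORB descriptors, followed by RANSAC homography estimation to discard matches with large reprojection error (function name and threshold value are assumptions):

```python
import cv2
import numpy as np

def match_and_filter(desc1, kpts1, desc2, kpts2, reproj_thresh=3.0):
    """Match binary descriptors, then keep only RANSAC inliers."""
    bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)  # mutual best matches
    matches = bf.match(desc1, desc2)
    if len(matches) < 4:          # findHomography needs at least 4 pairs
        return []

    pts1 = np.float32([kpts1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kpts2[m.trainIdx].pt for m in matches])

    # Homography + RANSAC suits the high-flatness case in the text;
    # for low-flatness scenes an essential/fundamental matrix with the
    # Sampson distance would replace this step.
    H, inlier_mask = cv2.findHomography(pts1, pts2, cv2.RANSAC, reproj_thresh)
    return [m for m, ok in zip(matches, inlier_mask.ravel()) if ok]
```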
And step S105, generating tracks from the matching relations between feature points, triangulating the generated tracks to produce new three-dimensional space points (point3D), and performing error adjustment on the points.
Suppose there are three images A, B and C, with feature points a, b and c respectively. After feature matching produces the two matching pairs a-b and b-c, the feature points a, b and c form one track. A track has the following characteristics:
1) the length of a track is at least 2;
2) each feature point in a track comes from a different image;
3) all feature points in a track point to the same object in the real world, and are therefore also called homonymous (corresponding) image points.
The error adjustment of the three-dimensional space points uses the bundle adjustment method.
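The bundle adjustment itself is a large optimization (Ceres or SciPy least-squares in practice) and is omitted here; as a small sketch of the preceding triangulation step, one track observed in two images can be lifted to a 3D point with OpenCV (matrix and function names are assumptions):

```python
import cv2
import numpy as np

def triangulate_track(P1, P2, pt1, pt2):
    """Triangulate one track observed in two frames.
    P1, P2: 3x4 projection matrices of the two cameras;
    pt1, pt2: the track's 2D pixel coordinates in each frame."""
    pts4d = cv2.triangulatePoints(
        P1, P2,
        np.float32(pt1).reshape(2, 1),
        np.float32(pt2).reshape(2, 1))
    return (pts4d[:3] / pts4d[3]).ravel()  # homogeneous -> (x, y, z) point3D
```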
And step S2, displaying the preprocessed video data and POS data on the spherical model in real time to prepare for splicing.
Further, in a preferred embodiment of the present application, the step S2 includes:
and step S201, triangulating the feature points used for feature matching by using the feature matching result of the step S104 to generate a first 2D triangular mesh.
Step S202, a single-frame image is cut into a plurality of image blocks using the first 2D triangular mesh.
Each image block is one mesh cell in size.
Step S203, triangulating the three-dimensional space points of the current frame image generated in step S105 by using the first 2D triangular mesh to generate a 3D triangular mesh.
And S204, removing the elevation dimension of the 3D triangular grid to generate a second 2D triangular grid.
This step mainly removes the elevation dimension from the 3D triangular mesh generated by 3D Mesh, retaining only the x and y dimensions.
And step S205, generating the digital surface model (DSM).
This step mainly generates a DSM elevation map from the 3D triangular mesh; where a pixel has no corresponding elevation, the value is interpolated within the enclosing triangle. DSM: the Digital Surface Model, a ground elevation model that includes the heights of surface buildings, bridges, trees and the like.
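A sketch of that interpolation using matplotlib's triangular-mesh interpolator (library choice and all names are assumptions): pixels inside a triangle get a linearly interpolated elevation, pixels outside the mesh stay masked:

```python
import numpy as np
from matplotlib.tri import Triangulation, LinearTriInterpolator

def rasterize_dsm(xyz, triangles, grid_x, grid_y):
    """xyz: (N, 3) mesh vertices; triangles: (M, 3) vertex indices;
    grid_x, grid_y: 1-D pixel-centre coordinates of the DSM raster."""
    tri = Triangulation(xyz[:, 0], xyz[:, 1], triangles)
    interp = LinearTriInterpolator(tri, xyz[:, 2])  # interpolate elevation
    gx, gy = np.meshgrid(grid_x, grid_y)
    return interp(gx, gy)  # masked array: masked where outside the mesh
```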
And step S206, generating the digital orthophoto map (DOM) and the image's four-corner-point information, and sending them to the spherical model for real-time display.
The DOM is mainly generated using the second 2D triangular mesh and the results of the image segmentation module.
And step S3, splicing the images displayed in real time, and correcting and fusing the spliced overlapped areas.
Further, in a preferred embodiment of the present application, the step S3 includes:
step S301, the multiple DSMs generated in step S205 are directly stitched using the four-way point information to form a complete DSM image.
Step S302, performing smooth gradation processing on the overlapping area of the complete DSM image.
Because several DSMs are spliced directly, overlap areas exist and their color differences change abruptly. Two adjacent DSMs are therefore downsampled with a Gaussian pyramid and then superposed, frequency band by frequency band, with smoothly graded weights.
The smoothly graded weights and the weighted superposition use open-source algorithms common in this field and are not described further here.
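One common open-source realisation of this band-by-band weighted superposition is Laplacian-pyramid blending; a sketch for two overlapping tiles of equal size (the level count and ramp mask are assumptions) might be:

```python
import cv2
import numpy as np

def blend_pair(a, b, mask, levels=4):
    """Blend tiles a and b (same shape, float) with a mask in [0, 1]
    that ramps smoothly across the overlap, mixing each frequency
    band with the correspondingly downsampled weights."""
    ga, gb, gm = [np.float32(a)], [np.float32(b)], [np.float32(mask)]
    for _ in range(levels):                        # Gaussian pyramids
        ga.append(cv2.pyrDown(ga[-1]))
        gb.append(cv2.pyrDown(gb[-1]))
        gm.append(cv2.pyrDown(gm[-1]))

    out = ga[-1] * gm[-1] + gb[-1] * (1.0 - gm[-1])      # coarsest band
    for i in range(levels - 1, -1, -1):
        size = (ga[i].shape[1], ga[i].shape[0])
        la = ga[i] - cv2.pyrUp(ga[i + 1], dstsize=size)  # Laplacian bands
        lb = gb[i] - cv2.pyrUp(gb[i + 1], dstsize=size)
        out = cv2.pyrUp(out, dstsize=size) + la * gm[i] + lb * (1.0 - gm[i])
    return out
```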
Step S303, directly splicing the several DOMs generated in step S206, using the four-corner-point information, to form a complete DOM image.
And S304, performing smooth gradual change processing on the overlapped area of the complete DOM image.
Because several DOMs are spliced directly, overlap areas exist and their color differences change abruptly. Two adjacent DOMs are therefore downsampled with a Gaussian pyramid and then superposed, frequency band by frequency band, with smoothly graded weights.
The above description is only a preferred embodiment of the present application and is not intended to limit the present application, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present application shall be included in the protection scope of the present application.

Claims (9)

1. The fast splicing device for aerial images of the unmanned aerial vehicle is characterized by comprising a preprocessing module, a display processing module and a post-processing module; the preprocessing module receives video data and POS data shot by the unmanned aerial vehicle, preprocesses the video data and the POS data and sends the preprocessed video data and POS data to the display processing module; the display processing module displays the preprocessed video data and POS data on a spherical model in real time to prepare for splicing; and the post-processing module is used for splicing the images displayed in real time and correcting and fusing the overlapped area.
2. The device for rapidly splicing aerial images of unmanned aerial vehicles according to claim 1, wherein the preprocessing module comprises:
the sampling module, used for sampling the UAV video data and POS data at regular intervals, scaling each sampled single-frame image to a set scale and then storing it;
the feature extraction module, used for extracting features from the single-frame image;
the spatial matching module, used for matching the k frames closest to the current frame, using GPS information and a kd-tree, to form image matching pairs;
the feature matching module, used for performing feature matching with the feature points between images and filtering out mismatches;
and the point cloud generation module, used for generating tracks from the matching relations among feature points, triangulating the generated tracks to produce new three-dimensional space points, and performing error adjustment on those points.
3. The device for rapidly splicing aerial images of unmanned aerial vehicles according to claim 2, wherein the display processing module comprises:
the first 2D triangular mesh generation module, used for triangulating the feature points with the result of the feature matching module to generate a first 2D triangular mesh;
the 3D triangular mesh generation module, used for triangulating the three-dimensional space points to generate a 3D triangular mesh;
the second 2D triangular mesh generation module, used for removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
the image segmentation module, used for segmenting a single-frame image into a plurality of image blocks with the first 2D triangular mesh;
the DSM generation module, which generates a DSM elevation map from the 3D triangular mesh;
and the DOM generation module, which generates the digital orthophoto map (DOM) and the image's four-corner-point information from the second 2D triangular mesh and the image segmentation result.
4. The device for rapidly splicing aerial images of unmanned aerial vehicles according to claim 3, wherein the post-processing module comprises:
the DSM splicing module, used for directly splicing the several DSMs generated by the DSM generation module, using the four-corner-point information, to form a complete DSM image;
the complete-DSM processing module, used for performing smooth gradation processing on the overlap areas of the complete DSM image;
the DOM splicing module, used for directly splicing the several DOMs generated by the DOM generation module, using the four-corner-point information, to form a complete DOM image;
and the complete-DOM processing module, used for performing smooth gradation processing on the overlap areas of the complete DOM image.
5. An unmanned aerial vehicle aerial image fast splicing method is characterized by comprising the following steps:
step S1, receiving video data and POS data and preprocessing;
step S2, displaying the preprocessed video data and POS data in real time;
and step S3, splicing the images displayed in real time, and correcting and fusing the spliced overlapped areas.
6. The unmanned aerial vehicle aerial image rapid stitching method as claimed in claim 5, wherein the preprocessing step in step S1 comprises:
s101, sampling video data and POS data of the unmanned aerial vehicle at regular time, zooming a sampled single-frame image to a set scale and storing the zoomed single-frame image;
step S102, extracting the characteristics of the single-frame image;
step S103, matching k frame images with the nearest current frame image time to form an image matching pair;
step S104, performing feature matching by using feature points among the images and filtering mismatching;
and S105, generating a track by using the matching relation among the characteristic points, performing triangulation on the generated track to generate a new three-dimensional space point, and performing error adjustment on the three-dimensional space point.
7. The unmanned aerial vehicle aerial image rapid stitching method as claimed in claim 6, wherein the step S2 comprises:
step S201, using the feature matching result of step S104, triangulating the feature points used in feature matching to generate a first 2D triangular mesh;
step S202, cutting a single-frame image into a plurality of image blocks using the first 2D triangular mesh;
step S203, triangulating the three-dimensional space points generated in step S105, using the first 2D triangular mesh, to generate a 3D triangular mesh;
step S204, removing the elevation dimension of the 3D triangular mesh to generate a second 2D triangular mesh;
step S205, generating the DSM;
and step S206, generating the DOM and the image's four-corner-point information and sending them to the spherical model for real-time display.
8. The unmanned aerial vehicle aerial image rapid stitching method as claimed in claim 7, wherein the step S3 comprises:
step S301, directly splicing the several DSMs generated in step S205, using the four-corner-point information, to form a complete DSM image;
step S302, performing smooth gradation processing on the overlap areas of the complete DSM image;
step S303, directly splicing the several DOMs generated in step S206, using the four-corner-point information, to form a complete DOM image;
and step S304, performing smooth gradation processing on the overlap areas of the complete DOM image.
9. The unmanned aerial vehicle aerial image fast stitching method according to claim 6, wherein an ORB method is adopted for feature extraction.
CN202011316356.8A 2020-11-19 2020-11-19 Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method Pending CN112288637A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011316356.8A CN112288637A (en) 2020-11-19 2020-11-19 Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method


Publications (1)

Publication Number Publication Date
CN112288637A true CN112288637A (en) 2021-01-29

Family

ID=74399670

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011316356.8A Pending CN112288637A (en) 2020-11-19 2020-11-19 Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method

Country Status (1)

Country Link
CN (1) CN112288637A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113012084A (en) * 2021-03-04 2021-06-22 中煤(西安)航测遥感研究院有限公司 Unmanned aerial vehicle image real-time splicing method and device and terminal equipment
CN114200958A (en) * 2021-11-05 2022-03-18 国能电力技术工程有限公司 Automatic inspection system and method for photovoltaic power generation equipment
CN114170306A (en) * 2021-11-17 2022-03-11 埃洛克航空科技(北京)有限公司 Image attitude estimation method, device, terminal and storage medium
CN116883251A (en) * 2023-09-08 2023-10-13 宁波市阿拉图数字科技有限公司 Image orientation splicing and three-dimensional modeling method based on unmanned aerial vehicle video
CN116883251B (en) * 2023-09-08 2023-11-17 宁波市阿拉图数字科技有限公司 Image orientation splicing and three-dimensional modeling method based on unmanned aerial vehicle video

Similar Documents

Publication Publication Date Title
CN110648398B (en) Real-time ortho image generation method and system based on unmanned aerial vehicle aerial data
Johnson‐Roberson et al. Generation and visualization of large‐scale three‐dimensional reconstructions from underwater robotic surveys
CN112085844B (en) Unmanned aerial vehicle image rapid three-dimensional reconstruction method for field unknown environment
KR101165523B1 (en) Geospatial modeling system and related method using multiple sources of geographic information
CN112288637A (en) Unmanned aerial vehicle aerial image rapid splicing device and rapid splicing method
Verhoeven et al. Undistorting the past: New techniques for orthorectification of archaeological aerial frame imagery
Barazzetti et al. True-orthophoto generation from UAV images: Implementation of a combined photogrammetric and computer vision approach
CN108168521A (en) One kind realizes landscape three-dimensional visualization method based on unmanned plane
EP3002552B1 (en) A method and a system for building a three-dimensional model from satellite images
CN112434709A (en) Aerial survey method and system based on real-time dense three-dimensional point cloud and DSM of unmanned aerial vehicle
Maurer et al. Tapping into the Hexagon spy imagery database: A new automated pipeline for geomorphic change detection
CN105466399B (en) Quickly half global dense Stereo Matching method and apparatus
CN105005964A (en) Video sequence image based method for rapidly generating panorama of geographic scene
US11972507B2 (en) Orthophoto map generation method based on panoramic map
CN113066112B (en) Indoor and outdoor fusion method and device based on three-dimensional model data
CN115641401A (en) Construction method and related device of three-dimensional live-action model
CN114143528A (en) Multi-video stream fusion method, electronic device and storage medium
KR100904078B1 (en) A system and a method for generating 3-dimensional spatial information using aerial photographs of image matching
US20230186561A1 (en) Method for 3d reconstruction from satellite imagery
CN111693025A (en) Remote sensing image data generation method, system and equipment
CN116883251B (en) Image orientation splicing and three-dimensional modeling method based on unmanned aerial vehicle video
Haala et al. High density aerial image matching: State-of-the-art and future prospects
CN115330594A (en) Target rapid identification and calibration method based on unmanned aerial vehicle oblique photography 3D model
CN108447042A (en) The fusion method and system of urban landscape image data
CN115049794A (en) Method and system for generating dense global point cloud picture through deep completion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination