CN115631094A - Unmanned aerial vehicle real-time image splicing method based on spherical correction - Google Patents
- Publication number: CN115631094A
- Application number: CN202211400858.8A
- Authority: CN (China)
- Prior art keywords: image, pic, gpic, frame, feature
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T3/4038 — Image mosaicing, e.g. composing plane images from plane sub-images
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T5/73 — Deblurring; Sharpening
- G06T5/80 — Geometric correction
- G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
- G06V10/462 — Salient features, e.g. scale invariant feature transforms [SIFT]
- G06V10/74 — Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761 — Proximity, similarity or dissimilarity measures
- G06V10/7715 — Feature extraction, e.g. by transforming the feature space
- G06V10/82 — Recognition or understanding using neural networks
- G06T2200/32 — Indexing scheme involving image mosaicing
- G06T2207/10016 — Video; image sequence
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/20221 — Image fusion; image merging
Abstract
The invention discloses a real-time UAV image stitching method based on spherical correction, comprising the steps of: (1) in order to filter out low-quality images, judging and discarding real-time frames according to blur degree and image content; (2) in order to describe each image, creating SIFT feature descriptors for it; (3) in order to match features between images, selecting the best matching features by brute-force traversal and screening the feature-match relations with the RANSAC algorithm; (4) in order to eliminate perspective-transformation errors, obtaining a homography matrix from the feature-match relations, constructing a spherical correction model under the geometric transformation parameters between the homography matrix and the images, and computing a correction sphere; (5) applying a spherical projection transformation to the input image and performing feature matching again on the transformed image, thereby eliminating the perspective-transformation error in the stitching process.
Description
Technical Field
The invention relates to a method for stitching unmanned aerial vehicle images at a ground control station, and in particular to a real-time UAV image stitching method based on spherical correction.
Background
With the maturing of small unmanned aerial vehicle (UAV) technology, UAVs have been widely applied in many fields, such as surveying and mapping, supervision and military affairs. In applications such as industrial inspection, natural disaster monitoring and urban security in particular, the UAV image transmission function plays a crucial role. In some circumstances a UAV must be used to observe the situation in a region via image transmission, and directly pulling the UAV video stream is unfavorable for overall observation and analysis of the situation; an image stitching technique is therefore needed to generate an overall situation picture.
A UAV image transmission system generally comprises an unmanned aerial vehicle (UAV), wireless communication equipment and a UAV Ground Control Station (GCS), as shown in fig. 1. The UAV carries sensors for different application requirements, such as an image sensor for acquiring images of a ground detection area; the image information acquired by the image sensor is received by the GCS through the wireless communication equipment. The GCS performs image enhancement, image stitching and other processing on the received image information, and finally displays the real scene of the detection area on the display equipment of the GCS.
Traditional UAV image stitching generally stitches the video data shot during flight offline at the Ground Control Station (GCS) after the UAV has finished its flight task. Offline stitching currently achieves good stitching quality, but the stitching speed is generally low and the approach lacks timeliness: stitching is carried out only after all images have been analyzed and processed, so a situation cannot be observed through image stitching online and in real time. Offline image stitching therefore cannot be applied to scenarios with high timeliness requirements. From the perspective of present UAV image use, there are two main objects of image stitching: one is aerial photographs shot by a digital aerial camera; the other is video sequence images (including visible-light images and infrared video images). Image stitching is the process of combining a group of overlapping images into a seamless, high-definition, large-field image through automatic computer registration, geometric correction, image dodging and other processing, as shown in fig. 2.
With the development and maturing of 5G communication, the image transmission capability of the UAV has been enhanced: the peak rate of a wireless communication link can reach 10-20 Gbit/s and the air-interface delay is as low as 1 ms, greatly improving the real-time performance of wireless communication, so that the UAV can transmit stable, high-quality video stream information (i.e. an HTTP data stream) back to a communication base station in real time. Real-time UAV image stitching based on the video stream has therefore become an important direction of development. However, video-stream-based UAV image stitching faces many challenges: the video-stream transmission mode reduces image data quality; the stability of the communication link affects stitching stability and may even directly determine stitching success or failure; and the flight state of the UAV is also closely related to stitching quality. Thus, in the video-stream stitching mode at the GCS under 5G communication, how to maintain both the stability and the timeliness of the stitching algorithm is the technical problem to be solved.
Disclosure of Invention
The invention solves three technical problems in the video-stream stitching mode of the unmanned aerial vehicle Ground Control Station (GCS) under 5G communication: first, high-precision stitching of UAV video-stream images requires a large time cost; second, stitching at low time cost yields poor image quality; third, the stability of the UAV video-stream stitching process is poor. To this end, the invention provides a real-time UAV image stitching method based on spherical correction. The method uses a spherical transformation to optimize the image stitching algorithm based on feature matching and homography transformation, so that high-precision UAV image stitching can be completed at low time cost, and the time cost remains low even under high-precision stitching. The method directly stitches UAV images at the ground control station in real time from the video stream information (i.e. the HTTP data stream).
Based on the network video stream transmitted by the UAV in real time, the invention: (1) in order to filter out low-quality images, judges and discards real-time frames according to blur degree and image content; (2) in order to describe each image, creates SIFT feature descriptors for it; (3) in order to match features between images, selects the best matching features by brute-force traversal and screens the feature-match relations with the RANSAC algorithm; (4) in order to eliminate perspective-transformation errors, obtains a homography matrix from the feature-match relations, constructs a spherical correction model under the geometric transformation parameters between the homography matrix and the images, and computes a correction sphere; (5) applies a spherical projection transformation to the input image and performs feature matching again on the transformed image, eliminating the perspective-transformation error in the stitching process.
The invention relates to an unmanned aerial vehicle real-time image splicing method based on spherical correction, which comprises the following steps:
selecting a first frame image as a reference image;
step two, taking the current image frame after the initial frame image as an image to be registered;
step three, fuzzy filtering;
step 31, convolution processing;
step 32, negative feedback control of fuzzy filtering judgment;
step 33, judging whether the image frame is the last image frame;
step four, extracting features based on SIFT algorithm;
step five, a nearest neighbor distance ratio matching strategy defined by a threshold value;
step 51, calculating Euclidean distances of feature sets of two adjacent image frames;
step 52, calculating a nearest neighbor distance ratio;
step 53, judging image frame-feature matching;
step six, a random sample consistency algorithm;
step seven, calculating the radius of the correction sphere;
step 71, calculating geometric transformation parameters between images;
step 72, calculating the corrected sphere radius;
step eight, spherical projection;
Step nine, feature matching;
step 91, extracting feature sets of two adjacent image frames after spherical transformation;
step 93, a random sample consistency algorithm;
step ten, homography transformation and weighted average processing.
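The ten steps above can be sketched as a single processing loop. The following is an illustrative skeleton only: the frame representation and the helper functions are hypothetical stand-ins for the image operations the patent describes, and the discard condition assumes the blur test compares the ratio of consecutive Laplacian convolution sums against FS = 0.4.

```python
# Illustrative skeleton of the stitching loop (steps one to ten).
# Frames are toy dicts and the helpers are hypothetical stand-ins,
# so only the control flow of the patent's pipeline is shown.

def laplacian_sum(frame):
    # stand-in for the pixel/Laplacian convolution sum of step three
    return frame["sharpness"]

def register_and_blend(panorama, frame):
    # stand-in for steps four to ten (SIFT, ratio matching, RANSAC,
    # spherical correction, homography warp, weighted averaging)
    return panorama + [frame["name"]]

def stitch_stream(frames, fs=0.4):
    panorama = [frames[0]["name"]]          # step one: reference image
    prev_lap = laplacian_sum(frames[0])
    for frame in frames[1:]:                # step two: frames to register
        lap = laplacian_sum(frame)          # step three: blur filtering
        if lap / prev_lap < fs:             # sharp drop => blurred, discard
            continue
        prev_lap = lap
        panorama = register_and_blend(panorama, frame)
    return panorama

frames = [
    {"name": "pic1", "sharpness": 100.0},
    {"name": "pic2", "sharpness": 95.0},
    {"name": "pic3", "sharpness": 20.0},   # blurred: 20/95 < 0.4
    {"name": "pic4", "sharpness": 90.0},
]
print(stitch_stream(frames))  # ['pic1', 'pic2', 'pic4']
```

Note that the blurred frame is skipped without updating the reference Laplacian sum, so the next sharp frame is still compared against the last accepted one.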
The unmanned aerial vehicle real-time image stitching method based on spherical correction has the following advantages:
(1) High stitching stability: the spherical transformation eliminates the accumulation of perspective error in the homography transformation, so the problem of accumulated perspective error does not arise even over a large stitching range.
(2) Convenient and efficient stitching: images are stitched online while the UAV network video stream is being pulled. There is no need to wait until the UAV flight task ends before stitching the captured data, nor are the UAV flight parameters needed; stitching can be performed online in real time as soon as the video stream is connected.
(3) High tolerance to network conditions: multiple data-cleaning stages are set up for the transmission characteristics of the network video stream, so that low-quality images caused by fluctuations in network communication quality can be filtered out.
Drawings
Fig. 1 is a structure diagram of an unmanned aerial vehicle image transmission system.
Fig. 2 is a flow diagram of a conventional image stitching technique.
FIG. 3 is a flow chart of the unmanned aerial vehicle real-time image stitching method based on spherical correction.
FIG. 4 is a schematic structural diagram of a homography transformation splicing model in the method of the present invention.
FIG. 5 is a schematic structural diagram of a spherical correction model in the method of the present invention.
FIG. 6 is a schematic view of a spherical projection structure in the method of the present invention.
Fig. 7A is a homography transformed image mosaic at low distortion.
FIG. 7B is a photograph of a mosaic of images after spherical correction using the method of the present invention.
Fig. 8A is a homography transformed image mosaic at high distortion.
FIG. 8B is a photograph of a mosaic of images after spherical correction using the method of the present invention.
Fig. 9A is a comparison graph of the reprojection error for homography transformed image stitching.
FIG. 9B is a comparison graph of the re-projection error after spherical correction in the method of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples.
Referring to fig. 1, the unmanned aerial vehicle (UAV) uses an image sensor (e.g. a camera) to acquire video data of the detection area. The UAV pushes an RTMP data stream to the communication base station and the cloud server in real time using the RTMP protocol, and the UAV Ground Control Station (GCS) receives the HTTP data stream forwarded by the cloud server.
In the present invention, software in the UAV Ground Control Station (GCS) uses a Python program to pull the HTTP data stream. Each image frame obtained by the Python program is denoted pic. The image processor in the GCS stitches multiple image frames pic together. The present invention is an improved method for the image pre-processing and image registration stages of fig. 2.
In the present invention, one HTTP data stream is denoted PIC, and PIC = {pic_1, pic_2, …, pic_{i-1}, pic_i, pic_{i+1}, …, pic_η}, in which:
pic_1 represents the 1st image frame in the data stream;
pic_2 represents the 2nd image frame in the data stream;
pic_i represents the i-th image frame in the data stream;
pic_{i-1} represents the image frame immediately before pic_i in the data stream, referred to simply as the previous image frame;
pic_{i+1} represents the image frame immediately after pic_i in the data stream, referred to simply as the next image frame;
pic_η represents the last image frame in the data stream.
For convenience of explanation, pic_i is also referred to as the current image frame. The subscript i is the identification number of an image frame in the data stream, and the subscript η is the total number of image frames in the data stream.
In the present invention, a pixel position in the current image frame pic_i is written pic_i(x, y), where x is the pixel abscissa and y is the pixel ordinate. Similarly, pixel positions in the previous image frame pic_{i-1}, the next image frame pic_{i+1} and the last image frame pic_η are written pic_{i-1}(x, y), pic_{i+1}(x, y) and pic_η(x, y).
Blur filtering condition FS
For the current image frame pic_i, the pixel points are convolved with the Laplacian operator to obtain the pixel-point Laplacian convolution sum, denoted Lap_i.
For the previous image frame pic_{i-1}, the pixel points are convolved with the Laplacian operator to obtain the pixel-point Laplacian convolution sum, denoted Lap_{i-1}.
In the invention, the convolution of image-frame pixel points with the Laplacian operator follows the improved single-image deblurring algorithm with a hyper-Laplacian constraint published in Journal of Chinese Computer Systems, vol. 39, no. 5, 2018.
In the present invention, the blur filtering condition is denoted FS, and FS = 0.4. The value 0.4 is optimal: with it, the blur filtering can sensitively identify blurred images while maintaining good stitching stability and remaining insensitive to changes in image content.
The flow of the UAV image stitching technique of the invention is shown in fig. 3. An image processor in the UAV Ground Control Station (GCS) stitches the image frames one by one in their order in the HTTP data stream, from the first frame to the last, building the panoramic UAV image. The specific steps are as follows:
selecting a first frame image as a reference image;
in the invention, the image processor first reads the first frame of the HTTP data stream PIC = {pic_1, pic_2, …, pic_{i-1}, pic_i, pic_{i+1}, …, pic_η}, i.e. the 1st image frame pic_1, and takes pic_1 as the reference image; then step two is executed.
In the present invention, taking the 1st image frame pic_1 as the reference image determines the start position of the stitching. The start position may be the upper-left corner of the panorama, or any other position in the panorama.
In the invention, the HTTP data stream without the first frame is denoted the image set to be registered, PIC_pending, and PIC_pending = {pic_2, …, pic_{i-1}, pic_i, pic_{i+1}, …, pic_η}.
Step two, taking the current image frame after the initial frame image as an image to be registered;
in the invention, the current image frame pic_i is read from the image set to be registered PIC_pending = {pic_2, …, pic_{i-1}, pic_i, pic_{i+1}, …, pic_η} and taken as the image to be registered; then step three is performed.
Step three, fuzzy filtering;
in the invention, in order to filter out low-quality images, real-time frames are judged and discarded according to blur degree and image content. This eliminates invalid images and reduces the time cost of stitching, and is one of the means by which high-precision UAV image stitching is completed at low cost.
Step 31, convolution processing;
for the current image frame pic_i, convolution of the pixel points with the Laplacian operator is performed to obtain the pixel-point Laplacian convolution sum of pic_i, denoted Lap_i.
Step 32, negative feedback control of fuzzy filtering judgment;
the blur filtering calculation for pic_i is the ratio Lap_i / Lap_{i-1}, where Lap_{i-1} is the pixel-point Laplacian convolution sum of the previous image frame pic_{i-1}; the blur filtering condition FS is then used to judge whether pic_i must be filtered out.
If Lap_i / Lap_{i-1} < FS, the current image frame pic_i is discarded, the image frame following pic_i, i.e. pic_{i+1}, is selected, and step two is executed again.
Step 33, judging whether the image frame is the last image frame;
steps 31-32 are repeated until the last frame of PIC_pending, i.e. pic_η, has been processed:
for the last frame image pic_η, convolution of the pixel points with the Laplacian operator is performed to obtain the pixel-point Laplacian convolution sum of pic_η, denoted Lap_η;
the blur filtering calculation Lap_η / Lap_{η-1} is performed, and the blur filtering condition FS is used to judge whether pic_η must be filtered out, where Lap_{η-1} is the pixel-point Laplacian convolution sum of the (η-1)-th frame image pic_{η-1}.
In the present invention, the coarse-mosaic image set obtained after blur filtering of PIC_pending = {pic_2, …, pic_{i-1}, pic_i, pic_{i+1}, …, pic_η} is denoted PIC_coarse = {pic'_2, …, pic'_{i-1}, pic'_i, pic'_{i+1}, …, pic'_η}, where pic'_k denotes the image frame pic_k retained after the blur filtering processing (frames discarded by the filtering are absent from the set).
In the present invention, adding the blur filtering condition FS provides negative-feedback control of the stitching: blurred images caused by shake of the camera carried on the UAV platform, degradation of network stream quality and other factors are filtered out, and error is kept from entering at the source, so the method can adapt to more complex and degraded network conditions. In addition, during blur filtering, whether the current frame is sharp is judged from the change of the convolution sum of the image pixel points with the Laplacian operator; sharp current frames are retained and blurred ones are discarded, effectively preventing stitching errors from arising.
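As a minimal sketch, the Laplacian convolution sum of step three can be computed as below. Pure Python on tiny hand-made gray-scale patches is used for clarity (a real pipeline would apply the operator to full frames, e.g. via OpenCV); the patch values are made up, and the point is only that a blurred patch yields a much smaller sum, which is what the FS test exploits.

```python
# Minimal sketch: pixel-point Laplacian convolution sum of a
# gray-scale image, using the standard 3x3 Laplacian kernel.
# A blurred frame produces a much smaller sum than a sharp one.

LAPLACE = [[0,  1, 0],
           [1, -4, 1],
           [0,  1, 0]]  # standard 3x3 Laplacian kernel

def laplacian_sum(img):
    """Sum of absolute Laplacian responses over interior pixels."""
    h, w = len(img), len(img[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            resp = sum(LAPLACE[dy + 1][dx + 1] * img[y + dy][x + dx]
                       for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            total += abs(resp)
    return total

sharp = [[0, 0, 0, 0],
         [0, 255, 255, 0],
         [0, 255, 255, 0],
         [0, 0, 0, 0]]           # hard edges -> large response
blurry = [[100, 110, 110, 100],
          [110, 120, 120, 110],
          [110, 120, 120, 110],
          [100, 110, 110, 100]]  # gentle gradient -> small response
print(laplacian_sum(sharp), laplacian_sum(blurry))  # 2040.0 80.0
```

The ratio 80/2040 is far below FS = 0.4, so a frame that blurred relative to its predecessor would be discarded by step 32.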
Step four, extracting features based on SIFT algorithm;
in the present invention, SIFT feature extraction is performed on each image frame in PIC_coarse. For SIFT feature extraction, see "Automatic Panoramic Image Stitching using Invariant Features", International Journal of Computer Vision, vol. 74, 2007, by Matthew Brown and David G. Lowe.
The image frame pic'_i is converted to a gray-scale image, denoted gpic_i, using the OpenCV library of Python.
The SIFT feature-creation function in the OpenCV library is used to create the set of feature points belonging to gpic_i, denoted F_i, referred to simply as the feature set of the current frame image.
Similarly, the gray-scale image of pic'_2 is denoted gpic_2, and its feature-point set F_2 is the feature set of the 2nd frame image; the gray-scale image of pic'_{i-1} is gpic_{i-1}, with feature set F_{i-1}, the feature set of the previous frame image; the gray-scale image of pic'_{i+1} is gpic_{i+1}, with feature set F_{i+1}, the feature set of the next frame image; and the gray-scale image of pic'_η is gpic_η, with feature set F_η, the feature set of the last frame image.
In the invention, feature extraction is performed on the image frames of PIC_coarse according to the SIFT algorithm; the resulting frame-image gray-scale feature sets are denoted F = {F_2, …, F_{i-1}, F_i, F_{i+1}, …, F_η}. Then step five is performed.
Step five, a nearest neighbor distance ratio matching strategy defined by a threshold value;
in the invention, the Euclidean distance is used as the similarity measure between feature points; however, directly taking the nearest matching feature point introduces many false matches, so feature matching uses the threshold-limited nearest-neighbour distance ratio strategy of the fast UAV aerial-image stitching algorithm disclosed in Computer Simulation, vol. 39, no. 5, 2022.
Step 51, calculating Euclidean distances of feature sets of two adjacent image frames;
in the invention, the feature sets of two adjacent image frames are matched, and the nearest and next-nearest Euclidean distances between the two frames are obtained by traversing the feature points.
The feature set F_{i-1} of the previous frame image is matched against the feature set F_i of the current frame image: all feature points of F_{i-1} and F_i are traversed, and for each feature point the nearest Euclidean distance d1 and the next-nearest Euclidean distance d2 are computed during the traversal.
Step 52, calculating a nearest neighbor distance ratio;
the ratio of the nearest Euclidean distance d1 to the next-nearest Euclidean distance d2 is computed and denoted r, with r = d1 / d2.
Step 53, judging image frame-feature matching;
when ratio ofLess than ratio threshold TT Threshold value Time of flightI.e. the features are considered to match. The matching set after completing the feature matching is recorded as the feature matching of two adjacent image frames
When ratio ofGreater than or equal to the ratio threshold TT Threshold value Time of flightI.e. feature set ending the previous frame imageAnd feature set of current frame imageIs performed.
In the present invention, the ratio threshold is expressed asTT Threshold value And TT Threshold value =0.4. When the ratio threshold is set to 0.4, the feature matching condition can be judged more accurately.
In the same way, feature set matching can be performed for every pair of adjacent image frames, obtaining the feature matching sets of all pairs of adjacent image frames.
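To make step five concrete, here is a minimal numpy-only sketch of threshold-limited nearest neighbor distance ratio matching; the descriptor arrays and the function name `nndr_match` are illustrative assumptions rather than the patent's notation:

```python
import numpy as np

def nndr_match(desc_prev, desc_curr, tt_threshold=0.4):
    """Threshold-limited nearest neighbor distance ratio matching (step five).

    desc_prev, desc_curr: (n, d) descriptor arrays of two adjacent frames.
    Returns (i, j) index pairs whose nearest/next-nearest ratio < threshold.
    """
    matches = []
    for i, d in enumerate(desc_prev):
        # Euclidean distance from this feature to every current-frame feature
        dists = np.linalg.norm(desc_curr - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]       # nearest and next-nearest
        ratio = dists[j1] / dists[j2]        # nearest neighbor distance ratio
        if ratio < tt_threshold:             # keep only unambiguous matches
            matches.append((i, int(j1)))
    return matches
```

With the ratio threshold at the patent's value of 0.4, a feature is kept only when its best match is markedly closer than its second-best, which suppresses ambiguous correspondences.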
Step six, a random sample consistency algorithm;
according to the random sample consensus (RANSAC) algorithm, the matching sets from step five are screened and bad matches are eliminated, obtaining an effective matching set and a homography matrix; the homography model is a three-row, three-column matrix.
In the invention, the random sample consensus algorithm refers to the one used in the fast unmanned aerial vehicle aerial image stitching algorithm disclosed in Computer Simulation, Vol. 39, No. 5, May 2022.
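The patent defers to the cited literature for the random sample consensus step; the sketch below shows the idea with numpy only (in an OpenCV pipeline one would typically call `cv2.findHomography(src, dst, cv2.RANSAC)` instead; the helper names and the 4-point direct linear transform are assumptions of this sketch):

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: fit a 3x3 homography to >= 4 point pairs."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(rows, float))
    return vt[-1].reshape(3, 3)              # null-space vector as a matrix

def ransac_homography(src, dst, iters=200, tol=2.0, seed=0):
    """Screen matches: keep the homography model with the most inliers."""
    rng = np.random.default_rng(seed)
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    best_H, best_in = None, np.zeros(n, bool)
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        if abs(H[2, 2]) < 1e-12:             # degenerate sample, skip
            continue
        H = H / H[2, 2]
        proj = np.c_[src, np.ones(n)] @ H.T  # project all source points
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = np.linalg.norm(proj - dst, axis=1) < tol
        if inliers.sum() > best_in.sum():
            best_H, best_in = H, inliers
    return best_H, best_in
```

Matches whose reprojection error under the best model exceeds `tol` pixels are the "bad matches" discarded in step six.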
Step seven, calculating the radius of the correction sphere;
step 71, calculating geometric transformation parameters between images;
in the present invention, the homography transformation matrix is produced by the translation H_translation, scaling H_scale, rotations H_x-rotation, H_y-rotation, H_z-rotation, and shears H_x-shear, H_y-shear of the image sensor (e.g. a camera) on the drone; it can therefore be expressed as the product of translation, rotation, scaling, and shear, i.e. H = H_translation · H_x-rotation · H_y-rotation · H_z-rotation · H_scale · H_x-shear · H_y-shear.
x is the pixel value by which the image is translated in the x-axis direction.
y is the pixel value by which the image is translated in the y-axis direction.
w is the scale value by which the image is scaled in the x-axis direction.
v is the scale value by which the image is scaled in the y-axis direction.
α, β, γ are the rotation angles of the image about the x, y, and z axes, respectively.
φ, ψ are the shear angles of the image in the x- and y-axis directions, respectively.
According to Newton's method, an iterative solution is carried out to obtain the geometric transformation parameters of the transformation from image gpic_i to image gpic_i-1.
Thus, the 9 entries of the three-row, three-column homography matrix and the 9 geometric parameters of step 71 form a system of equations; the system is solved iteratively with Newton's method to obtain the parameter values.
x_i is the pixel value by which image gpic_i is translated in the x-axis direction in the transformation to image gpic_i-1.
y_i is the pixel value by which image gpic_i is translated in the y-axis direction in the transformation to image gpic_i-1.
In the invention, Newton's method refers to the iterative method for systems of nonlinear equations (Newton's method) in Chapter 4, Section 2 of Numerical Analysis, 4th edition, Yan Qingjin, Beihang University Press, September 2012.
Similarly, the geometric transformation parameters between images are calculated for every homography in MDD, obtaining the parameter set HMDD.
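Step 71's system of 9 equations in 9 unknowns is solved by Newton's method; the generic Newton iteration below, with a forward-difference Jacobian, sketches the solver on a 2-parameter toy system standing in for the full 9-parameter one (the toy system and all names are illustrative assumptions, not the patent's decomposition):

```python
import numpy as np

def newton_solve(F, x0, tol=1e-10, max_iter=50, h=1e-6):
    """Newton's method for F(x) = 0 with a forward-difference Jacobian."""
    x = np.asarray(x0, float)
    for _ in range(max_iter):
        fx = F(x)
        if np.linalg.norm(fx) < tol:
            break
        J = np.empty((len(fx), len(x)))
        for j in range(len(x)):              # numerical Jacobian, column j
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - fx) / h
        x = x - np.linalg.solve(J, fx)
    return x

# Toy 2-parameter stand-in for the 9-parameter system: recover the scale w
# and rotation angle a from homography entries h00 = w*cos(a), h01 = -w*sin(a)
# (a hypothetical reduced example, not the full translation-rotation-shear product).
h00, h01 = 1.2 * np.cos(0.3), -1.2 * np.sin(0.3)
F = lambda p: np.array([p[0] * np.cos(p[1]) - h00,
                        -p[0] * np.sin(p[1]) - h01])
w, a = newton_solve(F, [1.0, 0.0])
```

The same iteration applied to all 9 matrix entries recovers the full parameter vector (x, y, w, v, α, β, γ, φ, ψ) given a reasonable starting guess.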
step 72, calculating the corrected sphere radius
The invention provides a spherical correction algorithm to relieve the error accumulation problem of homography transformation. The three-dimensional mosaic model is shown in Fig. 4: the stitched images do not lie in the same plane, so perspective transformation errors accumulate during homography transformation. Projecting the mosaic model vertically along the negative z axis gives the two-dimensional top view shown in Fig. 5. The invention therefore introduces a stitching rule for the spherical correction model: the straight line of the first image frame gpic_1 is taken as the reference line, denoted L_base; every subsequent image is given a spherical projection transformation, the i-th frame image gpic_i becoming the spherically projected image cpic_i with sphere radius r_i, so that the right end point of each transformed image always stays on L_base during registration.
Under this rule, let the included angle between gpic_i and gpic_i-1 be α_i and the relative displacement of gpic_i and gpic_i-1 in the y direction be y_i pixels; the sphere radius r_i of cpic_i can then be solved from:
wherein x_i and α_i were solved in step 71; the transcendental equation is solved with an iterative method to calculate the corrected sphere radius r_i.
Step eight, spherical projection
In the invention, spherical projection is adopted to carry out spherical projection transformation on the input image, and the transformed image is used for carrying out feature matching again, thus eliminating the transmission transformation error in the splicing process.
gpic_i is given a spherical projection with radius r_i, and gpic_i is projectively transformed into cpic_i. As shown in Fig. 6, assume the image gpic_i lies on a sphere of radius r_i; any point P_2 of gpic_i then has the pixel coordinate value (gx_i, gy_i, gz_i). A light source point P is placed on the z axis, and projecting through P onto the plane z = 0 gives the projection point P_3, i.e. the pixel coordinate value of any point of cpic_i, expressed as (cx_i, cy_i, 0). Let the projection scale factor be tk_i; then:
it can finally be obtained that any point of the cpic_i produced by spherically projecting gpic_i has the pixel coordinate value (tk_i·gx_i, tk_i·gy_i), where tk_i is the projection scale factor.
Similarly, using the radius set RR, the spherical projection is computed for every image in PIC_coarse, obtaining the spherically corrected image set CPIC_corrected.
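The patent's expression for tk_i appears only in a formula image; the numpy sketch below assumes the similar-triangles form tk = z_P/(z_P − gz) for a light source at (0, 0, z_P) projecting onto the plane z = 0, which is consistent with the geometry described above but is an assumption of this sketch:

```python
import numpy as np

def spherical_project(points, z_p):
    """Project sphere-surface pixel points (gx, gy, gz) onto the plane z = 0
    through a light source at (0, 0, z_p); similar triangles give the
    projection scale factor tk = z_p / (z_p - gz), so the projected point
    is (tk * gx, tk * gy)."""
    pts = np.asarray(points, float)
    tk = z_p / (z_p - pts[:, 2])             # per-point scale factor tk_i
    return np.column_stack([tk * pts[:, 0], tk * pts[:, 1]])
```

For example, the point (2, 4, 1) seen from a light source at height 3 lands at (3, 6): the ray from (0, 0, 3) through (2, 4, 1) meets z = 0 at parameter t = 1.5, exactly the scale factor 3/(3 − 1).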
Step nine, feature matching;
in the invention, in order to eliminate the perspective transformation error, a homography matrix is obtained from the feature matching relation, the spherical correction model under the geometric transformation parameters is constructed from the homography matrix and the geometric transformation parameters between the images, and the correction sphere is calculated.
Step 91, extracting feature sets of two adjacent image frames after spherical transformation;
for the spherically corrected image cpic_i obtained in step eight and the previous spherically corrected frame cpic_i-1, features are extracted (using the method of step four), obtaining the feature set belonging to cpic_i and the feature set belonging to cpic_i-1.
In the invention, according to the SIFT algorithm, features are extracted from the spherically corrected image frame set CPIC_corrected, and the resulting corrected frame image-gray-feature set is recorded.
Step 92, threshold-defined nearest neighbor distance ratio matching strategy;
The two feature sets are matched (using the method of step five). If the features match, the resulting set is recorded as the feature matching set of the two adjacent corrected image frames; if the features do not match, the feature matching between the feature set of the previous frame image and the feature set of the current corrected frame image is ended.
In the same way, feature set matching can be performed for every pair of adjacent corrected image frames, obtaining the feature matching sets of all pairs of adjacent spherically corrected image frames.
Step 93, a random sample consistency algorithm;
The matching sets are screened with the random sample consensus algorithm (using the method of step six) and a homography model is generated, obtaining the effective corrected matching set and the homography model.
In the same way, step six is repeated for the remaining matching sets to obtain the effective corrected matching sets and the homography matrix set CMDD, in which each homography model is a three-row, three-column matrix.
Step ten, homography transformation and weighted average processing are carried out;
The corresponding homography matrix is used as the perspective transformation matrix of cpic_i, and cpic_i is perspective-transformed onto the base image RES, completing the stitching of cpic_i. In a similar way, pic_i+1 is perspective-transformed onto the base image RES with its homography matrix, completing the stitching of pic_i+1; and likewise pic_η is perspective-transformed onto RES, completing the stitching of pic_η.
In the present invention, all corrected images in CPIC_corrected are homography-transformed using CMDD, and the final base image RES contains all of the elements of CPIC_corrected. The stitched image RES is then fused with a weighted average algorithm, completing the stitching of the HTTP data stream.
In the invention, the weighted average algorithm refers to the one in the fast unmanned aerial vehicle aerial image stitching algorithm disclosed in Computer Simulation, Vol. 39, No. 5, May 2022.
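The weighted average fusion is defined in the cited literature; the equal-weight sketch below only illustrates the idea, treating zero-valued pixels as empty (the weighting scheme and the name `feather_blend` are assumptions of this sketch, not the cited algorithm):

```python
import numpy as np

def feather_blend(base, warped):
    """Weighted-average fusion of a warped frame into the mosaic RES.
    Where both images have content the pixel values are averaged; where
    only one does, it is kept. Zero pixels count as 'no content' here."""
    base = np.asarray(base, float)
    warped = np.asarray(warped, float)
    w_b = (base > 0).astype(float)           # per-pixel weights
    w_w = (warped > 0).astype(float)
    total = w_b + w_w
    return np.where(total > 0,
                    (w_b * base + w_w * warped) / np.maximum(total, 1),
                    0.0)
```

A production blender would usually taper the weights toward each image border (feathering) instead of weighting all overlap pixels equally, which hides the seam more effectively.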
Example 1
In order to illustrate the effect of the method, a DJI Mavic 2 Pro was flown in the Qinhuai district of Nanjing, Jiangsu Province, China (32.029366° N, 118.813482° E), from west to east at a flight altitude of 45 m (24.7 m above ground) at 10 km/h, shooting the ground orthographically, and a video stream in a stable state was captured. The stitching results for the video stream images are shown in Figs. 7A and 7B. Without the spherical correction algorithm of the invention (Fig. 7A), the homography transformations produced during stitching visibly deform the images slightly, and the accumulation effect at the end of the mosaic is more obvious. With the spherical correction algorithm, the stitching result is shown in Fig. 7B and the reprojection errors in Figs. 9A and 9B: stitching precision improves after spherical correction, the low-distortion stitching field is expanded nearly threefold, and the homography error after two hundred consecutive stitches is still lower than the error at the 70th stitch without the correction algorithm, while time consumption remains low despite the clearly improved precision, so the method can adapt to real-time scenarios. When the homography error is large, the cumulative effect of the homography transformation is magnified, as shown in Fig. 8A, whereas after spherical correction the homography transformation error is greatly mitigated, as shown in Fig. 8B.
Compared with other algorithms, the SIFT algorithm has higher accuracy. Without the optimization of this method, matching accuracy drops markedly as the number of stitches increases; after the optimization, subsequent matching accuracy improves significantly, especially in the later stage of stitching, powerfully relieving the homography error accumulation problem.
The invention provides an unmanned aerial vehicle real-time image stitching method based on spherical correction, which solves the technical problem of improving both the real-time stitching response speed and the image precision of unmanned aerial vehicle video stream images under 5G communication, while keeping time consumption low for high-precision stitching.
Claims (4)
1. An unmanned aerial vehicle real-time image stitching method based on spherical correction, wherein the unmanned aerial vehicle pushes an RTMP data stream in real time to a communication base station and a cloud server using the RTMP protocol, and the unmanned aerial vehicle ground control station receives the HTTP data stream forwarded by the cloud server; the method is characterized in that: an image processor in the unmanned aerial vehicle ground control station stitches the image frames one by one in the order in which they appear in the HTTP data stream, i.e. stitches the panoramic unmanned aerial vehicle image frame by frame from the first frame to the last, specifically comprising the following steps:
Step one, selecting a first frame image as a reference image;
the image processor first reads the first frame image in the HTTP data stream PIC = {pic_1, pic_2, …, pic_i-1, pic_i, pic_i+1, …, pic_η}, i.e. the 1st image frame pic_1, and takes pic_1 as the reference image; then step two is executed;
the 1st image frame pic_1, as the reference image, determines the starting position of image stitching;
the HTTP data stream without the first frame image is recorded as the image set to be registered, PIC_pending = {pic_2, …, pic_i-1, pic_i, pic_i+1, …, pic_η};
Step two, taking the current image frame after the initial frame image as an image to be registered;
from the image set to be registered PIC_pending = {pic_2, …, pic_i-1, pic_i, pic_i+1, …, pic_η}, the current image frame pic_i is read and taken as the image to be registered; then step three is executed;
step three, fuzzy filtering;
step 31, convolution processing;
the current image frame pic_i is convolved pixel by pixel with the Laplacian operator, and the resulting pixel-Laplacian convolution sum of pic_i is recorded;
Step 32, negative feedback control of fuzzy filtering judgment;
a fuzzy filtering calculation is performed on pic_i, and the fuzzy filtering condition FS is then used to judge whether filtering is required;
the corresponding quantity for the previous image frame pic_i-1 is its pixel-Laplacian convolution sum, obtained by convolving its pixel points with the Laplacian operator;
if the condition is met, the current image frame pic_i is discarded, the frame following pic_i, i.e. pic_i+1, is selected, and then step two is executed;
step 33, judging whether the image frame is the last image frame;
steps 31-32 are repeated until the last frame image in PIC_pending, i.e. pic_η, has been processed;
for the last frame image pic_η, convolution of the pixel points with the Laplacian operator is performed to obtain the pixel-Laplacian convolution sum of pic_η;
a fuzzy filtering calculation is performed on pic_η, and the fuzzy filtering condition FS is then used to judge whether filtering is required;
the corresponding quantity for the (η-1)-th frame image pic_η-1 is its pixel-Laplacian convolution sum, obtained in the same way;
the coarse-mosaic image set obtained after fuzzy filtering of PIC_pending = {pic_2, …, pic_i-1, pic_i, pic_i+1, …, pic_η} is denoted PIC_coarse;
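The fuzzy filtering condition FS appears only as a formula image in the original; the numpy-only sketch below assumes FS keeps pic_i only when its Laplacian energy is at least a fraction alpha of the previous frame's (the inequality, alpha, and the helper names `laplacian_sum` and `keep_frame` are hypothetical):

```python
import numpy as np

# 3x3 Laplacian operator used for the per-pixel convolution of step 31
LAPLACIAN = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)

def laplacian_sum(img):
    """Sum of absolute Laplacian responses: a simple sharpness score."""
    img = np.asarray(img, float)
    h, w = img.shape
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            total += abs(np.sum(img[y - 1:y + 2, x - 1:x + 2] * LAPLACIAN))
    return total

def keep_frame(curr, prev, alpha=0.5):
    """Hypothetical FS condition: keep pic_i only if its Laplacian energy
    is at least alpha times that of the previous frame."""
    return laplacian_sum(curr) >= alpha * laplacian_sum(prev)
```

A blurred frame has a nearly flat intensity surface, so its Laplacian energy collapses relative to a sharp neighbour; comparing against the previous frame rather than a fixed constant is what makes the check a negative-feedback control.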
step four, extracting features based on SIFT algorithm;
using the opencv library of Python software, each image frame is converted into a gray-scale image; the gray-scale image of the current image frame is denoted gpic_i;
using the SIFT feature creation function in the opencv library, the set of feature points belonging to gpic_i is created, referred to as the feature set of the current frame image;
similarly, the gray-scale image of the 2nd image frame is denoted gpic_2, and the set of feature points belonging to gpic_2 is referred to as the feature set of the 2nd frame image;
similarly, the gray-scale image of the previous image frame is denoted gpic_i-1, and the set of feature points belonging to gpic_i-1 is referred to as the feature set of the previous frame image;
similarly, the gray-scale image of the next image frame is denoted gpic_i+1, and the set of feature points belonging to gpic_i+1 is referred to as the feature set of the next frame image;
similarly, the gray-scale image of the last image frame is denoted gpic_η, and the set of feature points belonging to gpic_η is referred to as the feature set of the last frame image;
according to the SIFT algorithm, feature extraction is performed on each image frame of the video stream PIC_coarse, and the resulting frame image-gray-feature set is recorded; then step five is executed;
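Step four's grayscale conversion can be sketched without OpenCV using the BT.601 luma weights that `cv2.cvtColor` applies; the SIFT call itself is shown only as a comment since it requires the opencv library (the function name `to_gray` is an assumption of this sketch):

```python
import numpy as np

def to_gray(bgr):
    """Convert a BGR frame pic_i to its gray-scale image gpic_i using the
    ITU-R BT.601 weights, the same weighting cv2.cvtColor applies."""
    b = bgr[..., 0].astype(float)
    g = bgr[..., 1].astype(float)
    r = bgr[..., 2].astype(float)
    return 0.114 * b + 0.587 * g + 0.299 * r

# With the opencv library installed, the per-frame feature set would then be
# built from the gray-scale image, e.g.:
#   sift = cv2.SIFT_create()
#   kps, descs = sift.detectAndCompute(gpic_i.astype(np.uint8), None)
```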
step five, a nearest neighbor distance ratio matching strategy defined by a threshold value;
step 51, calculating Euclidean distances of feature sets of two adjacent image frames;
the feature set of the previous frame image is matched against the feature set of the current frame image: all feature points of the two sets are traversed, and during traversal the nearest Euclidean distance d_1 and the next-nearest Euclidean distance d_2 between feature points are calculated;
Step 52, calculating a nearest neighbor distance ratio;
the ratio of the nearest Euclidean distance d_1 to the next-nearest Euclidean distance d_2 is recorded as the distance ratio, ratio = d_1/d_2;
Step 53, judging image frame-feature matching;
when the ratio is less than the ratio threshold T_threshold, the features are considered to match; the set of all such matches is recorded as the feature matching set of the two adjacent image frames;
when the ratio is greater than or equal to the ratio threshold T_threshold, the feature matching between the feature set of the previous frame image and the feature set of the current frame image is ended;
in the same way, feature set matching can be performed for every pair of adjacent image frames, obtaining the feature matching sets of all pairs of adjacent image frames;
Step six, a random sample consistency algorithm;
according to the random sample consensus (RANSAC) algorithm, the matching sets from step five are screened and bad matches are eliminated, obtaining an effective matching set and a homography matrix; the homography model is a three-row, three-column matrix;
step seven, calculating the radius of a correction sphere;
step 71, calculating geometric transformation parameters between images;
the homography transformation matrix is produced by the translation H_translation, scaling H_scale, rotations H_x-rotation, H_y-rotation, H_z-rotation, and shears H_x-shear, H_y-shear of the image sensor (e.g. a camera) on the drone; it can therefore be expressed as the product of translation, rotation, scaling, and shear, i.e. H = H_translation · H_x-rotation · H_y-rotation · H_z-rotation · H_scale · H_x-shear · H_y-shear;
x is the pixel value by which the image is translated in the x-axis direction;
y is the pixel value by which the image is translated in the y-axis direction;
w is the scale value by which the image is scaled in the x-axis direction;
v is the scale value by which the image is scaled in the y-axis direction;
α, β, γ are the rotation angles of the image about the x, y, and z axes, respectively;
φ, ψ are the shear angles of the image in the x- and y-axis directions, respectively;
according to Newton's method, an iterative solution is carried out to obtain the geometric transformation parameters of the transformation from image gpic_i to image gpic_i-1;
thus, the 9 entries of the three-row, three-column homography matrix and the 9 geometric parameters of step 71 form a system of equations, which is solved iteratively with Newton's method to obtain the parameter values;
x_i is the pixel value by which image gpic_i is translated in the x-axis direction in the transformation to image gpic_i-1;
y_i is the pixel value by which image gpic_i is translated in the y-axis direction in the transformation to image gpic_i-1;
α_i, β_i, γ_i are the rotation angles of the transformation from image gpic_i to image gpic_i-1 about the x, y, and z axes, respectively;
φ_i, ψ_i are the shear angles of the transformation from image gpic_i to image gpic_i-1 in the x- and y-axis directions, respectively;
similarly, the geometric transformation parameters between images are calculated for every homography in MDD, obtaining the parameter set HMDD;
step 72, calculating the corrected sphere radius
let the included angle between gpic_i and gpic_i-1 be α_i and the relative displacement of gpic_i and gpic_i-1 in the y direction be y_i pixels; the sphere radius r_i of cpic_i can then be solved from:
wherein x_i and α_i were solved in step 71; the transcendental equation is solved with an iterative method to calculate the corrected sphere radius r_i;
Step eight, spherical projection
gpic_i is given a spherical projection with radius r_i, and gpic_i is projectively transformed into cpic_i; the image gpic_i lies on a sphere of radius r_i, so any point P_2 of gpic_i has the pixel coordinate value (gx_i, gy_i, gz_i); a light source point P is placed on the z axis, and projecting through P onto the plane z = 0 gives the projection point P_3, i.e. the pixel coordinate value of any point of cpic_i, expressed as (cx_i, cy_i, 0); letting the projection scale factor be tk_i, then:
it can finally be obtained that any point of the cpic_i produced by spherically projecting gpic_i has the pixel coordinate value (tk_i·gx_i, tk_i·gy_i), where tk_i is the projection scale factor;
similarly, using the radius set RR, the spherical projection is computed for every image in PIC_coarse, obtaining the spherically corrected image set CPIC_corrected;
Step nine, feature matching;
step 91, extracting feature sets of two adjacent image frames after spherical transformation;
the spherically corrected image frame set CPIC_corrected is processed according to the SIFT algorithm, and the resulting corrected frame image-gray-feature set is recorded;
Step 92, threshold-defined nearest neighbor distance ratio matching strategy;
the two feature sets are matched; if the features match, the resulting set is recorded as the feature matching set of the two adjacent corrected image frames; if the features do not match, the feature matching between the feature set of the previous frame image and the feature set of the current corrected frame image is ended;
in the same way, feature set matching can be performed for every pair of adjacent corrected image frames, obtaining the feature matching sets of all pairs of adjacent spherically corrected image frames;
Step 93, a random sample consistency algorithm;
the matching sets are screened with the random sample consensus algorithm and a homography model is generated, obtaining the effective corrected matching set and the homography model;
in the same way, random sample consensus optimization is applied to the remaining matching sets, obtaining the effective corrected matching sets and the homography matrix set CMDD, in which each homography model is a three-row, three-column matrix;
step ten, homography transformation and weighted average processing;
the corresponding homography matrix is used as the perspective transformation matrix of cpic_i, and cpic_i is perspective-transformed onto the base image RES, completing the stitching of cpic_i;
all corrected images in CPIC_corrected are homography-transformed using CMDD, and the final base image RES contains all of the elements of CPIC_corrected; the stitched image RES is then fused with a weighted average algorithm, completing the stitching of the HTTP data stream.
2. The unmanned aerial vehicle real-time image stitching method based on spherical correction according to claim 1, characterized in that: the starting position may be the upper left corner of a panorama, or may be any position point of the panorama.
3. The unmanned aerial vehicle real-time image stitching method based on spherical correction according to claim 1, characterized in that: the ratio threshold is denoted T_threshold, with T_threshold = 0.4; when the ratio threshold is set to 0.4, the feature matching condition can be judged more accurately.
4. The unmanned aerial vehicle real-time image stitching method based on spherical correction according to claim 1, characterized in that: the software in the unmanned aerial vehicle ground control station uses Python software to pull the HTTP data stream.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211400858.8A CN115631094A (en) | 2022-11-09 | 2022-11-09 | Unmanned aerial vehicle real-time image splicing method based on spherical correction |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115631094A true CN115631094A (en) | 2023-01-20 |
Family
ID=84909131
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211400858.8A Pending CN115631094A (en) | 2022-11-09 | 2022-11-09 | Unmanned aerial vehicle real-time image splicing method based on spherical correction |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115631094A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116188275A (en) * | 2023-04-28 | 2023-05-30 | 杭州未名信科科技有限公司 | Single-tower crane panoramic image stitching method and system |
CN117670667A (en) * | 2023-11-08 | 2024-03-08 | 广州成至智能机器科技有限公司 | Unmanned aerial vehicle real-time infrared image panorama stitching method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||