CN110889818A - Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium - Google Patents

Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium

Info

Publication number
CN110889818A
CN110889818A
Authority
CN
China
Prior art keywords
image, unmanned aerial vehicle, low altitude, reference frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911157016.2A
Other languages
Chinese (zh)
Inventor
韩宇星
林良培
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China Agricultural University
Original Assignee
South China Agricultural University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China Agricultural University
Priority to CN201911157016.2A
Publication of CN110889818A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform

Abstract

The invention discloses a low-altitude unmanned aerial vehicle image splicing method, system, computer equipment and storage medium. The method comprises the following steps: acquiring an image sequence of a low-altitude unmanned aerial vehicle flight channel; registering the images of the image sequence to generate a first homography transformation matrix for each pair of adjacent images; determining a reference frame of the image sequence; establishing a feature matching point pair error function; operating on the feature matching point pair error function to obtain a second homography transformation matrix; transforming the current frame into the reference frame by using the second homography transformation matrix, and performing fusion splicing on the overlapping area of the transformed frame and the reference frame to obtain an intermediate image; and taking the intermediate image as a new reference frame, returning to re-establish the feature matching point pair error function, and executing the subsequent operations until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel. The invention can greatly shorten the time required for image registration and image fusion, thereby greatly improving image splicing efficiency.

Description

Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium
Technical Field
The invention relates to an image splicing method, in particular to a low-altitude unmanned aerial vehicle image splicing method, a low-altitude unmanned aerial vehicle image splicing system, computer equipment and a storage medium, and belongs to the field of image splicing and image acceleration.
Background
An unmanned aerial vehicle image sequence is characterized by high resolution and a large number of images, and processing such large volumes of high-resolution data consumes a great deal of time. It is therefore desirable to develop a system that shortens post-flight image processing time and achieves quasi-real-time processing of unmanned aerial vehicle imagery. By combining the real-time data fed back by the unmanned aerial vehicle with hardware acceleration techniques, imagery can be generated rapidly and the surveyed site can be monitored and managed effectively, so that a near-real-time processing effect is achieved.
Disclosure of Invention
In view of the above, the invention provides a low-altitude unmanned aerial vehicle image splicing method, system, computer device and storage medium. They provide automatic low-altitude panoramic image splicing that does not depend on geographic coordinate information, effectively address the problems of the large number of images and the low splicing speed of current unmanned aerial vehicle workflows, greatly shorten the post-flight image processing time of the unmanned aerial vehicle, and greatly improve the efficiency of post-flight splicing of low-altitude unmanned aerial vehicle images.
The first objective of the invention is to provide a low-altitude unmanned aerial vehicle image splicing method.
The second objective of the present invention is to provide a low-altitude unmanned aerial vehicle image stitching system.
The third objective of the invention is to provide a computer device.
The fourth objective of the present invention is to provide a storage medium.
The first purpose of the invention can be achieved by adopting the following technical scheme:
a low-altitude unmanned aerial vehicle image splicing method comprises the following steps:
acquiring an image sequence of a low-altitude unmanned aerial vehicle flight channel;
generating a first homography conversion matrix of two adjacent images according to the image sequence;
determining a reference frame of an image sequence;
establishing a feature matching point pair error function according to the first homography transformation matrix and the reference frame;
operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix;
transforming the current frame into the reference frame by using the second homography transformation matrix, and performing fusion splicing on the overlapping area of the transformed frame and the reference frame based on a pyramid fusion algorithm of a parallel computing architecture to obtain an intermediate image;
and taking the intermediate image as a new reference frame, returning to re-establish the feature matching point pair error function, and executing the subsequent operations until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel.
Further, the generating a first homography transformation matrix of two adjacent images according to the image sequence specifically includes:
extracting feature points of two adjacent images in the image sequence based on a parallel computing architecture;
matching the feature points of the two adjacent images to generate feature matching point pairs;
and refining the feature matching point pairs by using a random sample consensus algorithm to generate a first homography transformation matrix of the two adjacent images.
Further, the extracting the feature points of two adjacent images in the image sequence specifically includes:
constructing a Gaussian image pyramid for each of the two adjacent images;
computing a difference-of-Gaussian pyramid from each Gaussian image pyramid to determine feature candidate points;
localizing feature points among the feature candidate points, and extracting the principal orientation of each feature point;
and constructing a feature descriptor according to the principal orientation of the feature point.
Further, the establishing of the feature matching point pair error function is as follows:
$$E(M) = \sum_{i=1}^{N}\left[(x'_i - X_i)^2 + (y'_i - Y_i)^2\right]$$
wherein $M = H_{ir}$, $H_{ir} = H_{ij} \cdot H_{jr}$, $X = H_{ir}x$; $H_{ir}$ is the homography transformation matrix of the current frame $S_i$ relative to the reference frame $S_r$; $x$ is a feature point of the current frame $S_i$; $X$ is the feature matching point in the reference frame $S_r$ corresponding to the current frame $S_i$; $H_{ij}$ is the first homography transformation matrix of the current frame $S_i$ relative to the adjacent frame $S_j$; $H_{jr}$ is the homography transformation matrix of the adjacent frame $S_j$ relative to the reference frame $S_r$; $(x'_i, y'_i)$ are the coordinates of the transformed feature point of the current frame $S_i$; $(X_i, Y_i)$ are the coordinates of the corresponding feature point in the reference frame $S_r$; and $N$ is the number of feature matching point pairs.
Further, the operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix specifically includes:
given an error threshold $\varepsilon$ and a scale coefficient $\mu$, letting $k = 0$, and calculating new coordinates of the current frame using an initial iteration matrix $M_0$;
calculating the image error function $E(M_k)$ according to the feature matching point pair error function $E(M)$;
calculating the Jacobian matrix $J(M)$ as follows:
$$J(M) = \begin{bmatrix} \dfrac{\partial e_1}{\partial m_1} & \dfrac{\partial e_1}{\partial m_2} & \cdots & \dfrac{\partial e_1}{\partial m_8} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial e_N}{\partial m_1} & \dfrac{\partial e_N}{\partial m_2} & \cdots & \dfrac{\partial e_N}{\partial m_8} \end{bmatrix}$$
wherein $\partial e_i/\partial m_1, \ldots, \partial e_i/\partial m_8$ are the eight partial derivatives of the residual $e_i$ with respect to the eight free parameters of $M$, $i = 1, \ldots, N$, and $N$ is the number of feature matching points;
calculating the image error increment $\Delta M$ as follows:
$$\Delta M = -\left(J^T(M)J(M) + \mu I\right)^{-1} J^T(M)e(M)$$
wherein $I$ is the identity matrix;
updating the iteration matrix and estimating the image error function $E(M_{k+1})$ according to the following formula:
$$M_{k+1} = M_k + \Delta M$$
letting $\varepsilon_k = E(M_k) - E(M_{k+1})$: if the error decreases, letting $k = k + 1$, decreasing the value of the scale coefficient $\mu$, and returning to calculate the image error function $E(M_k)$; if the error increases, increasing the value of the scale coefficient $\mu$, letting $E(M_{k+1}) = E(M_k)$, and returning to calculate the image error increment $\Delta M$; and if $\varepsilon_k < \varepsilon$, obtaining the second homography transformation matrix.
Further, the method further comprises:
and forming panoramic images of the low-altitude unmanned aerial vehicle in different flight channels into a panoramic image sequence, and splicing all the panoramic images of the panoramic image sequence to obtain a panoramic image of the low-altitude unmanned aerial vehicle.
Further, in the determining of the reference frame of the image sequence, the reference frame is an intermediate frame of the image sequence.
The second purpose of the invention can be achieved by adopting the following technical scheme:
a low-altitude unmanned aerial vehicle image stitching system, the system comprising:
the acquisition module is used for acquiring an image sequence of the low-altitude unmanned aerial vehicle flight channel;
the registration module is used for generating a first homography conversion matrix of two adjacent images according to the image sequence;
the determining module is used for determining a reference frame of the image sequence;
the establishing module is used for establishing an error function of the feature matching point pair according to the first homography conversion matrix and the reference frame;
the operation module is used for operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix;
the first splicing module is used for transforming the current frame into the reference frame by using the second homography transformation matrix, and performing fusion splicing on the overlapping area of the transformed frame and the reference frame based on a pyramid fusion algorithm of a parallel computing architecture to obtain an intermediate image;
and the second splicing module is used for taking the intermediate image as a new reference frame, returning to re-establish the feature matching point pair error function, and executing the subsequent operations until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel.
The third purpose of the invention can be achieved by adopting the following technical scheme:
the computer equipment comprises a processor and a memory for storing a program executable by the processor, and when the processor executes the program stored in the memory, the low-altitude unmanned aerial vehicle image stitching method is realized.
The fourth purpose of the invention can be achieved by adopting the following technical scheme:
a storage medium stores a program, and when the program is executed by a processor, the low-altitude unmanned aerial vehicle image stitching method is realized.
Compared with the prior art, the invention has the following beneficial effects:
the invention aims at the characteristics of large data volume, high resolution and the like of low-altitude unmanned aerial vehicle images, utilizes a parallel computing framework to register the images in an image sequence of a low-altitude unmanned aerial vehicle flight channel, generates a homography conversion matrix of two adjacent images, utilizes a Levense Begger-Marquardt algorithm to operate the characteristic matching point pair error function by establishing a characteristic matching point pair error function to obtain a more accurate homography conversion matrix, utilizes a pyramid fusion algorithm of the parallel computing framework to perform fusion splicing on an overlapped area of a transformation frame and a reference frame obtained after a current frame is transformed to obtain an intermediate image, uses the intermediate image as a new reference frame to perform continuous splicing until all the images of the image sequence are spliced to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel, and solves the problem of low splicing speed caused by long image registration speed in the prior art, the problem that the high-resolution low-altitude unmanned aerial vehicle image cannot meet the splicing precision and the splicing speed at the same time is solved to a great extent, the requirements of a low-altitude unmanned aerial vehicle image rapid splicing technology are met, the ground environment condition can be known in time, and important image data sources provided by ground disaster monitoring, farmland pest and disease detection and national remote sensing resource monitoring are facilitated.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to the structures shown in the drawings without creative efforts.
Fig. 1 is a flowchart of a low-altitude unmanned aerial vehicle image stitching method according to embodiment 1 of the present invention.
Fig. 2 is a schematic view of an image sequence of a low-altitude unmanned aerial vehicle flight channel according to embodiment 1 of the present invention.
Fig. 3 is a flowchart of generating a first homography transformation matrix for two adjacent images according to embodiment 1 of the present invention.
Fig. 4 is a flowchart illustrating extracting feature points of two adjacent images in an image sequence according to embodiment 1 of the present invention.
Fig. 5 is a schematic diagram of obtaining a panoramic image of a low-altitude unmanned aerial vehicle flight channel by stitching in embodiment 1 of the present invention.
Fig. 6 is a block diagram of a low-altitude unmanned aerial vehicle image stitching system according to embodiment 2 of the present invention.
Fig. 7 is a block diagram of a registration module according to embodiment 2 of the present invention.
Fig. 8 is a block diagram of the structure of an extraction unit in embodiment 2 of the present invention.
Fig. 9 is a block diagram of a computer device according to embodiment 3 of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer and more complete, the technical solutions in the embodiments of the present invention will be described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some embodiments of the present invention, but not all embodiments, and all other embodiments obtained by a person of ordinary skill in the art without creative efforts based on the embodiments of the present invention belong to the protection scope of the present invention.
Example 1:
As shown in fig. 1, the present embodiment provides a low-altitude unmanned aerial vehicle image stitching method, which is implemented by a computer equipped with a parallel computing architecture (CUDA, Compute Unified Device Architecture), and includes the following steps:
s101, obtaining an image sequence of the low-altitude unmanned aerial vehicle flight channel.
In this step, an image sequence of one flight channel of the low-altitude unmanned aerial vehicle is obtained. The images in the sequence have been preprocessed: they are first captured by the camera on the unmanned aerial vehicle and then corrected to obtain the preprocessed images.
The image sequence of the low-altitude unmanned aerial vehicle flight channel is shown in fig. 2, and its set is denoted as $S = (S_1, S_2, S_3, \ldots, S_k, \ldots, S_n)$, where $n$ is the number of images of the low-altitude unmanned aerial vehicle flight channel.
S102, based on a parallel computing architecture, images of the image sequence are registered, and a first homography transformation matrix of two adjacent images is generated.
Further, as shown in fig. 3, the step S102 specifically includes:
and S1021, extracting the feature points of two adjacent images in the image sequence based on a parallel computing architecture.
Specifically, as shown in fig. 4, the step S1021 specifically includes:
S10211, constructing a Gaussian image pyramid for each of the two adjacent images on a GPU (Graphics Processing Unit).
S10212, computing a difference-of-Gaussian pyramid from each Gaussian image pyramid on a CPU (Central Processing Unit), and determining feature candidate points.
S10213, localizing feature points among the feature candidate points on the GPU, and extracting the principal orientation of each feature point.
S10214, constructing a 128-dimensional feature descriptor on the CPU according to the principal orientation of the feature point.
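For illustration only, the following Python/OpenCV sketch shows how the Gaussian image pyramid and difference-of-Gaussian pyramid of steps S10211-S10212 can be built for a single octave and how feature candidate points can be selected as local scale-space extrema. The function name, scale step and contrast threshold are assumptions of this sketch, and the whole routine runs on the CPU rather than reproducing the GPU/CPU split used in this embodiment.

```python
import cv2
import numpy as np

def dog_candidate_points(gray, num_levels=5, sigma0=1.6, contrast_thresh=0.02):
    """Single-octave sketch of S10211-S10212: Gaussian pyramid and DoG extrema."""
    img = gray.astype(np.float32) / 255.0
    k = np.sqrt(2.0)                                      # illustrative scale step
    gauss = [cv2.GaussianBlur(img, (0, 0), sigma0 * k ** i) for i in range(num_levels)]
    dog = [gauss[i + 1] - gauss[i] for i in range(num_levels - 1)]

    kernel = np.ones((3, 3), np.uint8)
    candidates = []
    for s in range(1, len(dog) - 1):
        trio = dog[s - 1:s + 2]
        # Grayscale dilation/erosion give the max/min over the 3x3x3 neighbourhood.
        neigh_max = np.maximum.reduce([cv2.dilate(d, kernel) for d in trio])
        neigh_min = np.minimum.reduce([cv2.erode(d, kernel) for d in trio])
        is_ext = ((dog[s] >= neigh_max) & (dog[s] > contrast_thresh)) | \
                 ((dog[s] <= neigh_min) & (dog[s] < -contrast_thresh))
        ys, xs = np.nonzero(is_ext)
        candidates.extend((int(x), int(y), s) for x, y in zip(xs, ys))
    return candidates
```

Feature point localization (S10213) and the 128-dimensional descriptors (S10214) would then be computed around these candidate points, as in a SIFT-style pipeline.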
And S1022, matching all the feature points of the two adjacent images to generate feature matching point pairs.
Specifically, rough matching is performed on feature descriptors of two adjacent images, and feature matching point pairs are generated.
S1023, refining the feature matching point pairs by using the Random Sample Consensus (RANSAC) algorithm to generate the first homography transformation matrix H of the two adjacent images.
For the image sequence $S_1, S_2, S_3, \ldots, S_k, \ldots, S_n$ of step S101, the first homography transformation matrix between the first image $S_1$ and the second image $S_2$ is $H_{12}$; $H_{12}$ is stored in the array ArrayList, and the feature matching point pairs corresponding to the first image $S_1$ and the second image $S_2$ are stored in the array ArrayFeature. By analogy, the first homography transformation matrix of the k-th image $S_k$ and the (k+1)-th image $S_{k+1}$ and the corresponding feature matching point pairs are stored in the ArrayList and ArrayFeature arrays, where $k \le n - 1$.
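The following Python/OpenCV sketch illustrates steps S1021-S1023 for all adjacent image pairs, using SIFT features, brute-force rough matching with a ratio test, and RANSAC refinement inside cv2.findHomography. The ratio and RANSAC threshold are illustrative choices, and the sketch does not reproduce the GPU/CPU acceleration split of this embodiment.

```python
import cv2
import numpy as np

def adjacent_homographies(images, ratio=0.75, ransac_thresh=5.0):
    """CPU sketch of S1021-S1023: per-pair first homographies and refined match pairs."""
    sift = cv2.SIFT_create()
    matcher = cv2.BFMatcher(cv2.NORM_L2)

    grays = [cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im for im in images]
    feats = [sift.detectAndCompute(g, None) for g in grays]   # (keypoints, descriptors)

    array_list, array_feature = [], []                        # ArrayList / ArrayFeature
    for k in range(len(images) - 1):
        kp1, des1 = feats[k]
        kp2, des2 = feats[k + 1]

        # Rough matching of the descriptors of the two adjacent images (Lowe ratio test).
        knn = matcher.knnMatch(des1, des2, k=2)
        good = [p[0] for p in knn if len(p) == 2 and p[0].distance < ratio * p[1].distance]

        src = np.float32([kp1[m.queryIdx].pt for m in good])
        dst = np.float32([kp2[m.trainIdx].pt for m in good])

        # RANSAC refines the matching point pairs and yields the first homography H_{k,k+1}.
        # Assumes at least four good matches, which heavily overlapping aerial frames provide.
        H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, ransac_thresh)
        inliers = inlier_mask.ravel().astype(bool)
        array_list.append(H)
        array_feature.append((src[inliers], dst[inliers]))

    return array_list, array_feature
```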
S103, determining a reference frame of the image sequence.
In this step, the reference frame $S_r$ is the intermediate frame of the image sequence, and the intermediate frame is denoted as $S_k$, where
$$k = \frac{n}{2}$$
According to the computer's rounding rule, $k$ is obtained as an integer regardless of whether $n$ is even or odd.
And S104, establishing an error function of the feature matching point pair according to the first homography conversion matrix and the reference frame.
In this step, an error function of the feature matching point pair is established as follows:
$$E(M) = \sum_{i=1}^{N}\left[(x'_i - X_i)^2 + (y'_i - Y_i)^2\right] \tag{1}$$
wherein $M = H_{ir}$, $H_{ir} = H_{ij} \cdot H_{jr}$, $X = H_{ir}x$; $H_{ir}$ is the homography transformation matrix of the current frame (target image) $S_i$ relative to the reference frame (reference image) $S_r$; $x$ is a feature point of the current frame $S_i$; $X$ is the feature matching point in the reference frame $S_r$ corresponding to the current frame $S_i$; $H_{ij}$ is the first homography transformation matrix of the current frame $S_i$ relative to the adjacent frame $S_j$; $H_{jr}$ is the homography transformation matrix of the adjacent frame $S_j$ relative to the reference frame $S_r$; $(x'_i, y'_i)$ are the coordinates of the transformed feature point of the current frame $S_i$; $(X_i, Y_i)$ are the coordinates of the corresponding feature point in the reference frame $S_r$; and $N$ is the number of feature matching point pairs.
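To make the chaining $H_{ir} = H_{ij} \cdot H_{jr}$ concrete, the following NumPy sketch accumulates the adjacent first homography matrices toward the reference frame. The indexing convention (adjacent_H[k] maps frame k into frame k+1, 0-based) and the function name are assumptions of this sketch, not part of the patent.

```python
import numpy as np

def chain_to_reference(adjacent_H, i, r):
    """Compose H_ir, the homography mapping frame i into the reference frame r."""
    H = np.eye(3)
    if i < r:                                     # current frame lies left of the reference
        for k in range(i, r):
            H = adjacent_H[k] @ H                 # step k -> k+1
    else:                                         # current frame lies right of the reference
        for k in range(r, i):
            H = H @ np.linalg.inv(adjacent_H[k])  # step k+1 -> k, applied right-to-left
    return H / H[2, 2]                            # normalise so the last element is 1
```

With the middle frame as the reference, chain_to_reference(ArrayList, i, k) would give the matrix used to initialise the Levenberg-Marquardt refinement of step S105.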
S105, operating on the feature matching point pair error function by using the Levenberg-Marquardt (LM) algorithm to obtain a second homography transformation matrix.
The iterative formula of the Levenberg-Marquardt algorithm is as follows:
$$M_{k+1} = M_k + \Delta M \tag{2}$$
$$\Delta M = -\left(J^T(M)J(M) + \mu I\right)^{-1} J^T(M)e(M) \tag{3}$$
wherein $\mu$ is the scale coefficient, $I$ is the identity matrix, and the Jacobian matrix $J(M)$ is:
$$J(M) = \begin{bmatrix} \dfrac{\partial e_1}{\partial m_1} & \dfrac{\partial e_1}{\partial m_2} & \cdots & \dfrac{\partial e_1}{\partial m_8} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial e_N}{\partial m_1} & \dfrac{\partial e_N}{\partial m_2} & \cdots & \dfrac{\partial e_N}{\partial m_8} \end{bmatrix} \tag{4}$$
wherein $\partial e_i/\partial m_1, \ldots, \partial e_i/\partial m_8$ are the eight partial derivatives of the residual $e_i$ with respect to the eight free parameters of $M$, $i = 1, \ldots, N$, and $N$ is the number of feature matching points.
The step S105 specifically includes:
1) Given an error threshold $\varepsilon$ and a scale coefficient $\mu$, let $k = 0$, and calculate the new coordinates of the current frame using an initial iteration matrix $M_0$.
2) Calculate the image error function $E(M_k)$ according to the above formula (1).
3) Calculate the Jacobian matrix $J(M)$ according to the above formula (4).
4) Calculate the image error increment $\Delta M$ according to the above formula (3).
5) Update $M_{k+1} = M_k + \Delta M$ according to the above formula (2), and estimate the image error function $E(M_{k+1})$.
6) Let $\varepsilon_k = E(M_k) - E(M_{k+1})$. If the error decreases, let $k = k + 1$, decrease the value of the scale coefficient $\mu$, and return to step 2); if the error increases, increase the value of the scale coefficient $\mu$, let $E(M_{k+1}) = E(M_k)$, and return to step 4); if $\varepsilon_k < \varepsilon$, go to step 7).
7) End the iteration; a more accurate second homography transformation matrix H is obtained.
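A minimal NumPy sketch of the above Levenberg-Marquardt iteration is given below. It parameterises the homography by its eight free entries, as in the text, but forms the Jacobian by numerical differentiation for brevity, whereas the method uses the eight analytic partial derivatives; the step sizes and damping factors are illustrative assumptions only.

```python
import numpy as np

def refine_homography_lm(H0, src, dst, eps=1e-8, mu=1e-3, max_iter=100):
    """Refine a 3x3 homography with Levenberg-Marquardt (illustrative sketch).

    src are matched feature points of the current frame, dst the corresponding
    points of the reference frame (both N x 2 arrays); H0 is the chained
    initial matrix M_0.  The ninth entry of the homography is fixed to 1.
    """
    def residuals(m):                       # e(M): reprojection residuals, length 2N
        H = np.append(m, 1.0).reshape(3, 3)
        pts = np.hstack([src, np.ones((len(src), 1))]) @ H.T
        proj = pts[:, :2] / pts[:, 2:3]
        return (proj - dst).ravel()

    def jacobian(m, h=1e-7):                # J(M): numerical partial derivatives
        base = residuals(m)
        J = np.empty((base.size, m.size))
        for j in range(m.size):
            d = m.copy()
            d[j] += h
            J[:, j] = (residuals(d) - base) / h
        return J

    m = (H0 / H0[2, 2]).ravel()[:8].astype(float)
    E_prev = np.sum(residuals(m) ** 2)                   # E(M_k)
    for _ in range(max_iter):
        e, J = residuals(m), jacobian(m)
        # Delta M = -(J^T J + mu I)^(-1) J^T e(M), formula (3)
        dm = -np.linalg.solve(J.T @ J + mu * np.eye(8), J.T @ e)
        E_new = np.sum(residuals(m + dm) ** 2)           # E(M_{k+1})
        eps_k = E_prev - E_new                           # epsilon_k
        if eps_k > 0:                                    # error decreased: accept, relax mu
            m, E_prev, mu = m + dm, E_new, mu * 0.5
            if eps_k < eps:                              # epsilon_k < epsilon: converged
                break
        else:                                            # error increased: keep M_k, raise mu
            mu *= 2.0
    return np.append(m, 1.0).reshape(3, 3)               # refined second homography
```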
And S106, converting the current frame into the reference frame by using the second homography conversion matrix, and performing fusion splicing on the overlapped area of the converted frame and the reference frame based on a pyramid fusion algorithm of a parallel computing architecture to obtain an intermediate image.
In this step, the pyramid fusion algorithm based on the parallel computing architecture is the multi-band fusion (blending) algorithm in the OpenCV library, accelerated in parallel by the parallel computing architecture.
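The following CPU-only Python/OpenCV sketch illustrates the principle of multi-band (Laplacian pyramid) fusion applied to the overlapping region. The number of bands is an illustrative choice, and the sketch does not reproduce the CUDA acceleration relied upon in this step.

```python
import cv2
import numpy as np

def pyramid_blend(img_a, img_b, mask, levels=4):
    """Minimal Laplacian-pyramid fusion of two aligned overlap crops (H x W x 3, uint8).

    mask is an H x W float32 weight map equal to 1 where img_a should dominate.
    """
    def gaussian_pyramid(x):
        pyr = [x]
        for _ in range(levels):
            pyr.append(cv2.pyrDown(pyr[-1]))
        return pyr

    def laplacian_pyramid(gauss):
        lap = []
        for i in range(levels):
            up = cv2.pyrUp(gauss[i + 1], dstsize=gauss[i].shape[1::-1])
            lap.append(gauss[i] - up)
        lap.append(gauss[levels])            # coarsest level stays Gaussian
        return lap

    ga = gaussian_pyramid(img_a.astype(np.float32))
    gb = gaussian_pyramid(img_b.astype(np.float32))
    gm = gaussian_pyramid(mask.astype(np.float32))
    la, lb = laplacian_pyramid(ga), laplacian_pyramid(gb)

    # Blend each frequency band with the corresponding level of the weight mask.
    blended = [gm[i][..., None] * la[i] + (1.0 - gm[i][..., None]) * lb[i]
               for i in range(levels + 1)]

    # Collapse the blended pyramid back into a single image.
    out = blended[levels]
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=blended[i].shape[1::-1]) + blended[i]
    return np.clip(out, 0, 255).astype(np.uint8)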
S107, taking the intermediate image as a new reference frame, returning to step S104 to re-establish the feature matching point pair error function, and executing steps S105 to S106 until all images of the image sequence are stitched, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel.
The principle of steps S106 and S107 is shown in fig. 5. In this embodiment, the images on the right side of the reference frame are stitched first. In the first stitching, the reference frame $S_r$ is taken as the intermediate image $S_k$ and the current frame $S_i$ is $S_{k+1}$; an intermediate image imageMiddle is obtained after stitching and is used as the reference frame for the next stitching, and in each subsequent stitching the current frame $S_i$ is $S_{k+2}, S_{k+3}, \ldots, S_n$ in turn. After the images on the right side of the reference frame have all been stitched, the images on the left side of the reference frame are stitched, with the current frame $S_i$ being $S_{k-1}, S_{k-2}, \ldots, S_1$ in turn; after all images are stitched, the panoramic image imageFinal of the low-altitude unmanned aerial vehicle flight channel is obtained. It can be understood that the images on the left side of the reference frame may also be stitched first: in the first stitching, the reference frame $S_r$ is taken as the intermediate image $S_k$ and the current frame $S_i$ is $S_{k-1}$; the stitching yields an intermediate image imageMiddle that is used as the reference frame for the next stitching, and in each subsequent stitching the current frame $S_i$ is $S_{k-2}, S_{k-3}, \ldots, S_1$ in turn. After the images on the left side of the reference frame have all been stitched, the images on the right side of the reference frame are stitched, with the current frame $S_i$ being $S_{k+1}, S_{k+2}, \ldots, S_n$ in turn, and after all images are stitched the panoramic image imageFinal of the low-altitude unmanned aerial vehicle flight channel is obtained.
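The stitching order described above can be summarised by the following illustrative control-flow sketch, in which register and warp_and_blend are hypothetical helpers standing in for steps S102-S105 and step S106 respectively; they are not functions defined by the patent.

```python
def stitch_flight_channel(frames, register, warp_and_blend):
    """Illustrative control flow for steps S106-S107: stitch outwards from the middle."""
    k = len(frames) // 2                    # index of the intermediate reference frame
    panorama = frames[k]                    # imageMiddle starts as the reference frame
    for cur in frames[k + 1:]:              # S_{k+1}, S_{k+2}, ..., S_n
        H = register(panorama, cur)
        panorama = warp_and_blend(panorama, cur, H)
    for cur in reversed(frames[:k]):        # S_{k-1}, S_{k-2}, ..., S_1
        H = register(panorama, cur)
        panorama = warp_and_blend(panorama, cur, H)
    return panorama                         # imageFinal for this flight channel
```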
S108, forming a panoramic image sequence from the panoramic images of the different flight channels of the low-altitude unmanned aerial vehicle, and splicing all panoramic images of the panoramic image sequence to obtain the panoramic image of the low-altitude unmanned aerial vehicle.
Specifically, the above steps generate the panoramic image imageFinal[i] of each flight channel, where i is the index of the flight channel. The panoramic images imageFinal[i] of the different flight channels of the low-altitude unmanned aerial vehicle form a panoramic image sequence whose set is $C = (C_1, C_2, C_3, \ldots, C_k, \ldots, C_n)$. All panoramic images of the panoramic image sequence are spliced according to the method of steps S101 to S107, and the low-altitude unmanned aerial vehicle panoramic image Final, without splicing gaps, is output.
Those skilled in the art will appreciate that all or part of the steps in the method for implementing the above embodiments may be implemented by a program to instruct related hardware, and the corresponding program may be stored in a computer-readable storage medium.
It should be noted that although the method operations of the above-described embodiments are depicted in the drawings in a particular order, this does not require or imply that these operations must be performed in this particular order, or that all of the illustrated operations must be performed, to achieve desirable results. Rather, the depicted steps may change the order of execution. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Example 2:
as shown in fig. 6, the present embodiment provides a low-altitude unmanned aerial vehicle image stitching system, which includes an obtaining module 601, a registration module 602, a determining module 603, an establishing module 604, an operation module 605, a first stitching module 606, a second stitching module 607, and a third stitching module 608, and the specific functions of each module are as follows:
the acquiring module 601 is configured to acquire an image sequence of a low-altitude unmanned aerial vehicle flight channel.
The registration module 602 is configured to register images of an image sequence based on a parallel computing architecture, and generate a first homography transformation matrix of two adjacent images.
The determining module 603 is configured to determine a reference frame of an image sequence.
The establishing module 604 is configured to establish an error function of the feature matching point pair according to the first homography transformation matrix and the reference frame.
The operation module 605 is configured to operate on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix.
The first stitching module 606 is configured to transform the current frame into the reference frame by using the second homography transformation matrix, and perform fusion stitching on an overlapping area of the transformed frame and the reference frame based on a pyramid fusion algorithm of a parallel computing architecture to obtain an intermediate image.
The second stitching module 607 is configured to use the intermediate image as a new reference frame, return to reestablish the feature matching point pair error function, and perform subsequent operations until all images of the image sequence are stitched, so as to obtain a panoramic image of the low-altitude unmanned aerial vehicle flight channel.
The third splicing module 608 is configured to form panoramic image sequences from panoramic images of the low-altitude unmanned aerial vehicle on different flight channels, and splice all panoramic images of the panoramic image sequences to obtain a panoramic image of the low-altitude unmanned aerial vehicle.
Further, as shown in fig. 7, the registration module 602 specifically includes:
the extracting unit 6021 is configured to extract feature points of two adjacent images in the image sequence based on a parallel computing architecture.
A first generating unit 6022, configured to match the feature points of the two adjacent images to generate feature matching point pairs;
the second generating unit 6023 is configured to refine the feature matching point pairs by using a random sampling consensus algorithm to generate a first homography transformation matrix of two adjacent images.
Further, as shown in fig. 8, the extraction unit 6021 specifically includes:
the first constructing subunit 60211 is configured to construct a gaussian image pyramid for two adjacent images respectively.
The determining subunit 60212 is configured to perform a gaussian difference pyramid on the gaussian image pyramid respectively, and determine feature candidate points.
An extracting subunit 60213 is configured to perform feature point positioning on the feature candidate points and extract a feature point principal direction.
A second construction subunit 60214, configured to construct the feature descriptor according to the feature point principal direction.
The specific implementation of each module in this embodiment may refer to embodiment 1, which is not described herein any more; it should be noted that the system provided in this embodiment is only illustrated by the division of the functional modules, and in practical applications, the functions may be distributed by different functional modules according to needs, that is, the internal structure is divided into different functional modules to complete all or part of the functions described above.
It will be understood that the terms "first," "second," and the like used in the above system may be used to describe various modules, but the modules are not limited by these terms; the terms are only used to distinguish one module from another. For example, without departing from the scope of the present invention, a first stitching module may be referred to as a second stitching module, and similarly, a second stitching module may be referred to as a first stitching module; the first stitching module and the second stitching module are both stitching modules, but they are not the same stitching module.
Example 3:
The present embodiment provides a computer device, which is a computer equipped with a parallel computing architecture. As shown in fig. 9, the device includes a processor 902, a memory, an input device 903, a display 904 and a network interface 905 connected by a system bus 901. The processor provides computing and control capabilities. The memory includes a nonvolatile storage medium 906 and an internal memory 907; the nonvolatile storage medium 906 stores an operating system, a computer program and a database, and the internal memory 907 provides an environment for running the operating system and the computer program in the nonvolatile storage medium. When the processor 902 executes the computer program stored in the memory, the low-altitude unmanned aerial vehicle image stitching method of embodiment 1 is implemented as follows:
acquiring an image sequence of a low-altitude unmanned aerial vehicle flight channel;
registering images of the image sequence based on a parallel computing architecture to generate a first homography conversion matrix of two adjacent images;
determining a reference frame of an image sequence;
establishing a feature matching point pair error function according to the first homography transformation matrix and the reference frame;
operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix;
based on a parallel computing architecture, transforming the current frame into the reference frame by using the second homography transformation matrix, and performing fusion splicing on the overlapping area of the transformed frame and the reference frame by using a pyramid fusion algorithm to obtain an intermediate image;
and taking the intermediate image as a new reference frame, returning to re-establish the feature matching point pair error function, and executing the subsequent operations until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel.
Further, the generating a first homography transformation matrix of two adjacent images according to the image sequence specifically includes:
extracting feature points of two adjacent images in the image sequence based on a parallel computing architecture;
matching the feature points of the two adjacent images to generate feature matching point pairs;
and refining the feature matching point pairs to generate a first homography transformation matrix of the two adjacent images.
Further, the extracting the feature points of two adjacent images in the image sequence specifically includes:
constructing a Gaussian image pyramid for each of the two adjacent images;
computing a difference-of-Gaussian pyramid from each Gaussian image pyramid to determine feature candidate points;
localizing feature points among the feature candidate points, and extracting the principal orientation of each feature point;
and constructing a feature descriptor according to the principal orientation of the feature point.
Further, the method can also comprise the following steps: and forming panoramic images of the low-altitude unmanned aerial vehicle in different flight channels into a panoramic image sequence, and splicing all the panoramic images of the panoramic image sequence to obtain a panoramic image of the low-altitude unmanned aerial vehicle.
Example 4:
the present embodiment provides a storage medium, which stores one or more programs, and when the programs are executed by a processor, the low-altitude unmanned aerial vehicle image stitching method of embodiment 1 is implemented as follows:
acquiring an image sequence of a low-altitude unmanned aerial vehicle flight channel;
registering images of the image sequence based on a parallel computing architecture to generate a first homography conversion matrix of two adjacent images;
determining a reference frame of an image sequence;
establishing a feature matching point pair error function according to the first homography transformation matrix and the reference frame;
operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix;
based on a parallel computing architecture, transforming the current frame into the reference frame by using the second homography transformation matrix, and performing fusion splicing on the overlapping area of the transformed frame and the reference frame by using a pyramid fusion algorithm to obtain an intermediate image;
and taking the intermediate image as a new reference frame, returning to re-establish the feature matching point pair error function, and executing the subsequent operations until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel.
Further, the generating a first homography transformation matrix of two adjacent images according to the image sequence specifically includes:
extracting feature points of two adjacent images in the image sequence based on a parallel computing architecture;
matching the feature points of the two adjacent images to generate feature matching point pairs;
and refining the feature matching point pairs to generate a first homography transformation matrix of the two adjacent images.
Further, the extracting the feature points of two adjacent images in the image sequence specifically includes:
constructing a Gaussian image pyramid for each of the two adjacent images;
computing a difference-of-Gaussian pyramid from each Gaussian image pyramid to determine feature candidate points;
localizing feature points among the feature candidate points, and extracting the principal orientation of each feature point;
and constructing a feature descriptor according to the principal orientation of the feature point.
Further, the method can also comprise the following steps: and forming panoramic images of the low-altitude unmanned aerial vehicle in different flight channels into a panoramic image sequence, and splicing all the panoramic images of the panoramic image sequence to obtain a panoramic image of the low-altitude unmanned aerial vehicle.
The storage medium of this embodiment may be a magnetic disk, an optical disk, a computer Memory, a Read-only Memory (ROM), a Random Access Memory (RAM), a usb disk, a removable hard disk, or other media.
In summary, aiming at the characteristics of low-altitude unmanned aerial vehicle images, such as large data volume and high resolution, the invention registers the images in the image sequence of a low-altitude unmanned aerial vehicle flight channel by using a parallel computing architecture and generates a homography transformation matrix for each pair of adjacent images; establishes a feature matching point pair error function and operates on it with the Levenberg-Marquardt algorithm to obtain a more accurate homography transformation matrix; fuses and splices the overlapping area of the reference frame and the transformed frame obtained after the current frame is transformed, by using a pyramid fusion algorithm of the parallel computing architecture, to obtain an intermediate image; and uses the intermediate image as a new reference frame for continued splicing until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel. This solves the problem in the prior art that slow image registration leads to a low splicing speed, and to a great extent solves the problem that high-resolution low-altitude unmanned aerial vehicle images cannot satisfy splicing precision and splicing speed at the same time. The invention meets the requirements of rapid low-altitude unmanned aerial vehicle image splicing, allows the ground environment to be known in time, and provides an important source of image data for ground disaster monitoring, farmland pest and disease detection, and national remote-sensing resource monitoring.
The above description is only for the preferred embodiments of the present invention, but the protection scope of the present invention is not limited thereto, and any person skilled in the art can substitute or change the technical solution and the inventive concept of the present invention within the scope of the present invention.

Claims (10)

1. A low-altitude unmanned aerial vehicle image splicing method is characterized by comprising the following steps:
acquiring an image sequence of a low-altitude unmanned aerial vehicle flight channel;
registering images of the image sequence based on a parallel computing architecture to generate a first homography conversion matrix of two adjacent images;
determining a reference frame of an image sequence;
establishing a feature matching point pair error function according to the first homography transformation matrix and the reference frame;
operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix;
transforming the current frame into the reference frame by using the second homography transformation matrix, and performing fusion splicing on the overlapping area of the transformed frame and the reference frame based on a pyramid fusion algorithm of a parallel computing architecture to obtain an intermediate image;
and taking the intermediate image as a new reference frame, returning to re-establish the feature matching point pair error function, and executing the subsequent operations until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel.
2. The method for stitching low-altitude unmanned aerial vehicle images according to claim 1, wherein the registering images of the image sequence based on the parallel computing architecture to generate a first homography transformation matrix of two adjacent images specifically comprises:
extracting feature points of two adjacent images in the image sequence based on a parallel computing architecture;
matching the feature points of the two adjacent images to generate feature matching point pairs;
and refining the feature matching point pairs by using a random sample consensus algorithm to generate a first homography transformation matrix of the two adjacent images.
3. The low-altitude unmanned aerial vehicle image stitching method according to claim 2, wherein the extracting the feature points of two adjacent images in the image sequence specifically comprises:
constructing a Gaussian image pyramid for each of the two adjacent images;
computing a difference-of-Gaussian pyramid from each Gaussian image pyramid to determine feature candidate points;
localizing feature points among the feature candidate points, and extracting the principal orientation of each feature point;
and constructing a feature descriptor according to the principal orientation of the feature point.
4. The low-altitude unmanned aerial vehicle image stitching method according to any one of claims 1 to 3, wherein the establishing of the feature matching point pair error function is as follows:
$$E(M) = \sum_{i=1}^{N}\left[(x'_i - X_i)^2 + (y'_i - Y_i)^2\right]$$
wherein $M = H_{ir}$, $H_{ir} = H_{ij} \cdot H_{jr}$, $X = H_{ir}x$; $H_{ir}$ is the homography transformation matrix of the current frame $S_i$ relative to the reference frame $S_r$; $x$ is a feature point of the current frame $S_i$; $X$ is the feature matching point in the reference frame $S_r$ corresponding to the current frame $S_i$; $H_{ij}$ is the first homography transformation matrix of the current frame $S_i$ relative to the adjacent frame $S_j$; $H_{jr}$ is the homography transformation matrix of the adjacent frame $S_j$ relative to the reference frame $S_r$; $(x'_i, y'_i)$ are the coordinates of the transformed feature point of the current frame $S_i$; $(X_i, Y_i)$ are the coordinates of the corresponding feature point in the reference frame $S_r$; and $N$ is the number of feature matching point pairs.
5. The method for stitching low-altitude unmanned aerial vehicle images according to claim 4, wherein the operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain the second homography transformation matrix specifically comprises:
given an error threshold $\varepsilon$ and a scale coefficient $\mu$, letting $k = 0$, and calculating new coordinates of the current frame using an initial iteration matrix $M_0$;
calculating the image error function $E(M_k)$ according to the feature matching point pair error function $E(M)$;
calculating the Jacobian matrix $J(M)$ as follows:
$$J(M) = \begin{bmatrix} \dfrac{\partial e_1}{\partial m_1} & \dfrac{\partial e_1}{\partial m_2} & \cdots & \dfrac{\partial e_1}{\partial m_8} \\ \vdots & \vdots & & \vdots \\ \dfrac{\partial e_N}{\partial m_1} & \dfrac{\partial e_N}{\partial m_2} & \cdots & \dfrac{\partial e_N}{\partial m_8} \end{bmatrix}$$
wherein $\partial e_i/\partial m_1, \ldots, \partial e_i/\partial m_8$ are the eight partial derivatives of the residual $e_i$ with respect to the eight free parameters of $M$, $i = 1, \ldots, N$, and $N$ is the number of feature matching points;
calculating the image error increment $\Delta M$ as follows:
$$\Delta M = -\left(J^T(M)J(M) + \mu I\right)^{-1} J^T(M)e(M)$$
wherein $I$ is the identity matrix;
updating the iteration matrix and estimating the image error function $E(M_{k+1})$ according to the following formula:
$$M_{k+1} = M_k + \Delta M$$
letting $\varepsilon_k = E(M_k) - E(M_{k+1})$: if the error decreases, letting $k = k + 1$, decreasing the value of the scale coefficient $\mu$, and returning to calculate the image error function $E(M_k)$; if the error increases, increasing the value of the scale coefficient $\mu$, letting $E(M_{k+1}) = E(M_k)$, and returning to calculate the image error increment $\Delta M$; and if $\varepsilon_k < \varepsilon$, obtaining the second homography transformation matrix.
6. The low-altitude unmanned aerial vehicle image stitching method according to any one of claims 1 to 4, further comprising:
and forming panoramic images of the low-altitude unmanned aerial vehicle in different flight channels into a panoramic image sequence, and splicing all the panoramic images of the panoramic image sequence to obtain a panoramic image of the low-altitude unmanned aerial vehicle.
7. The method for stitching low-altitude unmanned aerial vehicle images according to any one of claims 1 to 4, wherein in the determining of the reference frame of the image sequence, the reference frame is an intermediate frame of the image sequence.
8. A low-altitude unmanned aerial vehicle image stitching system is characterized by comprising:
the acquisition module is used for acquiring an image sequence of the low-altitude unmanned aerial vehicle flight channel;
the registration module is used for registering images of the image sequence based on a parallel computing architecture to generate a first homography conversion matrix of two adjacent images;
the determining module is used for determining a reference frame of the image sequence;
the establishing module is used for establishing an error function of the feature matching point pair according to the first homography conversion matrix and the reference frame;
the operation module is used for operating on the feature matching point pair error function by using the Levenberg-Marquardt algorithm to obtain a second homography transformation matrix;
the first splicing module is used for transforming the current frame into the reference frame by using the second homography transformation matrix, and performing fusion splicing on the overlapping area of the transformed frame and the reference frame based on a pyramid fusion algorithm of a parallel computing architecture to obtain an intermediate image;
and the second splicing module is used for taking the intermediate image as a new reference frame, returning to re-establish the feature matching point pair error function, and executing the subsequent operations until all images of the image sequence are spliced, so as to obtain the panoramic image of the low-altitude unmanned aerial vehicle flight channel.
9. A computer device comprising a processor and a memory for storing a program executable by the processor, wherein the processor implements the low-altitude unmanned aerial vehicle image stitching method according to any one of claims 1 to 7 when executing the program stored in the memory.
10. A storage medium storing a program, wherein the program, when executed by a processor, implements the low-altitude unmanned aerial vehicle image stitching method according to any one of claims 1 to 7.
CN201911157016.2A 2019-11-22 2019-11-22 Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium Pending CN110889818A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911157016.2A CN110889818A (en) 2019-11-22 2019-11-22 Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911157016.2A CN110889818A (en) 2019-11-22 2019-11-22 Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN110889818A true CN110889818A (en) 2020-03-17

Family

ID=69748477

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911157016.2A Pending CN110889818A (en) 2019-11-22 2019-11-22 Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110889818A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696040A (en) * 2020-05-30 2020-09-22 南京理工大学 Video image fast splicing method and system based on feature extraction and matching
CN113096016A (en) * 2021-04-12 2021-07-09 广东省智能机器人研究院 Low-altitude aerial image splicing method and system
CN116228539A (en) * 2023-03-10 2023-06-06 贵州师范大学 Unmanned aerial vehicle remote sensing image stitching method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102201115A (en) * 2011-04-07 2011-09-28 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos shot by unmanned plane
CN103886569A (en) * 2014-04-03 2014-06-25 北京航空航天大学 Parallel and matching precision constrained splicing method for consecutive frames of multi-feature-point unmanned aerial vehicle reconnaissance images
CN104156968A (en) * 2014-08-19 2014-11-19 山东临沂烟草有限公司 Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method
CN105719314A (en) * 2016-01-30 2016-06-29 西北工业大学 Homography estimation and extended Kalman filter based localization method for unmanned aerial vehicle (UAV)
CN107507132A (en) * 2017-09-12 2017-12-22 成都纵横自动化技术有限公司 A kind of real-time joining method of unmanned plane aerial photography image

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102201115A (en) * 2011-04-07 2011-09-28 湖南天幕智能科技有限公司 Real-time panoramic image stitching method of aerial videos shot by unmanned plane
CN103886569A (en) * 2014-04-03 2014-06-25 北京航空航天大学 Parallel and matching precision constrained splicing method for consecutive frames of multi-feature-point unmanned aerial vehicle reconnaissance images
CN104156968A (en) * 2014-08-19 2014-11-19 山东临沂烟草有限公司 Large-area complex-terrain-region unmanned plane sequence image rapid seamless splicing method
CN105719314A (en) * 2016-01-30 2016-06-29 西北工业大学 Homography estimation and extended Kalman filter based localization method for unmanned aerial vehicle (UAV)
CN107507132A (en) * 2017-09-12 2017-12-22 成都纵横自动化技术有限公司 A kind of real-time joining method of unmanned plane aerial photography image

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
于瑶瑶: "Research on Key Technologies for Rapid Stitching of UAV Images", China Master's Theses Full-text Database, Engineering Science and Technology II *
张楚东 et al.: "A Road Aerial Image Stitching Algorithm Based on Sparse Optical Flow", Ship Electronic Engineering *
张欢: "Research and Implementation of Key Technologies for Rapid Processing of UAV Remote Sensing Images", China Master's Theses Full-text Database, Information Science and Technology *
杨明东 et al.: "A Low-Error Fast Image Stitching Algorithm Based on Fused Matching Strategies", Application Research of Computers *
贾银江: "Research on Key Technologies of UAV Remote Sensing Image Stitching", China Doctoral Dissertations Full-text Database, Information Science and Technology *
陈国青 et al.: "Information Systems Research in China: Theory and Practice (Volume 2)", 31 October 2007, Yunnan Science and Technology Press *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111696040A (en) * 2020-05-30 2020-09-22 南京理工大学 Video image fast splicing method and system based on feature extraction and matching
CN111696040B (en) * 2020-05-30 2023-06-30 南京理工大学 Video image rapid splicing method and system based on feature extraction matching
CN113096016A (en) * 2021-04-12 2021-07-09 广东省智能机器人研究院 Low-altitude aerial image splicing method and system
CN116228539A (en) * 2023-03-10 2023-06-06 贵州师范大学 Unmanned aerial vehicle remote sensing image stitching method

Similar Documents

Publication Publication Date Title
EP3621034B1 (en) Method and apparatus for calibrating relative parameters of collector, and storage medium
CN111598993B (en) Three-dimensional data reconstruction method and device based on multi-view imaging technology
Sharma Comparative assessment of techniques for initial pose estimation using monocular vision
CN110335317B (en) Image processing method, device, equipment and medium based on terminal equipment positioning
CN110889818A (en) Low-altitude unmanned aerial vehicle image splicing method and system, computer equipment and storage medium
CN109544615A (en) Method for relocating, device, terminal and storage medium based on image
CN105354841B (en) A kind of rapid remote sensing image matching method and system
CN114063098A (en) Multi-target tracking method, device, computer equipment and storage medium
AliAkbarpour et al. Parallax-tolerant aerial image georegistration and efficient camera pose refinement—without piecewise homographies
Zheng et al. Minimal solvers for 3d geometry from satellite imagery
CN112465877A (en) Kalman filtering visual tracking stabilization method based on motion state estimation
CN108444452B (en) Method and device for detecting longitude and latitude of target and three-dimensional space attitude of shooting device
Guan et al. Minimal cases for computing the generalized relative pose using affine correspondences
CN112629565B (en) Method, device and equipment for calibrating rotation relation between camera and inertial measurement unit
Wang et al. Lrru: Long-short range recurrent updating networks for depth completion
Tian et al. Aerial image mosaicking based on the 6-DoF imaging model
CN111721283B (en) Precision detection method and device for positioning algorithm, computer equipment and storage medium
CN109919998B (en) Satellite attitude determination method and device and terminal equipment
CN112991388B (en) Line segment feature tracking method based on optical flow tracking prediction and convex geometric distance
CN115294280A (en) Three-dimensional reconstruction method, apparatus, device, storage medium, and program product
Walvoord et al. Geoaccurate three-dimensional reconstruction via image-based geometry
CN114088103A (en) Method and device for determining vehicle positioning information
CN114549945A (en) Remote sensing image change detection method and related device
Wang et al. Slam-based cooperative calibration for optical sensors array with gps/imu aided
CN113610952A (en) Three-dimensional scene reconstruction method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination