CN114936971A - Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area

Info

Publication number
CN114936971A
Authority
CN
China
Prior art keywords
image
point
images
pixel
matrix
Prior art date
Legal status
Pending
Application number
CN202210640221.XA
Other languages
Chinese (zh)
Inventor
张维维
陈洪立
乔欣
蔡万铭
徐憧意
夏仁森
Current Assignee
Zhejiang Sci Tech University ZSTU
Original Assignee
Zhejiang Sci Tech University ZSTU
Priority date
Filing date
Publication date
Application filed by Zhejiang Sci Tech University ZSTU filed Critical Zhejiang Sci Tech University ZSTU
Priority to CN202210640221.XA
Publication of CN114936971A
Legal status: Pending

Classifications

    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G06T 3/02 Affine transformations
    • G06T 5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G06T 5/80 Geometric correction
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06V 10/462 Salient features, e.g. scale invariant feature transforms [SIFT]
    • G06V 10/761 Proximity, similarity or dissimilarity measures
    • G06V 10/7715 Feature extraction, e.g. by transforming the feature space; Mappings, e.g. subspace methods
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10036 Multispectral image; Hyperspectral image
    • G06T 2207/20016 Hierarchical, coarse-to-fine, multiscale or multiresolution image processing; Pyramid transform
    • G06T 2207/20221 Image fusion; Image merging
    • Y02A 90/30 Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a water-area-oriented unmanned aerial vehicle remote sensing multispectral image splicing method and system, belonging to the technical field of water area image processing. The method performs geometric correction preprocessing on each image according to its longitude and latitude POS information to obtain a corrected image and adds geographic coordinate projection information to the corrected image; the images are then coarsely registered according to the geographic coordinate projection information to obtain registered images; the position of the overlapping area is calculated from the registered images using computer graphics knowledge to obtain the images to be spliced; a scale-invariant feature transform (SIFT) model based on the principal component image is further constructed to extract the feature points of the images to be spliced; the spatial distance is taken as the similarity measure of the feature points, and matching points meeting the requirements are screened out; finally, a multi-resolution fusion model is constructed and the images to be spliced are fused, realizing splicing of the unmanned aerial vehicle remote sensing multispectral images.

Description

Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area
Technical Field
The invention relates to a water area-oriented remote sensing multispectral image splicing method and system for an unmanned aerial vehicle, and belongs to the technical field of water area image processing.
Background
Chinese patent publication No. CN 110503604 A discloses a real-time orthographic splicing method for aerial area-array images based on high-precision POS data. The method comprises: calibrating the camera and the camera-POS system before flight; acquiring high-precision POS data for each original image and removing image distortion using the camera calibration result; taking the average surface elevation as the object-space plane; calculating a projection matrix for each image and computing the uniquely determined positive definite matrix from the object-space plane to the original image-space plane through the average surface elevation; determining a uniform resolution to obtain the transformation matrix from the corrected image plane to the original image plane; updating the extent of the total corrected image and recalculating its width and height; obtaining the position of the current corrected image within the total corrected image and splicing, with overlapping areas taken directly from the part already contained in the DOM of the image; and iterating until the final total corrected image is obtained. That method improves the accuracy of orthorectified image splicing and meets the requirement of real-time splicing.
That scheme splices images using geographic coordinate information and therefore requires high-precision GPS data; otherwise the geographic information carried by the remote sensing images contains large errors and the spliced images become misaligned. Because of limits in unmanned aerial vehicle stability, wind resistance and GPS measuring equipment performance, the precision of the longitude and latitude POS information corresponding to each image is difficult to guarantee, which severely affects the splicing precision.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a method that performs geometric correction preprocessing on an image according to its longitude and latitude POS information to obtain a corrected image and adds geographic coordinate projection information to the corrected image; then performs coarse registration of the images according to the geographic coordinate projection information to obtain registered images; then, from the registered images, builds a computer graphics model to calculate the position of the overlapping area and extracts the overlapping-area images to obtain the images to be spliced; further constructs a scale-invariant feature transform (SIFT) model based on the principal component image to extract the feature points of the images to be spliced; takes the spatial distance as the similarity measure of the feature points and screens out matching points meeting the requirements; and finally constructs a multi-resolution fusion model based on the Laplacian pyramid and fuses the images to be spliced, realizing splicing of the unmanned aerial vehicle remote sensing multispectral images. The scheme is scientific, reasonable and provides a feasible water-area-oriented unmanned aerial vehicle remote sensing multispectral image splicing method.
Aiming at the defects of the prior art, the invention also provides a computer device that implements the water-area-oriented unmanned aerial vehicle remote sensing multispectral image splicing method through a processor, which can effectively improve image splicing precision; the scheme is simple, practical and convenient to popularize and use.
In order to achieve one of the above objects, a first technical solution of the present invention is:
an unmanned aerial vehicle remote sensing multispectral image splicing method facing a water area,
the method comprises the following steps:
the method comprises the steps of firstly, receiving image data and synthesizing a multiband image;
simultaneously acquiring the longitude and latitude POS information corresponding to the multiband image, and removing outliers from the longitude, latitude and altitude data;
secondly, carrying out geometric correction preprocessing on the image according to the corresponding longitude and latitude POS information in the first step to obtain a corrected image;
thirdly, calculating geographic coordinate projection information based on longitude and latitude and yaw angle information corresponding to the image, and adding the geographic coordinate projection information to the corrected image in the second step;
fourthly, selecting an unmanned aerial vehicle image sequence according to a splicing strategy, and carrying out coarse registration on the images according to the geographical coordinate projection information in the third step to obtain a translation transformation matrix and two registration images;
fifthly, calculating the position of the overlapping area using computer graphics knowledge according to the registered images in the fourth step, and extracting the overlapping-area images to obtain the images to be spliced;
sixthly, extracting the first principal component image from the images to be spliced in the fifth step using principal component analysis (PCA), and constructing a scale-invariant feature transform (SIFT) model on the first principal component image to obtain its feature points;
seventhly, judging the number of feature points obtained in the sixth step: if the number of feature points is less than N, taking the translation transformation matrix from the fourth step as the final homography transformation matrix;
otherwise, establishing a one-to-many hypothesis matching set using a KD-tree index, taking the spatial distance as the similarity measure of the feature points in the sixth step, and screening out matching points meeting the requirements; eliminating mismatched point pairs by constraining the global similarity property with a distance histogram, and calculating the homography transformation matrix;
eighthly, constructing a multi-resolution fusion model based on the Laplacian pyramid for the two registration images in the fourth step, and fusing the two registration images in the fourth step by combining the homography transformation matrix in the seventh step to realize the splicing of the remote sensing multispectral images of the unmanned aerial vehicle;
and then returning to the fourth step until the splicing of all the sequence images is completed and then exiting.
Continuously exploring and testing, carrying out geometric correction preprocessing on the image according to the longitude and latitude POS information to obtain a corrected image, and adding geographic coordinate projection information to the corrected image; then, according to the geographic coordinate projection information, carrying out coarse registration on the images to obtain registered images; then according to the registered images, a computer graphics knowledge model is constructed, the position of an overlapped area is calculated, and the images of the overlapped area are scratched to obtain images to be spliced; further constructing a scale-invariant feature transform (SIFT) model based on the principal component images, and extracting feature points of the images to be spliced; taking the spatial distance as a similarity evaluation index of the feature points, and screening out matching points meeting the requirements; and finally, constructing a multi-resolution fusion model based on the Laplacian pyramid, fusing the graphs to be spliced, and splicing the remote sensing multispectral images of the unmanned aerial vehicle.
Furthermore, the scheme of combining feature points with POS information effectively unites the high registration accuracy of the feature-point method with the high registration speed of pose-based splicing; the splicing scheme can serve as the basis of real-time registration and reduces the precision requirement on the POS information. Meanwhile, combining feature points with longitude and latitude POS information reduces the error accumulation that occurs in large-scale image splicing, so the registration precision of the spliced image is markedly improved; the final spliced and fused image not only contains geographic coordinate information but also has a good visual effect, meeting the image splicing requirement of routine unmanned aerial vehicle water quality remote sensing monitoring. The scheme is scientific, reasonable and feasible.
Furthermore, the invention is suitable for fishery aquaculture water environments: in the feature matching algorithm, the feature points detected in aquaculture water images have small magnitudes and unclear orientations, and many feature points with locally similar attributes exist in the images; a better rejection method is therefore provided, making the obtained homography transformation matrix more accurate.
Furthermore, the method can be used for processing the splicing of image sequences such as long strips, planned routes and the like, and the final spliced image has geographic coordinates.
Preferably, N is 4.
As a preferable technical measure:
in the first step, the method for synthesizing the multiband image comprises the following steps:
receiving multispectral images remotely sensed by the unmanned aerial vehicle, and synthesizing single-waveband images at the same position and shot at the same time to obtain multiband images;
removing longitude and latitude height data abnormal values during take-off and return voyage according to longitude and latitude POS information corresponding to the acquired single-waveband image;
the longitude and latitude POS information comprises longitude lon, latitude lat, navigation height H, course angle gamma, pitch angle alpha, roll angle beta, ground resolution epsilon and a geographical coordinate system and a projection coordinate system of the measuring system;
the calculation formula of the ground resolution epsilon is as follows:
ε = pixelsize · H / f (1), where pixelsize is the sensor pixel size and f is the camera focal length.
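As an illustration of the first step, the sketch below stacks single-band images into a multiband array and evaluates formula (1); the camera figures in the usage note are made-up example values, not parameters from the patent.

```python
import numpy as np

# Illustrative sketch of step one: stack single-band images taken at the same
# instant into a multiband array and compute the ground resolution of formula (1).
# pixelsize (sensor pixel pitch), H (flight altitude) and f (focal length) are
# assumed inputs taken from the POS/camera metadata, all in metres.
def synthesize_multiband(band_images):
    # band_images: list of 2-D arrays of identical size, one per spectral band
    return np.dstack(band_images)          # shape (h, w, c)

def ground_resolution(pixelsize, H, f):
    return pixelsize * H / f               # formula (1): metres per pixel

# Example with made-up values: a 3.75 um pixel, 120 m altitude, 5.74 mm focal
# length gives ground_resolution(3.75e-6, 120, 5.74e-3) ~= 0.078 m/pixel.
```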
as a preferable technical measure:
in the second step, the method for geometric correction includes the following steps:
step 21, establishing the image coordinate system O-xy, the camera coordinate system S-X_sY_sZ_s, the body coordinate system P-X_pY_pZ_p, and the geographic coordinate system E-X_EY_EZ_E;
step 22, with the image principal point as the origin of the image coordinate system from step 21, the flight direction as the positive y axis and the direction perpendicular to the flight direction as the positive x axis, deriving the transformation between an image point g(x, y) in the image coordinate system and the corresponding object point G(X_E, Y_E, Z_E) in the geographic coordinate system according to the relative position of the camera and the body and the flight attitude of the unmanned aerial vehicle;
the transformation is calculated as follows:
[X_E, Y_E, Z_E]_E^T = λ R_EP R_PS R_SO [x, y]_O^T = λ R_EP R_PS [x, y, -f]_O^T (2)
wherein f is the focal length, representing the translation between the image coordinate system and the camera coordinate system along the Z axis;
step 23, because the optical center of the camera lens coincides with the center of mass of the unmanned aerial vehicle, and the camera coordinate system coincides completely with the body coordinate system after translation, R_PS is the identity matrix I;
according to the translation in step 22, the camera imaging model is constructed as:
[X_E, Y_E, Z_E]_E^T = λ R_EP [x, y, -f]_O^T (3)
wherein R_EP = R(H)R(γ)R(β)R(α), and λ is a proportionality coefficient, λ = H/f;
under orthographic photographing conditions, an image point g(x, y) in the remote sensing image and the corrected image point g′(x′, y′) satisfy the following relation:
(x′, y′, -f)_O^T = R_EP (x, y, -f)_O^T (4)
wherein R(α), R(β), R(γ), R(H) are respectively the correction rotation matrix based on the pitch angle, the correction rotation matrix based on the roll angle, the correction rotation matrix based on the yaw angle and the correction matrix based on the altitude;
step 24, respectively establishing a pitch angle correction matrix, a roll angle correction matrix, a yaw angle correction matrix and a height correction matrix for R (alpha), R (beta), R (gamma) and R (H) in the step 23 to obtain a correction mathematical model;
pitch angle correction matrix:
(given as equation images in the original publication)
roll angle correction matrix:
(given as equation images in the original publication)
yaw angle correction matrix:
(given as equation images in the original publication)
height correction matrix:
(given as an equation image in the original publication)
wherein, theta represents an included angle between a connecting line of any pixel point and the focus and the central visual axis;
and 25, calculating new coordinates after the four-corner coordinates of the image are corrected according to the corrected mathematical model in the step 24, solving a transformation matrix through the old coordinates and the new coordinates, calculating a new corrected image by adopting a bilinear interpolation resampling method, finishing the geometric correction of the image, and updating the geographic coordinate information of the image.
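A minimal sketch of step 25 follows, assuming standard attitude rotation matrices as a stand-in for the patent's correction matrices (which are given only as equation images above) and using OpenCV to solve the corner-to-corner transform and resample with bilinear interpolation; it is illustrative, not the patented correction model.

```python
import cv2
import numpy as np

# Sketch of step 25: map the four image corners through a simplified attitude
# correction, solve a perspective transform from the old to the new corner
# coordinates, and resample with bilinear interpolation. rot_z is a standard
# yaw-like rotation used as an assumption; a full model would also include the
# pitch, roll and height correction terms described above.
def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def geometric_correction(img, gamma, f):
    h, w = img.shape[:2]
    old = np.float32([[0, 0], [w, 0], [w, h], [0, h]])       # four corner pixels
    R = rot_z(gamma)
    new = []
    for x, y in old - [w / 2.0, h / 2.0]:                     # centre the corners
        X, Y, _ = R @ np.array([x, y, -f])                    # rotate the viewing ray
        new.append([X, Y])
    new = np.float32(new)
    new -= new.min(axis=0)                                    # shift into positive pixels
    M = cv2.getPerspectiveTransform(old, new)                 # old -> new corner transform
    size = (int(np.ceil(new[:, 0].max())), int(np.ceil(new[:, 1].max())))
    return cv2.warpPerspective(img, M, size, flags=cv2.INTER_LINEAR)
```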
As a preferable technical measure:
in the third step, the calculation method of the geographic coordinate projection information is as follows:
the remote sensing image coordinates and the geographic coordinates are converted using affine matrix parameters; the affine matrix comprises 6 parameters, X_E, X_pixel, R_γ, Y_E, Y_pixel, R_γ, which describe the relationship between the image row/column numbers and the geographic coordinates,
wherein X_E and Y_E are the geographic projection coordinates of the top-left pixel of the image, X_pixel and Y_pixel are the ground resolutions of an image pixel in the longitude and latitude directions respectively, and R_γ is the sine of the image rotation angle;
the image is rotated clockwise by γ degrees around the central point O, the coordinates of O, (X_OE, Y_OE), are obtained by coordinate projection from the longitude and latitude recorded by the unmanned aerial vehicle, and the coordinates of the top-left image point G are then calculated as follows:
(the formula is given as an equation image in the original publication)
wherein w and h represent the width and height of the image, respectively;
the ground resolution satisfies X_pixel = -Y_pixel = ε, with R_γ = 0 in this case;
the geographic coordinates G(X_E′, Y_E′) of any point g(row, col) in the image coordinate system are calculated as follows:
(the formula is given as an equation image in the original publication)
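A small sketch of the pixel-to-geographic mapping with the six affine parameters follows, in the spirit of a GDAL-style geotransform; the exact composition of the rotation terms is an assumption, since the patent's formula is given only as an equation image.

```python
import math

# Illustrative pixel (row, col) -> geographic coordinate mapping with six affine
# parameters. X_E, Y_E: geo coordinates of the top-left pixel; eps: ground
# resolution; gamma: image rotation angle in radians. How the rotation terms are
# combined here is an assumption.
def pixel_to_geo(row, col, X_E, Y_E, eps, gamma):
    a = eps * math.cos(gamma)
    b = eps * math.sin(gamma)
    X = X_E + col * a + row * b          # east coordinate
    Y = Y_E + col * b - row * a          # north coordinate (rows grow southwards)
    return X, Y

# With gamma = 0 this reduces to X = X_E + col*eps and Y = Y_E - row*eps,
# which matches the stated case X_pixel = -Y_pixel = eps.
```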
as a preferable technical measure:
in the fourth step, the process of coarse registration is as follows:
a frame-to-frame splicing strategy is used in the first few rounds of splicing, and a mosaic-to-mosaic splicing strategy is then selected to complete the image splicing;
adjacent images I_1 and I_2 on the same flight line are acquired; the geographic coordinates of their upper-left corners are G_1(X_E1, Y_E1) and G_2(X_E2, Y_E2) respectively; taking image I_1 as the reference, the offset of image I_2 relative to image I_1 is determined;
the offset is calculated as follows:
(the offset formula is given as an equation image in the original publication)
according to the offset, the geometrically corrected images I_1 and I_2 are rotated by γ about O_1 and O_2 respectively toward the course direction; the image coordinates in I_1 of any point of image I_2 are calculated as follows:
(the formula is given as an equation image in the original publication)
wherein H_rigid denotes the translation transformation matrix;
the geographic coordinates G(X_E_new, Y_E_new) of the top-left pixel in the affine matrix parameters of the registered image are calculated as follows:
(the formula is given as an equation image in the original publication)
the ground resolutions X_pixel and Y_pixel and the rotation term R_γ of the registered image all remain unchanged.
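The coarse registration can be illustrated with the following sketch, which derives the pixel offset from the two upper-left geographic coordinates and the ground resolution ε and packs it into the translation matrix H_rigid; the sign convention used is an assumption.

```python
import numpy as np

# Illustrative coarse registration: the pixel offset of image I2 relative to I1
# is derived from the geographic coordinates of their top-left corners and the
# ground resolution eps, and packed into a rigid translation matrix H_rigid.
# The sign convention (east -> +columns, north -> -rows) is an assumption.
def coarse_translation(G1, G2, eps):
    (X_E1, Y_E1), (X_E2, Y_E2) = G1, G2
    dx = (X_E2 - X_E1) / eps          # column offset in pixels
    dy = (Y_E1 - Y_E2) / eps          # row offset in pixels
    H_rigid = np.array([[1.0, 0.0, dx],
                        [0.0, 1.0, dy],
                        [0.0, 0.0, 1.0]])
    return H_rigid

# A point (col, row, 1) of I2 maps to H_rigid @ (col, row, 1) in I1's pixel frame.
```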
As a preferable technical measure:
in the fifth step, the extraction of the overlapping-area images comprises the following steps:
step 51, images I_1 and I_2 are rotated by γ degrees about O_1 and O_2, respectively, toward the course direction;
step 52, after the rotation in step 51 is complete, the polygonal overlap region between the quadrilateral I_1A I_1B I_1C I_1D formed by the four corners of image I_1 and the quadrilateral I_2A I_2B I_2C I_2D formed by the four corners of image I_2 is computed using computer graphics knowledge;
step 53, a bias δ is added outward from the polygonal overlap region of step 52, finally giving the coordinates of the overlap polygon ABCD;
step 54, a mask image is created from the coordinates of the overlap polygon ABCD in step 53 and used to extract from images I_1 and I_2 the sub-images containing only the overlap region, denoted I_1′ and I_2′.
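A sketch of steps 51-54 follows, using shapely for the polygon intersection and a rasterized mask for the extraction as an illustrative stand-in for the computer graphics computation described above; both images are assumed to share the same pixel frame after coarse registration.

```python
import cv2
import numpy as np
from shapely.geometry import Polygon

# Illustrative overlap extraction: intersect the two corner quadrilaterals,
# expand the result outward by a bias delta, rasterize it into a mask and keep
# only the masked pixels of each image. corners1/corners2 are 4x2 arrays of the
# (already rotated) corner pixel coordinates in a common frame.
def overlap_images(img1, img2, corners1, corners2, delta=20):
    poly = Polygon(corners1).intersection(Polygon(corners2)).buffer(delta)
    if poly.is_empty:
        return None, None
    pts = np.int32(np.array(poly.exterior.coords)[:-1])
    mask = np.zeros(img1.shape[:2], np.uint8)
    cv2.fillPoly(mask, [pts], 255)
    keep = (mask > 0)[..., None] if img1.ndim == 3 else (mask > 0)
    img1_ovl = img1 * keep      # I1': only the overlap region retained
    img2_ovl = img2 * keep      # I2'
    return img1_ovl, img2_ovl
```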
As a preferable technical measure:
in the sixth step, the method for constructing the scale-invariant feature transform SIFT model on the first principal component image is as follows:
step 61, input the multispectral image matrix I of size w × h with c bands, reshape I into a matrix I_reshape with w·h rows and c columns, and normalize each column of I_reshape to obtain I′_reshape;
the formula for I′_reshape is as follows:
(given as an equation image in the original publication)
step 62, compute the covariance matrix Cov from I′_reshape in step 61;
the formula for the covariance matrix Cov is as follows:
(given as an equation image in the original publication)
step 63, compute the eigenvector Vec corresponding to the largest eigenvalue of the covariance matrix Cov in step 62;
step 64, project the reshaped image I_reshape onto the eigenvector Vec from step 63 to extract the principal component image, and reshape it back into an image I_PCA of size w × h;
the image I_PCA is computed as follows:
I_PCA = I_reshape · Vec (20)
step 65, taking the principal component image in the step 64 as input for extracting Scale Invariant Feature Transform (SIFT) feature points, then constructing a DOG scale space, detecting extreme points of the DOG scale space, deleting unstable feature points, assigning values to the directions of the feature points and generating feature point descriptors;
the construction of the DOG scale space comprises the following steps:
first, an image pyramid is obtained by applying Gaussian blur and down-sampling to the source image; the down-sampling formula is given as an equation image in the original publication, where o is an integer;
secondly, Gaussian blur with different σ parameters is applied to each layer of the image pyramid, and the resulting blurred images form the Gaussian pyramid; the corresponding formulas are given as equation images in the original publication;
then, adjacent images of the Gaussian space are subtracted to obtain the DOG images; the corresponding formula is likewise given as an equation image in the original publication;
wherein Down denotes down-sampling and G_0 = I; I(x, y) denotes the source image; L(x, y, σ) denotes the Gaussian scale space obtained by convolving the source image (* denotes the convolution operation); σ denotes the scale factor of the Gaussian convolution kernel; G(x, y, σ) denotes the Gaussian kernel function and (m, n) the size of the convolution kernel; k denotes the scale factor between adjacent scale spaces;
detecting extreme points of the DOG scale space comprises the following: each pixel is compared with its 8 neighbors at the same scale and the 18 neighbors at the adjacent scales above and below; a pixel is an extreme point of the DOG scale space only when its DOG value is larger than, or smaller than, the DOG values of all 26 compared pixels;
deleting unstable feature points includes the following:
first, to obtain the accurate position of an extreme point, interpolation in the discrete space is performed by fitting a three-dimensional quadratic function, and extreme points with low contrast are then removed;
meanwhile, extreme points with a large edge response are discarded;
assigning the orientation of the feature points comprises the following steps:
the modulus m(x, y) and the orientation θ(x, y) of each feature point are calculated (the formulas are given as equation images in the original publication),
where L(x, y) is the scale-space value at the feature point; centered on the feature point, the gradient magnitudes and orientations are computed in a neighborhood of radius 3σ; the gradient orientation distribution of the pixels in this neighborhood is accumulated into a histogram whose 36 bins divide 0-360° equally; the orientation of the main peak of the histogram is taken as the dominant orientation of the feature point, and any bin whose peak exceeds 80% of the main peak is kept as an auxiliary orientation of the feature point;
generating the feature point descriptor comprises the following:
the image coordinate axes of the region are rotated to coincide with the gradient direction of the feature point;
then a 16 × 16 neighborhood window centered on the feature point is taken and the gradient of each pixel is computed; the closer a pixel is to the feature point, the larger its weight, and the pixel gradients of the sub-regions are Gaussian-weighted with σ = d/2 (the weight formula is given as an equation image in the original publication);
the region is subdivided into 4 × 4 sub-regions and the gradient histogram over 8 directions in each sub-region is accumulated with gradient-value weighting to form an 8-dimensional vector per sub-region, giving a 4 × 4 × 8 = 128-dimensional description vector; finally the vector is normalized to obtain the descriptor of the scale-invariant feature transform SIFT feature point.
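Steps 61-65 can be sketched as follows, with a small NumPy PCA for the first principal component and OpenCV's SIFT detector standing in for the hand-built DOG pipeline of step 65; the zero-mean/unit-variance column normalization is an assumption, since formula (16) is given only as an equation image.

```python
import cv2
import numpy as np

# Illustrative PCA + SIFT sketch: the first principal component of the
# multispectral image is extracted, rescaled to 8 bit, and passed to OpenCV's
# SIFT detector (a stand-in for the hand-built DOG pipeline of step 65).
def first_principal_component(img):                # img: (h, w, c) array
    h, w, c = img.shape
    X = img.reshape(-1, c).astype(np.float64)      # I_reshape: (w*h) x c
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)   # column normalization (assumed form)
    cov = np.cov(X, rowvar=False)                  # c x c covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)
    vec = eigvecs[:, np.argmax(eigvals)]           # eigenvector of the largest eigenvalue
    return (X @ vec).reshape(h, w)                 # projection onto Vec (formula (20) form)

def sift_on_first_pc(img):
    pc = first_principal_component(img)
    pc8 = cv2.normalize(pc, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.detectAndCompute(pc8, None)
    return keypoints, descriptors
```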
As a preferable technical measure:
in the seventh step, the screening method of the matching points is as follows:
using a KD-tree index, the n feature points of image I_2 closest in spatial distance to each feature point of image I_1 are selected to form a one-to-many hypothesis matching set, and the spatial distance is taken as the similarity measure of the feature points; the spatial distance is a weighted sum of the Euclidean descriptor distance D_p and the pixel coordinate distance D_d, calculated as follows:
(the formulas for D_p and D_d are given as equation images in the original publication)
wherein E_1 and E_2 are the feature descriptor vectors of the two feature points,
and G_1 and G_2 are their pixel position coordinates;
the spatial distance D_s between each feature point and a point of the hypothesis matching set is calculated as:
D_s = α·D_d + (1-α)·D_p (14)
wherein α is the influence factor of the pixel coordinate distance term;
the match screening strategy uses the ratio of the nearest-neighbor spatial distance to the second-nearest-neighbor spatial distance, calculated as:
r = min_fst / min_scd (15)
wherein min_fst is the nearest spatial distance and min_scd is the second-nearest spatial distance; when r is less than T, T being a preset ratio, the two corner points form a matched point pair, and coarse matching of all feature points is thus performed;
for feature points with locally similar attributes, the matching points are screened as follows:
the distances between the matched point pairs are calculated and the range between the minimum and maximum distances is divided uniformly into 10 intervals with frequencies P = {p_1, …, p_10}; the frequency of the peak interval is max(P) and, if the peak lies in the i-th interval, the matched point pairs whose distances fall in intervals [i-1, i+1] are the correct matching point pairs; this set of matching point pairs is the exact matching set sought;
erroneous feature point pairs are then eliminated with the random sample consensus RANSAC algorithm, the homography transformation matrix is thereby calculated, and the image is multiplied by the homography transformation matrix to obtain the image to be fused.
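The screening of matching points can be sketched as follows: a FLANN KD-tree provides the one-to-many hypothesis set, the weighted spatial distance of formula (14) re-ranks the candidates, the ratio test of formula (15) keeps plausible pairs, a 10-bin distance histogram keeps pairs around the peak interval, and RANSAC estimates the homography. The parameter values n, α and T are assumptions.

```python
import cv2
import numpy as np

# Illustrative match screening. kp1/des1 and kp2/des2 are OpenCV keypoints and
# SIFT descriptors of the two overlap images; n, alpha and T are assumed values.
def match_points(kp1, des1, kp2, des2, n=4, alpha=0.3, T=0.8):
    flann = cv2.FlannBasedMatcher({'algorithm': 1, 'trees': 5}, {'checks': 50})
    raw = flann.knnMatch(des1, des2, k=n)                   # one-to-many hypothesis set
    pairs = []
    for cands in raw:
        ds = []
        for m in cands:
            D_p = m.distance                                          # descriptor distance
            D_d = np.linalg.norm(np.subtract(kp1[m.queryIdx].pt,
                                             kp2[m.trainIdx].pt))     # pixel distance
            ds.append((alpha * D_d + (1 - alpha) * D_p, m))           # D_s, formula (14)
        ds.sort(key=lambda t: t[0])
        if len(ds) > 1 and ds[0][0] / ds[1][0] < T:                   # ratio test, formula (15)
            pairs.append(ds[0][1])
    if len(pairs) < 4:
        return None, pairs

    # distance-histogram filtering: keep pairs around the peak interval
    dist = np.array([np.linalg.norm(np.subtract(kp1[m.queryIdx].pt,
                                                kp2[m.trainIdx].pt)) for m in pairs])
    hist, edges = np.histogram(dist, bins=10)
    i = int(np.argmax(hist))
    lo, hi = edges[max(i - 1, 0)], edges[min(i + 2, 10)]
    pairs = [m for m, d in zip(pairs, dist) if lo <= d <= hi]
    if len(pairs) < 4:
        return None, pairs

    src = np.float32([kp1[m.queryIdx].pt for m in pairs]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in pairs]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(dst, src, cv2.RANSAC, 5.0)        # homography via RANSAC
    return H, pairs
```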
As a preferable technical measure:
in the eighth step, the construction method of the multi-resolution fusion model based on the Laplacian pyramid is as follows:
first, for the fusion of each pair of images, the Gaussian pyramids G_1 and G_2 of the two images I_1 and I_2 are built, and the corresponding 4-level Laplacian pyramid images Lap_1 and Lap_2 are then built;
a mask image I_mask of the same size as image I_1 is created to mark the fusion location; the Gaussian pyramid G_mask of the mask image is then computed, which gives the fusion weight of each pixel;
at each scale, i.e. resolution, the Laplacian pyramid images Lap_1 and Lap_2 of the two images are blended according to the G_mask of the current scale, finally giving the spliced Laplacian pyramid image Lap_fused;
starting from the lowest-resolution level of Lap_fused, the full-resolution splicing result is obtained by reconstruction.
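A sketch of the Laplacian-pyramid fusion follows; the level count, the mask handling and the assumption of 1- or 3-channel 8-bit inputs are illustrative choices, not requirements of the patent.

```python
import cv2
import numpy as np

# Illustrative 4-level Laplacian-pyramid fusion: Gaussian pyramids of the two
# registered images and of a weight mask are built, each Laplacian level is
# blended with the mask weights, and the result is reconstructed from the
# coarsest level upwards. mask is a float array in [0, 1] marking where img1
# dominates; inputs are assumed 8-bit with 1 or 3 channels.
def laplacian_pyramid_blend(img1, img2, mask, levels=4):
    if img1.ndim == 3 and mask.ndim == 2:
        mask = np.repeat(mask[:, :, None], img1.shape[2], axis=2)
    gp1, gp2, gpm = [img1.astype(np.float32)], [img2.astype(np.float32)], \
                    [mask.astype(np.float32)]
    for _ in range(levels):
        gp1.append(cv2.pyrDown(gp1[-1]))
        gp2.append(cv2.pyrDown(gp2[-1]))
        gpm.append(cv2.pyrDown(gpm[-1]))

    lap_fused = []
    for i in range(levels):
        size = (gp1[i].shape[1], gp1[i].shape[0])
        l1 = gp1[i] - cv2.pyrUp(gp1[i + 1], dstsize=size)   # Laplacian level of I1
        l2 = gp2[i] - cv2.pyrUp(gp2[i + 1], dstsize=size)   # Laplacian level of I2
        w = gpm[i]
        lap_fused.append(w * l1 + (1 - w) * l2)             # weighted blend per level
    base = gpm[levels] * gp1[levels] + (1 - gpm[levels]) * gp2[levels]

    out = base
    for i in range(levels - 1, -1, -1):                     # reconstruct, coarse to fine
        size = (lap_fused[i].shape[1], lap_fused[i].shape[0])
        out = cv2.pyrUp(out, dstsize=size) + lap_fused[i]
    return np.clip(out, 0, 255).astype(np.uint8)
```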
In order to achieve one of the above objects, a second technical solution of the present invention is:
a computer device, comprising:
one or more processors;
storage means for storing one or more programs;
when executed by the one or more processors, the one or more programs cause the one or more processors to implement a method of remote sensing multispectral image stitching for a water-oriented drone as described above.
The unmanned aerial vehicle remote sensing multispectral image splicing method for the water area is realized through the processor, the image splicing precision can be effectively improved, and the method is simple in scheme, practical and convenient to popularize and use.
Compared with the prior art, the invention has the following beneficial effects:
continuously exploring and testing, carrying out geometric correction preprocessing on the image according to the longitude and latitude POS information to obtain a corrected image, and adding geographic coordinate projection information to the corrected image; then, according to the geographic coordinate projection information, carrying out coarse registration on the images to obtain registered images; then according to the registered images, a computer graphics knowledge model is constructed, the position of an overlapping region is calculated, and the images of the overlapping region are scratched to obtain images to be spliced; further constructing a scale-invariant feature transform (SIFT) model based on the principal component images, and extracting feature points of the images to be spliced; taking the spatial distance as a similarity evaluation index of the feature points, and screening out matching points meeting the requirements; and finally, constructing a multi-resolution fusion model based on the Laplacian pyramid, fusing the graphs to be spliced, and splicing the remote sensing multispectral images of the unmanned aerial vehicle.
Furthermore, the scheme of combining feature points with POS information effectively unites the high registration accuracy of the feature-point method with the high registration speed of pose-based splicing; the splicing scheme can serve as the basis of real-time registration and reduces the precision requirement on the POS information. Meanwhile, combining feature points with longitude and latitude POS information reduces the error accumulation that occurs in large-scale image splicing, so the registration precision of the spliced image is markedly improved; the final spliced and fused image not only contains geographic coordinate information but also has a good visual effect, meeting the image splicing requirement of routine unmanned aerial vehicle water quality remote sensing monitoring. The scheme is scientific, reasonable and feasible.
Furthermore, the invention is suitable for fishery aquaculture water environments: in the feature matching algorithm, the feature points detected in aquaculture water images have small magnitudes and unclear orientations, and many feature points with locally similar attributes exist in the images; a better rejection method is therefore provided, making the obtained homography transformation matrix more accurate.
Furthermore, the method can be used for processing the splicing of image sequences such as long strips, planned routes and the like, and the final spliced image has geographic coordinates.
Drawings
FIG. 1 is a flow chart of a method of an embodiment of the present invention;
FIG. 2 is a schematic diagram of the relationship of various coordinate systems of the present invention;
FIG. 3 is a schematic view of the imaging geometry of the profile of the unmanned aerial vehicle along its longitudinal axis as the pitch angle of the unmanned aerial vehicle changes in accordance with the present invention;
FIG. 4 is a schematic diagram of the three-dimensional imaging geometry of the unmanned aerial vehicle of the present invention with varying pitch angles;
FIG. 5 is a schematic diagram of the geometric relationship between the image plane positions when the pitch angle and the roll angle of the unmanned aerial vehicle change;
FIG. 6 is a schematic view of the three-dimensional imaging geometry of the unmanned aerial vehicle of the present invention with varying yaw angle;
FIG. 7 is a schematic diagram of the affine matrix parameter calculation of the image of the present invention;
FIG. 8 is a schematic representation of the image coarse registration calculation according to geographic coordinates of the present invention;
FIG. 9 is a schematic diagram of the construction process of the DOG scale space of the present invention;
FIG. 10 is a schematic diagram of the neighborhood coordinate axes of the rotational key points of the present invention;
FIG. 11 is a schematic diagram of SIFT feature descriptor structure
FIG. 12 is a graph of image feature point matching connections according to the present invention;
FIG. 13 is a schematic diagram of the image frame-to-frame stitching strategy of the present invention;
FIG. 14 is a schematic illustration of the stitching strategy from image stitching to stitching of the present invention;
FIG. 15 is a schematic diagram of the present invention for building a Gaussian pyramid and Laplacian pyramid Laplace.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and do not limit the invention.
On the contrary, the invention is intended to cover alternatives, modifications, equivalents and alternatives which may be included within the spirit and scope of the invention as defined by the appended claims. Furthermore, in the following detailed description of the present invention, certain specific details are set forth in order to provide a thorough understanding of the present invention. It will be apparent to one skilled in the art that the present invention may be practiced without these specific details.
The invention relates to a water area-oriented unmanned aerial vehicle remote sensing multispectral image splicing method, which comprises the following specific embodiments:
an unmanned aerial vehicle remote sensing multispectral image splicing method facing a water area,
the method comprises the following steps:
the method comprises the steps of firstly, receiving image data and synthesizing a multiband image;
simultaneously acquiring longitude and latitude POS information corresponding to the multiband image, and eliminating longitude and latitude height data abnormal values;
secondly, carrying out geometric correction pretreatment on the image according to the corresponding longitude and latitude POS information in the first step to obtain a corrected image;
thirdly, calculating geographic coordinate projection information based on longitude and latitude and yaw angle information corresponding to the image, and adding the geographic coordinate projection information to the corrected image in the second step;
fourthly, performing coarse registration on the images according to the geographical coordinate projection information in the third step to obtain registered images;
fifthly, constructing a computer graphics knowledge model according to the registration image in the fourth step, calculating the position of an overlapping area, and matting the image of the overlapping area to obtain an image to be spliced;
sixthly, constructing a Scale Invariant Feature Transform (SIFT) model based on the principal component images, and extracting feature points of the images to be spliced in the fifth step;
step seven, establishing a one-to-many hypothesis matching set by adopting a query index KD tree mode, and screening matching points meeting requirements by taking the spatial distance as a similarity evaluation index of the feature points in the step six; the distance histogram is used for restraining the global similarity attribute to eliminate the mismatching point pairs, and a homography transformation matrix is calculated;
and eighthly, selecting an unmanned aerial vehicle image sequence according to the matching points in the seventh step and a splicing strategy, constructing a multi-resolution fusion model based on the Laplacian pyramid, fusing the graphs to be spliced, and splicing the remote sensing multispectral images of the unmanned aerial vehicle.
As shown in fig. 1 and 2, a second specific embodiment of the water-area-oriented remote sensing multispectral image stitching method for the unmanned aerial vehicle comprises the following steps:
an unmanned aerial vehicle remote sensing multispectral image splicing method facing a water area,
the method comprises the following steps:
step (1), receiving data to synthesize a multiband image, acquiring POS information corresponding to the image, and removing longitude and latitude height data abnormal values;
step (2), carrying out geometric correction preprocessing on the image according to the corresponding POS information;
step (3), calculating geographic coordinate projection and adding geographic coordinate projection information to the image based on longitude and latitude and yaw angle information corresponding to the image;
step (4), carrying out coarse registration on the images according to the geographic coordinate information of the two images which are mutually registered;
step (5), according to the coarse registration result, calculating the position of the overlapped area by using computer graphics knowledge, and matting the image of the overlapped area;
step (6), extracting SIFT feature points of the two images to be spliced based on the first principal component image;
step (7), establishing a one-to-many hypothesis matching set by adopting a KD tree index mode, screening out the best matching points by taking the spatial distance as an evaluation index of the similarity of the characteristic points, further using a distance histogram to restrict the global similarity attribute to eliminate mismatching point pairs, and then calculating a homography transformation matrix;
and (8) selecting an unmanned aerial vehicle image sequence according to a splicing strategy, and adopting a multi-resolution fusion algorithm of 4 layers of Laplacian pyramids between every two images.
The specific mode of the step (1) is as follows:
receiving the multispectral images remotely sensed by the unmanned aerial vehicle, synthesizing the single-band images taken at the same position and time, and removing outliers of the longitude, latitude and altitude data recorded during take-off and return according to the longitude and latitude POS information corresponding to the obtained images; the longitude and latitude POS information comprises longitude (lon), latitude (lat), altitude (H), course angle (γ), pitch angle (α) and roll angle (β), together with the geographic coordinate system and projection coordinate system of the measuring system. To obtain the ground resolution (ε), the focal length (f) of the camera and the pixel size (pixelsize) of the sensor are also acquired; the calculation formula is given below. For the subsequent geometric correction, the width (w) and height (h) of the image must also be known;
ε = pixelsize · H / f (1)
the specific mode of the step (2) is as follows:
During geometric correction, the image coordinate system O-xy, the camera coordinate system S-X_sY_sZ_s, the body coordinate system P-X_pY_pZ_p, and the geographic coordinate system E-X_EY_EZ_E are established. With the image principal point as the origin of the image coordinate system, the positive y axis along the flight direction and the positive x axis perpendicular to the flight direction, the transformation between an image point g(x, y) and the corresponding object point G(X_E, Y_E, Z_E) follows from the relative position of the camera and the body and from the flight attitude of the unmanned aerial vehicle, and can be converted by the equation below; the image coordinate system and the camera coordinate system differ by a translation along the Z axis equal to the focal length f.
[X_E, Y_E, Z_E]_E^T = λ R_EP R_PS R_SO [x, y]_O^T = λ R_EP R_PS [x, y, -f]_O^T (2)
Suppose the optical center of the camera lens coincides with the center of mass of the unmanned aerial vehicle, and that the camera coordinate system coincides completely with the body coordinate system after translation, i.e. R_PS is the identity matrix I. The camera imaging model can thus be simplified to:
[X_E, Y_E, Z_E]_E^T = λ R_EP [x, y, -f]_O^T (3)
wherein R_EP = R(H)R(γ)R(β)R(α), and λ is a proportionality coefficient, λ = H/f.
Therefore, under orthographic photographing conditions, the image point g(x, y) in the remote sensing image and the corrected image point g′(x′, y′) satisfy:
(x′, y′, -f)_O^T = R_EP (x, y, -f)_O^T (4)
wherein R(α), R(β), R(γ), R(H) are respectively the correction rotation matrix based on the pitch angle, the correction rotation matrix based on the roll angle, the correction rotation matrix based on the yaw angle and the correction matrix based on the altitude. With the other flight attitude parameters of the unmanned aerial vehicle unchanged, and considering only the change of one attitude at a time, the geometric relations shown in fig. 3, 4, 5 and 6 can be established for R(α), R(β), R(γ) and R(H) respectively, and the following correction mathematical model is obtained by calculation.
Pitch angle correction matrix:
(given as equation images in the original publication)
Roll angle correction matrix:
(given as equation images in the original publication)
Yaw angle correction matrix:
(given as equation images in the original publication)
Height correction matrix:
(given as an equation image in the original publication)
and finally, calculating new coordinates after the four-corner coordinates of the image are corrected, solving a transformation matrix through the old coordinates and the new coordinates, and calculating a new corrected image by adopting a bilinear interpolation resampling method to finish geometric correction.
The specific mode of the step (3) is as follows:
Affine matrix parameters can be used to convert remote sensing image coordinates to geographic coordinates; they comprise 6 parameters (X_E, X_pixel, R_γ, Y_E, Y_pixel, R_γ) that describe the relationship between the image row/column numbers and the geographic coordinates, where X_E and Y_E are the geographic projection coordinates of the upper-left pixel of the image, X_pixel and Y_pixel are the ground resolutions of an image pixel in the longitude and latitude directions respectively, and R_γ is the sine of the image rotation angle. As shown in fig. 7, the image is rotated clockwise by γ degrees around the point O, the coordinates of O, (X_OE, Y_OE), are obtained by coordinate projection from the longitude and latitude recorded by the unmanned aerial vehicle, the coordinates of the upper-left image point G can then be obtained from equation (13), and the ground resolution satisfies X_pixel = -Y_pixel = ε with R_γ = 0 in this case, where w and h represent the width and height of the image, respectively.
(equation (13) is given as an equation image in the original publication)
The geographic coordinates G(X_E′, Y_E′) of any point g(row, col) in the image coordinate system can be calculated by equation (14).
(equation (14) is given as an equation image in the original publication)
The specific mode of the step (4) is as follows:
As shown in FIG. 8, assume that the adjacent images I_1 and I_2 on the same flight line have upper-left geographic coordinates G_1(X_E1, Y_E1) and G_2(X_E2, Y_E2) respectively; taking image I_1 as the reference, the offset of image I_2 relative to image I_1 can be determined by the following formula:
(formula (15) is given as an equation image in the original publication)
The geometrically corrected images I_1 and I_2 are rotated by γ about O_1 and O_2 respectively toward the course direction. The result is still a standard orthographic photograph: the visual axes are parallel and the distances from the camera coordinate system center to the ground plane are equal, so the splicing of images I_1 and I_2 can be treated as a rigid translation, and the coordinates in image I_1 of any point of image I_2 can be calculated by formula (16).
(formula (16) is given as an equation image in the original publication)
The geographic coordinates G(X_E_new, Y_E_new) of the top-left pixel in the affine matrix parameters of the registered image can be obtained from formula (17); the ground resolutions X_pixel and Y_pixel and the rotation term R_γ of the registration result remain unchanged.
(formula (17) is given as an equation image in the original publication)
The concrete mode of the step (5) is as follows:
Images I_1 and I_2 are rotated by γ degrees about O_1 and O_2 respectively toward the course direction; the polygonal overlap region between the quadrilateral I_1A I_1B I_1C I_1D formed by the four corners of image I_1 and the quadrilateral I_2A I_2B I_2C I_2D formed by the four corners of image I_2 can then be calculated using computer graphics knowledge, a bias δ is added outward from the overlap region, and the coordinates of the overlap polygon ABCD are finally obtained, as shown in fig. 8. A mask image is then created from the coordinates of the overlap polygon ABCD and used to extract from images I_1 and I_2 the sub-images containing only the overlap region, denoted I_1′ and I_2′.
The specific mode of the step (6) is as follows:
1) The specific steps of extracting the first principal component image of the multispectral image are as follows:
a) input the multispectral image matrix I of size w × h with c bands, reshape it into a matrix I_reshape with w·h rows and c columns, and normalize each column of I_reshape to obtain I′_reshape:
(formula (18) is given as an equation image in the original publication)
b) compute the covariance matrix Cov of I′_reshape:
(formula (19) is given as an equation image in the original publication)
c) compute the eigenvector Vec corresponding to the largest eigenvalue of the covariance matrix Cov;
d) project the reshaped image I_reshape onto the eigenvector Vec to extract the principal component image, and reshape it back into an image I_PCA of size w × h:
I_PCA = I_reshape · Vec (20)
2) Taking the first principal component image extracted by PCA as the input for SIFT feature point extraction, the DOG scale space is then constructed, its extreme points are detected, unstable feature points are deleted, orientations are assigned to the feature points and the feature point descriptors are generated. The specific implementation is as follows:
Constructing the DOG scale space:
An image pyramid is obtained by Gaussian blurring and down-sampling the source image; the down-sampling process can be expressed by formula (21). Next, Gaussian blur with different σ parameters is applied to each layer of the image pyramid, and the resulting blurred images form the Gaussian pyramid, expressed by formulas (22) and (23). Then, adjacent images of the Gaussian space are subtracted to obtain the DOG images, expressed by formula (24). The whole construction process is illustrated in fig. 9.
(formulas (21)-(24) are given as equation images in the original publication; in formula (21), o is an integer)
Wherein Down denotes down-sampling and G_0 = I; I(x, y) denotes the source image; L(x, y, σ) denotes the Gaussian scale space obtained by convolving the source image (* denotes the convolution operation); σ denotes the scale factor of the Gaussian convolution kernel; G(x, y, σ) denotes the Gaussian kernel function and (m, n) the size of the convolution kernel; k denotes the scale factor between adjacent scale spaces.
Detecting extreme points of the DOG scale space:
Each pixel is compared with its 8 neighbors at the same scale and the 18 neighbors at the adjacent scales above and below; only when the DOG value of the pixel is larger than, or smaller than, the DOG values of all 26 compared pixels is the point an extreme point of the DOG scale space.
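The 26-neighbor test just described can be illustrated with the following sketch.

```python
import numpy as np

# Illustrative 26-neighbor extremum test on one octave of a DOG stack.
# dog: 3-D array (scales, rows, cols); (s, y, x) indexes an interior pixel.
def is_dog_extremum(dog, s, y, x):
    cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2]   # 3 x 3 x 3 neighborhood
    v = dog[s, y, x]
    others = np.delete(cube.ravel(), 13)                # drop the center value itself
    return bool(np.all(v > others) or np.all(v < others))
```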
Deleting unstable feature points:
First, to obtain the accurate position of an extreme point, interpolation in the discrete space is performed by fitting a three-dimensional quadratic function to locate the extreme point precisely, and extreme points with low contrast are then removed; in addition, extreme points that are unstable because of a relatively large edge response are also discarded.
Assigning the directions of the feature points:
in order to make the feature points have rotational invariance, the maximum gradient direction of the feature points needs to be determined, so in the gaussian image, the norm m (x, y) and the direction θ (x, y) of each feature point are defined as follows:
(the formulas for m(x, y) and θ(x, y) are given as equation images in the original publication)
In the formulas, L(x, y) is the scale-space value at the feature point. Centered on the feature point, the gradient magnitudes and orientations are computed in a neighborhood of radius 3σ; the gradient orientation distribution of the pixels in this neighborhood is accumulated into a histogram whose 36 bins divide 0-360° equally; the orientation of the main peak of the histogram is taken as the dominant orientation of the feature point, and any bin whose peak exceeds 80% of the main peak is kept as an auxiliary orientation of the feature point.
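The orientation assignment just described can be illustrated with the following sketch; the Gaussian weighting of the gradient magnitudes is omitted for brevity.

```python
import numpy as np

# Illustrative 36-bin orientation histogram around a keypoint of the Gaussian
# image L. Gradients are accumulated with magnitude weights; the dominant
# orientation and any bin above 80% of the peak are returned (in degrees).
# The keypoint is assumed to lie far enough from the image border.
def keypoint_orientations(L, x, y, sigma):
    r = int(round(3 * sigma))
    patch = L[y - r - 1:y + r + 2, x - r - 1:x + r + 2].astype(np.float64)
    dx = patch[1:-1, 2:] - patch[1:-1, :-2]              # horizontal differences
    dy = patch[2:, 1:-1] - patch[:-2, 1:-1]              # vertical differences
    mag = np.sqrt(dx ** 2 + dy ** 2)
    ang = np.degrees(np.arctan2(dy, dx)) % 360.0
    hist, _ = np.histogram(ang, bins=36, range=(0, 360), weights=mag)
    main = hist.max()
    return [(b + 0.5) * 10.0 for b, h in enumerate(hist) if h >= 0.8 * main]
```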
Generating a characteristic point descriptor:
in order to ensure that the descriptor has rotation invariance, the image coordinate axes of the region need to be rotated to coincide with the feature point gradient direction, as shown in fig. 10, and the red arrow represents the feature point gradient direction.
Then a 16 × 16 neighbourhood window centred on the feature point is taken and the gradient of each pixel is calculated; the closer a pixel is to the feature point, the larger its weight. Following Lowe, the pixel gradients of the sub-regions are Gaussian-weighted with σ = d/2 (d being the width of the descriptor window), the weight being computed as:
w = exp( -(Δx² + Δy²) / (2σ²) )
where (Δx, Δy) is the offset of the pixel from the feature point.
The region is subdivided into 4 × 4 sub-regions, and the gradient histogram over 8 directions in each sub-region is accumulated with the weighted gradient values to form an 8-dimensional vector; together these form a 4 × 4 × 8 = 128-dimensional description vector. Finally the vector is normalized and used as the descriptor of the SIFT feature point, as shown in fig. 11.
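In practice the whole chain above (scale space, extremum detection, orientation assignment and 128-dimensional descriptors) is available off the shelf; a hedged sketch using OpenCV's SIFT implementation on the first principal component image could be:

```python
import cv2
import numpy as np

def sift_features(pca_image):
    """Detect SIFT keypoints and 128-D descriptors on the first principal component image."""
    # rescale the float principal-component image to 8-bit for the detector
    img8 = cv2.normalize(pca_image, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    sift = cv2.SIFT_create()          # requires opencv-python >= 4.4
    keypoints, descriptors = sift.detectAndCompute(img8, None)
    return keypoints, descriptors
```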
The specific mode of the step (7) is as follows:
using a KD-tree index, for each feature point of image I_1 the n feature points of image I_2 closest to it in Euclidean distance are selected to form a one-to-many hypothesis matching set, and the spatial distance is taken as the evaluation index of feature point similarity. Assume the feature description vectors of two feature points are E_1 and E_2 and their position coordinates are G_1 = (x_1, y_1) and G_2 = (x_2, y_2); then the Euclidean distance D_p and the pixel coordinate distance D_d are calculated as follows:
D_p = ‖E_1 - E_2‖ = sqrt( Σ_i (E_1i - E_2i)² )
D_d = ‖G_1 - G_2‖ = sqrt( (x_1 - x_2)² + (y_1 - y_2)² )
to keep feature points from searching for matches over the whole image, a pixel coordinate distance term between the candidate points is introduced into the spatial distance, with influence factor α. The spatial distance D_s between each feature point and a point of its hypothesis matching set is then calculated as:
D_s = α·D_d + (1 - α)·D_p (14)
the matching screening strategy adopts the ratio of the nearest neighbor space distance to the next nearest neighbor space distance, namely:
r=min_fst/min_scd (15)
where min_fst is the nearest-neighbour spatial distance and min_scd is the second nearest-neighbour spatial distance. When r < T (T being a preset ratio threshold), the two points are taken as a matched pair; this realises the coarse matching of all feature points.
The above coarse matching only applies a global constraint. For feature points with locally similar attributes, it can be assumed that most pairs in the coarse matching set are correct matches and that the false matches are randomly distributed, so the distance histogram of the matched point pairs shows a peak, and the correct matches are distributed around it. The specific steps are: compute the distance of every matched point pair; divide the range between the minimum and maximum distance values uniformly into 10 intervals with frequencies P = {p_1, …, p_10}; let max(P) be the frequency of the peak interval and i its index; the matched point pairs falling in the intervals [i - 1, i + 1] are regarded as correct matched pairs, and this set is the refined matching set being sought. Finally, the remaining mismatched point pairs are eliminated with the RANSAC algorithm, the homography transformation matrix is computed, and the image to be fused is obtained by applying this matrix to the image. The connecting lines of the feature matching pairs generated by this method are shown in fig. 12.
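A sketch of the matching procedure of step (7): KD-tree candidate search, the spatially weighted distance of formula (14), the ratio test of formula (15), the distance-histogram filtering and the final RANSAC homography. The values n = 5, α = 0.2 and T = 0.8 are illustrative assumptions, not values prescribed by the patent.

```python
import numpy as np
import cv2
from scipy.spatial import cKDTree

def match_and_estimate_homography(kp1, des1, kp2, des2, n=5, alpha=0.2, T=0.8):
    """Coarse matching with spatial distance, histogram filtering and RANSAC."""
    pts1 = np.array([k.pt for k in kp1], dtype=np.float32)
    pts2 = np.array([k.pt for k in kp2], dtype=np.float32)
    tree = cKDTree(des2)                      # KD-tree index on descriptors of I2
    matches = []
    for i, d in enumerate(des1):
        dist_p, idx = tree.query(d, k=n)      # n nearest candidates (Euclidean D_p)
        dist_d = np.linalg.norm(pts2[idx] - pts1[i], axis=1)   # pixel distance D_d
        ds = alpha * dist_d + (1 - alpha) * dist_p             # formula (14)
        order = np.argsort(ds)
        if ds[order[0]] / (ds[order[1]] + 1e-12) < T:          # ratio test, formula (15)
            matches.append((i, idx[order[0]]))
    if len(matches) < 4:
        return None, matches
    # distance-histogram filtering of locally similar matches
    d = np.array([np.linalg.norm(pts1[i] - pts2[j]) for i, j in matches])
    hist, edges = np.histogram(d, bins=10)
    peak = int(np.argmax(hist))
    lo, hi = edges[max(peak - 1, 0)], edges[min(peak + 2, 10)]
    good = [m for m, di in zip(matches, d) if lo <= di <= hi]
    if len(good) < 4:                          # fall back if filtering was too strict
        good = matches
    src = np.float32([pts1[i] for i, _ in good])
    dst = np.float32([pts2[j] for _, j in good])
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # final RANSAC rejection
    return H, good
```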
The specific mode of the step (8) is as follows:
A frame-to-frame stitching strategy is used in the first 4 rounds of stitching, and a stitched-image-to-stitched-image strategy is selected afterwards to complete the image stitching; the frame-to-frame strategy is shown in fig. 13 and the stitched-image-to-stitched-image strategy in fig. 14. For the fusion of each pair of images, the Gaussian pyramids G_1, G_2 of the two images I_1 and I_2 are built first, followed by the corresponding 4-level Laplacian pyramid images Lap_1, Lap_2. A mask image I_mask of the same size as image I_1 is created; this mask represents the fusion location. The Gaussian pyramid G_mask of the mask image is then computed, giving the fusion weight of each pixel. At each scale (resolution), the Laplacian pyramid images Lap_1 and Lap_2 are blended according to the current-scale G_mask, finally yielding the stitched Laplacian pyramid image Lap_fused. Starting from the lowest-resolution level of Lap_fused, reconstruction gives the stitching result at the highest resolution; the whole construction is shown in fig. 15.
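A minimal sketch of the Laplacian-pyramid fusion of step (8), assuming single-channel float images and a single-channel mask with values in [0, 1] that are already registered and of pyrDown-compatible size; the 4-level depth follows the text.

```python
import cv2
import numpy as np

def laplacian_blend(img1, img2, mask, levels=4):
    """Multi-resolution blending with a Gaussian-pyramid mask as fusion weights."""
    g1 = [img1.astype(np.float32)]
    g2 = [img2.astype(np.float32)]
    gm = [mask.astype(np.float32)]
    for _ in range(levels):                     # Gaussian pyramids G1, G2, G_mask
        g1.append(cv2.pyrDown(g1[-1]))
        g2.append(cv2.pyrDown(g2[-1]))
        gm.append(cv2.pyrDown(gm[-1]))
    # Laplacian pyramids: level minus the upsampled next level
    lap1 = [g1[i] - cv2.pyrUp(g1[i + 1], dstsize=g1[i].shape[1::-1]) for i in range(levels)] + [g1[levels]]
    lap2 = [g2[i] - cv2.pyrUp(g2[i + 1], dstsize=g2[i].shape[1::-1]) for i in range(levels)] + [g2[levels]]
    # blend each level with the mask weights of the same scale
    fused = [gm[i] * lap1[i] + (1 - gm[i]) * lap2[i] for i in range(levels + 1)]
    # reconstruct from the lowest resolution upwards
    out = fused[-1]
    for i in range(levels - 1, -1, -1):
        out = cv2.pyrUp(out, dstsize=fused[i].shape[1::-1]) + fused[i]
    return out
```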
An embodiment of a device to which the method of the invention is applied:
a computer apparatus, comprising:
one or more processors;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the water-area-oriented unmanned aerial vehicle remote sensing multispectral image stitching method described above.
An embodiment of a computer medium to which the method of the invention is applied:
a computer-readable storage medium on which a computer program is stored, the program, when executed by a processor, implementing the water-area-oriented unmanned aerial vehicle remote sensing multispectral image stitching method described above.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solutions of the present invention and not for limiting the same, and although the present invention is described in detail with reference to the above embodiments, those of ordinary skill in the art should understand that: modifications and equivalents may be made to the embodiments of the invention without departing from the spirit and scope of the invention, which is to be covered by the claims.

Claims (10)

1. An unmanned aerial vehicle remote sensing multispectral image splicing method facing a water area is characterized in that,
the method comprises the following steps:
the method comprises the steps of firstly, receiving image data and synthesizing a multiband image;
simultaneously acquiring longitude and latitude POS information corresponding to the multiband image, and eliminating longitude and latitude height data abnormal values;
secondly, carrying out geometric correction preprocessing on the image according to the corresponding longitude and latitude POS information in the first step to obtain a corrected image;
thirdly, calculating geographic coordinate projection information based on longitude and latitude and yaw angle information corresponding to the image, and adding the geographic coordinate projection information to the corrected image in the second step;
fourthly, selecting an unmanned aerial vehicle image sequence according to a splicing strategy, and carrying out coarse registration on the images according to the geographical coordinate projection information in the third step to obtain a translation transformation matrix and two registration images;
fifthly, calculating the position of the overlapped area by using computer graphics knowledge according to the registered image in the fourth step, and matting the image of the overlapped area to obtain an image to be spliced;
sixthly, extracting a first principal component image from the images to be spliced in the fifth step by principal component analysis (PCA), and constructing a scale-invariant feature transform (SIFT) model for the first principal component image to obtain its feature points;
seventhly, judging the number of the feature points in the sixth step, and if the number of the feature points is less than N, taking the translation transformation matrix in the fourth step as the homography transformation matrix for the subsequent steps;
otherwise, establishing a one-to-many hypothesis matching set by adopting a query index KD tree mode, and screening matching points meeting the requirements by taking the spatial distance as the similarity evaluation index of the feature points in the sixth step; eliminating mismatching point pairs by using the distance histogram to restrict the global similarity attribute, and calculating a homography transformation matrix;
eighthly, constructing a multi-resolution fusion model based on the Laplacian pyramid for the two registration images in the fourth step, and fusing the two registration images in the fourth step by combining the homography transformation matrix in the seventh step to realize the splicing of the remote sensing multispectral images of the unmanned aerial vehicle;
and then returning to the fourth step until the splicing of all the sequence images is completed and then exiting.
2. The water-area-oriented remote sensing multispectral image stitching method for unmanned aerial vehicles according to claim 1,
in the first step, the method for synthesizing the multiband image comprises the following steps:
receiving multispectral images remotely sensed by the unmanned aerial vehicle, and synthesizing single-band images at the same position and shot at the same time to obtain multiband images;
removing longitude and latitude height data abnormal values during take-off and return voyage according to longitude and latitude POS information corresponding to the acquired single-waveband image;
the longitude and latitude POS information comprises longitude lon, latitude lat, navigation height H, course angle gamma, pitch angle alpha, roll angle beta, ground resolution epsilon and a geographical coordinate system and a projection coordinate system of the measuring system;
the calculation formula of the ground resolution epsilon is as follows:
ε = pixelsize · H / f (1).
3. the method for splicing remote sensing multispectral images of unmanned aerial vehicles facing water areas as claimed in claim 1,
in the second step, the method for geometry correction includes the following steps:
step 21, establishing a graphLike coordinate system O-xy, camera coordinate system S-X s Y s Z s Coordinate system P-X of body p Y p Z p Geographic coordinate system E-X E Y E Z E
Step 22, selecting the image coordinate system in the step 21 to perform image point G (X, y) under the image coordinate system and corresponding object point G (X) under the geographic coordinate system according to the relative position relation between the camera and the body and the flight attitude of the unmanned aerial vehicle when the image main point is taken as the origin, the flight direction is taken as the positive direction of the y axis, and the direction perpendicular to the flight direction is taken as the positive direction of the X axis E ,Y E ,Z E ) Converting the change relation of;
the conversion is calculated as follows:
[X_E, Y_E, Z_E]_E^T = λ·R_EP·R_PS·R_SO·[x, y]_O^T = λ·R_EP·R_PS·[x, y, -f]_O^T (2)
wherein f is a focal length and is used for representing the translation amount of the image coordinate system and the camera coordinate system in the Z-axis direction;
step 23, because the optical centre of the camera lens coincides with the mass centre of the unmanned aerial vehicle, and the camera coordinate system, after translation, completely coincides with the body coordinate system, R_PS is the identity matrix I;
according to the translation amount in step 22, a camera imaging model is constructed, calculated as follows:
[X_E, Y_E, Z_E]_E^T = λ·R_EP·[x, y, -f]_O^T (3)
wherein R_EP = R(H)·R(γ)·R(β)·R(α), and λ is a proportionality coefficient, i.e. λ = H/f;
under the ortho condition, an image point g(x, y) in the remote sensing image and the corrected image point g′(x′, y′) satisfy the following relation:
(x′, y′, -f)_O^T = R_EP·(x, y, -f)_O^T (4)
wherein R(α), R(β), R(γ), R(H) are respectively the correction rotation matrix based on the pitch angle, the correction rotation matrix based on the roll angle, the correction rotation matrix based on the yaw angle and the correction matrix based on the altitude;
step 24, respectively establishing a pitch angle correction matrix, a roll angle correction matrix, a yaw angle correction matrix and a height correction matrix for R (alpha), R (beta), R (gamma) and R (H) in the step 23 to obtain a correction mathematical model;
the pitch angle correction matrix R(α), the roll angle correction matrix R(β), the yaw angle correction matrix R(γ) and the height correction matrix R(H) are given as equation images in the original publication and are not reproduced in this text;
wherein θ represents the angle between the line connecting any pixel point to the focal point and the central visual axis;
step 25, calculating the new coordinates of the four image corners according to the correction model of step 24, solving the transformation matrix from the old and new coordinates, computing the corrected image by bilinear interpolation resampling, thereby finishing the geometric correction of the image, and updating the geographic coordinate information of the image.
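The following sketch relates to step 25 of claim 3 above: it applies an already computed attitude correction matrix R_EP to the four image corners, solves the perspective transform from the old and new corner coordinates and resamples bilinearly. Because the individual correction matrices are given only as equation images, the construction of R_EP itself is not shown, and the corner projection used here is an assumption consistent with formula (4).

```python
import cv2
import numpy as np

def geometric_correction(img, R_EP, f):
    """Warp an image so its corners follow the attitude-corrected geometry (claim 3, step 25)."""
    h, w = img.shape[:2]
    corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    new_corners = []
    for x, y in corners:
        # assumption based on formula (4): rotate (x, y, -f) about the principal point,
        # then project back onto the image plane z = -f
        v = R_EP @ np.array([x - w / 2, y - h / 2, -f], dtype=np.float64)
        new_corners.append([v[0] * (-f / v[2]) + w / 2, v[1] * (-f / v[2]) + h / 2])
    new_corners = np.float32(new_corners)
    new_corners -= new_corners.min(axis=0)        # shift to non-negative coordinates
    M = cv2.getPerspectiveTransform(corners, new_corners)
    size = (int(np.ceil(new_corners[:, 0].max())), int(np.ceil(new_corners[:, 1].max())))
    return cv2.warpPerspective(img, M, size, flags=cv2.INTER_LINEAR)  # bilinear resampling
```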
4. The water-area-oriented remote sensing multispectral image stitching method for the unmanned aerial vehicle is characterized in that in the third step, the calculation method of the geographic coordinate projection information is as follows:
the remote sensing image coordinates and the geographic coordinates are converted using affine matrix parameters; the affine matrix comprises 6 parameters, namely X_E, X_pixel, R_γ, Y_E, Y_pixel, R_γ, describing the relation between the image row/column numbers and the geographic coordinates,
wherein X_E, Y_E denote the geographic projection coordinates of the top-left pixel of the image, X_pixel, Y_pixel denote the ground resolution of a pixel in the longitude and latitude directions respectively, and R_γ denotes the sine of the image rotation angle;
the image is rotated clockwise by γ degrees around its centre point O, and the coordinates (X_OE, Y_OE) of O are obtained by coordinate projection from the longitude and latitude recorded by the unmanned aerial vehicle; the coordinates of the top-left image point G of the image are then calculated by the following formula (given as an equation image in the original and not reproduced here);
wherein w and h represent the width and height of the image size, respectively;
when the ground resolution satisfies X_pixel = -Y_pixel = ε, then R_γ = 0;
the geographic coordinates G(X_E′, Y_E′) of any point g(row, col) in the image coordinate system are calculated by the following formula (given as an equation image in the original and not reproduced here).
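A small sketch of the pixel-to-geographic mapping of claim 4 above, using the standard six-parameter affine (GDAL-style) geotransform; since the patent's exact formulas are given as equation images, the form below is an assumption based only on the parameter definitions in the claim.

```python
def pixel_to_geo(row, col, X_E, Y_E, X_pixel, Y_pixel, R_gamma=0.0):
    """Map image (row, col) to geographic coordinates with a 6-parameter affine transform.

    X_E, Y_E  : geographic projection coordinates of the top-left pixel
    X_pixel   : ground resolution along longitude (typically +epsilon)
    Y_pixel   : ground resolution along latitude  (typically -epsilon)
    R_gamma   : rotation term (sine of the image rotation angle); 0 for a north-up image
    """
    X = X_E + col * X_pixel + row * R_gamma
    Y = Y_E + col * R_gamma + row * Y_pixel
    return X, Y

# example usage: 0.05 m ground resolution, north-up image
# pixel_to_geo(100, 200, 500000.0, 3300000.0, 0.05, -0.05)
```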
5. the method for splicing remote sensing multispectral images of unmanned aerial vehicles facing water areas as claimed in claim 1,
in the fourth step, the process of coarse registration is as follows:
using a frame-to-frame stitching strategy in the first rounds of stitching, and subsequently selecting a stitched-image-to-stitched-image strategy to complete the stitching of the images;
acquiring adjacent images I_1 and I_2 on the same flight line, whose top-left geographic coordinates are G_1(X_E1, Y_E1) and G_2(X_E2, Y_E2) respectively; taking image I_1 as the reference, finding the offset of image I_2 relative to image I_1;
the offset is calculated by the following formula (given as an equation image in the original and not reproduced here);
according to the offset, the geometrically corrected images I_1 and I_2 are rotated by γ around O_1 and O_2 respectively to the course direction, and the image coordinates in image I_1 of any point of image I_2 are calculated by the following formula (equation image in the original),
where H_rigid denotes the translation transformation matrix (a code sketch of this coarse registration follows this claim);
the geographic coordinates G(X_E_new, Y_E_new) of the top-left pixel in the affine matrix parameters of the registered image are calculated by the following formula (equation image in the original);
the ground resolutions X_pixel, Y_pixel and R_γ of the registered images all remain unchanged.
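The coarse registration of claim 5 above essentially converts the geographic offset between the two top-left corners into a pixel translation; a hedged sketch follows, in which the sign conventions (x growing with longitude, y growing downwards) are assumptions, since the patent's offset formula is given as an equation image.

```python
import numpy as np

def coarse_translation(G1, G2, epsilon):
    """Translation matrix H_rigid aligning image I2 to image I1 from their
    top-left geographic coordinates G1 = (X_E1, Y_E1), G2 = (X_E2, Y_E2).

    epsilon: ground resolution in metres per pixel.
    """
    dx = (G2[0] - G1[0]) / epsilon      # longitude offset -> pixel columns
    dy = (G1[1] - G2[1]) / epsilon      # latitude offset  -> pixel rows (y grows downwards)
    H_rigid = np.array([[1.0, 0.0, dx],
                        [0.0, 1.0, dy],
                        [0.0, 0.0, 1.0]])
    return H_rigid
```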
6. The method for splicing remote sensing multispectral images of unmanned aerial vehicles facing water areas as claimed in claim 1,
in the fifth step, the images of the overlapped areas are scratched, and the method comprises the following steps:
step 51, rotating images I_1 and I_2 by γ degrees around O_1 and O_2 respectively to the flight-line direction;
step 52, after the rotation in step 51 is completed, calculating, using computational geometry, the polygonal overlap region of the quadrilateral I_1A I_1B I_1C I_1D formed by the four corners of image I_1 and the quadrilateral I_2A I_2B I_2C I_2D formed by the four corners of image I_2;
step 53, adding a certain outward bias δ to the polygonal overlap region of step 52 to finally obtain the coordinates of the polygonal overlap region ABCD;
step 54, creating a mask image from the coordinates of the polygonal overlap region ABCD in step 53, and using it to extract from images I_1 and I_2 the images containing only the overlap region, denoted I_1′ and I_2′.
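A sketch of the overlap extraction of claim 6 above, using Shapely for the polygon intersection and OpenCV for the mask; interpreting the outward bias δ as a polygon buffer is an assumption of this sketch.

```python
import cv2
import numpy as np
from shapely.geometry import Polygon

def overlap_mask(corners1, corners2, shape, delta=10.0):
    """corners1/2: 4x2 arrays of image-corner coordinates (after rotating to the flight line).
    Returns a uint8 mask of the (buffered) overlap polygon ABCD for an image of `shape`."""
    overlap = Polygon(corners1).intersection(Polygon(corners2))
    if overlap.is_empty:
        return np.zeros(shape[:2], np.uint8)
    overlap = overlap.buffer(delta)                        # outward bias delta
    pts = np.array(overlap.exterior.coords, np.int32)      # assumes a simple polygon result
    mask = np.zeros(shape[:2], np.uint8)
    cv2.fillPoly(mask, [pts], 255)
    return mask

# images containing only the overlap region, e.g.:
# I1_prime = cv2.bitwise_and(I1, I1, mask=overlap_mask(c1, c2, I1.shape))
```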
7. The method for splicing remote sensing multispectral images of unmanned aerial vehicles facing water areas as claimed in claim 1,
in the sixth step, the method for constructing the scale-invariant feature transform SIFT model of the first principal component image is as follows:
step 61, inputting the multispectral image matrix I of size w × h with c wave bands, reshaping I into a matrix I_reshape with w·h rows and c columns, and normalizing all rows of I_reshape to obtain I′_reshape;
I′_reshape is calculated by the following formula (given as an equation image in the original and not reproduced here);
step 62, calculating the covariance matrix Cov from I′_reshape of step 61 by the following formula (equation image in the original);
step 63, calculating the eigenvector Vec corresponding to the largest eigenvalue of the covariance matrix Cov of step 62;
step 64, projecting the reshaped image I_reshape onto the eigenvector Vec of step 63 to extract the principal component image and reshaping it into an image I_PCA of size w × h (a code sketch of steps 61 to 64 follows this claim);
image I_PCA is calculated as follows:
I_PCA = I_reshape · Vec (20)
step 65, taking the principal component image in the step 64 as input for extracting Scale Invariant Feature Transform (SIFT) feature points, then constructing a DOG scale space, detecting extreme points of the DOG scale space, deleting unstable feature points, assigning values to the directions of the feature points and generating feature point descriptors;
the construction of the DOG scale space comprises the following steps:
the image pyramid is obtained by performing Gaussian blur and downsampling on the source image, the downsampling being calculated as:
G_o = Down(G_{o-1}), where o is a positive integer
secondly, each layer of the image pyramid is blurred with Gaussian kernels of different σ, and the resulting blurred images form the Gaussian pyramid, calculated as:
L(x, y, σ) = G(x, y, σ) * I(x, y)
G(x, y, σ) = 1/(2πσ²) · exp(-((x - m/2)² + (y - n/2)²)/(2σ²))
then the images of two adjacent Gaussian scales are subtracted to obtain the DOG image, calculated as:
D(x, y, σ) = L(x, y, kσ) - L(x, y, σ)
where Down denotes downsampling and G_0 = I; I(x, y) denotes the source image; L(x, y, σ) denotes the Gaussian scale space obtained by convolving the source image; * denotes the convolution operation; σ denotes the scale factor of the Gaussian convolution kernel; G(x, y, σ) denotes the Gaussian kernel function, with (m, n) the size of the convolution kernel; k denotes the scale factor between adjacent scale spaces, taken as k = 2^(1/S) with S the number of layers per octave;
Detecting extreme points of a DOG scale space, comprising the following contents:
comparing each pixel point with its 8 neighbouring pixels at the same scale and the 18 pixels at the two adjacent scales above and below, a point being an extreme point of the DOG scale space only when its DOG value is larger than all, or smaller than all, of the 26 compared values;
deleting unstable feature points, including the following:
firstly, to obtain the accurate position of an extreme point, interpolation must be performed in the discrete space, and a three-dimensional quadratic function is fitted to locate the exact extreme position; finally, extreme points with low contrast are removed;
meanwhile, discarding extreme points with large edge response;
assigning the direction of the feature points, which comprises the following steps:
the gradient magnitude m(x, y) and direction θ(x, y) of each feature point are calculated as:
m(x, y) = sqrt( (L(x+1, y) - L(x-1, y))² + (L(x, y+1) - L(x, y-1))² )
θ(x, y) = arctan( (L(x, y+1) - L(x, y-1)) / (L(x+1, y) - L(x-1, y)) )
where L(x, y) is the scale-space value at the feature point; taking the feature point as the centre, the gradient magnitudes and directions in a neighbourhood of radius 3σ are computed, and their direction distribution is collected in a histogram dividing 0-360° into 36 bins; the direction of the main peak of the histogram is taken as the main direction of the feature point, and if the peak of another bin exceeds 80% of the main peak, that bin's direction is kept as an auxiliary direction of the feature point;
generating a feature point descriptor, comprising the following:
rotating the image coordinate axis of the region to coincide with the gradient direction of the characteristic point;
then a 16 × 16 neighbourhood window centred on the feature point is taken and the gradient of each pixel is calculated, pixels closer to the feature point having larger weights; the pixel gradients of the sub-regions are Gaussian-weighted with σ = d/2, the weight being calculated as:
w = exp( -(Δx² + Δy²) / (2σ²) )
where (Δx, Δy) is the offset of the pixel from the feature point;
the region is subdivided into 4 × 4 sub-regions, the gradient histogram over 8 directions in each sub-region is accumulated with the weighted gradient values to form an 8-dimensional vector, these together forming a 4 × 4 × 8 = 128-dimensional description vector; finally the vector is normalized and used as the descriptor of the scale-invariant feature transform SIFT feature point.
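The code sketch referenced in step 64 of claim 7 above, illustrating steps 61 to 64 (first principal component extraction). The normalization and covariance formulas are given as equation images in the original, so a standard zero-mean, unit-variance normalization and sample covariance are assumed here.

```python
import numpy as np

def first_principal_component(multiband):
    """multiband: array of shape (h, w, c). Returns the first principal component image (h, w)."""
    h, w, c = multiband.shape
    X = multiband.reshape(h * w, c).astype(np.float64)    # step 61: w*h rows, c columns
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)    # assumed normalization
    cov = np.cov(X, rowvar=False)                         # step 62: covariance matrix Cov
    eigvals, eigvecs = np.linalg.eigh(cov)                # step 63: eigen-decomposition
    vec = eigvecs[:, np.argmax(eigvals)]                  # eigenvector of the largest eigenvalue
    pca_image = (X @ vec).reshape(h, w)                   # step 64: I_PCA = I_reshape . Vec (20)
    return pca_image
```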
8. The method for splicing remote sensing multispectral images of unmanned aerial vehicles facing water areas as claimed in claim 1,
in the seventh step, the screening method of the matching points is as follows:
using a KD-tree query index, for each feature point of image I_1, the n feature points of image I_2 closest to it in spatial distance are selected to form a one-to-many hypothesis matching set, the spatial distance being taken as the evaluation index of feature point similarity; the spatial distance is a weighted combination of the Euclidean distance D_p and the pixel coordinate distance D_d, calculated as follows:
D_p = ‖E_1 - E_2‖ = sqrt( Σ_i (E_1i - E_2i)² )
D_d = ‖G_1 - G_2‖
wherein E_1 and E_2 are the feature descriptor vectors of the two feature points;
G_1 and G_2 are the pixel position coordinates of the two feature points;
calculating the spatial distance D_s between each feature point and a point of its hypothesis matching set as follows:
D_s = α·D_d + (1 - α)·D_p (14)
wherein alpha is a pixel coordinate distance term influence factor of the matching point;
the matching screening strategy adopts the ratio of the nearest neighbor space distance to the next nearest neighbor space distance, and the calculation formula is as follows:
r=min_fst/min_scd (15)
wherein min_fst is the nearest-neighbour spatial distance and min_scd is the second nearest-neighbour spatial distance; when r < T, T being a preset ratio threshold, the two points are a matched pair, thereby realising the coarse matching of all feature points;
for the feature points with local similar attributes, the screening process of the matching points is as follows:
calculating the distance of every matched point pair, dividing the range between the minimum and maximum distance values uniformly into 10 intervals with frequencies P = {p_1, …, p_10}, letting max(P) be the frequency of the peak interval and i its index; the matched point pairs falling in the intervals [i - 1, i + 1] are the correct matched pairs, and this set is the refined matching set being sought;
the remaining mismatched feature point pairs are then eliminated with the random sample consensus RANSAC algorithm, the homography transformation matrix is computed, and the image to be fused is obtained by applying this matrix to the image.
9. The method for splicing remote sensing multispectral images of unmanned aerial vehicles facing water areas as claimed in claim 1,
in the eighth step, the construction method of the multi-resolution fusion model based on the Laplacian pyramid is as follows:
for the fusion of each pair of images, the Gaussian pyramids G_1, G_2 of the two images I_1 and I_2 are built first, and then the corresponding 4-level Laplacian pyramid images Lap_1, Lap_2;
a mask image I_mask of the same size as image I_1 is created, this mask representing the fusion location, and its Gaussian pyramid G_mask is computed, giving the fusion weight of each pixel;
at each scale, i.e. resolution, the Laplacian pyramid images Lap_1 and Lap_2 of the two images are blended according to the current-scale G_mask, finally yielding the stitched Laplacian pyramid image Lap_fused;
starting from the lowest-resolution level of Lap_fused, the stitching result at the highest resolution is obtained by reconstruction.
10. A computer device, comprising:
one or more processors;
storage means for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the water-area-oriented unmanned aerial vehicle remote sensing multispectral image stitching method according to any one of claims 1-9.
CN202210640221.XA 2022-06-08 2022-06-08 Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area Pending CN114936971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210640221.XA CN114936971A (en) 2022-06-08 2022-06-08 Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210640221.XA CN114936971A (en) 2022-06-08 2022-06-08 Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area

Publications (1)

Publication Number Publication Date
CN114936971A true CN114936971A (en) 2022-08-23

Family

ID=82867391

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210640221.XA Pending CN114936971A (en) 2022-06-08 2022-06-08 Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area

Country Status (1)

Country Link
CN (1) CN114936971A (en)

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115393196B (en) * 2022-10-25 2023-03-24 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115393196A (en) * 2022-10-25 2022-11-25 之江实验室 Infrared multi-sequence image seamless splicing method for unmanned aerial vehicle area array swinging
CN115861136B (en) * 2022-11-17 2023-09-19 中国科学院空天信息创新研究院 Image resolution reconstruction method based on aerial remote sensing system
CN115861136A (en) * 2022-11-17 2023-03-28 中国科学院空天信息创新研究院 Image resolution reconstruction method based on aerial remote sensing system
CN115909113A (en) * 2023-01-09 2023-04-04 广东博幻生态科技有限公司 Method for surveying forestry pests through remote sensing monitoring of unmanned aerial vehicle
CN116228539A (en) * 2023-03-10 2023-06-06 贵州师范大学 Unmanned aerial vehicle remote sensing image stitching method
CN116359836A (en) * 2023-05-31 2023-06-30 成都金支点科技有限公司 Unmanned aerial vehicle target tracking method and system based on super-resolution direction finding
CN116359836B (en) * 2023-05-31 2023-08-15 成都金支点科技有限公司 Unmanned aerial vehicle target tracking method and system based on super-resolution direction finding
CN117036666A (en) * 2023-06-14 2023-11-10 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN117036666B (en) * 2023-06-14 2024-05-07 北京自动化控制设备研究所 Unmanned aerial vehicle low-altitude positioning method based on inter-frame image stitching
CN116543309B (en) * 2023-06-28 2023-10-27 华南农业大学 Crop abnormal information acquisition method, system, electronic equipment and medium
CN116543309A (en) * 2023-06-28 2023-08-04 华南农业大学 Crop abnormal information acquisition method, system, electronic equipment and medium
CN117291980A (en) * 2023-10-09 2023-12-26 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning
CN117291980B (en) * 2023-10-09 2024-03-15 宁波博登智能科技有限公司 Single unmanned aerial vehicle image pixel positioning method based on deep learning
CN117079166A (en) * 2023-10-12 2023-11-17 江苏智绘空天技术研究院有限公司 Edge extraction method based on high spatial resolution remote sensing image
CN117079166B (en) * 2023-10-12 2024-02-02 江苏智绘空天技术研究院有限公司 Edge extraction method based on high spatial resolution remote sensing image
CN117132913A (en) * 2023-10-26 2023-11-28 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching
CN117132913B (en) * 2023-10-26 2024-01-26 山东科技大学 Ground surface horizontal displacement calculation method based on unmanned aerial vehicle remote sensing and feature recognition matching
CN117173580A (en) * 2023-11-03 2023-12-05 芯视界(北京)科技有限公司 Water quality parameter acquisition method and device, image processing method and medium
CN117173580B (en) * 2023-11-03 2024-01-30 芯视界(北京)科技有限公司 Water quality parameter acquisition method and device, image processing method and medium
CN117876222A (en) * 2024-03-12 2024-04-12 昆明理工大学 Unmanned aerial vehicle image stitching method under weak texture lake water surface scene

Similar Documents

Publication Publication Date Title
CN114936971A (en) Unmanned aerial vehicle remote sensing multispectral image splicing method and system for water area
CN108648240B (en) Non-overlapping view field camera attitude calibration method based on point cloud feature map registration
CN111583110B (en) Splicing method of aerial images
CN108765298A (en) Unmanned plane image split-joint method based on three-dimensional reconstruction and system
CN103337052B (en) Automatic geometric correcting method towards wide cut remote sensing image
Zhang et al. Vision-based pose estimation for textureless space objects by contour points matching
CN111507901B (en) Aerial image splicing and positioning method based on aerial GPS and scale invariant constraint
CN107578376B (en) Image splicing method based on feature point clustering four-way division and local transformation matrix
CN108759788B (en) Unmanned aerial vehicle image positioning and attitude determining method and unmanned aerial vehicle
CN113313659B (en) High-precision image stitching method under multi-machine cooperative constraint
CN108765489A (en) A kind of pose computational methods, system, medium and equipment based on combination target
CN112183171A (en) Method and device for establishing beacon map based on visual beacon
CN109871739B (en) Automatic target detection and space positioning method for mobile station based on YOLO-SIOCTL
CN115187798A (en) Multi-unmanned aerial vehicle high-precision matching positioning method
CN110084743B (en) Image splicing and positioning method based on multi-flight-zone initial flight path constraint
CN113624231A (en) Inertial vision integrated navigation positioning method based on heterogeneous image matching and aircraft
JP2023530449A (en) Systems and methods for air and ground alignment
CN113963067B (en) Calibration method for calibrating large-view-field visual sensor by using small target
Jhan et al. A generalized tool for accurate and efficient image registration of UAV multi-lens multispectral cameras by N-SURF matching
CN112750075A (en) Low-altitude remote sensing image splicing method and device
CN114897676A (en) Unmanned aerial vehicle remote sensing multispectral image splicing method, device and medium
Zhao et al. Digital Elevation Model‐Assisted Aerial Triangulation Method On An Unmanned Aerial Vehicle Sweeping Camera System
CN117218201A (en) Unmanned aerial vehicle image positioning precision improving method and system under GNSS refusing condition
CN117073669A (en) Aircraft positioning method
Hu et al. Planetary3D: A photogrammetric tool for 3D topographic mapping of planetary bodies

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination