CN112102379B - Unmanned aerial vehicle multispectral image registration method - Google Patents


Info

Publication number
CN112102379B
CN112102379B (application CN202010884720.4A)
Authority
CN
China
Prior art keywords
image
multispectral
registered
extraction unit
feature extraction
Prior art date
Legal status
Active
Application number
CN202010884720.4A
Other languages
Chinese (zh)
Other versions
CN112102379A (en)
Inventor
周纪
孟令宣
王子卫
孙浩然
Current Assignee
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN202010884720.4A priority Critical patent/CN112102379B/en
Publication of CN112102379A publication Critical patent/CN112102379A/en
Application granted granted Critical
Publication of CN112102379B publication Critical patent/CN112102379B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10036 Multispectral image; Hyperspectral image


Abstract

The invention discloses an unmanned aerial vehicle multispectral image registration method, belonging to the technical field of image processing. The registration processing of the invention is based on deep learning and comprises the establishment and training of a multispectral image registration network model and image registration based on the trained model. The method can quickly and robustly register images from different lenses of the same multispectral camera, provides support for unmanned aerial vehicle multispectral remote sensing applications, and can be used for processing tasks such as land-use classification and vegetation pest detection.

Description

Unmanned aerial vehicle multispectral image registration method
Technical Field
The invention relates to the technical field of image processing, in particular to a method for registering multispectral images of an unmanned aerial vehicle.
Background
Unmanned aerial vehicle multispectral image registration is an important problem in the fields of computer vision and remote sensing, and an important early step of tasks such as change detection, semantic segmentation and target detection.
Because the different bands of a multispectral image respond differently to ground targets, the pixel values of the bands have no direct correspondence and their texture structure differs greatly. Conventional feature extraction and matching algorithms such as SIFT (Scale-Invariant Feature Transform), SURF (Speeded-Up Robust Features) and ORB (Oriented FAST and Rotated BRIEF) struggle to find correct registration points in multispectral images, so the registration success rate is low. Meanwhile, existing deep-learning image registration methods generally assume that the pixel values of the two images are similar, which makes them difficult to apply to registering multispectral images.
Disclosure of Invention
The invention aims to: addressing the technical problems of poor registration accuracy and low success rate in the registration of existing unmanned aerial vehicle multispectral images, a method for registering multispectral images of an unmanned aerial vehicle is provided.
The invention discloses a method for registering multispectral images of an unmanned aerial vehicle, which comprises the following steps:
step 1: setting and training a multispectral image registration network model:
the multispectral image registration network model comprises a characteristic extraction unit and a deviation regression unit;
the characteristic extraction unit extracts the image characteristics of the input image block based on a convolutional neural network;
the input image block of the feature extraction unit is: extracting rectangular image blocks from the reference image and the current image to be registered at the same image position to obtain a reference image block and an image block to be registered, and recording the extraction positions of the image blocks; performing image stacking processing on the reference image block and the image block to be registered, and taking the stacked image block as an input image block of a feature extraction network; the reference image and the image to be registered are spectral images of different wave bands acquired at the same time; the extraction positions of the image blocks comprise four vertexes of the image blocks;
the deviation regression unit comprises a pooling layer and two full-connection layers, wherein the input of the pooling layer is the image characteristics output by the characteristic extraction unit, and the adopted pooling mode is self-adaptive average pooling; the output of the pooling layer passes through two full-connection layers and is used for outputting coordinate deviations of four vertexes of the image block to be registered and the reference image block;
the training process of the multispectral image registration network model comprises the following steps:
for images I to be registered in training data set t And a reference picture I c Extracting the input image block of the feature extraction unit, sending the input image block into the feature extraction unit, and recording the extraction position as C 4pt
Inputting the image features output by the feature extraction unit into a deviation regression unit, and recording the coordinate deviation output by the deviation regression unit as O 4pt (ii) a Based on C 4pt And O 4pt Calculating an image I to be registered t And a reference picture I c Homography (Homography) matrix H between; adjusting network parameters of a spatial transformation network for homography transformation according to the homography matrix H;
then the current image I to be registered is processed t Input spatial transformation network (i.e. based on current homography matrices H to I t Performing homographic transformation), obtaining a transformed image I based on the output of the spatial transformation network t_w
Computing an image I t_w And a reference picture I c Pyramid structure similarity loss is generated between the multispectral images, and a loss function of the multispectral image registration network model is set based on the pyramid structure similarity loss; when the value of the loss function meets a preset convergence condition, storing the trained multispectral image registration network model;
the loss function of the multispectral image registration network model is as follows: for image I t_w And I c Respectively carrying out K-level down-sampling treatment, gradually increasing or decreasing the down-sampling multiplying power, and calculating the image I after the K-level down-sampling t_w And I c Image similarity SSIM between k (ii) a And 1-SSIM k As a loss function value of the kth stage; superposing the K-level loss function values to obtain a current loss function value of the multispectral image registration network model;
Step 2: carrying out spectral image registration processing based on the trained multispectral image registration network model:
reference image I 'collected at the same time' c And image I 'to be registered' t Extracting an input image block of the feature extraction unit, sending the input image block to the feature extraction unit, and recording the position coordinate information of the image block as C' 4pt
The image features output by the feature extraction unit are input into a deviation regression unit, and the coordinate deviation output by the deviation regression unit is recorded as O' 4pt (ii) a Based on C' 4pt And O' 4pt Calculating to-be-registered image I' t And reference picture I' c A homography matrix H'; and treating the registered image I 'based on the homography matrix H' t And performing homography transformation to obtain a registration result. I.e. the transformed image I' t_w And a reference picture I c I.e. the registered image pair.
In summary, with the above technical scheme the invention has the following beneficial effects: it provides an effective unmanned aerial vehicle multispectral image registration method that can quickly and robustly register images from different lenses of the same multispectral camera, providing support for unmanned aerial vehicle multispectral remote sensing applications (such as land-use classification and vegetation pest detection).
Drawings
FIG. 1 is a schematic view of a lens of a multi-spectral camera;
FIG. 2 is a schematic view of an image captured by a multi-spectral camera;
fig. 3 is a schematic diagram of a network structure of a multispectral image registration network model;
fig. 4 is a schematic diagram of loss calculation of the multispectral image registration network model.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention will be described in further detail with reference to the following embodiments and accompanying drawings.
Currently, an unmanned aerial vehicle multispectral camera generally includes 4 or 5 lenses, each responsible for acquiring the image of a different band. The multispectral camera used in this embodiment, shown in fig. 1, includes 5 lenses; images captured by the lenses at different acquisition times are shown in fig. 2, where the rows containing (a), (b) and (c) indicate different acquisition times and the columns indicate different lenses. In this embodiment, the image captured by lens 5 is used as the reference image, and the images of the non-reference lenses are registered to the image of the reference lens.
The registration processing of the invention is based on deep learning, and comprises the establishment and training of a multispectral image registration network model and the image registration processing based on the trained multispectral image registration network model.
The network framework of the multispectral image registration network model is shown in fig. 3, and includes: the device comprises a feature extraction unit, a deviation regression unit, an image transformation unit and a loss calculation unit.
The feature extraction unit extracts image features of the input image block based on a convolutional neural network;
cutting original images (marked as Image1 and Image2 (reference images)) to be registered at the same position, marking the cut images as Image blocks Patch 1 and Patch 2, and marking the cut coordinates as C 4pt (i.e., the positional coordinates of the four vertices of the rectangular image block). And stacking the cut image blocks Patch 1 and Patch 2, and then sending the image blocks into a feature extraction unit to obtain image features (feature maps) through a convolutional neural network.
The feature extraction unit comprises convolution, batch normalization, max pooling, a non-linearity (ReLU) and residual connections. Convolution extracts spatial features, and the convolution kernel size determines the receptive field of the extracted features; batch normalization and residual connections reduce the difficulty of model optimization and make the model converge faster; the max pooling stride can be set to 2, downsampling the feature map to 1/2 of its original size, which enlarges the receptive field of the features and reduces the model's computation; the non-linearity maps features non-linearly, giving the network a stronger and more complex descriptive capability.
In this specific embodiment, the network adopted by the feature extraction unit is ResNet-34, which comprises five convolution blocks (Conv1 to Conv5) and five max pooling layers pool1 to pool5, with a max pooling layer connected after each convolution block. The convolution block Conv1 comprises 1 convolutional layer; Conv2 comprises 3 convolutional layers (Conv2_1 to Conv2_3); Conv3 comprises 4 convolutional layers (Conv3_1 to Conv3_4); Conv4 comprises 6 convolutional layers (Conv4_1 to Conv4_6); Conv5 comprises 3 convolutional layers (Conv5_1 to Conv5_3). Each convolutional layer comprises convolution, normalization and an activation function, i.e., the Conv + BN + ReLU structure. Denoting the width and height of the image block as w and h, the input size of the feature extraction unit is 2 × h × w, and the output is a 512 × (h/8) × (w/8) feature map.
Deviation regression unit: the 512 × (h/8) × (w/8) feature map extracted by the feature extraction unit is first pooled to 512 × 1 × 1 by an adaptive average pooling layer, and then 2 fully connected layers regress 8 values. These 8 values are the deviations between the coordinates of the 4 points in Image1 (whose coordinates are C_4pt) and the coordinates of the corresponding points in Image2 (each deviation has an x component and a y component, so the 4-point deviation comprises 8 values in total). These 8 values are denoted O_4pt.
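The shape bookkeeping of this unit can be sketched as follows (random weights only, to show the tensor shapes; the hidden width of 256 is an assumption, since the patent does not specify the sizes of the two fully connected layers):

```python
import numpy as np

def offset_regression(feat, w1, b1, w2, b2):
    """Deviation regression sketch: adaptive average pooling collapses a
    (512, h/8, w/8) feature map to a 512-vector, then two fully
    connected layers regress the 8 corner offsets O_4pt."""
    pooled = feat.mean(axis=(1, 2))             # adaptive avg pool -> (512,)
    hidden = np.maximum(w1 @ pooled + b1, 0.0)  # first FC layer + ReLU
    return w2 @ hidden + b2                     # second FC layer -> (8,)

rng = np.random.default_rng(0)
feat = rng.normal(size=(512, 16, 16))           # e.g. from a 128x128 patch
w1, b1 = rng.normal(size=(256, 512)) * 0.01, np.zeros(256)
w2, b2 = rng.normal(size=(8, 256)) * 0.01, np.zeros(8)
o4pt = offset_regression(feat, w1, b1, w2, b2)  # 4 x (dx, dy) deviations
```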
Image transformation unit: using C_4pt and O_4pt, the homography matrix from Image1 to Image2 is calculated and denoted H. Using the obtained H matrix, Image1 is transformed into Warped Image1. Since optimizing a deep learning model requires the network to be differentiable everywhere (for backpropagation), Image1 is transformed inside the network using a Spatial Transformer Network (STN).
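The homography computation from C_4pt and O_4pt can be sketched as a direct linear transform over the four point correspondences (a NumPy sketch; inside the actual network this step would have to be differentiable, and the function name is illustrative):

```python
import numpy as np

def homography_from_offsets(c4pt, o4pt):
    """Solve the 8x8 DLT system that maps the four source corners c4pt
    onto c4pt + o4pt; returns a 3x3 homography H normalized to H[2,2]=1."""
    src = np.asarray(c4pt, dtype=float)
    dst = src + np.asarray(o4pt, dtype=float).reshape(4, 2)
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); rhs.append(u)
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y]); rhs.append(v)
    h = np.linalg.solve(np.array(rows), np.array(rhs))
    return np.append(h, 1.0).reshape(3, 3)
```

With four exact correspondences the linear system determines H uniquely (up to the fixed scale H[2,2] = 1), which is why regressing 8 offset values is sufficient.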
Loss calculation unit: the pyramid structure similarity loss is calculated using Warped Image1 and Image2. The structural similarity index (SSIM) can be used to measure the similarity between two images. Structural similarity loss (SSIM loss) is taken as the loss function, with value 1 − SSIM. The smaller the deviation between the two images, the smaller the SSIM loss. Experiments show, however, that when the deviation between the two images is large (for example, larger than 10 pixels), the SSIM loss no longer decreases as the image deviation decreases; calculating the SSIM loss directly on the original images therefore makes the network hard to optimize when the deviation between the multispectral images is large. The invention thus adopts a pyramid structure similarity loss: SSIM losses are calculated on copies of the images downsampled by factors of 4, 8 and 16, denoted SSIM loss1 to SSIM loss3 respectively, as shown in fig. 4. Finally, SSIM loss1 to SSIM loss3 are summed to obtain the final loss function value.
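A sketch of this pyramid loss follows (NumPy; it uses one global SSIM value per level instead of the usual sliding-window SSIM, and the stabilizing constants are illustrative choices):

```python
import numpy as np

def downsample(img, k):
    """k-fold average pooling (assumes both dimensions divisible by k)."""
    h, w = img.shape
    return img.reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def global_ssim(a, b, c1=1e-4, c2=9e-4):
    """Single global SSIM value: the product of luminance, contrast and
    structure terms computed over the whole image (no sliding window)."""
    mu_a, mu_b = a.mean(), b.mean()
    sd_a, sd_b = a.std(), b.std()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    lum = (2 * mu_a * mu_b + c1) / (mu_a**2 + mu_b**2 + c1)
    con = (2 * sd_a * sd_b + c2) / (sd_a**2 + sd_b**2 + c2)
    struct = (cov + c2 / 2) / (sd_a * sd_b + c2 / 2)
    return lum * con * struct

def pyramid_ssim_loss(warped, ref, factors=(4, 8, 16)):
    """Sum of (1 - SSIM) over copies downsampled by each pyramid factor."""
    return sum(1.0 - global_ssim(downsample(warped, k), downsample(ref, k))
               for k in factors)
```

Downsampling widens the effective capture range of the loss: a misalignment of many pixels in the original images becomes a misalignment of only a few pixels at the coarse levels, so the loss still provides a useful gradient.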
SSIM evaluates the similarity between two images from three aspects: brightness, contrast and structure. Its calculation formula is:
SSIM(I1,I2)=l(I1,I2)*c(I1,I2)*s(I1,I2) (1)
where l (I1, I2) represents the luminance similarity between the images I1 and I2, c (I1, I2) represents the contrast similarity between the images I1 and I2, and s (I1, I2) represents the structural similarity between the images I1 and I2.
The luminance similarity l(I1, I2) is calculated as:

μ_I = (1/N) · Σ_{i=1..N} x_i (2)

l(I1, I2) = (2 · μ_I1 · μ_I2 + C1) / (μ_I1² + μ_I2² + C1) (3)

where N represents the number of pixels in the image, x_i represents the pixel value at each position, μ_I represents the average luminance of the image, and C1 is a very small value (i.e., a preset constant) used to avoid division by zero.
The contrast similarity c(I1, I2) is calculated as:

σ_I = sqrt( (1/N) · Σ_{i=1..N} (x_i − μ_I)² ) (4)

c(I1, I2) = (2 · σ_I1 · σ_I2 + C2) / (σ_I1² + σ_I2² + C2) (5)

where N, x_i and μ_I have the same meanings as in formulas (2) and (3), C2 is a preset constant analogous to C1, and σ_I represents the standard deviation of the image.
The structural similarity s(I1, I2) is calculated as:

Vec_I = (x_1 − μ_I, …, x_N − μ_I) / σ_I (6)

s(I1, I2) = (Vec_I1 · Vec_I2) / (‖Vec_I1‖ · ‖Vec_I2‖) (7)

where Vec_I is the vector formed by normalizing the pixel value of each position in image I, and s(I1, I2) is the cosine similarity of the two vectors Vec_I1 and Vec_I2.
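Under these definitions the three components and their product can be sketched directly (NumPy; the constants C1 and C2 and the small epsilon guarding the cosine similarity are illustrative choices):

```python
import numpy as np

def luminance_sim(i1, i2, c1=1e-4):
    """Luminance similarity from the two mean intensities."""
    mu1, mu2 = i1.mean(), i2.mean()
    return (2 * mu1 * mu2 + c1) / (mu1**2 + mu2**2 + c1)

def contrast_sim(i1, i2, c2=9e-4):
    """Contrast similarity from the two standard deviations."""
    s1, s2 = i1.std(), i2.std()
    return (2 * s1 * s2 + c2) / (s1**2 + s2**2 + c2)

def structure_sim(i1, i2, eps=1e-12):
    """Cosine similarity of the mean-removed pixel vectors."""
    v1 = (i1 - i1.mean()).ravel()
    v2 = (i2 - i2.mean()).ravel()
    return float(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2) + eps))

def ssim(i1, i2):
    """SSIM as the product of the three components."""
    return luminance_sim(i1, i2) * contrast_sim(i1, i2) * structure_sim(i1, i2)
```

Note that the structure term is invariant to affine intensity changes (scaling and offset), which is what lets SSIM compare bands whose pixel values have no direct correspondence.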
That is, after the multispectral image registration network model is built, it is trained on the prepared training samples to obtain the trained multispectral image registration network model.
In this embodiment, one way to prepare the training samples is as follows: an unmanned aerial vehicle carrying the multispectral camera flies at a certain height to collect multispectral images, which are divided into a training set and a verification set in a certain proportion.
Then, two channels of the multispectral image that need to be registered (i.e., a reference lens and a non-reference lens) are selected. Rectangular image blocks extracted at the same image position from images shot by the two channels at the same time in the training set are stacked together and fed into the feature extraction unit; the stacked image blocks undergo feature extraction, deviation regression, image transformation and loss calculation, the resulting loss is back-propagated to optimize the multispectral image registration network, and the registration effect is tested on the verification set after each training epoch. The value of the verification-set loss is observed during training; when it no longer decreases (the change between the last two values is below a preset threshold), the multispectral image registration network has converged, training stops, and the model obtained at that point is kept and saved as the trained model, i.e., only the network parameters of the feature extraction unit and the deviation regression unit are saved.
For a current multispectral image pair to be registered (I_A, I_B), where one image is the reference image: a pair of rectangular image blocks at the same extraction position is stacked and input into the trained feature extraction unit to obtain a feature map, and the coordinate information of the extraction position (the position coordinates of the 4 vertices of the rectangular image block) is recorded at the same time.
The feature map is input into the trained deviation regression unit to obtain the corresponding coordinate deviation; the current homography matrix is then calculated by combining the recorded coordinate information of the extraction position. Finally, based on the homography matrix, the non-reference image of the pair (I_A, I_B) is homographically transformed to obtain the transformed image; the transformed image and the reference image of the pair (I_A, I_B) are the registered images.
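The final warping step can be sketched with a plain inverse-mapping, nearest-neighbour resampler (NumPy; a production pipeline would more likely use a library warp such as OpenCV's warpPerspective, which also interpolates):

```python
import numpy as np

def warp_homography(img, H):
    """Inverse-map warp: for each output pixel, apply H^{-1} to find the
    source location and sample the nearest source pixel (0 outside)."""
    h, w = img.shape
    Hinv = np.linalg.inv(H)
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs, ys, np.ones_like(xs)], axis=0).reshape(3, -1)
    src = Hinv @ pts.astype(float)
    sx = np.rint(src[0] / src[2]).astype(int)   # source column per pixel
    sy = np.rint(src[1] / src[2]).astype(int)   # source row per pixel
    valid = (sx >= 0) & (sx < w) & (sy >= 0) & (sy < h)
    out = np.zeros_like(img)
    flat = out.reshape(-1)                      # view into out
    flat[valid] = img[sy[valid], sx[valid]]
    return out
```

Inverse mapping (rather than pushing source pixels forward) guarantees every output pixel gets exactly one value, with no holes.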
What is described above are merely embodiments of the invention. Unless stated otherwise, any feature disclosed in this specification may be replaced by alternative features serving equivalent or similar purposes; all of the disclosed features, or all of the method or process steps, may be combined in any way, except for mutually exclusive features and/or steps.

Claims (5)

1. An unmanned aerial vehicle multispectral image registration method is characterized by comprising the following steps:
step 1: setting and training a multispectral image registration network model:
the multispectral image registration network model comprises a feature extraction unit and a deviation regression unit;
the feature extraction unit extracts image features of the input image block based on a convolutional neural network;
the input image block of the feature extraction unit is: extracting rectangular image blocks from the reference image and the current image to be registered at the same image position to obtain a reference image block and an image block to be registered, and recording the extraction positions of the image blocks; performing image stacking processing on the reference image block and the image block to be registered, and taking the stacked image block as an input image block of a feature extraction network; the reference image and the image to be registered are spectral images of different wave bands collected at the same time; the extraction positions of the image blocks comprise four vertexes of the image blocks;
the deviation regression unit comprises a pooling layer and two full-connection layers, wherein the input of the pooling layer is the image characteristics output by the characteristic extraction unit, and the adopted pooling mode is self-adaptive average pooling; the output of the pooling layer passes through two full-connection layers and is used for outputting coordinate deviations of four vertexes of the image block to be registered and the reference image block;
the training process of the multispectral image registration network model comprises the following steps:
for images I to be registered in the training data set t And a reference picture I c Extracting the input image block of the feature extraction unit, sending the input image block into the feature extraction unit, and recording the extraction position as C 4pt
Inputting the image features output by the feature extraction unit into a deviation regression unit, and recording the coordinate deviation output by the deviation regression unit as O 4pt (ii) a Based on C 4pt And O 4pt Calculating an image I to be registered t And a reference picture I c A homography matrix H between; adjusting network parameters of a spatial transformation network for homography transformation according to the homography matrix H;
then the current image I to be registered is processed t Inputting the spatial transform network, obtaining a transformed image I based on the output of the spatial transform network t_w
Computing an image I t_w And a reference picture I c Pyramid structure similarity loss is generated, and a loss function of the multispectral image registration network model is set based on the pyramid structure similarity loss; when the value of the loss function meets a preset convergence condition, storing the trained multispectral image registration network model;
the loss function of the multispectral image registration network model is as follows: for image I t_w And I c Respectively carrying out K-level down-sampling treatment, gradually increasing or decreasing the down-sampling multiplying power, and calculating the image I after the K-level down-sampling t_w And I c Image similarity SSIM between k (ii) a And 1-SSIM k As a loss function value of the kth stage; accumulating the loss function values of the K levels to obtain the current loss function value of the multispectral image registration network model;
step 2: carrying out spectral image registration processing based on the trained multispectral image registration network model:
for reference image I 'acquired at the same time' c And image I 'to be registered' t Extracting an input image block of the feature extraction unit, sending the input image block to the feature extraction unit, and recording the position coordinate information of the image block as C' 4pt
Inputting the image feature output by the feature extraction unit into a deviation regression unit, and recording the coordinate deviation output by the deviation regression unit as O' 4pt (ii) a Based on C' 4pt And O' 4pt Calculating to-be-registered image I' t And reference picture I' c A homography matrix H'; and treating the registered image I based on the homography matrix H t ' performing homography transformation to obtain a registration result.
2. The method of claim 1, wherein in step 1, the image similarity SSIM_k is the product of the luminance similarity, contrast similarity and structural similarity between the images.
3. The method of claim 1, wherein the network structure of the feature extraction unit employs ResNet-34.
4. The method of claim 3, wherein the step size of the maximum pooling layer of the feature extraction unit is set to 2.
5. The method of claim 1, wherein in step 1, the images I_t_w and I_c are each subjected to 3 levels of downsampling, with per-level downsampling factors of 4, 8 and 16 respectively.
CN202010884720.4A 2020-08-28 2020-08-28 Unmanned aerial vehicle multispectral image registration method Active CN112102379B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010884720.4A CN112102379B (en) 2020-08-28 2020-08-28 Unmanned aerial vehicle multispectral image registration method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010884720.4A CN112102379B (en) 2020-08-28 2020-08-28 Unmanned aerial vehicle multispectral image registration method

Publications (2)

Publication Number Publication Date
CN112102379A CN112102379A (en) 2020-12-18
CN112102379B true CN112102379B (en) 2022-11-04

Family

ID=73758226

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010884720.4A Active CN112102379B (en) 2020-08-28 2020-08-28 Unmanned aerial vehicle multispectral image registration method

Country Status (1)

Country Link
CN (1) CN112102379B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113689479B (en) * 2021-07-23 2023-05-23 电子科技大学 Unmanned aerial vehicle thermal infrared visible light image registration method
CN114241022B (en) * 2022-02-28 2022-06-03 北京艾尔思时代科技有限公司 Unmanned aerial vehicle image automatic registration method and system

Citations (4)

Publication number Priority date Publication date Assignee Title
CN109993727A (en) * 2019-03-06 2019-07-09 中国人民解放军61540部队 A kind of method for detecting change of remote sensing image based on deep learning
CN110288518A (en) * 2019-06-28 2019-09-27 北京三快在线科技有限公司 Image processing method, device, terminal and storage medium
CN110533620A (en) * 2019-07-19 2019-12-03 西安电子科技大学 The EO-1 hyperion and panchromatic image fusion method of space characteristics are extracted based on AAE
CN111079556A (en) * 2019-11-25 2020-04-28 航天时代飞鸿技术有限公司 Multi-temporal unmanned aerial vehicle video image change area detection and classification method

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8620092B2 (en) * 2010-03-04 2013-12-31 Hewlett-Packard Development Company, L.P. Determining similarity of two images
CA2752370C (en) * 2011-09-16 2022-07-12 Mcgill University Segmentation of structures for state determination

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
CN109993727A (en) * 2019-03-06 2019-07-09 中国人民解放军61540部队 A kind of method for detecting change of remote sensing image based on deep learning
CN110288518A (en) * 2019-06-28 2019-09-27 北京三快在线科技有限公司 Image processing method, device, terminal and storage medium
CN110533620A (en) * 2019-07-19 2019-12-03 西安电子科技大学 The EO-1 hyperion and panchromatic image fusion method of space characteristics are extracted based on AAE
CN111079556A (en) * 2019-11-25 2020-04-28 航天时代飞鸿技术有限公司 Multi-temporal unmanned aerial vehicle video image change area detection and classification method

Non-Patent Citations (3)

Title
Guiyun Zhou et al., "Automatic Registration of Tree Point Clouds From Terrestrial LiDAR Scanning for Reconstructing the Ground Scene of Vegetated Surfaces", IEEE Geoscience and Remote Sensing Letters, Sep. 2014, pp. 1654-1658 *
Wan Liang, "Laplacian Pyramid Optimization Algorithm Based on Structural Similarity", China Masters' Theses Full-text Database (Information Science and Technology), Sep. 2019, pp. I138-1050 *
Yin Xiang et al., "Infrared and Visible Light Image Fusion Method Combining Wavelet Packet and Non-Subsampled Contourlet Transform", Laser Journal, Jan. 2018, pp. 123-127 *

Also Published As

Publication number Publication date
CN112102379A (en) 2020-12-18

Similar Documents

Publication Publication Date Title
CN112270249B (en) Target pose estimation method integrating RGB-D visual characteristics
WO2020151109A1 (en) Three-dimensional target detection method and system based on point cloud weighted channel feature
WO2020228446A1 (en) Model training method and apparatus, and terminal and storage medium
CN104573731B (en) Fast target detection method based on convolutional neural networks
CN108108764B (en) Visual SLAM loop detection method based on random forest
CN113065546B (en) Target pose estimation method and system based on attention mechanism and Hough voting
CN111709980A (en) Multi-scale image registration method and device based on deep learning
US11810366B1 (en) Joint modeling method and apparatus for enhancing local features of pedestrians
CN110766723B (en) Unmanned aerial vehicle target tracking method and system based on color histogram similarity
US20220383525A1 (en) Method for depth estimation for a variable focus camera
CN112102379B (en) Unmanned aerial vehicle multispectral image registration method
CN112712518B (en) Fish counting method and device, electronic equipment and storage medium
CN111626927B (en) Binocular image super-resolution method, system and device adopting parallax constraint
CN111369605A (en) Infrared and visible light image registration method and system based on edge features
CN114627154B (en) Target tracking method deployed in frequency domain, electronic equipment and storage medium
CN112396036A (en) Method for re-identifying blocked pedestrians by combining space transformation network and multi-scale feature extraction
Potje et al. Extracting deformation-aware local features by learning to deform
Yuan et al. ROBUST PCANet for hyperspectral image change detection
Tian et al. Convolutional neural networks for steganalysis via transfer learning
CN106650629A (en) Kernel sparse representation-based fast remote sensing target detection and recognition method
CN110197184A (en) A kind of rapid image SIFT extracting method based on Fourier transformation
CN115410014A (en) Self-supervision characteristic point matching method of fisheye image and storage medium thereof
CN114463534A (en) Target key point detection method, device, equipment and storage medium
Chen et al. GADO-Net: an improved AOD-Net single image dehazing algorithm
CN113689479B (en) Unmanned aerial vehicle thermal infrared visible light image registration method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant