CN108846822B - Fusion method of visible light image and infrared light image based on hybrid neural network - Google Patents

Fusion method of visible light image and infrared light image based on hybrid neural network

Info

Publication number
CN108846822B
CN108846822B (application CN201810558973.5A)
Authority
CN
China
Prior art keywords
image
neural network
layer
hybrid neural
light image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810558973.5A
Other languages
Chinese (zh)
Other versions
CN108846822A (en)
Inventor
江泽涛
刘小艳
张少钦
胡硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guilin University of Electronic Technology
Original Assignee
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guilin University of Electronic Technology filed Critical Guilin University of Electronic Technology
Priority to CN201810558973.5A priority Critical patent/CN108846822B/en
Publication of CN108846822A publication Critical patent/CN108846822A/en
Application granted granted Critical
Publication of CN108846822B publication Critical patent/CN108846822B/en

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10048: Infrared image
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10052: Images from lightfield camera
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20081: Training; Learning
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20084: Artificial neural networks [ANN]

Abstract

The invention discloses a fusion method for visible light and infrared light images based on a hybrid neural network, relating to the technical field of image recognition. It addresses the problem that existing image fusion algorithms rely on supervised learning and hand-crafted feature extraction for classification. The method comprises the following steps: (1) establishing a hybrid neural network structure formed by stacking basic units; (2) preprocessing the training and testing images; (3) training the hybrid neural network model using the visible light and infrared light images; (4) testing the hybrid neural network model to obtain the final fused image. The image fusion algorithm of this technical scheme needs neither supervised learning nor hand-crafted feature extraction for classification, and reduces the dependence of the fusion algorithm on prior knowledge.

Description

Fusion method of visible light image and infrared light image based on hybrid neural network
Technical Field
The invention relates to the technical field of image recognition, in particular to a fusion method of a visible light image and an infrared light image based on a hybrid neural network.
Background
Image fusion in the prior art cannot achieve fully unsupervised learning with feature extraction for classification, and supervised learning often runs into a practical problem: sufficient prior knowledge is lacking, so manual category labeling is difficult or too costly. To meet the technical requirements of today's artificial intelligence (AI) systems, the following shortcoming of the prior art needs to be addressed: when information is extracted from a scene using a single kind of image data, a complete description of the scene is difficult or even impossible to obtain. A visible light image is a reflection image with more high-frequency components; it can reflect scene detail under adequate illumination, but under poor illumination (i.e., as a low-light image) its contrast is low. An infrared image is a radiation image whose gray level is determined by the temperature difference between target and background, and it cannot reflect the true appearance of the scene.
Disclosure of Invention
Aiming at the defects of the prior art, the invention provides an image fusion algorithm that realizes image fusion without supervised learning or hand-crafted feature extraction for classification.
In order to solve the technical problems, the technical scheme adopted by the invention is a method for fusing a visible light image and an infrared light image based on a hybrid neural network, which comprises the following steps:
(1) establishing a hybrid neural network structure formed by stacking basic units, which comprises the following steps:
1) selecting a convolution layer with proper parameters as a first layer of the hybrid neural network;
2) selecting a suitable down-sampling layer as a second layer of the hybrid neural network;
3) selecting a fully connected layer to recombine the features obtained from the down-sampling layer;
4) adopting a deep belief network as the classification layer to perform feature classification and matching on the features;
5) the activation function of the deep belief network is the sigmoid or ReLU function; the category layer consists of linear nodes whose activation function is the soft-max function;
6) fully connecting the feature layers and the category layers between the hybrid neural networks to form a joint layer, automatically generating the corresponding image blocks, and computing the state values of all nodes in the network by forward propagation;
7) initializing the weights in the hybrid convolutional neural network, fully connecting the hidden layer and the visible layer, and using an energy function to keep the parameters stable;
8) adjusting the network connection parameters using a loss function, wherein a back-propagation loss function is used;
9) minimizing the loss function by stochastic gradient descent.
(2) Preprocessing the training and test images, the steps being as follows:
1) augmenting each picture with stretching, rotation, scaling and contrast changes: first randomly rotate the picture, by 10 degrees each time; then randomly stretch it, with a stretching amplitude of 20 percent; then apply a shear transformation with an amplitude of 10 percent; finally change the overall brightness;
2) normalizing the images of different sizes to 256 × 256 to obtain the 'new' data.
(3) Training the hybrid neural network model using the visible light and infrared light images, the specific steps being as follows:
1) stacking the preprocessed basic units to construct the hybrid neural network;
2) preprocessing the original visible light and infrared light images;
3) inputting the preprocessed images into the network of substep 1), and training the basic units with a back-propagation algorithm;
4) stacking the trained basic units to form the hybrid neural network, and fine-tuning the parameters of the whole network end to end.
(4) Testing the hybrid neural network model to obtain the final fused image, comprising the following steps:
1) randomly extracting 10 of the original visible light and infrared images, most of which serve as the training set and a small part as the test set;
2) preprocessing the extracted pictures with an image enhancement method;
3) putting the preprocessed images into the constructed hybrid neural network for training;
4) after the feature images pass through the hybrid network, fusing them according to an image fusion rule, preferably a pixel-level fusion rule;
5) putting the fusion result into the hybrid neural network model for reconstruction to obtain the final fused image.
The technical scheme adopted by the invention has the following beneficial effects:
(1) initializing the convolution kernels in the hybrid neural network improves the stability and generalization ability of the network, reduces the time needed to realize fusion, improves efficiency, and facilitates the extraction and classification of image features;
(2) following the stacked autoencoder (SAE) idea, a plurality of basic units are stacked and trained to obtain an integrated convolutional neural network;
(3) the neural network processes the input images separately to obtain their respective feature classification images, which are then put into the last network layer according to the fusion rule, so that the images undergo adaptive decomposition and reconstruction to yield the final fused image. During fusion, only one infrared image and one visible light image are needed; there is no need to manually define the number and type of filters, or to select the number of decomposition levels and filtering directions of the images, which greatly reduces the dependence of the fusion algorithm on prior knowledge.
Drawings
FIG. 1 is a flow chart of the operation of the present invention;
FIG. 2 is a block diagram of a hybrid neural network of the present invention;
fig. 3 is a block diagram of a selected convolutional neural network.
Detailed Description
The following further describes the embodiments of the present invention with reference to the drawings, but the present invention is not limited thereto.
Fig. 1 shows a fusion method of a visible light image and an infrared light image based on a hybrid neural network, comprising the following steps:
(1) establishing a hybrid neural network structure formed by stacking basic units, as shown in fig. 2, includes the following steps:
1) selecting a convolution layer with proper parameters as the first layer of the hybrid neural network, specifically as follows:
Step1: FIG. 3 shows the convolutional neural network structure used in this model, with six convolution layers and three down-sampling layers selected;
Step2: adjust the convolution layer, using 32 convolution kernels of size 32 × 32, select the ReLU function as the activation function, and set the input image size to the length and width of the preprocessed image;
2) selecting a proper down-sampling layer as the second layer of the hybrid neural network, specifically as follows (a sketch of these first two layers follows below):
Step1: adjust the down-sampling layer in the convolutional neural network structure of fig. 3, selecting maximum-value pooling;
Step2: an integer tuple of length 2 gives the down-sampling factors in the vertical and horizontal directions; the down-sampling block adopted in this model is (2, 2), which halves each spatial dimension of the picture;
3) selecting a fully connected layer to recombine the feature information obtained from the down-sampling layer;
4) the deep belief network is adopted as the classification layer, and the features are subjected to feature classification and matching (a sketch follows these steps), specifically:
Step1: take the joint representation of the infrared and visible light images as input; the calculation formula is:
S_match = W_s·δ(W_h·V_JR + b_h) + b_s
Step2: in the Step1 formula, δ(·) is a nonlinear activation function, the sigmoid or ReLU function;
Step3: W_h and b_h map the joint representation V_JR to the hidden layer;
Step4: W_s and b_s are used to compute the matching score between the visible and infrared images;
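For illustration, a minimal NumPy sketch of the matching-score formula S_match = W_s·δ(W_h·V_JR + b_h) + b_s is given below. The dimensions (a joint representation of length 64, a hidden layer of 32 units, a single score) and the choice of sigmoid for δ(·) are assumptions, not values from the patent:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

V_JR = rng.standard_normal(64)                           # joint representation of IR + visible images
W_h, b_h = rng.standard_normal((32, 64)), np.zeros(32)   # map V_JR to the hidden layer
W_s, b_s = rng.standard_normal((1, 32)), np.zeros(1)     # compute the matching score

S_match = W_s @ sigmoid(W_h @ V_JR + b_h) + b_s          # delta(.) chosen as sigmoid here
print(S_match)
```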
5) the activation function of the deep belief network is the sigmoid or ReLU function; the category layer consists of linear nodes whose activation function is the soft-max function;
6) fully connecting the feature layers and the category layers between the hybrid neural networks to form a joint layer, automatically generating the corresponding image blocks, and computing the state values of all nodes in the network by forward propagation, the specific method being as follows:
Step1: assume a loss function Γ(P) (a hedged sketch follows these steps):
[equation image in the original patent: the loss function Γ(P), expressed in terms of the category-layer output X_ijc, the joint-layer output q, their cosine similarity, and the correlation rate β]
Step2: in the formula, X_ijc represents the image category output by the category layer and q is the joint-layer network output value; cosine values reflect the similarity of vectors through the angle between them;
Step3: β is the correlation rate, usually taken to be a small constant, typically 0.001.
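The exact form of Γ(P) is an equation image in the original patent; only its ingredients are described (X_ijc, q, cosine similarity, β ≈ 0.001). The NumPy sketch below is therefore a hypothetical reading, not the patent's formula: a loss that penalizes low cosine similarity between X_ijc and q, with β weighting a small stabilizing term:

```python
import numpy as np

def cosine_similarity(a, b):
    # the cosine of the angle between two vectors reflects their similarity
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def loss_gamma(x_ijc, q, beta=0.001):
    # hypothetical reading of Gamma(P): penalize low cosine similarity between
    # the category-layer output and the joint-layer output; beta weights an
    # L2 stabilizer on q (an assumption, not the patent's exact form)
    return (1.0 - cosine_similarity(x_ijc, q)) + beta * float(q @ q)

x_ijc = np.array([1.0, 0.0, 0.0])   # category-layer output for one image block
q = np.array([0.8, 0.1, 0.0])       # joint-layer network output value
print(loss_gamma(x_ijc, q))
```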
7) initializing the weights in the hybrid convolutional neural network, fully connecting the hidden layer and the visible layer, and using an energy function to keep the parameters stable (a sketch of this energy follows these steps), specifically:
Step1: there are no connections between nodes within the visible layer; a visible-layer node satisfies v ∈ {0, 1}^D and a hidden-layer node satisfies h ∈ {0, 1}^P; the joint configuration {v, h} of the visible- and hidden-layer nodes has a joint energy;
Step2: the joint energy formula used is as follows:
E(v, h; θ) = -Σ_i b_i·v_i - Σ_j b_j·h_j - Σ_i,j v_i·h_j·w_ij
Step3: the model parameters are θ = {w, b}; v_i and h_j are the binary states of visible-layer node i and hidden-layer node j respectively; b_i and b_j are the corresponding biases of the visible and hidden layers; w_ij is the connection weight between them;
8) adjusting the network connection parameters using a loss function, wherein a back-propagation loss function is used; the implementation process is as follows:
Step1: the back-propagation loss function used is as follows:
[equation image in the original patent: the back-propagation loss function, expressed in terms of the category-layer output X_ijc, the joint-layer output q, their cosine similarity, and the correlation rate β]
Step2: in the formula, X_ijc represents the image category output by the category layer, and q is the joint-layer network output value;
Step3: cosine values reflect the similarity of vectors through the angle between them;
Step4: β is the correlation rate, a very small constant taken between 0 and 1;
9) minimizing the loss function by stochastic gradient descent, as follows (a toy sketch of this loop follows these steps):
Step1: select a parameter combination (θ0, θ1, …, θn);
Step2: compute the cost function for this parameter combination;
Step3: search for the next parameter combination that decreases the cost function the most;
Step4: repeat Step1 through Step3 until a local minimum is reached.
(2) Preprocessing the training and test images, the steps being as follows (a sketch of this pipeline follows these steps):
1) enhance the pictures by stretching, rotation, scaling and contrast adjustment; the implementation process is as follows:
Step1: randomly rotate the image, by 10 degrees each time;
Step2: randomly stretch the image, with a stretching amplitude of 20%;
Step3: apply a shear transformation to the image, with a transformation amplitude of 10%; finally, change the overall brightness;
2) normalize the images of different sizes to 256 × 256 to obtain the 'new' data.
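A Pillow-based sketch of this preprocessing is given below for illustration: rotation in 10-degree steps, stretching up to 20%, shear up to 10%, a brightness change, and normalization to 256 × 256. The exact random ranges and the order of the brightness change are assumptions consistent with the amplitudes quoted above:

```python
import random
from PIL import Image, ImageEnhance

def preprocess(img: Image.Image) -> Image.Image:
    img = img.rotate(10 * random.randint(-3, 3))              # rotate in 10-degree steps (range assumed)
    w, h = img.size
    img = img.resize((int(w * random.uniform(1.0, 1.2)), h))  # stretch with up to 20% amplitude
    img = img.transform(img.size, Image.AFFINE,
                        (1, random.uniform(-0.1, 0.1), 0,
                         0, 1, 0))                            # shear with up to 10% amplitude
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.8, 1.2))  # overall brightness change
    return img.resize((256, 256))                             # normalize to 256 x 256

# usage: new_img = preprocess(Image.open("visible_001.png"))
```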
(3) Training the hybrid neural network model using the visible light and infrared light images, specifically as follows:
1) stacking the preprocessed basic units using the stacked autoencoder (SAE) idea to construct the hybrid neural network, implemented as follows:
Step1: select a convolution layer with proper parameters as the first layer, extracting information from the input to the hybrid neural network;
Step2: select a proper down-sampling layer as the second layer, constraining the second-layer information of the hybrid neural network;
Step3: for the third layer of the hybrid neural network, use the fully connected layer to recombine the information acquired by the first two layers;
Step4: perform feature classification and matching on the features with the deep belief network, using the formula:
S_match = W_s·δ(W_h·V_JR + b_h) + b_s
Step5: input the joint representation of the infrared and visible light images into the Step4 formula;
Step6: in the Step4 formula, δ(·) is a nonlinear activation function, the sigmoid or ReLU function; W_h and b_h map the joint representation V_JR to the hidden layer; W_s and b_s are used to compute the matching score between the visible and infrared images;
Step7: the category layer consists of linear nodes whose activation function is the soft-max function;
Step8: any two adjacent layers in the hybrid neural network other than the category layer form a visible layer and a hidden layer, and the hidden layer and the visible layer are fully connected;
Step9: visible-layer nodes are not connected with each other; v ∈ {0, 1}^D for visible-layer nodes and h ∈ {0, 1}^P for hidden-layer nodes; the joint configuration {v, h} of the visible- and hidden-layer nodes has a joint energy:
E(v, h; θ) = -Σ_i b_i·v_i - Σ_j b_j·h_j - Σ_i,j v_i·h_j·w_ij
Step10: in the Step9 formula, the model parameters are θ = {w, b}; v_i and h_j are the binary states of visible-layer node i and hidden-layer node j respectively; b_i and b_j are the corresponding biases of the visible and hidden layers; w_ij is the connection weight between them;
Step11: fully connect the feature layers and the category layers between the hybrid neural networks to form a joint layer, automatically generating the corresponding image blocks, and compute the state values of all nodes in the network by forward propagation;
Step12: computing the state values of all nodes by forward propagation involves assuming a loss function Γ(P):
[equation image in the original patent: the loss function Γ(P), expressed in terms of the category-layer output X_ijc, the joint-layer output q, their cosine similarity, and the correlation rate β]
Step13: in the Step12 formula, X_ijc represents the image category output by the category layer, and q is the joint-layer network output value; cosine values reflect the similarity of vectors through the angle between them; β is the correlation rate, usually taken to be a very small constant, typically 0.001.
Step14: stack the trained basic units using the stacked autoencoder (SAE) idea to construct the hybrid neural network.
2) Preprocess the original visible light and infrared light image data sets, implemented as follows:
Step1: randomly rotate the image, by 10 degrees each time;
Step2: randomly stretch the image, with a stretching amplitude of 20%;
Step3: apply a shear transformation to the image, with a transformation amplitude of 10%; finally, change the overall brightness;
3) inputting the preprocessed images into the network of substep 1), training the basic units with a back-propagation algorithm and adjusting the network connection parameters with a loss function; the back-propagation loss function is as follows:
[equation image in the original patent: the back-propagation loss function, expressed in terms of the category-layer output X_ijc, the joint-layer output q, their cosine similarity, and the correlation rate β]
In the formula, X_ijc represents the image category output by the category layer, and q is the joint-layer network output value; cosine values reflect the similarity of vectors through the angle between them; β is the correlation rate, a very small constant taken between 0 and 1;
4) stacking the trained basic units to form the hybrid neural network, and fine-tuning the parameters of the whole network end to end (a sketch of this stacking and fine-tuning follows).
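A hedged PyTorch sketch of sub-steps 1) through 4) follows: basic units are stacked SAE-style into one network whose parameters are then fine-tuned end to end. The unit shapes, the two-channel input (visible and infrared stacked), the 10-way category layer, and the cross-entropy objective are illustrative assumptions, not the patent's exact design:

```python
import torch
import torch.nn as nn

def make_unit(in_ch, out_ch):
    # one basic unit: convolution + ReLU + max pooling (shapes assumed)
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1),
                         nn.ReLU(), nn.MaxPool2d(2))

units = [make_unit(2, 16), make_unit(16, 32)]  # 2 channels: visible + infrared

# (greedy per-unit pretraining with backpropagation would go here)

hybrid = nn.Sequential(*units, nn.Flatten(),
                       nn.Linear(32 * 64 * 64, 10))  # fully connected recombination (assumed sizes)
optimizer = torch.optim.SGD(hybrid.parameters(), lr=1e-3)

x = torch.randn(4, 2, 256, 256)          # a batch of stacked visible + infrared image pairs
target = torch.randint(0, 10, (4,))      # assumed category labels
optimizer.zero_grad()
loss = nn.CrossEntropyLoss()(hybrid(x), target)  # stand-in for the patent's loss
loss.backward()
optimizer.step()                         # end-to-end fine-tuning of the whole network
```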
(4) Testing the hybrid neural network model to obtain the final fused image, comprising the following steps:
1) randomly extract 10 of the original visible light and infrared images, most of which serve as the training set and a small part as the test set;
2) preprocess the extracted pictures with an image enhancement method;
3) put the preprocessed images into the constructed hybrid neural network for training;
4) after the feature images pass through the hybrid network, fuse them according to an image fusion rule, preferably a pixel-level fusion rule (a sketch of one such rule follows);
5) put the fusion result into the hybrid neural network model for reconstruction to obtain the final fused image.
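The patent names a pixel-level fusion rule without giving its formula; the NumPy sketch below assumes one common pixel rule, a weighted average of the two feature images, purely as a stand-in:

```python
import numpy as np

def pixel_fuse(visible_feat, infrared_feat, w=0.5):
    # element-wise weighted average; w balances visible detail against IR contrast
    return w * visible_feat + (1.0 - w) * infrared_feat

vis = np.random.rand(256, 256)   # feature image from the visible branch (dummy data)
ir = np.random.rand(256, 256)    # feature image from the infrared branch (dummy data)
fused = pixel_fuse(vis, ir, w=0.6)
print(fused.shape)               # (256, 256), ready for reconstruction
```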
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention.

Claims (5)

1. A fusion method of a visible light image and an infrared light image based on a hybrid neural network is characterized by comprising the following steps:
(1) establishing a hybrid neural network structure formed by stacking basic units: 1) selecting a convolution layer with proper parameters as the first layer of the hybrid neural network; 2) selecting a suitable down-sampling layer as the second layer of the hybrid neural network; 3) selecting a fully connected layer to recombine the features obtained from the down-sampling layer; 4) adopting a deep belief network as the classification layer to perform feature classification and matching on the features; 5) the activation function of the deep belief network being the sigmoid or ReLU function, the category layer consisting of linear nodes whose activation function is the soft-max function; 6) fully connecting the feature layers and the category layers between the hybrid neural networks to form a joint layer, automatically generating the corresponding image blocks, and computing the state values of all nodes in the network by forward propagation; 7) initializing the weights in the hybrid convolutional neural network, fully connecting the hidden layer and the visible layer, and using an energy function to keep the parameters stable; 8) adjusting the network connection parameters using a loss function, wherein a back-propagation loss function is used; 9) minimizing the loss function by stochastic gradient descent;
(2) preprocessing the training and testing images;
(3) training a hybrid neural network model using the visible light and infrared light images;
(4) and testing the mixed neural network model to obtain a final fusion image.
2. The fusion method of the visible light image and the infrared light image based on the hybrid neural network as claimed in claim 1, wherein the step (2) comprises the following steps:
1) preprocessing each picture with stretching, rotation, scaling and contrast changes: first randomly rotating the picture, by 10 degrees each time; then randomly stretching it, with a stretching amplitude of 20 percent; then applying a shear transformation with an amplitude of 10 percent; and finally changing the overall brightness;
2) normalizing the images of different sizes to 256 × 256 to obtain the 'new' data.
3. The fusion method of the visible light image and the infrared light image based on the hybrid neural network as claimed in claim 1, wherein the step (3) comprises the following steps:
1) stacking the preprocessed basic units to construct a hybrid neural network;
2) preprocessing an original visible light image and an infrared light image;
3) inputting the preprocessed images into the network of substep 1), and training the basic units with a back-propagation algorithm;
4) stacking the trained basic units to form the hybrid neural network, and fine-tuning the parameters of the whole network end to end.
4. The fusion method of the visible light image and the infrared light image based on the hybrid neural network as claimed in claim 1, wherein the step (4) comprises the following steps:
1) randomly extracting 10 of the original visible light and infrared images, most of which serve as the training set and a small part as the test set;
2) preprocessing the extracted picture by adopting an image enhancement method;
3) putting the preprocessed image into a constructed hybrid neural network for training;
4) after the feature images pass through the hybrid network, fusing them in the hybrid neural network according to an image fusion rule;
5) putting the fusion result into the hybrid neural network model for reconstruction to obtain the final fused image.
5. The method for fusing the visible light image and the infrared light image based on the hybrid neural network as claimed in claim 4, wherein in substep 4), the image fusion rule is a pixel fusion rule.
CN201810558973.5A 2018-06-01 2018-06-01 Fusion method of visible light image and infrared light image based on hybrid neural network Active CN108846822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810558973.5A CN108846822B (en) 2018-06-01 2018-06-01 Fusion method of visible light image and infrared light image based on hybrid neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810558973.5A CN108846822B (en) 2018-06-01 2018-06-01 Fusion method of visible light image and infrared light image based on hybrid neural network

Publications (2)

Publication Number Publication Date
CN108846822A CN108846822A (en) 2018-11-20
CN108846822B 2021-08-24

Family

ID=64211525

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810558973.5A Active CN108846822B (en) 2018-06-01 2018-06-01 Fusion method of visible light image and infrared light image based on hybrid neural network

Country Status (1)

Country Link
CN (1) CN108846822B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11483451B2 (en) 2018-11-27 2022-10-25 Google Llc Methods and systems for colorizing infrared images
CN109447936A (en) * 2018-12-21 2019-03-08 江苏师范大学 A kind of infrared and visible light image fusion method
CN111951200B (en) * 2019-05-15 2023-11-14 杭州海康威视数字技术股份有限公司 Image pickup apparatus, image fusion method, image fusion device, and storage medium
CN110288555B (en) * 2019-07-02 2022-08-02 桂林电子科技大学 Low-illumination enhancement method based on improved capsule network
CN112098714B (en) * 2020-08-12 2023-04-18 国网江苏省电力有限公司南京供电分公司 Electricity stealing detection method and system based on ResNet-LSTM
CN113743582B (en) * 2021-08-06 2023-11-17 北京邮电大学 Novel channel shuffling method and device based on stack shuffling


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106251355A (en) * 2016-08-03 2016-12-21 江苏大学 A kind of detection method merging visible images and corresponding night vision infrared image
CN106952220A (en) * 2017-03-14 2017-07-14 长沙全度影像科技有限公司 A kind of panoramic picture fusion method based on deep learning
CN107103331A (en) * 2017-04-01 2017-08-29 中北大学 A kind of image interfusion method based on deep learning

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Infrared and visible light image fusion algorithm based on improved guided filtering and a dual-channel spiking cortical model; Jiang Zetao et al.; Acta Optica Sinica; 2018-02-28; Vol. 38, No. 2; pp. 0210002-1 to 0210002-9 *

Also Published As

Publication number Publication date
CN108846822A (en) 2018-11-20

Similar Documents

Publication Publication Date Title
CN108846822B (en) Fusion method of visible light image and infrared light image based on hybrid neural network
Qi et al. Geonet: Geometric neural network for joint depth and surface normal estimation
Suganuma et al. Attention-based adaptive selection of operations for image restoration in the presence of unknown combined distortions
Liu et al. When image denoising meets high-level vision tasks: A deep learning approach
Arbelle et al. Microscopy cell segmentation via adversarial neural networks
Lin et al. Hyperspectral image denoising via matrix factorization and deep prior regularization
CN107103285B (en) Face depth prediction method based on convolutional neural network
CN113344806A (en) Image defogging method and system based on global feature fusion attention network
Matuszewski et al. Minimal annotation training for segmentation of microscopy images
CN109961407A (en) Facial image restorative procedure based on face similitude
CN109598732A (en) A kind of medical image cutting method based on three-dimensional space weighting
Lee et al. Meta-learning sparse implicit neural representations
CN112861659A (en) Image model training method and device, electronic equipment and storage medium
Verma et al. Computational cost reduction of convolution neural networks by insignificant filter removal
CN116052218B (en) Pedestrian re-identification method
CN109190666B (en) Flower image classification method based on improved deep neural network
CN114219824A (en) Visible light-infrared target tracking method and system based on deep network
Liu et al. Modern architecture style transfer for ruin or old buildings
CN114492634A (en) Fine-grained equipment image classification and identification method and system
CN111368734B (en) Micro expression recognition method based on normal expression assistance
CN109583406B (en) Facial expression recognition method based on feature attention mechanism
Akbar et al. Training neural networks using Clonal Selection Algorithm and Particle Swarm Optimization: A comparisons for 3D object recognition
Ataman et al. Visible and infrared image fusion using encoder-decoder network
CN112560626A (en) Depth measurement learning cartoon identification method based on local and global combination
CN111209879A (en) Unsupervised 3D object identification and retrieval method based on depth circle view

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20181120

Assignee: Guangxi Yanze Information Technology Co.,Ltd.

Assignor: GUILIN University OF ELECTRONIC TECHNOLOGY

Contract record no.: X2023980046249

Denomination of invention: Fusion Method of Visible and Infrared Images Based on Hybrid Neural Networks

Granted publication date: 20210824

License type: Common License

Record date: 20231108