CN109118467A - Infrared and visible light image fusion method based on a generative adversarial network - Google Patents

Infrared and visible light image fusion method based on a generative adversarial network

Info

Publication number
CN109118467A
CN109118467A (application CN201811011172.3A)
Authority
CN
China
Prior art keywords
image
generator
layer
visible light
infrared
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811011172.3A
Other languages
Chinese (zh)
Other versions
CN109118467B (en)
Inventor
马佳义 (Ma Jiayi)
马泳 (Ma Yong)
黄珺 (Huang Jun)
梅晓光 (Mei Xiaoguang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University WHU
Original Assignee
Wuhan University WHU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University WHU filed Critical Wuhan University WHU
Priority to CN201811011172.3A priority Critical patent/CN109118467B/en
Publication of CN109118467A publication Critical patent/CN109118467A/en
Application granted granted Critical
Publication of CN109118467B publication Critical patent/CN109118467B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 - Geometric image transformations in the plane of the image
    • G06T3/14 - Transformations for image registration, e.g. adjusting or mapping for alignment of images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/10 - Image acquisition modality
    • G06T2207/10048 - Infrared image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared and visible light image fusion method based on a generative adversarial network. A generative adversarial network comprising a generator and a discriminator is established, and both the generator and the discriminator are set as convolutional neural networks, so that fusion features and fusion rules are learned automatically in the process of training the convolutional neural networks. The infrared image and the visible light image are input into the generator of the generative adversarial network simultaneously; the generator performs the image fusion operation and outputs the final fused image. The invention breaks through the limitation of manually designed fusion features and rules by learning them automatically with the generator of the generative adversarial network. Moreover, the fused image produced by the invention preserves the basic radiation information of the infrared image well while containing rich texture information; it has a good visual effect and high target saliency.

Description

Infrared and visible light image fusion method based on a generative adversarial network
Technical Field
The invention relates to the technical field of infrared image fusion, in particular to an infrared and visible light image fusion technical scheme based on a generative adversarial network.
Background
The infrared imaging system has the advantages of strong anti-interference capability, good concealment, strong atmosphere penetration capability and conspicuous targets, and is suitable for various special occasions. However, owing to characteristics of the infrared detector such as high sensitivity and large dynamic range, and to the various noise interferences of a complex working environment, infrared images typically exhibit a high background level and low contrast: the imaged scene occupies only a small part of the dynamic range of the whole infrared imaging system, so the image is poorly contrasted and blurred. In contrast, the visible light image mainly records the spectral information reflected by object surfaces and offers high image resolution and abundant texture detail, but in specific application scenarios, such as at night or in the presence of smoke, target information is difficult to discern in the visible light image. It is therefore very important to generate a fused image that combines the advantages of both the visible light image and the infrared image.
At present, mainstream infrared and visible light image fusion methods are classified, according to the theory applied, into six categories: methods based on multi-scale transformation, methods based on sparse representation, methods based on neural networks, methods based on subspaces, methods based on saliency, and hybrid methods. However, these methods adopt the same strategy to select the same kind of salient features in the infrared image and the visible light image for fusion, so that targets originally salient in the infrared image are no longer salient in the fused image. Moreover, these methods require manual design of fusion features and fusion rules, which makes new breakthroughs difficult in the field of infrared and visible light image fusion.
Disclosure of Invention
The invention aims to provide an infrared and visible light image fusion technical scheme based on a generative adversarial network which, through the adversarial mechanism of the generative adversarial network, ensures that the fused image contains richer texture information while guaranteeing the basic radiation information of the infrared image, and which achieves a good visual effect and high target saliency.
In order to achieve this purpose, the technical scheme adopted by the invention provides an infrared and visible light image fusion method based on a generative adversarial network: a generative adversarial network comprising a generator $G$ and a discriminator $D$ is established; the generator and the discriminator are set as convolutional neural networks, and automatic learning of fusion features and fusion rules is realized in the process of training the convolutional neural networks; the infrared image and the visible light image are simultaneously input into the generator of the generative adversarial network, and the generator performs the image fusion operation and outputs the final fused image.
Furthermore, the generative adversarial network comprises a generator $G$ and a discriminator $D$; the convolutional neural network of the generator $G$ has no down-sampling or up-sampling processes, and all operations are performed at the same scale.
Furthermore, the generator $G$ comprises a five-layer convolutional neural network. The first and second layers have the same structure and each consist of a convolutional layer with a 5 × 5 convolution template, a batch normalization layer, and an activation layer with the Leaky ReLU activation function. The third and fourth layers have the same structure and each consist of a convolutional layer with a 3 × 3 convolution template, a batch normalization layer, and an activation layer with the Leaky ReLU activation function. The fifth layer consists of a convolutional layer with a 1 × 1 convolution template and an activation layer with the tanh activation function; its output is the fused image.
Furthermore, the discriminator $D$ comprises a five-layer convolutional neural network. The first layer consists of a convolutional layer with a 3 × 3 convolution template, a pooling layer, and an activation layer with the Leaky ReLU activation function. The second, third, and fourth layers have the same structure and each consist of a convolutional layer with a 3 × 3 convolution template, a pooling layer, a batch normalization layer, and an activation layer with the Leaky ReLU activation function. The fifth layer is a fully connected layer that outputs the classification result for the input image, predicting whether it is a fused image or a visible light image.
Furthermore, the network parameters of the generator $G$ and of the discriminator $D$ in the generative adversarial network are trained with the following steps,
step 1, randomly selecting a plurality of matched infrared and visible light pixel block pairs from a training set, splicing each pair of pixel blocks on the dimension of an image channel to be used as the input of a generator, and passing through the generatorThen, obtaining the fusion image corresponding to the pixel block pair, and calculating the loss function of the generatorUpdating the network parameters by using an optimizer to obtain generator network parameters;
step 2, inputting the fusion image of the pixel block pair obtained in the step 1 and the corresponding visible light pixel block into a decision device for classification, and calculating a loss function of the decision deviceUpdating the network parameters by using an optimizer to obtain the network parameters of the judger;
step 3, judging iteration ending conditions, wherein the iteration ending conditions comprise that when the iteration times H reach a preset maximum iteration time I, the iteration is ended, and network parameters of a generator and a judger obtained in the last iteration process are used as final network parameters; otherwise, returning to execute the step 1.
Furthermore, in each iteration the operation of Step 2 is repeated K times before entering Step 3, where K is a preset value.
Furthermore, the loss function of the generator, $\mathcal{L}_G$, is as follows,

$$\mathcal{L}_G = \mathcal{L}_{adv} + \lambda\,\mathcal{L}_{con}$$

where $\mathcal{L}_{con}$ is the content loss obtained by comparing the fused image with the visible light and infrared images respectively, $\mathcal{L}_{adv}$ is the adversarial loss between the generator and the discriminator, and $\lambda$ is a constant;

$\mathcal{L}_{con}$ is defined as follows,

$$\mathcal{L}_{con} = \frac{1}{HW}\left(\left\|I_f - I_r\right\|_2^2 + \xi\left\|\nabla I_f - \nabla I_v\right\|_2^2\right)$$

where $H$ and $W$ are the height and width, respectively, of the images input to the generator, $\|\cdot\|_2$ denotes the two-norm, $I_f$ is the fused image output by the generator, $I_r$ is the infrared image input to the generator, $I_v$ is the visible light image input to the generator, $\nabla$ is a gradient operator, and $\xi$ is a constant; $\mathcal{L}_{adv}$ is defined as follows,

$$\mathcal{L}_{adv} = \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_f^{(n)}\right) - a\right)^2$$

where $N$ is the number of fused images input to the discriminator, $D(\cdot)$ is the decision result of the discriminator, $I_f^{(n)}$ is the $n$-th fused image input to the discriminator, and $a$ is the label corresponding to the visible light image.
Furthermore, the loss function of the discriminator, $\mathcal{L}_D$, is as follows,

$$\mathcal{L}_D = \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_v^{(n)}\right) - a\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_f^{(n)}\right) - b\right)^2$$

where $N$ is the number of fused images input to the discriminator, and the number of visible light images input to the discriminator also equals $N$; $D(\cdot)$ is the classification result of the discriminator for an input image, $I_v^{(n)}$ is the $n$-th visible light image input to the discriminator, $I_f^{(n)}$ is the $n$-th fused image input to the discriminator, $a$ is the label corresponding to the visible light image, and $b$ is the label corresponding to the fused image.
Through the adversarial mechanism of the generative adversarial network, the fusion method of the invention gives the fused image richer texture detail. The generator and the discriminator of the generative adversarial network are designed as convolutional neural networks, so that fusion features and fusion rules are learned automatically while the convolutional neural networks are trained, breaking through the limitation of manually designed features. By optimizing the designed loss function, the fused image retains the basic radiation information of the infrared image, the targets are salient, and the texture information is rich, which accords with human visual perception and facilitates target detection and recognition.
Drawings
Fig. 1 is a diagram of the network structure of the generator in the generative adversarial network according to an embodiment of the invention.
Fig. 2 is a diagram of the network structure of the discriminator in the generative adversarial network according to an embodiment of the invention.
Detailed Description
The invention is further illustrated with reference to the following figures and examples.
The invention discloses an infrared and visible light image fusion method based on a generative adversarial network. The invention covers the network structure design of the generator and the discriminator in the generative adversarial network, the design of the generator and discriminator loss functions, and the training and testing of the generative adversarial network.
The generative adversarial network comprises a generator $G$ and a discriminator $D$. The invention sets the generator and the discriminator as convolutional neural networks and realizes automatic learning of fusion features and fusion rules in the process of training the convolutional neural networks:
referring to fig. 1, in an embodiment, a generator for generating a countermeasure networkFive layers of convolution neural network; the concrete structure is that, the first layer: convolution layer with convolution template size of 5 × 5, batch normalization layer, and activation layer with activation function of Leaky Relu, wherein the output of the first layer is 256 feature graphs with the same input size; the second layer structure is the same as the first layer, and 128 characteristic graphs with the same size as the input are output; the structure of the third layer is the same as that of the first layer, the size of the convolution template is modified to be 3 multiplied by 3, and the output of the third layer is 64 characteristic graphs with the same size as the input size; the fourth layer structure is the same as the third layer structure, and the output is 32 characteristic graphs with the same size as the input; and a fifth layer: rolling mouldThe convolution layer with a plate size of 1 × 1 and the activation layer with an activation function of tanh, the output is the result of fusing the images.
Here Leaky ReLU denotes the leaky rectified linear unit, and tanh denotes the hyperbolic tangent function.
Prior-art generators in generative adversarial networks generally adopt an encoder-decoder design. Encoding is essentially a down-sampling process that extracts feature information from the image; decoding is essentially an up-sampling process that reconstructs the extracted features to the size of the original image, and commonly used generators adopt the VGG network structure. The generator of the invention has no down-sampling or up-sampling processes at all: all operations are performed at the same scale, to prevent down-sampling from losing information important for fusion. Moreover, the network does not use the existing VGG structure but is an independently designed five-layer network structure.
In specific implementations, the number of layers of the generator network can be increased appropriately while keeping the present design principle. The advantage of this structure is that the network converges faster and better during training, and, since there is no down-sampling operation, no information of the original images is lost.
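For illustration only, the five-layer generator described above can be sketched as follows. This is a minimal sketch, not the authoritative implementation: PyTorch, the Leaky ReLU slope of 0.2, and zero padding to keep the constant scale are assumptions beyond the patent text, which fixes only the template sizes, the feature-map counts, and the absence of down-sampling and up-sampling.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Five-layer fusion generator: no down-sampling or up-sampling,
    every convolution preserves the spatial size of its input."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            # Layers 1-2: 5x5 convolution + batch norm + Leaky ReLU
            nn.Conv2d(2, 256, kernel_size=5, padding=2),   # input: IR + visible channels
            nn.BatchNorm2d(256), nn.LeakyReLU(0.2),
            nn.Conv2d(256, 128, kernel_size=5, padding=2),
            nn.BatchNorm2d(128), nn.LeakyReLU(0.2),
            # Layers 3-4: 3x3 convolution + batch norm + Leaky ReLU
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.BatchNorm2d(64), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 32, kernel_size=3, padding=1),
            nn.BatchNorm2d(32), nn.LeakyReLU(0.2),
            # Layer 5: 1x1 convolution + tanh -> the fused image
            nn.Conv2d(32, 1, kernel_size=1), nn.Tanh(),
        )

    def forward(self, ir_vis):      # ir_vis: (N, 2, H, W)
        return self.net(ir_vis)     # fused:  (N, 1, H, W)
```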
Referring to fig. 2, the discriminator $D$ is a five-layer convolutional neural network with the following structure. First layer: a convolutional layer with a 3 × 3 convolution template, a pooling layer, and an activation layer with the Leaky ReLU activation function; its output is 32 feature maps at half the input size. Second layer: a convolutional layer with a 3 × 3 convolution template, a pooling layer, a batch normalization layer, and an activation layer with the Leaky ReLU activation function; its output is 64 feature maps at half the size of the layer's input. The third layer has the same structure as the second and outputs 128 feature maps at half the size of the layer's input. The fourth layer has the same structure as the third and outputs 256 feature maps at half the size of the layer's input. The fifth layer is a fully connected layer that outputs the classification result for the input image, predicting whether it is a fused image or a visible light image.
In specific implementations, the number of layers of the discriminator network can be increased appropriately under the above discriminator design principle.
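Under the same assumptions (PyTorch; max pooling as the pooling layer; 120 × 120 inputs feeding the fully connected layer; none of these specifics are fixed by the patent), the five-layer discriminator can be sketched as:

```python
class Discriminator(nn.Module):
    """Five-layer discriminator: four conv/pool stages, each halving the
    spatial size, followed by a fully connected classification layer."""
    def __init__(self, in_size=120):
        super().__init__()

        def stage(cin, cout, bn):
            layers = [nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                      nn.MaxPool2d(2)]                 # pooling halves H and W
            if bn:
                layers.append(nn.BatchNorm2d(cout))    # layers 2-4 only
            layers.append(nn.LeakyReLU(0.2))
            return layers

        self.features = nn.Sequential(
            *stage(1, 32, bn=False),    # layer 1: 32 maps at 1/2 input size
            *stage(32, 64, bn=True),    # layer 2: 64 maps at 1/4
            *stage(64, 128, bn=True),   # layer 3: 128 maps at 1/8
            *stage(128, 256, bn=True),  # layer 4: 256 maps at 1/16
        )
        feat = in_size // 16
        self.classifier = nn.Linear(256 * feat * feat, 1)  # layer 5: fused vs. visible

    def forward(self, img):             # img: (N, 1, in_size, in_size)
        return self.classifier(self.features(img).flatten(1))  # (N, 1) score
```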
The network parameters of the generator $G$ and of the discriminator $D$ in the generative adversarial network can be initialized randomly; for example, the initial parameters may satisfy a truncated normal distribution with mean 0 and variance $10^{-3}$. Training then requires the following steps:
step 1: randomly selecting N matched infrared and visible light pixel block pairs from a training set, presetting the value of N in specific implementation, wherein N is 32 in the embodiment, splicing each pair of pixel blocks on the dimension of a channel of an image to be used as the input of a generator, and passing through the generatorThen, obtaining the fused image of the corresponding pixel block pair, and calculating to generateLoss function of resultant deviceUpdating the network parameters by using an optimizer, and selecting SGD (random gradient descent) by using the optimizer during specific implementation to obtain generator network parameters;
in an embodiment, the training set is 45 pairs of registered infrared and visible light images selected from the TNO data set, and then these images are clipped to obtain 64381 pairs of pixel blocks of 120 × 120 size, and these pixel block pairs are used as training data. Because the used infrared image and the visible light image are both single-channel gray images, when each pair of pixel blocks are spliced in the dimension of the channel of the image, the infrared image can be used as a first channel, and the visible light image can be used as a second channel.
Step 2: input the fused images of the pixel block pairs obtained in Step 1, together with their corresponding visible light pixel blocks, into the discriminator for classification, calculate the discriminator loss function $\mathcal{L}_D$, and update the network parameters with an optimizer;
in specific implementation, the optimizer is further recommended to select the SGD, the processing is continuously performed for K times, the value of K can be preset in specific implementation, in the embodiment, K is preferably selected to be 2, and the step 3 is carried out after the network parameters of the judger are obtained.
The advantage of this is that, since the parameters are initialized randomly at the start of training and the discrimination capability of the discriminator is then very weak, the discriminator is optimized more times than the generator within one iteration so as to achieve the adversarial effect. Preferably, in one iteration the generator is optimized once for every two optimizations of the discriminator.
Step 3: judge the iteration termination condition. When the iteration count H reaches the maximum number of iterations I, terminate the iteration; I = 10000 is preferred in the embodiment, i.e. the iteration with H = 10000 is the last one, and the generator and discriminator parameters obtained in the last iteration are the final network parameters. When the iteration count H has not reached the maximum number of iterations I, return to Step 1.
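Putting Steps 1 to 3 together, the alternating schedule can be sketched as follows. The SGD learning rate and the sample_patch_pairs loader are hypothetical; loss_G and loss_D stand for the loss functions $\mathcal{L}_G$ and $\mathcal{L}_D$ defined below (a sketch of them follows the loss definitions):

```python
import torch.optim as optim

G, D = Generator(), Discriminator()
opt_G = optim.SGD(G.parameters(), lr=1e-3)   # learning rate is an assumption
opt_D = optim.SGD(D.parameters(), lr=1e-3)
K, I = 2, 10000                              # values from the embodiment

for H in range(1, I + 1):
    ir, vis = sample_patch_pairs(n=32)       # hypothetical loader: (32, 1, 120, 120) each
    # Step 1: one generator update on concatenated IR/visible pixel blocks
    fused = G(torch.cat([ir, vis], dim=1))
    opt_G.zero_grad()
    loss_G(fused, ir, vis, D).backward()
    opt_G.step()
    # Step 2: K discriminator updates on the fused vs. visible blocks
    for _ in range(K):
        opt_D.zero_grad()
        loss_D(D, fused.detach(), vis).backward()
        opt_D.step()
# Step 3: after I iterations, G and D hold the final parameters
```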
The loss functions comprise $\mathcal{L}_G$ and $\mathcal{L}_D$.
The generator loss function $\mathcal{L}_G$ is designed as:

$$\mathcal{L}_G = \mathcal{L}_{adv} + \lambda\,\mathcal{L}_{con}$$

where $\mathcal{L}_{con}$ is the content loss obtained by comparing the fused image with the visible light and infrared images respectively, $\mathcal{L}_{adv}$ is the adversarial loss between the generator and the discriminator, and $\lambda$ is a constant; based on experimental results, $\lambda = 100$ is preferred in specific implementations.

$\mathcal{L}_{con}$ is defined as:

$$\mathcal{L}_{con} = \frac{1}{HW}\left(\left\|I_f - I_r\right\|_2^2 + \xi\left\|\nabla I_f - \nabla I_v\right\|_2^2\right)$$

where $H$ and $W$ are the height and width, respectively, of the images input to the generator, $\|\cdot\|_2$ denotes the two-norm, $I_f$ is the fused image output by the generator, $I_r$ is the infrared image input to the generator, $I_v$ is the visible light image input to the generator, and $\nabla$ is a gradient operator; in specific implementations the Laplacian operator is chosen as $\nabla$. $\xi$ is a constant; based on experimental results, $\xi = 5$ is preferred.

$\mathcal{L}_{adv}$ is defined as:

$$\mathcal{L}_{adv} = \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_f^{(n)}\right) - a\right)^2$$

where $N$ is the number of fused images input to the discriminator ($N = 32$ in the embodiment), $D(\cdot)$ is the decision result of the discriminator, $I_f^{(n)}$ is the $n$-th fused image input to the discriminator, and $a$ is the label corresponding to the visible light image; $a = 1$ in specific implementations.
The discriminator loss function $\mathcal{L}_D$ is designed as:

$$\mathcal{L}_D = \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_v^{(n)}\right) - a\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_f^{(n)}\right) - b\right)^2$$

where $N$ is the number of fused images input to the discriminator, and the number of visible light images input to the discriminator also equals $N$; $D(\cdot)$ is the classification result of the discriminator for an input image, $I_v^{(n)}$ is the $n$-th visible light image input to the discriminator, $I_f^{(n)}$ is the $n$-th fused image input to the discriminator, $a$ is the label corresponding to the visible light image ($a = 1$ in the embodiment), and $b$ is the label corresponding to the fused image ($b = 0$ in the embodiment).
In the present invention, the term $\|I_f - I_r\|_2^2$ is designed to retain the basic radiation information of the infrared image, i.e. the gray values of the fused image should be similar to those of the infrared image. The term $\xi\|\nabla I_f - \nabla I_v\|_2^2$ is designed to retain gradient information: part of the texture detail of an image is represented by its gradients, and the gradients of the fused image are required to be similar to those of the visible light image. The adversarial loss $\mathcal{L}_{adv}$ is designed to directly quantify the similarity between the detail texture information of the fused image and that of the visible light image. When these three losses are reduced, the effect desired by the invention is achieved. In the prior art, fusion is oriented toward the visible light image: the infrared image information is defined similarly to the visible light information and represented by gradients, so the obtained fusion result essentially loses the radiation information of the infrared image, and targets that are particularly salient in the infrared image are no longer salient in the fused image. By contrast, the design of the present method retains the basic radiation information of the infrared image and carries two representations of visible light texture detail, so the obtained result preserves the basic radiation information of the infrared image, the targets are salient, and the texture information is rich, which accords with human visual perception and facilitates target detection and recognition.
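A sketch of the two loss functions under the same assumptions as the earlier code (PyTorch; the Laplacian as the gradient operator, as selected above):

```python
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel used as the gradient operator
LAPLACE = torch.tensor([[0., 1., 0.],
                        [1., -4., 1.],
                        [0., 1., 0.]]).view(1, 1, 3, 3)

def grad(img):
    return F.conv2d(img, LAPLACE.to(img.device), padding=1)

def loss_G(fused, ir, vis, D, lam=100.0, xi=5.0, a=1.0):
    """L_G = L_adv + lambda * L_con, with lambda = 100, xi = 5, a = 1."""
    Hh, W = fused.shape[-2:]
    l_con = ((fused - ir).pow(2).sum(dim=(1, 2, 3))
             + xi * (grad(fused) - grad(vis)).pow(2).sum(dim=(1, 2, 3))) / (Hh * W)
    l_adv = (D(fused).squeeze(1) - a).pow(2)     # push fused blocks toward label a
    return (l_adv + lam * l_con).mean()

def loss_D(D, fused, vis, a=1.0, b=0.0):
    """L_D: visible blocks toward label a = 1, fused blocks toward label b = 0."""
    return ((D(vis).squeeze(1) - a).pow(2).mean()
            + (D(fused).squeeze(1) - b).pow(2).mean())
```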
After training, the generator is initialized with the obtained generator network parameters, the infrared image and its corresponding visible light image are concatenated along the channel dimension and input into the generator, and the output of the generator is the final fusion result.
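At test time fusion is thus a single forward pass of the trained generator; for example (a sketch, with the checkpoint file name hypothetical):

```python
G = Generator()
G.load_state_dict(torch.load("fusion_generator.pt"))  # hypothetical checkpoint path
G.eval()
with torch.no_grad():
    # ir_image, vis_image: (1, 1, H, W) registered single-channel images
    fused_image = G(torch.cat([ir_image, vis_image], dim=1))  # (1, 1, H, W)
```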
In conventional fusion methods (non-CNN methods), fusion features and fusion rules are generally designed first, and only then can the fused image be obtained. The invention instead designs a convolutional neural network structure that takes the infrared and visible light images as input and directly outputs the fused image; training this structure can therefore be regarded as a process of automatically learning fusion features and fusion rules, with the specific features and rules hidden in the trained parameters of the convolutional neural network. In specific implementations, automatic fusion can be realized in computer software.
Through the adversarial mechanism of the generative adversarial network, the fusion method makes the fused image contain richer texture information while guaranteeing the basic radiation information of the infrared image; the fused image has a good visual effect and high target saliency.

Claims (8)

1. An infrared and visible light image fusion method based on a generative adversarial network, characterized in that: a generative adversarial network comprising a generator $G$ and a discriminator $D$ is established; by setting the generator and the discriminator as convolutional neural networks, automatic learning of fusion features and fusion rules is realized in the process of training the convolutional neural networks; the infrared image and the visible light image are simultaneously input into the generator of the generative adversarial network, and the generator performs the image fusion operation and outputs the final fused image.
2. The infrared and visible light image fusion method based on a generative adversarial network as claimed in claim 1, characterized in that: the generative adversarial network comprises a generator $G$ and a discriminator $D$, the convolutional neural network of the generator $G$ has no down-sampling or up-sampling processes, and all operations are performed at the same scale.
3. The infrared and visible light image fusion method based on a generative adversarial network as claimed in claim 2, characterized in that: the generator $G$ comprises a five-layer convolutional neural network, wherein the first and second layers have the same structure and each consist of a convolutional layer with a 5 × 5 convolution template, a batch normalization layer, and an activation layer with the Leaky ReLU activation function; the third and fourth layers have the same structure and each consist of a convolutional layer with a 3 × 3 convolution template, a batch normalization layer, and an activation layer with the Leaky ReLU activation function; the fifth layer consists of a convolutional layer with a 1 × 1 convolution template and an activation layer with the tanh activation function, and its output is the fused image.
4. The infrared and visible light image fusion method based on a generative adversarial network as claimed in claim 2, characterized in that: the discriminator $D$ comprises a five-layer convolutional neural network, wherein the first layer consists of a convolutional layer with a 3 × 3 convolution template, a pooling layer, and an activation layer with the Leaky ReLU activation function; the second, third, and fourth layers have the same structure and each consist of a convolutional layer with a 3 × 3 convolution template, a pooling layer, a batch normalization layer, and an activation layer with the Leaky ReLU activation function; the fifth layer is a fully connected layer that outputs the classification result for the input image, predicting whether it is a fused image or a visible light image.
5. The infrared and visible light image fusion method based on a generative adversarial network as claimed in claim 1, 2, 3 or 4, characterized in that: the network parameters of the generator $G$ and of the discriminator $D$ in the generative adversarial network are trained with the following steps,
step 1, randomly selecting a number of matched infrared and visible light pixel block pairs from a training set, concatenating each pair of pixel blocks along the channel dimension of the image as the input of the generator, obtaining from the generator $G$ the fused image corresponding to each pixel block pair, calculating the generator loss function $\mathcal{L}_G$, and updating the network parameters with an optimizer to obtain the generator network parameters;
step 2, inputting the fused images of the pixel block pairs obtained in step 1 and the corresponding visible light pixel blocks into the discriminator for classification, calculating the discriminator loss function $\mathcal{L}_D$, and updating the network parameters with an optimizer to obtain the discriminator network parameters;
step 3, judging the iteration termination condition, which comprises: when the iteration count H reaches the preset maximum number of iterations I, terminating the iteration and taking the generator and discriminator network parameters obtained in the last iteration as the final network parameters; otherwise, returning to step 1.
6. The infrared and visible light image fusion method based on a generative adversarial network as claimed in claim 5, characterized in that: in each iteration, the operation of step 2 is repeated K times before entering step 3, where K is a preset value.
7. The infrared and visible light image fusion method based on a generative adversarial network as claimed in claim 5, characterized in that: the loss function of the generator, $\mathcal{L}_G$, is as follows,

$$\mathcal{L}_G = \mathcal{L}_{adv} + \lambda\,\mathcal{L}_{con}$$

where $\mathcal{L}_{con}$ is the content loss obtained by comparing the fused image with the visible light and infrared images respectively, $\mathcal{L}_{adv}$ is the adversarial loss between the generator and the discriminator, and $\lambda$ is a constant;

$\mathcal{L}_{con}$ is defined as follows,

$$\mathcal{L}_{con} = \frac{1}{HW}\left(\left\|I_f - I_r\right\|_2^2 + \xi\left\|\nabla I_f - \nabla I_v\right\|_2^2\right)$$

where $H$ and $W$ are the height and width, respectively, of the images input to the generator, $\|\cdot\|_2$ denotes the two-norm, $I_f$ is the fused image output by the generator, $I_r$ is the infrared image input to the generator, $I_v$ is the visible light image input to the generator, $\nabla$ is a gradient operator, and $\xi$ is a constant; $\mathcal{L}_{adv}$ is defined as follows,

$$\mathcal{L}_{adv} = \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_f^{(n)}\right) - a\right)^2$$

where $N$ is the number of fused images input to the discriminator, $D(\cdot)$ is the decision result of the discriminator, $I_f^{(n)}$ is the $n$-th fused image input to the discriminator, and $a$ is the label corresponding to the visible light image.
8. The infrared and visible light image fusion method based on a generative adversarial network as claimed in claim 5, characterized in that: the loss function of the discriminator, $\mathcal{L}_D$, is as follows,

$$\mathcal{L}_D = \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_v^{(n)}\right) - a\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D\!\left(I_f^{(n)}\right) - b\right)^2$$

where $N$ is the number of fused images input to the discriminator, and the number of visible light images input to the discriminator also equals $N$; $D(\cdot)$ is the classification result of the discriminator for an input image, $I_v^{(n)}$ is the $n$-th visible light image input to the discriminator, $I_f^{(n)}$ is the $n$-th fused image input to the discriminator, $a$ is the label corresponding to the visible light image, and $b$ is the label corresponding to the fused image.
CN201811011172.3A 2018-08-31 2018-08-31 Infrared and visible light image fusion method based on a generative adversarial network Active CN109118467B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811011172.3A CN109118467B (en) 2018-08-31 2018-08-31 Infrared and visible light image fusion method based on a generative adversarial network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811011172.3A CN109118467B (en) 2018-08-31 2018-08-31 Infrared and visible light image fusion method based on a generative adversarial network

Publications (2)

Publication Number Publication Date
CN109118467A true CN109118467A (en) 2019-01-01
CN109118467B CN109118467B (en) 2021-11-16

Family

ID=64861764

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811011172.3A Active CN109118467B (en) 2018-08-31 2018-08-31 Infrared and visible light image fusion method based on a generative adversarial network

Country Status (1)

Country Link
CN (1) CN109118467B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945140A (en) * 2017-12-20 2018-04-20 中国科学院深圳先进技术研究院 A kind of image repair method, device and equipment
CN108090521A (en) * 2018-01-12 2018-05-29 广州视声智能科技有限公司 A kind of image interfusion method and arbiter of production confrontation network model
CN108399422A (en) * 2018-02-01 2018-08-14 华南理工大学 A kind of image channel fusion method based on WGAN models

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Xiangyu Liu et al., "PSGAN: A Generative Adversarial Network for Remote Sensing Image Pan-Sharpening", arXiv:1805.03371v1 *

Cited By (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110009569A (en) * 2019-04-17 2019-07-12 中国人民解放军陆军工程大学 Infrared and visible light image fusion method based on lightweight convolutional neural network
CN110322423A (en) * 2019-04-29 2019-10-11 天津大学 A kind of multi-modality images object detection method based on image co-registration
CN111951199A (en) * 2019-05-16 2020-11-17 武汉Tcl集团工业研究院有限公司 Image fusion method and device
CN110211046A (en) * 2019-06-03 2019-09-06 重庆邮电大学 A kind of remote sensing image fusion method, system and terminal based on generation confrontation network
CN110211046B (en) * 2019-06-03 2023-07-14 重庆邮电大学 Remote sensing image fusion method, system and terminal based on generation countermeasure network
CN112132255A (en) * 2019-06-24 2020-12-25 百度(美国)有限责任公司 Batch normalization layer fusion and quantification method for model inference in artificial intelligence neural network engine
CN112488970A (en) * 2019-09-12 2021-03-12 四川大学 Infrared and visible light image fusion method based on coupling generation countermeasure network
CN110852326A (en) * 2019-11-06 2020-02-28 贵州工程应用技术学院 Handwriting layout analysis and multi-style ancient book background fusion method
CN110852326B (en) * 2019-11-06 2022-11-04 贵州工程应用技术学院 Handwriting layout analysis and multi-style ancient book background fusion method
CN111145131A (en) * 2019-11-28 2020-05-12 中国矿业大学 Infrared and visible light image fusion method based on multi-scale generation type countermeasure network
CN111260594A (en) * 2019-12-22 2020-06-09 天津大学 Unsupervised multi-modal image fusion method
CN111260594B (en) * 2019-12-22 2023-10-31 天津大学 Unsupervised multi-mode image fusion method
CN111080155A (en) * 2019-12-24 2020-04-28 武汉大学 Air conditioner user frequency modulation capability evaluation method based on generation countermeasure network
CN111080155B (en) * 2019-12-24 2022-03-15 武汉大学 Air conditioner user frequency modulation capability evaluation method based on generation countermeasure network
CN111275692B (en) * 2020-01-26 2022-09-13 重庆邮电大学 Infrared small target detection method based on generation countermeasure network
CN111275692A (en) * 2020-01-26 2020-06-12 重庆邮电大学 Infrared small target detection method based on generation countermeasure network
CN111523401B (en) * 2020-03-31 2022-10-04 河北工业大学 Method for recognizing vehicle type
CN111523401A (en) * 2020-03-31 2020-08-11 河北工业大学 Method for recognizing vehicle type
CN111476353A (en) * 2020-04-07 2020-07-31 中国科学院重庆绿色智能技术研究院 Super-resolution method of GAN image introducing significance
CN111476353B (en) * 2020-04-07 2022-07-15 中国科学院重庆绿色智能技术研究院 Super-resolution method of GAN image introducing significance
CN111709903A (en) * 2020-05-26 2020-09-25 中国科学院长春光学精密机械与物理研究所 Infrared and visible light image fusion method
CN112001868B (en) * 2020-07-30 2024-06-11 山东师范大学 Infrared and visible light image fusion method and system based on generation of antagonism network
CN112001868A (en) * 2020-07-30 2020-11-27 山东师范大学 Infrared and visible light image fusion method and system based on generation of antagonistic network
CN112288663A (en) * 2020-09-24 2021-01-29 山东师范大学 Infrared and visible light image fusion method and system
CN112487233B (en) * 2020-11-27 2022-07-12 重庆邮电大学 Infrared and visible light image retrieval method based on feature decoupling
CN112487233A (en) * 2020-11-27 2021-03-12 重庆邮电大学 Infrared and visible light image retrieval method based on feature decoupling
CN113160286A (en) * 2021-01-06 2021-07-23 中国地质大学(武汉) Near-infrared and visible light image fusion method based on convolutional neural network
CN113128422B (en) * 2021-04-23 2024-03-29 重庆市海普软件产业有限公司 Image smoke and fire detection method and system for deep neural network
CN113128422A (en) * 2021-04-23 2021-07-16 重庆市海普软件产业有限公司 Image smoke and fire detection method and system of deep neural network
CN113343966B (en) * 2021-05-08 2022-04-29 武汉大学 Infrared and visible light image text description generation method
CN113343966A (en) * 2021-05-08 2021-09-03 武汉大学 Infrared and visible light image text description generation method
CN113327271A (en) * 2021-05-28 2021-08-31 北京理工大学重庆创新中心 Decision-level target tracking method and system based on double-optical twin network and storage medium
CN113298744A (en) * 2021-06-07 2021-08-24 长春理工大学 End-to-end infrared and visible light image fusion method
CN113222879A (en) * 2021-07-08 2021-08-06 中国工程物理研究院流体物理研究所 Generation countermeasure network for fusion of infrared and visible light images
CN113222879B (en) * 2021-07-08 2021-09-21 中国工程物理研究院流体物理研究所 Generation countermeasure network for fusion of infrared and visible light images
CN113627504A (en) * 2021-08-02 2021-11-09 南京邮电大学 Multi-mode multi-scale feature fusion target detection method based on generation of countermeasure network
CN114022401A (en) * 2021-08-12 2022-02-08 中国地质大学(武汉) Method for generating confrontation network infrared image fusion based on multi-classification constraint double discriminators
CN114022401B (en) * 2021-08-12 2024-09-06 中国地质大学(武汉) Method for generating countermeasure network infrared image fusion based on multi-classification constraint double discriminators
CN113781377A (en) * 2021-11-03 2021-12-10 南京理工大学 Infrared and visible light image fusion method based on antagonism semantic guidance and perception
CN114004775A (en) * 2021-11-30 2022-02-01 四川大学 Infrared and visible light image fusion method combining potential low-rank representation and convolutional neural network

Also Published As

Publication number Publication date
CN109118467B (en) 2021-11-16

Similar Documents

Publication Publication Date Title
CN109118467B (en) Infrared and visible light image fusion method based on a generative adversarial network
JP6980958B1 (en) Rural area classification garbage identification method based on deep learning
CN111709902B (en) Infrared and visible light image fusion method based on self-attention mechanism
CN110363716B (en) High-quality reconstruction method for generating confrontation network composite degraded image based on conditions
CN108509978B (en) Multi-class target detection method and model based on CNN (CNN) multi-level feature fusion
CN111539887B (en) Channel attention mechanism and layered learning neural network image defogging method based on mixed convolution
CN113642390B (en) Street view image semantic segmentation method based on local attention network
CN110956094A (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-current network
CN113807355A (en) Image semantic segmentation method based on coding and decoding structure
CN104217404A (en) Video image sharpness processing method in fog and haze day and device thereof
CN109903339B (en) Video group figure positioning detection method based on multi-dimensional fusion features
CN112329780B (en) Depth image semantic segmentation method based on deep learning
CN112489164A (en) Image coloring method based on improved depth separable convolutional neural network
CN112149526B (en) Lane line detection method and system based on long-distance information fusion
CN111768326A (en) High-capacity data protection method based on GAN amplification image foreground object
CN117557775A (en) Substation power equipment detection method and system based on infrared and visible light fusion
CN116206214A (en) Automatic landslide recognition method, system, equipment and medium based on lightweight convolutional neural network and double attention
CN114581789A (en) Hyperspectral image classification method and system
CN115908793A (en) Coding and decoding structure semantic segmentation model based on position attention mechanism
CN114066899A (en) Image segmentation model training method, image segmentation device, image segmentation equipment and image segmentation medium
CN116452469B (en) Image defogging processing method and device based on deep learning
CN112836755A (en) Sample image generation method and system based on deep learning
CN116309171A (en) Method and device for enhancing monitoring image of power transmission line
Wang et al. Impact of Traditional Augmentation Methods on Window State Detection
CN113780241A (en) Acceleration method and device for detecting salient object

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant