CN113450297A - Fusion model construction method and system for infrared image and visible light image - Google Patents

Fusion model construction method and system for infrared image and visible light image

Info

Publication number
CN113450297A
CN113450297A
Authority
CN
China
Prior art keywords: image, fused, discriminator, generator, visible light
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202110830253.1A
Other languages
Chinese (zh)
Inventor
王晶晶 (Wang Jingjing)
任金雯 (Ren Jinwen)
张立人 (Zhang Liren)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong Aowande Information Technology Co ltd
Original Assignee
Shandong Aowande Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong Aowande Information Technology Co ltd
Priority to CN202110830253.1A priority Critical patent/CN113450297A/en
Publication of CN113450297A publication Critical patent/CN113450297A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/50 Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10048 Infrared image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10052 Images from lightfield camera

Abstract

The invention provides a method and a system for constructing a fusion model of an infrared image and a visible light image. The model comprises a generator and two discriminators. The generator uses a gradient path to extract detail information from the images and an intensity path to extract their contrast features; the feature maps obtained from the two paths are fused by concatenation to obtain a fused image. Two discriminators then process the fused image separately: one takes the infrared image as real data and discriminates the contrast information of the fused image against it, while the other takes the visible light image as real data and discriminates the detail information of the fused image. The fusion model is constructed through continuous adversarial training between the generator and the discriminators. The fused image obtained by the disclosed scheme is more consistent with human visual perception and contains richer detail information and more salient contrast features.

Description

Fusion model construction method and system for infrared image and visible light image
Technical Field
The invention belongs to the technical field of image processing, and particularly relates to a method and a system for constructing a fusion model of an infrared image and a visible light image.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
Image fusion is an enhanced technique that fuses images from different types of sensors to produce a comprehensive, informative image for subsequent processing. In particular, fusion techniques of infrared and visible light images have been widely used to enhance human visual perception, target detection, and target recognition.
For the same application scene, image sensors in different bands may reflect different scene information, and image fusion across spectral bands is one of the research hotspots in computer vision and image processing. Infrared images have the advantage of significant target-to-background contrast but contain little texture detail. Visible light images have the advantage of rich texture detail information, but their contrast is not significant and they are strongly influenced by the external environment: lighting conditions, fog, occlusion and the like can severely degrade image quality. Fusing infrared and visible light images provides more complementary information and better supports human observation and computer vision analysis. A good fusion result therefore has both clear contrast information and rich texture detail information, and can describe objects more clearly, completely and accurately; this is why image fusion techniques play such an important role.
The key to image fusion is to extract the most important feature information from source images collected by different imaging devices and combine it into a single fused image. The fused image can thus provide a more comprehensive and detailed scene representation while reducing redundant information. Over the past few decades, many fusion methods have therefore been proposed. According to the fusion scheme, they can be classified into multi-scale transform-based methods, sparse representation-based methods, subspace-based methods, saliency-based methods, hybrid methods, and others. These methods rely on manually designed feature extraction and fusion rules to achieve better fusion performance; however, due to the diversity of feature extraction and fusion rule designs, such methods have become increasingly complex.
In recent years, deep learning has been widely applied to image fusion with good results. Generative adversarial networks have also been applied to fusion: deep-learning-based methods exploit the strong nonlinear fitting capability of neural networks so that the fused image follows an ideal distribution, but many shortcomings remain.
In the process of implementing the invention, the inventor finds that the following technical problems exist in the prior art:
1. the deep learning method is often used only for feature extraction in image fusion, while the overall framework still follows the traditional pipeline;
2. the traditional GAN uses only one discriminator for adversarial training, which retains the information of one source image but loses a large amount of information from the other, so the final fusion result suffers from problems such as unclear images and incompletely retained information.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a method for fusing an infrared image and a visible light image that can obtain a fused image with salient contrast and rich texture detail information.
In order to achieve the above object, one or more embodiments of the present invention provide the following technical solutions:
In a first aspect, a method for constructing a fusion model of an infrared image and a visible light image is disclosed. The model comprises a generator and discriminators, and the method comprises:
the generator adopting a gradient path to extract the detail information of the images and an intensity path to extract their contrast features;
fusing the feature maps obtained from the two paths by concatenation to obtain a fused image;
processing the fused image with two discriminators separately, one discriminator taking the infrared image as real data to discriminate the contrast information of the fused image against the infrared image, and the other taking the visible light image as real data to discriminate the detail information of the fused image;
through continuous adversarial training between the generator and the discriminators, continuously improving the quality of the fused image obtained by the generator until the discriminators cannot distinguish the fused image from the real images, thereby completing construction of the fusion model.
In a further technical scheme, each information extraction path of the generator uses a multi-layer convolutional neural network, with an attention module added after each layer.
In a further technical scheme, the generator loss function is:

$$L_G = L_{DDGAN_{SE}}(G) + \lambda L_{content}$$

where $L_G$ denotes the total loss of the generator, $\lambda$ balances $L_{DDGAN_{SE}}(G)$ and $L_{content}$, $L_{DDGAN_{SE}}(G)$ denotes the adversarial loss between the generator $G$ and the discriminators $D_i$ and $D_v$, and $L_{content}$ consists of two parts, an intensity loss and a gradient loss.
In a further technical scheme, the discriminator loss functions are as follows:

$$L_{D_v} = \frac{1}{N}\sum_{n=1}^{N}\left(D_v(I_v)-b_v\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D_v(I_{fused})-b\right)^2$$

$$L_{D_i} = \frac{1}{N}\sum_{n=1}^{N}\left(D_i(I_i)-b_i\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D_i(I_{fused})-b\right)^2$$

where $b$, $b_i$ and $b_v$ denote the labels of the fused image $I_{fused}$, the infrared image $I_i$ and the visible light image $I_v$ respectively, $D_v(I_v)$ and $D_v(I_{fused})$ denote the classification results of discriminator $D_v$ on the visible light image and the fused image, and $D_i(I_i)$ and $D_i(I_{fused})$ denote the classification results of discriminator $D_i$ on the infrared image and the fused image.
In a second aspect, a method for fusing an infrared image and a visible light image is disclosed, which comprises:
obtaining a trained fusion model by using the above-described method;
and fusing the infrared image and the visible light image by using the trained fusion model and outputting a fusion result.
In a third aspect, a system for constructing a fusion model of an infrared image and a visible light image is disclosed. The model comprises a generator and discriminators, and the system comprises:
a generator building module configured to: have the generator adopt a gradient path to extract the detail information of the images and an intensity path to extract their contrast features, and fuse the feature maps obtained from the two paths by concatenation to obtain a fused image;
a discriminator construction module configured to: process the fused image with two discriminators separately, one discriminator taking the infrared image as real data to discriminate the contrast information of the fused image against the infrared image, and the other taking the visible light image as real data to discriminate the detail information of the fused image;
a model building module configured to: through continuous adversarial training between the generator and the discriminators, continuously improve the quality of the fused image obtained by the generator until the discriminators cannot distinguish the fused image from the real images, completing construction of the fusion model.
The above one or more technical solutions have the following beneficial effects:
the invention relates to an algorithm of a dual-discriminator countermeasure network based on an attention mechanism, which is a fusion method aiming at an infrared image and a visible light image.
The invention relates to a fusion method for generating an infrared image and a visible light image of an anti-network based on an attention mechanism, and the fusion image obtained by the scheme disclosed by the invention is more in line with human visual perception, can contain more detailed information and obvious contrast characteristics, and is beneficial to application in aspects of target detection, target identification and the like.
According to the invention, the attention module is added in the generator, so that the information of the source image is more effectively utilized, and the loss of the information is reduced.
The method adopts the generation countermeasure network of the double discriminators, so that the fused image not only has the advantage of high contrast of the infrared image, but also has the advantage of rich detail information of the visible light image, and the fused image is more accurate and clear.
Advantages of additional aspects of the invention will be set forth in part in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, are included to provide a further understanding of the invention; they illustrate exemplary embodiments of the invention and together with the description serve to explain the invention without limiting it.
FIG. 1 is the overall architecture of the attention-mechanism-based dual-discriminator adversarial network;
FIG. 2 is the generator network architecture of the attention-mechanism-based dual-discriminator adversarial network;
FIG. 3 is the discriminator network architecture of the attention-mechanism-based dual-discriminator adversarial network;
FIG. 4 shows the results of fusing infrared and visible light images from the TNO dataset using the present invention.
Detailed Description
It is to be understood that the following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of exemplary embodiments according to the invention.
The embodiments and features of the embodiments of the present invention may be combined with each other without conflict.
Example one
The embodiment aims to provide a method for constructing a fusion model of an infrared image and a visible light image, which mainly comprises the following steps:
Firstly, a generator based on a convolutional neural network with an attention mechanism splits into two paths to extract information from the infrared image and the visible light image respectively.
Secondly, the extracted feature maps are fused by concatenation to obtain an initial fused image containing the feature information of both source images.
Then, the dual discriminators distinguish the initial fused image from the source images.
Finally, continuous adversarial training between the discriminators and the generator drives the generator to produce fused images of higher quality; training stops when the discriminators can no longer distinguish the source images from the fused image, at which point the fused image obtained by the generator is the final fusion result (a training-loop sketch is given below).
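To make this alternating scheme concrete, here is a minimal PyTorch-style training-loop sketch. It is an illustration under assumptions, not the patented implementation: the names generator_loss and discriminator_loss refer to the loss sketches given later in this description, the optimizer choice (Adam) and learning rate are placeholders, and the data loader is assumed to yield aligned infrared/visible pairs.

```python
import torch

def train(generator, d_ir, d_vis, loader, epochs=10, lr=1e-4, device="cpu"):
    # Two optimizers: one for the generator, one shared by both discriminators.
    opt_g = torch.optim.Adam(generator.parameters(), lr=lr)
    opt_d = torch.optim.Adam(
        list(d_ir.parameters()) + list(d_vis.parameters()), lr=lr)
    for _ in range(epochs):
        for ir, vis in loader:  # aligned infrared / visible light pairs
            ir, vis = ir.to(device), vis.to(device)
            fused = generator(ir, vis)

            # Discriminator step: real source images vs. (detached) fused images.
            opt_d.zero_grad()
            discriminator_loss(d_ir, d_vis, ir, vis, fused.detach()).backward()
            opt_d.step()

            # Generator step: fool both discriminators and match source content.
            opt_g.zero_grad()
            generator_loss(d_ir, d_vis, fused, ir, vis).backward()
            opt_g.step()
```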
The TNO dataset is selected for the implementation case. The main points of the invention are: (1) adversarial learning with a dual-discriminator adversarial network; (2) an attention mechanism added to the generator on top of the convolutional neural network.
The overall architecture of the attention-mechanism-based dual-discriminator adversarial network provided by the invention is shown in Fig. 1. Constructing the fusion model involves building a generator and two discriminators, with the following specific steps.
the generator respectively extracts the characteristics of the visible light image and the infrared image by adopting two path gradient paths and an intensity path, then the images obtained by the two paths are fused by a cascade method, and information is further merged by convolution with the step length of 1 and the convolution kernel size of 1x1 and a Tanh activation function, wherein the structure of the generator is shown in figure 2.
The gradient path of the generator mainly extracts detail information; the invention takes one infrared image and two visible light images together as its input. The intensity path mainly extracts the contrast information of the images and takes one visible light image and two infrared images together as its input. Each information extraction path of the generator uses a four-layer convolutional neural network: the first and second layers use 5x5 convolutions, the third and fourth layers use 3x3 convolutions, the stride in each layer is set to 1, and each layer is followed by batch normalization and a Leaky ReLU activation function. An attention module is added after each layer; it learns the correlation among channels to further improve the performance of the generator. In addition, the adversarial loss $L_{DDGAN_{SE}}(G)$ of the generator is further constrained by adding $L_{content}$, as follows:
$$L_G = L_{DDGAN_{SE}}(G) + \lambda L_{content}$$

where $L_G$ denotes the total loss of the generator and $\lambda$ is a regulating parameter balancing $L_{DDGAN_{SE}}(G)$ and $L_{content}$; $L_{DDGAN_{SE}}(G)$ denotes the adversarial loss between the generator $G$ and the discriminators $D_i$ and $D_v$, defined as follows:
$$L_{DDGAN_{SE}}(G) = \frac{1}{N}\sum_{n=1}^{N}\left(D_i(I_{fused}^{\,n})-a_1\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D_v(I_{fused}^{\,n})-a_2\right)^2$$

where $I_{fused}^{\,n}$ denotes the $n$-th fused image, $N$ denotes the number of fused images, and $a_1$, $a_2$ are the labels against which the output probabilities of the discriminators on the fused images are compared; since the generator expects the discriminators to be unable to distinguish the fused image from the real data, $a_1$ and $a_2$ are both set to 1. $L_{content}$ consists of two parts, an intensity loss and a gradient loss: the intensity loss constrains the contrast between the target object and the background, while the gradient loss makes the fused image richer in texture information.
Specifically, $L_{content}$ is defined as follows:

$$L_{content} = \frac{1}{HW}\left(\varepsilon_1 L_{int} + \varepsilon_2 L_{grad}\right)$$

where $H$ and $W$ denote the height and width of the input image respectively, and $\varepsilon_1$, $\varepsilon_2$ are constants adjusting the balance between the terms, the former typically being smaller than the latter.
In the intensity path and the gradient path of the generator, each path defines its own loss function, and training proceeds until the loss is minimized.
The intensity loss is defined as:

$$L_{int}^{ir} = \frac{1}{HW}\left\|I_{fused} - I_{ir}\right\|_F^2$$

$$L_{int}^{vis} = \frac{1}{HW}\left\|I_{fused} - I_{vis}\right\|_F^2$$

The gradient loss is defined as:

$$L_{grad}^{vis} = \frac{1}{HW}\left\|\nabla I_{fused} - \nabla I_{vis}\right\|_F^2$$

$$L_{grad}^{ir} = \frac{1}{HW}\left\|\nabla I_{fused} - \nabla I_{ir}\right\|_F^2$$

where $I_{fused}$ is the fused image, $I_{ir}$ is the infrared image, $I_{vis}$ is the visible light source image, $\|\cdot\|_F$ denotes the Frobenius norm of a matrix, and $\nabla$ denotes the gradient operator.
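Under the loss forms written above, a PyTorch sketch of the generator-side losses might look as follows; the Sobel kernels standing in for the gradient operator, the single-channel input assumption, and the weights lambda_, eps1 and eps2 are illustrative choices, not values fixed by the patent.

```python
import torch
import torch.nn.functional as F

def gradient(img):
    """Gradient magnitude of a (B, 1, H, W) image via Sobel filtering (assumed operator)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx, gy = F.conv2d(img, kx, padding=1), F.conv2d(img, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def generator_loss(d_ir, d_vis, fused, ir, vis, lambda_=0.5, eps1=1.0, eps2=5.0):
    # Adversarial part: both discriminators should output label 1 on fused images.
    adv = ((d_ir(fused) - 1) ** 2).mean() + ((d_vis(fused) - 1) ** 2).mean()
    # Intensity loss: mean squared (Frobenius) distance to the source intensities.
    l_int = F.mse_loss(fused, ir) + F.mse_loss(fused, vis)
    # Gradient loss: mean squared distance between image gradients.
    l_grad = (F.mse_loss(gradient(fused), gradient(vis)) +
              F.mse_loss(gradient(fused), gradient(ir)))
    content = eps1 * l_int + eps2 * l_grad  # eps1 < eps2, as in the description
    return adv + lambda_ * content
```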
The fused image obtained by the generator serves as the input of the discriminators, and two discriminators are used for adversarial training. Each discriminator consists of four convolutional layers with 3x3 kernels and the Leaky ReLU activation function, with batch normalization added in the last three convolutional layers. The stride is set to 2 in all convolutional layers, and a final linear layer discriminates the input according to the features extracted by the four convolutional layers and outputs a classification probability, as shown in Fig. 3. One discriminator takes the infrared image as real data and extracts the contrast information of the background and the target object; the other takes the visible light image as real data and extracts the detail information of the whole picture. Through continuous adversarial learning, training stops when the discriminators can no longer distinguish the fused image from the real images.
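A PyTorch sketch of one such discriminator follows; the channel widths and the global-average-pooling reduction before the final linear layer are assumptions made so the sketch works for arbitrary input sizes.

```python
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    def __init__(self, widths=(16, 32, 64, 128)):
        super().__init__()
        layers, in_ch = [], 1
        for idx, out_ch in enumerate(widths):
            # Four 3x3 convolutions, all with stride 2.
            layers.append(nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1))
            if idx > 0:  # batch normalization in the last three conv layers
                layers.append(nn.BatchNorm2d(out_ch))
            layers.append(nn.LeakyReLU(0.2))
            in_ch = out_ch
        self.features = nn.Sequential(*layers)
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(widths[-1], 1), nn.Sigmoid())  # classification probability
    def forward(self, x):
        return self.classifier(self.features(x))
```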
The loss functions determine the parameters of the network: through training, the parameters that minimize the loss are selected as the final parameters. The loss comprises two parts, the loss function of the generator and the loss functions of the discriminators.
The loss functions of the discriminators are as follows:

$$L_{D_v} = \frac{1}{N}\sum_{n=1}^{N}\left(D_v(I_v)-b_v\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D_v(I_{fused})-b\right)^2$$

$$L_{D_i} = \frac{1}{N}\sum_{n=1}^{N}\left(D_i(I_i)-b_i\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D_i(I_{fused})-b\right)^2$$

where $b$, $b_i$ and $b_v$ denote the labels of the fused image $I_{fused}$, the infrared image $I_i$ and the visible light image $I_v$ respectively, $D_v(I_v)$ and $D_v(I_{fused})$ denote the classification results of the visible light image and the fused image, and $D_i(I_i)$ and $D_i(I_{fused})$ denote the classification results of the infrared image and the fused image.
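A sketch of these least-squares losses, assuming the common label convention $b_i = b_v = 1$ for real source images and $b = 0$ for fused images (the patent defines the labels but does not fix their values here):

```python
def discriminator_loss(d_ir, d_vis, ir, vis, fused, b_real=1.0, b_fused=0.0):
    # Pass fused.detach() when updating only the discriminators.
    loss_d_i = ((d_ir(ir) - b_real) ** 2).mean() + \
               ((d_ir(fused) - b_fused) ** 2).mean()
    loss_d_v = ((d_vis(vis) - b_real) ** 2).mean() + \
               ((d_vis(fused) - b_fused) ** 2).mean()
    return loss_d_i + loss_d_v
```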
To verify the advantages of the method for fusing infrared and visible light images, a large number of fusion experiments were carried out on the TNO dataset: the infrared and visible light images in the TNO dataset are input into the generator and fused to obtain an initial fused image; the initial fused image is then sent to the discriminators for discrimination, and training stops once the discriminators cannot distinguish the fused image from the source images.
The fusion results of the present invention on the TNO dataset are shown in Fig. 4, where the first row shows the visible light images, the second row the infrared images, and the third row the fusion results of the present invention. As can be seen from Fig. 4, the fusion method based on the attention-mechanism dual-discriminator adversarial network established by the invention achieves good results for infrared and visible light image fusion: the fused images contain rich detail information as well as salient contrast information. The method is effective, provides a better way to obtain accurate fused images, and has practical applicability.
Example two
It is an object of this embodiment to provide a computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the above method when executing the program.
Example three
An object of the present embodiment is to provide a computer-readable storage medium.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the above-mentioned method.
Example four
The object of this embodiment is to provide a system for constructing a fusion model of an infrared image and a visible light image, where the model comprises a generator and discriminators, and the system comprises:
a generator building module configured to: have the generator adopt a gradient path to extract the detail information of the images and an intensity path to extract their contrast features, and fuse the feature maps obtained from the two paths by concatenation to obtain a fused image;
a discriminator construction module configured to: process the fused image with two discriminators separately, one discriminator taking the infrared image as real data to discriminate the contrast information of the fused image against the infrared image, and the other taking the visible light image as real data to discriminate the detail information of the fused image;
a model building module configured to: through continuous adversarial training between the generator and the discriminators, continuously improve the quality of the fused image obtained by the generator until the discriminators cannot distinguish the fused image from the real images, completing construction of the fusion model.
Example five
The object of this embodiment is to provide a method for fusing an infrared image and a visible light image, comprising:
obtaining a trained fusion model by using the method of the first embodiment;
and fusing the infrared image and the visible light image by using the trained fusion model and outputting a fusion result.
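As a usage illustration, here is a minimal inference sketch under the same assumptions as the earlier blocks (a trained Generator as sketched in the first embodiment, single-channel tensors scaled to [-1, 1]):

```python
import torch

@torch.no_grad()
def fuse_pair(generator, ir, vis):
    """Fuse one aligned infrared/visible light pair with the trained generator."""
    generator.eval()
    fused = generator(ir.unsqueeze(0), vis.unsqueeze(0))  # add a batch dimension
    return fused.squeeze(0).clamp(-1.0, 1.0)
```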
The steps involved in the apparatus of the above embodiment correspond to the first embodiment of the method, and the detailed description thereof can be found in the relevant description of the first embodiment. The term "computer-readable storage medium" should be taken to include a single medium or multiple media containing one or more sets of instructions; it should also be understood to include any medium that is capable of storing, encoding or carrying a set of instructions for execution by a processor and that cause the processor to perform any of the methods of the present invention.
Those skilled in the art will appreciate that the modules or steps of the present invention described above can be implemented by general-purpose computing devices: they may be realized as program code executable by a computing device and stored in a memory for execution by that device, fabricated separately as individual integrated circuit modules, or fabricated with multiple modules or steps combined into a single integrated circuit module. The present invention is not limited to any specific combination of hardware and software.
Although the embodiments of the present invention have been described with reference to the accompanying drawings, they do not limit the scope of the present invention; those skilled in the art should understand that various modifications and variations can be made on the basis of the technical solution of the invention without inventive effort.

Claims (10)

1. A method for constructing a fusion model of an infrared image and a visible light image, characterized in that the model comprises a generator and discriminators, the method comprising:
the generator adopting a gradient path to extract the detail information of the images and an intensity path to extract their contrast features;
fusing the feature maps obtained from the two paths by concatenation to obtain a fused image;
processing the fused image with two discriminators separately, one discriminator taking the infrared image as real data to discriminate the contrast information of the fused image against the infrared image, and the other taking the visible light image as real data to discriminate the detail information of the fused image;
through continuous adversarial training between the generator and the discriminators, continuously improving the quality of the fused image obtained by the generator until the discriminators cannot distinguish the fused image from the real images, thereby completing construction of the fusion model.
2. The method of claim 1, wherein each information extraction path of the generator uses a multi-layer convolutional neural network, with an attention module added after each layer.
3. The method for constructing a fusion model of an infrared image and a visible light image of claim 1, wherein the generator loss function is:

$$L_G = L_{DDGAN_{SE}}(G) + \lambda L_{content}$$

where $L_G$ denotes the total loss of the generator, $\lambda$ balances $L_{DDGAN_{SE}}(G)$ and $L_{content}$, $L_{DDGAN_{SE}}(G)$ denotes the adversarial loss between the generator $G$ and the discriminators $D_i$ and $D_v$, and $L_{content}$ consists of two parts, an intensity loss and a gradient loss.
4. The method of claim 1, wherein the discriminator loss functions are as follows:

$$L_{D_v} = \frac{1}{N}\sum_{n=1}^{N}\left(D_v(I_v)-b_v\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D_v(I_{fused})-b\right)^2$$

$$L_{D_i} = \frac{1}{N}\sum_{n=1}^{N}\left(D_i(I_i)-b_i\right)^2 + \frac{1}{N}\sum_{n=1}^{N}\left(D_i(I_{fused})-b\right)^2$$

where $b$, $b_i$ and $b_v$ denote the labels of the fused image $I_{fused}$, the infrared image $I_i$ and the visible light image $I_v$ respectively, $D_v(I_v)$ and $D_v(I_{fused})$ denote the classification results of discriminator $D_v$ on the visible light image and the fused image, and $D_i(I_i)$ and $D_i(I_{fused})$ denote the classification results of discriminator $D_i$ on the infrared image and the fused image.
5. The method of claim 3, wherein $L_{content}$ is defined as follows:

$$L_{content} = \frac{1}{HW}\left(\varepsilon_1 L_{int} + \varepsilon_2 L_{grad}\right)$$

where $H$ and $W$ denote the height and width of the input image respectively, and $\varepsilon_1$, $\varepsilon_2$ are constants adjusting the balance between the terms, the former typically being smaller than the latter.
6. The method of claim 3, wherein the intensity path and the gradient path of the generator each define a loss function, and wherein the training is performed until the loss is minimized.
7. The method for fusing the infrared image and the visible light image is characterized by comprising the following steps:
obtaining a trained fusion model using the method of any one of claims 1-6;
and fusing the infrared image and the visible light image by using the trained fusion model and outputting a fusion result.
8. A system for constructing a fusion model of an infrared image and a visible light image, characterized in that the model comprises a generator and discriminators, the system comprising:
a generator building module configured to: have the generator adopt a gradient path to extract the detail information of the images and an intensity path to extract their contrast features, and fuse the feature maps obtained from the two paths by concatenation to obtain a fused image;
a discriminator construction module configured to: process the fused image with two discriminators separately, one discriminator taking the infrared image as real data to discriminate the contrast information of the fused image against the infrared image, and the other taking the visible light image as real data to discriminate the detail information of the fused image;
a model building module configured to: through continuous adversarial training between the generator and the discriminators, continuously improve the quality of the fused image obtained by the generator until the discriminators cannot distinguish the fused image from the real images, completing construction of the fusion model.
9. A computing device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the steps of the method of any of the preceding claims 1 to 5 when executing the program.
10. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of the method of any one of the preceding claims 1 to 5.
CN202110830253.1A 2021-07-22 2021-07-22 Fusion model construction method and system for infrared image and visible light image Withdrawn CN113450297A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110830253.1A CN113450297A (en) 2021-07-22 2021-07-22 Fusion model construction method and system for infrared image and visible light image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110830253.1A CN113450297A (en) 2021-07-22 2021-07-22 Fusion model construction method and system for infrared image and visible light image

Publications (1)

Publication Number Publication Date
CN113450297A true CN113450297A (en) 2021-09-28

Family

ID=77817000

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110830253.1A Withdrawn CN113450297A (en) 2021-07-22 2021-07-22 Fusion model construction method and system for infrared image and visible light image

Country Status (1)

Country Link
CN (1) CN113450297A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113781377A (en) * 2021-11-03 2021-12-10 南京理工大学 Infrared and visible light image fusion method based on antagonism semantic guidance and perception
CN116109539A (en) * 2023-03-21 2023-05-12 智洋创新科技股份有限公司 Infrared image texture information enhancement method and system based on generation of countermeasure network



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication (application publication date: 20210928)