CN116109539A - Infrared image texture information enhancement method and system based on generation of countermeasure network - Google Patents

Infrared image texture information enhancement method and system based on generation of countermeasure network

Info

Publication number
CN116109539A
Authority
CN
China
Prior art keywords
image
infrared
infrared image
visible light
representing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310272214.3A
Other languages
Chinese (zh)
Inventor
赵砚青
孙英良
孔令琦
张广鑫
王硕
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhiyang Innovation Technology Co Ltd
Original Assignee
Zhiyang Innovation Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhiyang Innovation Technology Co Ltd filed Critical Zhiyang Innovation Technology Co Ltd

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G06T 5/50: Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/40: Analysis of texture
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T 10/00: Road transport of goods or passengers
    • Y02T 10/10: Internal combustion engine [ICE] based vehicles
    • Y02T 10/40: Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses an infrared image texture information enhancement method and system based on a generative adversarial network, relating mainly to the technical field of infrared image processing. The method comprises the following steps: capturing images of the same scene with an infrared camera and a visible light camera respectively to obtain an infrared image and a visible light image; fusing the acquired infrared image and visible light image to obtain a fused image; forming image pairs from the acquired infrared images and the fused images to build the data set required for training the generative adversarial network model; improving the generative adversarial network model; training the improved model; and inferring on the input infrared image with the generator network to output the final image. The invention has the advantage of genuinely improving the texture richness of infrared images.

Description

Infrared image texture information enhancement method and system based on generation of countermeasure network
Technical Field
The invention relates to the technical field of infrared image processing, and in particular to an infrared image texture information enhancement method and system based on a generative adversarial network.
Background
Infrared imaging offers strong target detection capability and strong anti-interference capability, and can detect targets well under severe weather conditions, so it is widely applied across industries. However, owing to limitations of the imaging principle, the raw image produced by an infrared detector contains little texture information, the edges of objects in the image are indistinct, and the visual quality is poor, which hinders subsequent work such as image analysis; texture information enhancement of infrared images in post-processing is therefore of great significance.
At present, traditional algorithms remain the mainstream for enhancing infrared image texture information, including sharpening the infrared image and improving its contrast and gradient, but such methods have poor robustness, significant limitations in use, and unremarkable enhancement effects.
In recent years, with the development of deep learning, computer vision techniques based on deep learning have been brought to many industries, and some of these methods have also been applied to image texture enhancement.
Patent CN 114117614 A describes a method for automatically generating building facade textures, but it enhances the texture of visible light images; patent CN 113989100 A describes an infrared texture sample expansion method based on a style-based generative adversarial network, which directly uses images shot by infrared equipment as texture-rich training images, but such images have insufficient texture richness, which greatly limits the practical effect of that method.
In summary, how to provide a reliable and efficient method for enhancing the texture information of infrared images, so as to genuinely improve their texture richness, is a problem to be solved by those skilled in the art.
Disclosure of Invention
The invention aims to provide an infrared image texture information enhancement method and system based on a generative adversarial network that can genuinely improve the texture richness of infrared images.
This aim is achieved by the following technical scheme:
An infrared image texture information enhancement method based on a generative adversarial network, comprising the following steps:
S1: capturing images of the same scene with an infrared camera and a visible light camera respectively to obtain an infrared image and a visible light image;
S2: fusing the acquired infrared image and visible light image to obtain a fused image;
S3: forming image pairs from the acquired infrared images and the fused images, and building the data set required for training the generative adversarial network model;
S4: improving the generative adversarial network model;
S5: training the improved model;
S6: inferring on the input infrared image with the generator network and outputting the final image.
Preferably, fusing the acquired infrared image and visible light image to obtain a fused image specifically comprises: fusing the infrared image and the visible light image with an image fusion method based on low-rank representation.
Preferably, fusing the infrared image and the visible light image with the low-rank-representation-based image fusion method comprises the following steps:
S21: decompose the infrared image and the visible light image each into a low-rank part and a saliency part:

X = XZ + LX + E

where X denotes the image data, Z the low-rank coefficient matrix (XZ being the low-rank part), L the saliency coefficient matrix (LX being the saliency part), and E the noise;
S22: perform weighted fusion on the low-rank parts and the saliency parts of the original images respectively; the fused low-rank part can be expressed as:

F_L(i, j) = w1 · I_L(i, j) + w2 · V_L(i, j)

where F_L(i, j) denotes the low-rank part after fusion, (i, j) indexes a pixel in the image, I denotes the infrared image, V the visible light image, I_L and V_L the low-rank parts of the infrared and visible light images, and w1 and w2 the fusion weights;
the fused saliency part can be expressed as:

F_S(i, j) = w3 · I_S(i, j) + w4 · V_S(i, j)

where F_S(i, j) denotes the saliency part after fusion, (i, j) indexes a pixel in the image, I denotes the infrared image, V the visible light image, I_S and V_S the saliency parts of the infrared and visible light images, and w3 and w4 the fusion weights;
S23: add the fused low-rank part and saliency part directly to obtain the fused image:

F(i, j) = F_L(i, j) + F_S(i, j)
Preferably, the improvement to the generative adversarial network model is specifically: adding a structural similarity loss to the loss function of the original pix2pix network to strengthen the model's constraint on the image texture structure, the structural similarity loss being defined as:

L_SSIM(G) = 1 - SSIM(y, G(x, z)),  where  SSIM(y, ŷ) = (2·μ_y·μ_ŷ + C1)(2·σ_yŷ + C2) / ((μ_y² + μ_ŷ² + C1)(σ_y² + σ_ŷ² + C2))

The improved model loss function is:

G* = arg min_G max_D L_cGAN(G, D) + λ1·L_L1(G) + λ2·L_SSIM(G)

The loss function then comprises three parts: the first is the cGAN loss, in which G is the generator and D the discriminator; the second is the L1 distance loss between the generated image and the label image; the third is the added structural similarity loss. λ1 and λ2 are the weights balancing the L1 and structural similarity losses; G(x, z) denotes the image generated by the generator, x the input infrared image with little texture information, z the noise, y the label image, μ_y the mean of the label image, μ_ŷ the mean of the generated image, σ_y the standard deviation of the label image, σ_ŷ the standard deviation of the generated image, σ_yŷ their covariance, and C1 and C2 are constants.
Preferably, training the improved model is specifically: during training, the generator produces, from the original infrared image, an infrared image with rich texture matching the fused effect, while the discriminator judges whether the image produced by the generator or the real fused image is genuine; the two are trained alternately until the discriminator cannot distinguish the generated image from the real image.
An infrared image texture information enhancement system based on a generative adversarial network comprises an image acquisition module, an image processing module and an image output module, wherein the image acquisition module is data-connected to the image processing module, and the image processing module is data-connected to the image output module.
Preferably, the image acquisition module is configured to capture images of the same scene to obtain an infrared image and a visible light image.
Preferably, the image processing module is configured to fuse the acquired infrared image and visible light image to obtain a fused image, form image pairs from the acquired infrared images and fused images, build the data set required for training the generative adversarial network model, improve the generative adversarial network model, and train the improved model.
Preferably, the image output module is configured to infer on the input infrared image with the generator network and output the final image.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention designs an infrared image texture information enhancement method based on an improved generative adversarial network, in which the data set for the generative adversarial network is constructed with a low-rank-representation-based image fusion method; specifically, texture information of the visible light image is blended into the infrared image by adjusting the fusion ratio of the low-rank and saliency parts of the infrared and visible light images, forming an infrared image dataset with rich texture information.
2. The loss function of the generative adversarial network model is improved to strengthen the model's constraint on the texture information of the generated image; specifically, on the basis of the original adversarial loss function, a structural similarity loss is added, improving the generator's constraint on image texture information and producing infrared images with richer texture.
3. The method implicitly learns the mapping between an infrared image and the corresponding infrared-visible fused image, improving the texture information of the infrared image.
Drawings
Fig. 1 is a flow chart of an infrared image texture information enhancement method.
Fig. 2 is a raw infrared image before fusion.
Fig. 3 is an original visible image before fusion.
Fig. 4 is a fused image.
Fig. 5 is an original infrared image for inference verification.
Fig. 6 is the detail-enhanced infrared image output by model inference.
Detailed Description
The invention will be further illustrated with reference to specific examples. It is to be understood that these examples are illustrative of the present invention and are not intended to limit the scope of the present invention. Further, it will be understood that various changes or modifications may be made by those skilled in the art after reading the teachings of the invention, and such equivalents are intended to fall within the scope of the invention as defined herein.
Examples: infrared image texture information enhancement method and system based on generation of countermeasure network
As shown in fig. 1, first, texture information of the visible light image is fused into the infrared image by the low-rank-representation-based image fusion method, and the fused images together with the original infrared images form a data set of image pairs; second, the generative adversarial network loss function is improved by adding a structural similarity loss to the original loss function; third, the improved generative adversarial network model is trained with the data set; finally, an infrared image is input to the trained generative adversarial network for inference, yielding an infrared image with visible-light texture detail. The detailed steps are as follows:
S1: capture images of the same scene with an infrared camera and a visible light camera respectively to obtain infrared and visible light images, 1000 pairs in total; an infrared image is shown in fig. 2 and a visible light image in fig. 3;
S2: fuse the acquired infrared image and visible light image to obtain a fused image, using the image fusion method based on low-rank representation, which comprises the following steps:
S21: decompose the infrared image and the visible light image each into a low-rank part and a saliency part:

X = XZ + LX + E

where X denotes the image data, Z the low-rank coefficient matrix (XZ being the low-rank part), L the saliency coefficient matrix (LX being the saliency part), and E the noise;
S22: perform weighted fusion on the low-rank parts and the saliency parts of the original images respectively; the fused low-rank part can be expressed as:

F_L(i, j) = w1 · I_L(i, j) + w2 · V_L(i, j)

where F_L(i, j) denotes the low-rank part after fusion, (i, j) indexes a pixel in the image, I denotes the infrared image, V the visible light image, I_L and V_L the low-rank parts of the infrared and visible light images, and w1 and w2 the fusion weights;
the fused saliency part can be expressed as:

F_S(i, j) = w3 · I_S(i, j) + w4 · V_S(i, j)

where F_S(i, j) denotes the saliency part after fusion, (i, j) indexes a pixel in the image, I denotes the infrared image, V the visible light image, I_S and V_S the saliency parts of the infrared and visible light images, and w3 and w4 the fusion weights;
S23: add the fused low-rank part and saliency part directly to obtain the fused image:

F(i, j) = F_L(i, j) + F_S(i, j)
the fused image is shown in fig. 4.
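Assuming the low-rank and saliency parts have already been extracted (the latent low-rank decomposition itself is solved separately and is not shown here), the weighted combination of steps S22 and S23 can be sketched in NumPy; the function name and the equal weight values are illustrative choices, not taken from the patent:

```python
import numpy as np

def fuse_parts(ir_lowrank, vis_lowrank, ir_sal, vis_sal,
               w1=0.5, w2=0.5, w3=0.5, w4=0.5):
    """Weighted fusion of pre-computed low-rank and saliency parts.

    The decomposition X = XZ + LX + E is assumed to have been solved
    elsewhere; this only combines the resulting parts per S22-S23.
    """
    fused_lowrank = w1 * ir_lowrank + w2 * vis_lowrank  # S22, low-rank part
    fused_sal = w3 * ir_sal + w4 * vis_sal              # S22, saliency part
    return fused_lowrank + fused_sal                    # S23, direct sum

# Toy 2x2 stand-ins for decomposed image parts
ir_lr = np.array([[1.0, 2.0], [3.0, 4.0]])
vis_lr = np.array([[3.0, 2.0], [1.0, 0.0]])
ir_s = np.zeros((2, 2))
vis_s = np.ones((2, 2))
fused = fuse_parts(ir_lr, vis_lr, ir_s, vis_s)
```

With equal weights of 0.5 this reduces to averaging the low-rank parts and the saliency parts before summing; in practice the weights would be tuned to control how much visible-light texture is blended into the infrared image.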
S3: forming an image pair by the infrared image and the fused image, and forming a data set required by training an countermeasure network model;
S4: improve the loss function of the original model, namely, add a structural similarity loss to the loss function of the original pix2pix network, which strengthens the model's constraint on the image texture structure and improves the visual quality of the infrared image output by the model. pix2pix is a classical image translation algorithm based on a generative adversarial network, whose loss function is:

G* = arg min_G max_D L_cGAN(G, D) + λ·L_L1(G)

This loss comprises two parts: the cGAN loss L_cGAN(G, D) = E_{x,y}[log D(x, y)] + E_{x,z}[log(1 - D(x, G(x, z)))], in which G is the generator and D the discriminator, and the L1 distance loss L_L1(G) = E_{x,y,z}[‖y - G(x, z)‖_1] between the generated image and the label image; λ is the weight balancing the two losses.
In order to enhance the texture information of the infrared image, the invention adds a structural similarity loss to the original loss function; this loss measures the structural similarity between the reconstructed image and the label image and facilitates the reconstruction of texture features. It is defined as follows:
L_SSIM(G) = 1 - SSIM(y, G(x, z)),  where  SSIM(y, ŷ) = (2·μ_y·μ_ŷ + C1)(2·σ_yŷ + C2) / ((μ_y² + μ_ŷ² + C1)(σ_y² + σ_ŷ² + C2))

The loss function of the model then becomes:

G* = arg min_G max_D L_cGAN(G, D) + λ1·L_L1(G) + λ2·L_SSIM(G)

The loss function now comprises three parts: the first is the cGAN loss, in which G is the generator and D the discriminator; the second is the L1 distance loss between the generated image and the label image; the third is the added structural similarity loss. λ1 and λ2 are the weights balancing the L1 and structural similarity losses; G(x, z) denotes the image generated by the generator, x the input infrared image with little texture information, z the noise, y the label image, μ_y the mean of the label image, μ_ŷ the mean of the generated image, σ_y the standard deviation of the label image, σ_ŷ the standard deviation of the generated image, σ_yŷ their covariance, and C1 and C2 are constants.
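As a concrete illustration (not code from the patent), a single-window version of the structural similarity term and its loss can be written in NumPy; the constants C1 and C2 are set to the conventional values (0.01)² and (0.03)² for images scaled to [0, 1]:

```python
import numpy as np

def ssim_global(y, y_hat, c1=0.01 ** 2, c2=0.03 ** 2):
    """Single-window SSIM between label image y and generated image y_hat,
    using the mean / standard-deviation / covariance formula above."""
    mu_y, mu_g = y.mean(), y_hat.mean()
    var_y, var_g = y.var(), y_hat.var()
    cov = ((y - mu_y) * (y_hat - mu_g)).mean()  # covariance sigma_{y,y_hat}
    return ((2 * mu_y * mu_g + c1) * (2 * cov + c2)
            / ((mu_y ** 2 + mu_g ** 2 + c1) * (var_y + var_g + c2)))

def ssim_loss(y, y_hat):
    # L_SSIM = 1 - SSIM: zero when the two images are identical
    return 1.0 - ssim_global(y, y_hat)

rng = np.random.default_rng(0)
label = rng.random((32, 32))
noisy = np.clip(label + 0.2 * rng.standard_normal((32, 32)), 0.0, 1.0)
```

Real SSIM implementations average the statistic over local windows (e.g. an 11x11 Gaussian window), which rewards local structure; the global version above only illustrates the formula used in the loss.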
S5: train the improved model. During training, the generator produces, from the original infrared image, an infrared image with rich texture matching the fused effect, while the discriminator judges whether the image produced by the generator or the real fused image is genuine; the two are trained alternately until the discriminator cannot distinguish the generated image from the real image;
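The alternating scheme described above can be illustrated with a deliberately tiny 1-D stand-in: a scalar "generator" g(z) = a·z + b and a logistic "discriminator" d(x) = sigmoid(w·x + c), updated with hand-derived gradient steps. This sketch only shows the alternating loop structure, not the actual pix2pix generator and discriminator networks:

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))

a, b = 1.0, 0.0   # "generator" parameters: g(z) = a*z + b
w, c = 0.1, 0.0   # "discriminator" parameters: d(x) = sigmoid(w*x + c)
lr = 0.05
for step in range(200):
    real = rng.normal(2.0, 0.5, 64)   # stands in for real fused images
    z = rng.normal(0.0, 1.0, 64)      # noise input
    fake = a * z + b                  # generated samples

    # Discriminator step: ascend log d(real) + log(1 - d(fake))
    d_real, d_fake = sigmoid(w * real + c), sigmoid(w * fake + c)
    w_new = w + lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c_new = c + lr * (np.mean(1 - d_real) - np.mean(d_fake))
    w, c = w_new, c_new

    # Generator step: ascend log d(fake) (non-saturating generator loss)
    d_fake = sigmoid(w * (a * z + b) + c)
    a = a + lr * np.mean((1 - d_fake) * w * z)
    b = b + lr * np.mean((1 - d_fake) * w)
```

Alternating one discriminator step with one generator step per batch mirrors S5; training stops when the discriminator's outputs on real and generated samples become indistinguishable.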
S6: infer on the input infrared image with the generator network; the generator now produces an infrared image whose rich texture information is consistent with the fused effect. When the input original infrared image is as shown in fig. 5, the detail-enhanced infrared image output by the model is as shown in fig. 6.
The infrared image texture information enhancement system based on a generative adversarial network comprises an image acquisition module, an image processing module and an image output module, the image acquisition module being data-connected to the image processing module and the image processing module being data-connected to the image output module. The image acquisition module is used to capture images of the same scene to obtain an infrared image and a visible light image. The image processing module is used to fuse the acquired infrared image and visible light image into a fused image, form image pairs from the acquired infrared images and fused images, build the data set required for training the generative adversarial network model, improve the generative adversarial network model, and train the improved model. The image output module is used to infer on the input infrared image with the generator network and output the final image.

Claims (9)

1. An infrared image texture information enhancement method based on a generative adversarial network, characterized by comprising the following steps:
S1: capturing images of the same scene with an infrared camera and a visible light camera respectively to obtain an infrared image and a visible light image;
S2: fusing the acquired infrared image and visible light image to obtain a fused image;
S3: forming image pairs from the acquired infrared images and the fused images, and building the data set required for training the generative adversarial network model;
S4: improving the generative adversarial network model;
S5: training the improved model;
S6: inferring on the input infrared image with the generator network and outputting the final image.
2. The infrared image texture information enhancement method based on a generative adversarial network according to claim 1, wherein fusing the acquired infrared image and visible light image to obtain a fused image specifically comprises: fusing the infrared image and the visible light image with an image fusion method based on low-rank representation.
3. The infrared image texture information enhancement method based on a generative adversarial network according to claim 2, wherein fusing the infrared image and the visible light image with the low-rank-representation-based image fusion method comprises the following steps:
S21: decompose the infrared image and the visible light image each into a low-rank part and a saliency part:

X = XZ + LX + E

where X denotes the image data, Z the low-rank coefficient matrix (XZ being the low-rank part), L the saliency coefficient matrix (LX being the saliency part), and E the noise;
S22: perform weighted fusion on the low-rank parts and the saliency parts of the original images respectively; the fused low-rank part can be expressed as:

F_L(i, j) = w1 · I_L(i, j) + w2 · V_L(i, j)

where F_L(i, j) denotes the low-rank part after fusion, (i, j) indexes a pixel in the image, I denotes the infrared image, V the visible light image, I_L and V_L the low-rank parts of the infrared and visible light images, and w1 and w2 the fusion weights;
the fused saliency part can be expressed as:

F_S(i, j) = w3 · I_S(i, j) + w4 · V_S(i, j)

where F_S(i, j) denotes the saliency part after fusion, (i, j) indexes a pixel in the image, I denotes the infrared image, V the visible light image, I_S and V_S the saliency parts of the infrared and visible light images, and w3 and w4 the fusion weights;
S23: add the fused low-rank part and saliency part directly to obtain the fused image:

F(i, j) = F_L(i, j) + F_S(i, j).
4. The infrared image texture information enhancement method based on a generative adversarial network according to claim 1, wherein the improvement to the generative adversarial network model is specifically: adding a structural similarity loss to the loss function of the original pix2pix network to strengthen the model's constraint on the image texture structure, the structural similarity loss being defined as:

L_SSIM(G) = 1 - SSIM(y, G(x, z)),  where  SSIM(y, ŷ) = (2·μ_y·μ_ŷ + C1)(2·σ_yŷ + C2) / ((μ_y² + μ_ŷ² + C1)(σ_y² + σ_ŷ² + C2))

The improved model loss function is:

G* = arg min_G max_D L_cGAN(G, D) + λ1·L_L1(G) + λ2·L_SSIM(G)

The loss function then comprises three parts: the first is the cGAN loss, in which G is the generator and D the discriminator; the second is the L1 distance loss between the generated image and the label image; the third is the added structural similarity loss; λ1 and λ2 are the weights balancing the L1 and structural similarity losses, G(x, z) denotes the image generated by the generator, x the input infrared image with little texture information, z the noise, y the label image, μ_y the mean of the label image, μ_ŷ the mean of the generated image, σ_y the standard deviation of the label image, σ_ŷ the standard deviation of the generated image, σ_yŷ their covariance, and C1 and C2 are constants.
5. The infrared image texture information enhancement method based on a generative adversarial network according to claim 1, wherein training the improved model is specifically: during training, the generator produces, from the original infrared image, an infrared image with rich texture matching the fused effect, while the discriminator judges whether the image produced by the generator or the real fused image is genuine; the two are trained alternately until the discriminator cannot distinguish the generated image from the real image.
6. An infrared image texture information enhancement system based on a generative adversarial network, characterized by comprising an image acquisition module, an image processing module and an image output module, wherein the image acquisition module is data-connected to the image processing module, and the image processing module is data-connected to the image output module.
7. The infrared image texture information enhancement system based on a generative adversarial network according to claim 6, wherein the image acquisition module is configured to capture images of the same scene to obtain an infrared image and a visible light image.
8. The infrared image texture information enhancement system based on a generative adversarial network according to claim 6, wherein the image processing module is configured to fuse the acquired infrared image and visible light image to obtain a fused image, form image pairs from the acquired infrared images and fused images, build the data set required for training the generative adversarial network model, improve the generative adversarial network model, and train the improved model.
9. The infrared image texture information enhancement system based on a generative adversarial network according to claim 6, wherein the image output module is configured to infer on the input infrared image with the generator network and output the final image.
CN202310272214.3A 2023-03-21 2023-03-21 Infrared image texture information enhancement method and system based on generation of countermeasure network Pending CN116109539A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310272214.3A CN116109539A (en) 2023-03-21 2023-03-21 Infrared image texture information enhancement method and system based on generation of countermeasure network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310272214.3A CN116109539A (en) 2023-03-21 2023-03-21 Infrared image texture information enhancement method and system based on generation of countermeasure network

Publications (1)

Publication Number Publication Date
CN116109539A true CN116109539A (en) 2023-05-12

Family

ID=86267436

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310272214.3A Pending CN116109539A (en) 2023-03-21 2023-03-21 Infrared image texture information enhancement method and system based on generation of countermeasure network

Country Status (1)

Country Link
CN (1) CN116109539A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109614996A (en) * 2018-11-28 2019-04-12 桂林电子科技大学 The recognition methods merged based on the weakly visible light for generating confrontation network with infrared image
CN112488970A (en) * 2019-09-12 2021-03-12 四川大学 Infrared and visible light image fusion method based on coupling generation countermeasure network
CN113192049A (en) * 2021-05-17 2021-07-30 杭州电子科技大学 Visible light and infrared image fusion method based on LatLRR and Retinex enhancement
CN113222839A (en) * 2021-05-08 2021-08-06 华北电力大学 Infrared and visible light image fusion denoising method based on generation countermeasure network
CN113450297A (en) * 2021-07-22 2021-09-28 山东澳万德信息科技有限责任公司 Fusion model construction method and system for infrared image and visible light image
CN113989100A (en) * 2021-09-18 2022-01-28 西安电子科技大学 Infrared texture sample expansion method based on pattern generation countermeasure network
CN114004775A (en) * 2021-11-30 2022-02-01 四川大学 Infrared and visible light image fusion method combining potential low-rank representation and convolutional neural network
CN114463235A (en) * 2022-01-27 2022-05-10 上海电力大学 Infrared and visible light image fusion method and device and storage medium
CN114648475A (en) * 2022-03-14 2022-06-21 泰山学院 Infrared and visible light image fusion method and system based on low-rank sparse representation
CN115358961A (en) * 2022-09-13 2022-11-18 杭州电子科技大学 Multi-focus image fusion method based on deep learning
CN115601282A (en) * 2022-11-10 2023-01-13 江苏海洋大学(Cn) Infrared and visible light image fusion method based on multi-discriminator generation countermeasure network


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
PHILLIP ISOLA ET AL.: "Image-to-Image Translation with Conditional Adversarial Networks", 2017 IEEE Conference on Computer Vision and Pattern Recognition, pages 5967-5976 *
LI JIAQI: "Research on Infrared and Visible Light Image Fusion Algorithms Based on GAN", China Master's Theses Full-text Database, pages 32-33 *
YAN MIN ET AL.: "Design of an Infrared and Visible Light Image Fusion Network Based on Light Transmission Model Learning", Computer Science, vol. 49, no. 4, pages 216-217 *

Similar Documents

Publication Publication Date Title
CN111709902B (en) Infrared and visible light image fusion method based on self-attention mechanism
CN109815893B (en) Color face image illumination domain normalization method based on cyclic generation countermeasure network
CN108537191B (en) Three-dimensional face recognition method based on structured light camera
CN110490158B (en) Robust face alignment method based on multistage model
CN114299559A (en) Finger vein identification method based on lightweight fusion global and local feature network
CN114998615B (en) Collaborative saliency detection method based on deep learning
CN116805360B Salient object detection method based on dual-stream gated progressive optimization network
CN113095158A (en) Handwriting generation method and device based on countermeasure generation network
CN108470178A Depth map saliency detection method combining a depth confidence evaluation factor
CN116310394A (en) Saliency target detection method and device
Cheng et al. Generating high-resolution climate prediction through generative adversarial network
CN117292117A (en) Small target detection method based on attention mechanism
Tian et al. Semantic segmentation of remote sensing image based on GAN and FCN network model
CN118172283A (en) Marine target image defogging method based on improved gUNet model
CN112330562B (en) Heterogeneous remote sensing image transformation method and system
CN117253277A (en) Method for detecting key points of face in complex environment by combining real and synthetic data
CN117495718A (en) Multi-scale self-adaptive remote sensing image defogging method
CN113076806A (en) Structure-enhanced semi-supervised online map generation method
CN116109539A (en) Infrared image texture information enhancement method and system based on generation of countermeasure network
CN115294371B (en) Complementary feature reliable description and matching method based on deep learning
CN114529455A (en) Task decoupling-based parameter image super-resolution method and system
CN118570600B (en) Unsupervised infrared and visible light image fusion method under divide-and-conquer loss constraint
CN115018056B (en) Training method for local description subnetwork for natural scene image matching
Li et al. Single image defogging method based on improved generative adversarial network
CN117994822B (en) Cross-mode pedestrian re-identification method based on auxiliary mode enhancement and multi-scale feature fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230512