CN117670753B - Infrared image enhancement method based on a deep multi-brightness mapping unsupervised fusion network - Google Patents


Info

Publication number
CN117670753B
CN117670753B (Application CN202410125179.7A)
Authority
CN
China
Prior art keywords
infrared image
brightness mapping
image
fusion
network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410125179.7A
Other languages
Chinese (zh)
Other versions
CN117670753A (en)
Inventor
曹思源 (Cao Siyuan)
俞贝楠 (Yu Beinan)
沈会良 (Shen Huiliang)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jinhua Research Institute Of Zhejiang University
Original Assignee
Jinhua Research Institute Of Zhejiang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jinhua Research Institute Of Zhejiang University
Priority to CN202410125179.7A
Publication of CN117670753A
Application granted
Publication of CN117670753B
Active legal status
Anticipated expiration


Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an infrared image enhancement method based on a deep multi-brightness mapping unsupervised fusion network. The method comprises the following steps: acquiring infrared images and applying preprocessing and data augmentation to obtain augmented infrared images, thereby constructing an extended infrared image data set; establishing a deep multi-brightness mapping unsupervised fusion network comprising a nonlinear multi-brightness mapping module and a multi-brightness mapping fusion module; inputting the extended infrared image data set into the network for unsupervised training to obtain a trained network; and inputting the infrared image to be enhanced into the trained network and outputting the enhanced infrared image, completing the enhancement. The method is highly parallelizable, can process multiple infrared images of multiple sizes simultaneously, offers fast network inference, and produces natural-looking results.

Description

Infrared image enhancement method based on a deep multi-brightness mapping unsupervised fusion network
Technical Field
The invention relates to the technical field of image processing, and in particular to an infrared image enhancement method based on a deep multi-brightness mapping unsupervised fusion network.
Background
Infrared images are an important data source for multispectral perception, offering observation capabilities that visible-light images lack, such as night vision and the ability to see through fog and smoke. In practice, owing to sensor characteristics, infrared images must be enhanced to achieve a better visual appearance. Existing enhancement methods mostly adopt traditional image brightness mapping algorithms, which lack good perception and exploitation of the global information of the image. Meanwhile, as edge computing devices become increasingly intelligent, image signal processing (ISP, Image Signal Processor) algorithms based on deep learning are gradually showing advantages in both quality and speed, yet related algorithm research remains limited.
Mainstream infrared enhancement algorithms currently include brightness mapping methods, histogram equalization, and truncated (percentile) stretching.
A brightness mapping method uses a pre-designed mapping curve to control the brightness variation of an image within the range 0-1. Common brightness mappings include the gamma transform and various hand-crafted mapping curves such as S-curves. However, purely hand-designed mappings ignore the global information and brightness distribution of the image.
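As a minimal sketch of such hand-designed curves (the specific parameter values below are illustrative, not taken from the patent), a gamma transform and a logistic S-curve on a [0, 1] image can be written as:

```python
import numpy as np

def gamma_map(img, gamma=0.5):
    # img assumed normalized to [0, 1]; gamma < 1 brightens dark regions
    return np.clip(img, 0.0, 1.0) ** gamma

def s_curve(img, k=10.0, m=0.5):
    # logistic S-curve centered at m; boosts mid-tone contrast
    y = 1.0 / (1.0 + np.exp(-k * (img - m)))
    # rescale so the curve still maps 0 -> 0 and 1 -> 1
    lo = 1.0 / (1.0 + np.exp(k * m))
    hi = 1.0 / (1.0 + np.exp(-k * (1.0 - m)))
    return (y - lo) / (hi - lo)
```

Both curves are fixed in advance, which is exactly the limitation noted above: they adapt neither to the global content nor to the brightness histogram of the input.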
Histogram equalization enhances an image by converting its histogram into an approximately uniform distribution, so that local contrast can be enhanced without affecting the overall contrast. However, it uses no statistical information beyond the processed data itself: it may amplify image noise, may reduce the contrast of the useful signal, and may lead to over-enhancement.
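A compact sketch of the standard technique (bin count chosen for illustration) maps each pixel through the empirical cumulative distribution function:

```python
import numpy as np

def hist_equalize(img, bins=256):
    # img normalized to [0, 1]; map each pixel through the empirical CDF
    # so the output histogram is approximately uniform
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    cdf = hist.cumsum().astype(np.float64)
    cdf /= cdf[-1]                                  # normalize CDF to [0, 1]
    idx = np.minimum((img * bins).astype(int), bins - 1)
    return cdf[idx]
```

Note that the mapping depends only on the histogram of the input itself, which is why noise pixels are redistributed (and possibly amplified) just like signal pixels.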
Truncated stretching selects the brightness at fixed low and high percentiles of the image and then linearly stretches the pixel values within that brightness range, thereby enhancing the image. Such methods are robust to dead pixels, but the linear stretch lacks flexibility and may perform poorly on low-contrast infrared images.
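A sketch of percentile-based truncated stretching (the 1%/99% cut-offs are an illustrative choice, not the patent's):

```python
import numpy as np

def truncated_stretch(img, low_pct=1.0, high_pct=99.0):
    # select the brightness at fixed low/high percentiles (robust to
    # dead pixels), then linearly stretch that range to [0, 1]
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:                        # flat image: nothing to stretch
        return np.zeros_like(img, dtype=np.float64)
    return np.clip((img - lo) / (hi - lo), 0.0, 1.0)
```

The clipping at the percentiles is what provides robustness to dead pixels; the purely linear mapping in between is the inflexibility criticized above.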
Disclosure of Invention
In order to solve the problems in the background art, the invention provides an infrared image enhancement method based on a deep multi-brightness mapping unsupervised fusion network.
The technical scheme adopted by the invention is as follows:
The infrared image enhancement method based on a deep multi-brightness mapping unsupervised fusion network of the invention comprises the following steps:
1) Collecting a plurality of original infrared images, sequentially applying image preprocessing and data augmentation to each original infrared image to obtain a plurality of augmented infrared images, and constructing an extended infrared image data set from the augmented infrared images together with the original infrared images.
2) Establishing a deep multi-brightness mapping unsupervised fusion network comprising a nonlinear multi-brightness mapping module and a multi-brightness mapping fusion module connected in sequence, and inputting the extended infrared image data set into the network for unsupervised training to obtain the trained network.
3) Inputting the infrared images to be enhanced into the trained deep multi-brightness mapping unsupervised fusion network for processing, and outputting each enhanced infrared image, completing the enhancement of the infrared images.
In step 1), each original infrared image is preprocessed as follows: the infrared image is first subjected to two-point correction to obtain a corrected infrared image; the corrected infrared image is then linearly stretched so that its brightness range is mapped to between 0 and 1, finally yielding the preprocessed original infrared image.
Two-point correction of a single infrared image compensates for the non-uniformity of the infrared detector's image background and of the response of each detector element (pixel); preferably, a 5-segment two-point correction is used. Linear stretching addresses the small dynamic range of the infrared image after background subtraction: the mapped output lies between 0 and 1, the dynamic range is stretched, and further processing is facilitated.
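The preprocessing steps can be sketched as follows. This is a minimal single-segment two-point correction plus linear stretch; the patent's preferred 5-segment variant (piecewise correction over brightness segments) is not reproduced, and the reference-frame interface is an assumption:

```python
import numpy as np

def two_point_correct(raw, low_ref, high_ref, t_low=0.0, t_high=1.0):
    # Per-pixel gain/offset computed from two uniform reference frames
    # (e.g. blackbody captures at a low and a high temperature), so that
    # every detector element responds identically to the two references.
    gain = (t_high - t_low) / np.maximum(high_ref - low_ref, 1e-6)
    offset = t_low - gain * low_ref
    return gain * raw + offset

def linear_stretch(img, eps=1e-6):
    # map the corrected frame's full dynamic range to [0, 1]
    lo, hi = img.min(), img.max()
    return (img - lo) / max(hi - lo, eps)
```

After these two steps every training image lies in [0, 1] with per-pixel response non-uniformity removed, which is the form the network below expects.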
In the step 1), data enhancement amplification processing is performed on each preprocessed infrared image, wherein the data enhancement amplification processing comprises brightness mapping conversion, noise intensity conversion, non-uniform shielding, image rotation, image scaling and the like, and each preprocessed infrared image is processed in one or more processing modes in the data enhancement amplification processing to obtain an enhanced infrared image.
These data augmentation operations enrich the image scenes and contents, ensuring that the enhancement network generalizes to infrared images captured under various conditions. Brightness mapping is performed by randomly transforming the image brightness; noise intensity transformation is realized by randomly adding Poisson-Gaussian noise of varying intensity; non-uniform occlusion is realized by randomly selecting image regions and setting them to zero; image rotation and scaling are performed by the corresponding spatial transformations.
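Two of the augmentations can be sketched as below; the noise parameters and occlusion size are illustrative assumptions, and the Poisson component is approximated by a signal-dependent Gaussian, as is common practice:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_poisson_gaussian_noise(img, a=0.01, b=0.0005):
    # signal-dependent (Poisson-like, variance a*img) plus additive
    # Gaussian (variance b) noise, approximated as heteroscedastic Gaussian
    sigma = np.sqrt(a * np.clip(img, 0.0, 1.0) + b)
    return np.clip(img + rng.normal(0.0, 1.0, img.shape) * sigma, 0.0, 1.0)

def random_occlusion(img, frac=0.25):
    # zero out a random rectangular region (non-uniform occlusion)
    h, w = img.shape
    ph, pw = max(1, int(h * frac)), max(1, int(w * frac))
    y = rng.integers(0, h - ph + 1)
    x = rng.integers(0, w - pw + 1)
    out = img.copy()
    out[y:y + ph, x:x + pw] = 0.0
    return out
```

Rotation and scaling would be added with any standard spatial-transform routine; they are omitted here to keep the sketch short.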
In step 2), the nonlinear multi-brightness mapping module comprises N max-pooling layers and one global pooling layer connected in sequence. Each infrared image in the extended data set is input into the nonlinear multi-brightness mapping module, which outputs N predicted nonlinear brightness mapping parameters, each in the range 0-1. Mapping with the nonlinear brightness mapping method according to each predicted parameter finally yields N nonlinearly brightness-mapped infrared images, so that different degrees of enhancement can be applied to multiple regions of interest.
Adaptive nonlinear multi-brightness mapping of the normalized infrared image improves the saliency of image content at different brightness levels. To accomplish this adaptively, a suitable set of nonlinear brightness mapping intermediate results is constructed according to the infrared image content, further improving the final enhancement effect. The nonlinear multi-brightness mapping module progressively aggregates global information.
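The mapping stage can be sketched as below. The patent does not spell out the fixed functional form of the curve in this text, so the gamma family with exponent 2^(2α−1) (spanning roughly [0.5, 2] as α runs over [0, 1]) is an illustrative stand-in:

```python
import numpy as np

def multi_luminance_maps(img, alphas):
    # One fixed-form nonlinear curve per predicted parameter alpha in [0, 1].
    # The gamma family 2**(2*alpha - 1) is an illustrative choice standing
    # in for the patent's unspecified fixed-form mapping.
    img = np.clip(img, 0.0, 1.0)
    return [img ** (2.0 ** (2.0 * a - 1.0)) for a in alphas]
```

With this family, α = 0.5 leaves the image unchanged, smaller α brightens dark regions, and larger α darkens them, giving the fusion module N complementary views of the same scene.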
In step 2), the multi-brightness mapping fusion module comprises an image feature extractor, an image feature fuser, and an image feature decoder connected in sequence. For the N nonlinearly brightness-mapped infrared images obtained from each infrared image in the extended data set, the image feature extractor processes them with a densely connected twin (Siamese) network to obtain N encoded features; the image feature fuser first concatenates the N encoded features along the channel dimension and then feeds the concatenated features into a fully connected layer for feature fusion and dimensionality reduction, yielding fused features; finally, the image feature decoder decodes the fused features with a fully convolutional neural network to obtain the final enhanced infrared image.
The multi-brightness mapping fusion module fuses the several nonlinearly brightness-mapped infrared images to obtain an enhancement with natural appearance and clear contrast; the optimal number of nonlinear mappings is N = 4. The densely connected twin network comprises four convolution layers connected in sequence.
The fully convolutional neural network comprises four convolution layers connected in sequence. Its input first passes through the first three convolution layers in order; the output of the first convolution layer and the output of the third are combined by a residual connection and fed into the fourth convolution layer; the result is then combined with the network input by another residual connection to form the network output.
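The residual wiring just described can be sketched with placeholder layer callables (the actual convolution kernels are supplied elsewhere; only the connection pattern is shown):

```python
def fully_conv_decode(x, conv):
    # conv(i, t): the i-th convolution layer, passed in as a callable;
    # this sketch shows only the residual wiring described above
    h1 = conv(1, x)                 # first conv layer
    h3 = conv(3, conv(2, h1))       # second and third conv layers
    h4 = conv(4, h1 + h3)           # residual: layer-1 + layer-3 outputs
    return x + h4                   # residual: network input + layer-4 output
```

The outer residual connection means the decoder learns a correction to its input rather than the full image, which generally stabilizes training of such refinement heads.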
In step 2), unsupervised training of the deep multi-brightness mapping unsupervised fusion network is constrained by a loss function, specifically:
L = λ1·L1 + λ2·L2 + λ3·L3 + λ4·L4
where L is the total loss of the deep multi-brightness mapping unsupervised fusion network; L1 is the consistency measure between the original infrared image input to the network and the enhanced infrared image it outputs; L2, L3, and L4 are the perceptual loss, image-contrast loss, and image-brightness loss, respectively; and λ1, λ2, λ3, and λ4 are the first, second, third, and fourth loss weights.
The unsupervised loss function serves two purposes: first, it constrains the enhanced content to remain as consistent as possible with the original, ensuring the image content is not distorted; second, it measures the quality of the generated infrared image, directly judging and thereby optimizing the enhancement. The loss therefore comprises a consistency measure and evaluation indices. The invention uses the consistency measure, specifically a local normalized cross-correlation, to constrain the enhanced infrared image content and avoid distortion; the perceptual loss, obtained from a VGG-16 (Visual Geometry Group, 16-layer) network, additionally supervises semantic consistency. The evaluation indices guide the network's unsupervised enhancement and mainly supervise image contrast and image brightness: the image contrast is obtained via a Laplacian filter, and the mean image brightness should be as close as possible to 0.5.
During unsupervised training, the network parameters are iteratively updated by gradient descent; after setting an initial learning rate, the learning rate is decayed every fixed number of epochs, and updating stops when a preset number of iterations is reached, yielding the optimal network parameters. Preferably λ1 = 0.5, λ2 = 0.75, λ3 = 0.75, λ4 = 1. The total loss of the network finally decreases and stabilizes, completing the unsupervised training and yielding the trained deep multi-brightness mapping unsupervised fusion network.
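The learning-rate schedule described above can be sketched as a step decay; the decay interval and factor are illustrative assumptions, since the patent only states that the rate is decayed every fixed number of epochs:

```python
def step_decay_lr(initial_lr, epoch, decay_every=30, gamma=0.1):
    # decay the learning rate by a fixed factor every `decay_every` epochs
    return initial_lr * (gamma ** (epoch // decay_every))
```

An optimizer would query this schedule once per epoch, e.g. `lr = step_decay_lr(1e-3, epoch)` before each parameter update pass.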
The beneficial effects of the invention are as follows:
1. The invention preprocesses the infrared image with two-point correction and linear stretching, which compensates for the non-uniformity of the infrared detector's image background and per-element response and resolves the small dynamic range after background subtraction; the stretched dynamic range facilitates subsequent processing.
2. The nonlinear multi-brightness mapping module performs adaptive nonlinear multi-brightness mapping on the normalized infrared image, improving the saliency of image content at different brightness levels.
3. The multi-brightness mapping fusion module fuses the several nonlinearly brightness-mapped infrared images, yielding an enhancement with natural appearance and clear contrast.
4. The network in the method is fully convolutional, so it can process inputs of multiple sizes, and the two modules can be trained and run inference cooperatively. Moreover, thanks to the high parallelism of neural networks, the network can process several input images simultaneously, with fast inference and natural results.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a schematic diagram of the nonlinear multi-brightness mapping module;
FIG. 3 is a schematic diagram of the image feature extractor in the multi-brightness mapping fusion module;
FIG. 4 is a schematic diagram of the image feature fuser in the multi-brightness mapping fusion module;
FIG. 5 is a schematic diagram of the image feature decoder in the multi-brightness mapping fusion module;
FIG. 6 compares the method of the present invention with existing state-of-the-art infrared image enhancement methods.
Detailed Description
The invention will be described in further detail with reference to the accompanying drawings and specific examples.
As shown in FIG. 1, the infrared image enhancement method based on the deep multi-brightness mapping unsupervised fusion network of the present invention comprises:
1) Collecting a plurality of original infrared images, sequentially applying image preprocessing and data augmentation to each original infrared image to obtain a plurality of augmented infrared images, and constructing an extended infrared image data set from the augmented infrared images together with the original infrared images.
In step 1), each original infrared image is preprocessed as follows: the infrared image is first subjected to two-point correction to obtain a corrected infrared image; the corrected infrared image is then linearly stretched so that its brightness range is mapped to between 0 and 1, finally yielding the preprocessed original infrared image.
Two-point correction of a single infrared image compensates for the non-uniformity of the infrared detector's image background and of the response of each detector element (pixel); preferably, a 5-segment two-point correction is used. Linear stretching addresses the small dynamic range of the infrared image after background subtraction: the mapped output lies between 0 and 1, the dynamic range is stretched, and further processing is facilitated.
In the step 1), data enhancement amplification processing is carried out on each preprocessed infrared image, wherein the data enhancement amplification processing comprises brightness mapping conversion, noise intensity conversion, non-uniform shielding, image rotation, image scaling and the like, and each preprocessed infrared image is processed in one or more processing modes in the data enhancement amplification processing to obtain an enhanced infrared image.
These data augmentation operations enrich the image scenes and contents, ensuring that the enhancement network generalizes to infrared images captured under various conditions. Brightness mapping is performed by randomly transforming the image brightness; noise intensity transformation is realized by randomly adding Poisson-Gaussian noise of varying intensity; non-uniform occlusion is realized by randomly selecting image regions and setting them to zero; image rotation and scaling are performed by the corresponding spatial transformations.
2) Establishing a deep multi-brightness mapping unsupervised fusion network comprising a nonlinear multi-brightness mapping module and a multi-brightness mapping fusion module connected in sequence, and inputting the extended infrared image data set into the network for unsupervised training to obtain the trained network.
As shown in FIG. 2, in step 2) the nonlinear multi-brightness mapping module comprises N max-pooling layers and one global pooling layer connected in sequence. Each infrared image in the extended data set is input into the module, which outputs N predicted nonlinear brightness mapping parameters α1, α2, …, αN, each in the range 0-1. Mapping with the nonlinear brightness mapping method according to each predicted parameter finally yields N nonlinearly brightness-mapped infrared images, so that different degrees of enhancement can be applied to multiple regions of interest.
Adaptive nonlinear multi-brightness mapping of the normalized infrared image improves the saliency of image content at different brightness levels. To accomplish this adaptively, a suitable set of nonlinear brightness mapping intermediate results is constructed according to the infrared image content, further improving the final enhancement effect. The nonlinear multi-brightness mapping module progressively aggregates global image information using a multi-layer convolutional neural network together with max-pooling operations; the network simultaneously outputs several parameter predictions in the range 0-1 and applies them to the input image in a fixed functional form, so that different parts of the image are enhanced via the neural network's attention mechanism.
As shown in FIGS. 3, 4 and 5, in step 2) the multi-brightness mapping fusion module comprises an image feature extractor, an image feature fuser, and an image feature decoder connected in sequence. For the N nonlinearly brightness-mapped infrared images obtained from each infrared image in the extended data set, the image feature extractor processes them with a densely connected twin (Siamese) network to obtain N encoded features; the image feature fuser first concatenates the N encoded features along the channel dimension and then feeds the concatenated features into a fully connected layer for feature fusion and dimensionality reduction, yielding fused features; finally, the image feature decoder decodes the fused features with a fully convolutional neural network to obtain the final enhanced infrared image.
The image feature extractor is a densely connected convolutional neural network that extracts features from the multi-brightness-mapped image results; preferably, the number of convolution layers is set to 5. The image feature fuser concatenates the features of the different mapped images along the channel dimension and uses a fully connected layer to fuse and reduce the dimensionality of the feature at each spatial position. The image feature decoder decodes the features using a network with skip connections.
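The per-position fusion step can be sketched as below: a fully connected layer applied independently at every spatial position is equivalent to a 1x1 convolution over the concatenated channels. The weight shapes are assumptions for illustration:

```python
import numpy as np

def fuse_encoded_features(feats, weight, bias):
    # feats: N encoded feature maps, each shaped (C, H, W).
    # Concatenate along channels, then apply a fully connected layer at
    # every spatial position (equivalently a 1x1 convolution) to fuse
    # and reduce the feature dimension; weight: (C_out, N*C), bias: (C_out,)
    cat = np.concatenate(feats, axis=0)          # (N*C, H, W)
    nc, h, w = cat.shape
    flat = cat.reshape(nc, h * w)                # one column per position
    fused = weight @ flat + bias[:, None]        # (C_out, H*W)
    return fused.reshape(-1, h, w)
```

Because the same weights act at every position, the fuser keeps the module fully convolutional and thus compatible with arbitrary input sizes.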
The multi-brightness mapping fusion module fuses the several nonlinearly brightness-mapped infrared images to obtain an enhancement with natural appearance and clear contrast; the optimal number of nonlinear mappings is N = 4. The densely connected twin network comprises four convolution layers connected in sequence.
The fully convolutional neural network comprises four convolution layers connected in sequence. Its input first passes through the first three convolution layers in order; the output of the first convolution layer and the output of the third are combined by a residual connection and fed into the fourth convolution layer; the result is then combined with the network input by another residual connection to form the network output.
In step 2), unsupervised training of the deep multi-brightness mapping unsupervised fusion network is constrained by a loss function, specifically:
L = λ1·L1 + λ2·L2 + λ3·L3 + λ4·L4
where L is the total loss of the deep multi-brightness mapping unsupervised fusion network; L1 is the consistency measure between the original infrared image input to the network and the enhanced infrared image it outputs; L2, L3, and L4 are the perceptual loss, image-contrast loss, and image-brightness loss, respectively; and λ1, λ2, λ3, and λ4 are the first, second, third, and fourth loss weights.
The unsupervised loss function serves two purposes: first, it constrains the enhanced content to remain as consistent as possible with the original, ensuring the image content is not distorted; second, it measures the quality of the generated infrared image, directly judging and thereby optimizing the enhancement. The loss therefore comprises a consistency measure and evaluation indices. The invention uses the consistency measure, specifically a local normalized cross-correlation, to constrain the enhanced infrared image content and avoid distortion; the perceptual loss, obtained from a VGG-16 (Visual Geometry Group, 16-layer) network, additionally supervises semantic consistency. The evaluation indices guide the network's unsupervised enhancement and mainly supervise image contrast and image brightness: the image contrast is obtained via a Laplacian filter, and the mean image brightness should be as close as possible to 0.5.
During unsupervised training, the network parameters are iteratively updated by gradient descent; after setting an initial learning rate, the learning rate is decayed every fixed number of epochs, and updating stops when a preset number of iterations is reached, yielding the optimal network parameters. Preferably λ1 = 0.5, λ2 = 0.75, λ3 = 0.75, λ4 = 1. The total loss of the network finally decreases and stabilizes, completing the unsupervised training and yielding the trained deep multi-brightness mapping unsupervised fusion network.
3) Inputting the infrared images to be enhanced into the trained deep multi-brightness mapping unsupervised fusion network for processing, and outputting each enhanced infrared image, completing the enhancement of the infrared images.
Specific embodiments of the invention are as follows:
Two infrared images to be enhanced are acquired with an infrared camera and input into the trained deep multi-brightness mapping unsupervised fusion network for processing. The number N of max-pooling layers in the nonlinear multi-brightness mapping module is set to 4; the number of convolution layers in the image feature extractor of the multi-brightness mapping fusion module is set to 5; and the first through fourth loss weights in unsupervised training are set to λ1 = 0.5, λ2 = 0.75, λ3 = 0.75, λ4 = 1. After processing by the trained network, the two enhanced infrared images are output, completing the enhancement of the infrared images.
FIG. 6 compares the method of the present invention with existing state-of-the-art infrared image enhancement methods; it can be seen that the method of the present invention achieves a better enhancement effect.
The above description covers only embodiments of the present invention and should not be construed as limiting its scope; equivalent changes that would be apparent to those skilled in the art based on the present invention fall within the scope of the present invention.

Claims (5)

1. An infrared image enhancement method based on a deep multi-brightness mapping unsupervised fusion network, characterized by comprising the following steps:
1) Collecting a plurality of original infrared images, sequentially carrying out image preprocessing and data enhancement amplification on each original infrared image to obtain a plurality of enhanced infrared images, and constructing each enhanced infrared image and the original infrared image together into an extended infrared image data set;
2) Establishing a depth multi-brightness mapping non-supervision fusion network, wherein the depth multi-brightness mapping non-supervision fusion network comprises a nonlinear multi-brightness mapping module and a multi-brightness mapping fusion module which are sequentially connected; inputting the extended infrared image data set into a depth multi-brightness mapping non-supervision fusion network to perform non-supervision training, and obtaining a depth multi-brightness mapping non-supervision fusion network after training is completed;
3) Inputting a plurality of infrared images to be enhanced into a trained depth multi-brightness mapping non-supervision fusion network, processing the trained depth multi-brightness mapping non-supervision fusion network, and outputting enhanced infrared images to complete the enhancement of the infrared images;
In the step 2), the nonlinear multi-brightness mapping module comprises N largest pooling layers and a global pooling layer which are sequentially connected; inputting the infrared image into a nonlinear multi-brightness mapping module for processing aiming at each infrared image in the expanded infrared image data set, outputting N predicted nonlinear brightness mapping parameters after the processing of the nonlinear multi-brightness mapping module, and finally obtaining N infrared images after nonlinear brightness mapping after mapping by using a nonlinear brightness mapping method according to each predicted nonlinear brightness mapping parameter;
In step 2), the multi-brightness mapping fusion module comprises an image feature extractor, an image feature fuser, and an image feature decoder connected in sequence; for the N nonlinearly brightness-mapped infrared images obtained by processing each infrared image in the extended infrared image data set through the nonlinear multi-brightness mapping module, the image feature extractor processes the N mapped images with a densely connected Siamese (shared-weight) network to obtain N encoded features; the image feature fuser first concatenates the N encoded features along the channel dimension to obtain a concatenated feature, then inputs the concatenated feature into a fully connected layer for feature fusion and dimensionality reduction to obtain a fused feature; finally, the image feature decoder decodes the fused feature with a fully convolutional neural network to obtain the final enhanced infrared image.
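The channel-wise concatenation and fully connected fusion step can be sketched as follows. A per-pixel fully connected layer over the channel axis is equivalent to a 1x1 convolution; the encoder that produces `feats` is left abstract here, and the weight shapes are assumptions for illustration.

```python
import numpy as np

def fuse_encoded_features(feats, w, b):
    # feats: N encoder outputs, each (C, H, W), produced by one shared-weight
    # (Siamese) encoder applied to the N brightness-mapped images.
    # Channel-dimension concatenation followed by a per-pixel fully connected
    # layer (a 1x1 convolution) fuses the features and reduces the channels.
    cat = np.concatenate(feats, axis=0)           # (N*C, H, W)
    fused = np.einsum('oc,chw->ohw', w, cat)      # w: (C_out, N*C)
    return fused + b[:, None, None]               # (C_out, H, W)
```

Because the fusion acts per pixel, the module works for any spatial size H x W, which is consistent with the stated ability to process images of multiple sizes.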
2. The infrared image enhancement method based on the depth multi-brightness mapping unsupervised fusion network according to claim 1, wherein: in step 1), each original infrared image is preprocessed as follows: a corrected infrared image is obtained by two-point correction of the infrared image, the corrected infrared image is then linearly stretched so that its brightness range is mapped to [0, 1], and the preprocessed original infrared image is finally obtained.
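The preprocessing of claim 2 can be sketched as below. Two-point correction conventionally estimates a per-pixel gain and offset from two uniform (blackbody) reference frames to remove fixed-pattern non-uniformity; the reference frames and target values here are assumptions, since the claim does not specify the calibration procedure.

```python
import numpy as np

def two_point_correction(raw, low_ref, high_ref, low_val=0.0, high_val=1.0):
    # per-pixel gain/offset estimated from two uniform reference frames,
    # removing fixed-pattern non-uniformity; calibration values are assumed
    gain = (high_val - low_val) / np.maximum(high_ref - low_ref, 1e-6)
    offset = low_val - gain * low_ref
    return gain * raw + offset

def linear_stretch(img, eps=1e-6):
    # linearly map the brightness range of the corrected image to [0, 1]
    lo, hi = float(img.min()), float(img.max())
    return (img - lo) / max(hi - lo, eps)
```

After `linear_stretch`, the minimum pixel maps to 0 and the maximum to 1, matching the normalized range assumed by the rest of the pipeline.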
3. The infrared image enhancement method based on the depth multi-brightness mapping unsupervised fusion network according to claim 1, wherein: in step 1), data-enhancement augmentation is performed on each preprocessed infrared image, comprising brightness mapping transformation, noise intensity transformation, non-uniform occlusion, image rotation, and image scaling; each preprocessed infrared image is processed by one or more of these transformations to obtain an enhanced infrared image.
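A minimal sketch of the five listed augmentations, assuming a single-channel image normalized to [0, 1]. The application probabilities and parameter ranges are illustrative assumptions; the claim names the transformations but fixes none of their parameters.

```python
import numpy as np

def augment(img, rng):
    # apply one or more of the five claimed transformations; all probabilities
    # and ranges below are assumptions, img assumed single-channel in [0, 1]
    out = img.copy()
    if rng.random() < 0.5:                                    # brightness mapping
        out = out ** rng.uniform(0.5, 2.0)
    if rng.random() < 0.5:                                    # noise intensity
        out = np.clip(out + rng.normal(0.0, 0.02, out.shape), 0.0, 1.0)
    if rng.random() < 0.5:                                    # non-uniform occlusion
        h, w = out.shape
        y, x = rng.integers(0, h // 2), rng.integers(0, w // 2)
        out[y:y + h // 4, x:x + w // 4] *= rng.uniform(0.2, 0.8)
    if rng.random() < 0.5:                                    # rotation (90-degree steps)
        out = np.rot90(out, k=int(rng.integers(1, 4)))
    if rng.random() < 0.5:                                    # nearest-neighbor scaling
        s = float(rng.choice([0.5, 2.0]))
        idx = (np.arange(int(out.shape[0] * s)) / s).astype(int)
        jdx = (np.arange(int(out.shape[1] * s)) / s).astype(int)
        out = out[np.ix_(idx, jdx)]
    return out
```

Each call may produce a differently sized or rotated image, which expands the data set while keeping the brightness values inside [0, 1].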
4. The infrared image enhancement method based on the depth multi-brightness mapping unsupervised fusion network according to claim 1, wherein: the fully convolutional neural network comprises four convolution layers connected in sequence; the input of the network is first passed sequentially through the first three convolution layers, the output of the first convolution layer and the output of the third convolution layer are combined through a residual connection and input into the fourth convolution layer, and the output of the fourth convolution layer is combined with the network input through another residual connection to form the output of the fully convolutional neural network.
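The residual wiring of claim 4 can be written out explicitly. The naive convolution below is only for illustration, and the ReLU activations between layers are an assumption, since the claim specifies the topology but not the activations or kernel sizes.

```python
import numpy as np

def conv3x3(x, w):
    # naive 3x3 convolution, zero padding, stride 1
    # x: (C_in, H, W), w: (C_out, C_in, 3, 3)
    c_out = w.shape[0]
    h, wd = x.shape[1], x.shape[2]
    xp = np.pad(x, ((0, 0), (1, 1), (1, 1)))
    out = np.zeros((c_out, h, wd))
    for i in range(3):
        for j in range(3):
            out += np.einsum('oc,chw->ohw', w[:, :, i, j], xp[:, i:i + h, j:j + wd])
    return out

def decoder_forward(x, w1, w2, w3, w4):
    # residual wiring of claim 4: input -> conv1 -> conv2 -> conv3;
    # (conv1 output + conv3 output) -> conv4; conv4 output + input -> output
    relu = lambda t: np.maximum(t, 0.0)
    y1 = relu(conv3x3(x, w1))
    y2 = relu(conv3x3(y1, w2))
    y3 = relu(conv3x3(y2, w3))
    y4 = conv3x3(y1 + y3, w4)
    return y4 + x
```

The final skip from input to output means the network learns an additive residual on top of the fused feature, a common design for image restoration decoders.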
5. The infrared image enhancement method based on the depth multi-brightness mapping unsupervised fusion network according to claim 1, wherein: in step 2), the unsupervised training of the depth multi-brightness mapping unsupervised fusion network is constrained by a loss function, which is specifically:
L = λ1·L1 + λ2·L2 + λ3·L3 + λ4·L4
wherein L denotes the total loss of the depth multi-brightness mapping unsupervised fusion network; L1 denotes a consistency metric between the original infrared image input to the network and the enhanced infrared image output after processing; L2, L3, and L4 denote the perceptual loss, the image contrast loss, and the image brightness loss, respectively; λ1, λ2, λ3, and λ4 denote the first, second, third, and fourth weight parameters of the loss function, respectively;
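The weighted sum above is straightforward to implement. The claim does not fix the form of L1 or the weight values, so the mean absolute difference and the λ values below are placeholders.

```python
import numpy as np

def consistency_loss(original, enhanced):
    # L1: one plausible consistency metric (mean absolute difference);
    # the claim does not fix its exact form
    return float(np.mean(np.abs(original - enhanced)))

def total_loss(l1, l2, l3, l4, lambdas=(1.0, 0.5, 0.5, 0.5)):
    # L = lambda1*L1 + lambda2*L2 + lambda3*L3 + lambda4*L4
    # the default weight values are placeholders, not taken from the patent
    return sum(lam * l for lam, l in zip(lambdas, (l1, l2, l3, l4)))
```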
During the unsupervised training, the parameters of the depth multi-brightness mapping unsupervised fusion network are iteratively updated by gradient descent, and the updating stops after a preset number of iterations to obtain the optimal network parameters; when the total loss of the network has decreased and stabilized, the unsupervised training is complete and the trained depth multi-brightness mapping unsupervised fusion network is obtained.
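The training procedure reduces to plain gradient descent with a fixed iteration budget, sketched here on a single scalar parameter; real training would update all network weights via backpropagation, and the learning rate is an assumption.

```python
def gradient_descent(param, grad_fn, lr=0.1, max_iters=1000):
    # plain gradient descent; stops after a preset number of iterations,
    # mirroring the claimed stopping rule (loss decreases and stabilizes)
    losses = []
    for _ in range(max_iters):
        loss, grad = grad_fn(param)
        param -= lr * grad
        losses.append(loss)
    return param, losses
```

For example, minimizing (p - 3)^2 with `grad_fn = lambda p: ((p - 3.0) ** 2, 2.0 * (p - 3.0))` drives `param` toward 3 as the loss curve flattens.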
CN202410125179.7A 2024-01-30 2024-01-30 Infrared image enhancement method based on depth multi-brightness mapping non-supervision fusion network Active CN117670753B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410125179.7A CN117670753B (en) 2024-01-30 2024-01-30 Infrared image enhancement method based on depth multi-brightness mapping non-supervision fusion network

Publications (2)

Publication Number Publication Date
CN117670753A CN117670753A (en) 2024-03-08
CN117670753B true CN117670753B (en) 2024-06-18

Family

ID=90079210


Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102095443B1 (en) * 2019-10-17 2020-05-26 엘아이지넥스원 주식회사 Method and Apparatus for Enhancing Image using Structural Tensor Based on Deep Learning
CN112927162A (en) * 2021-03-17 2021-06-08 长春理工大学 Low-illumination image oriented enhancement method and system
CN113140011B (en) * 2021-05-18 2022-09-06 烟台艾睿光电科技有限公司 Infrared thermal imaging monocular vision distance measurement method and related components
CN113838104B (en) * 2021-08-04 2023-10-27 浙江大学 Registration method based on multispectral and multimodal image consistency enhancement network
CN113902625A (en) * 2021-08-19 2022-01-07 深圳市朗驰欣创科技股份有限公司 Infrared image enhancement method based on deep learning
CN116977188A (en) * 2022-04-15 2023-10-31 西南科技大学 Infrared image enhancement method based on depth full convolution neural network
CN115393225A (en) * 2022-09-07 2022-11-25 南京邮电大学 Low-illumination image enhancement method based on multilevel feature extraction and fusion
CN116823686B (en) * 2023-04-28 2024-03-08 长春理工大学重庆研究院 Night infrared and visible light image fusion method based on image enhancement
CN117274333A (en) * 2023-09-25 2023-12-22 浙江大学 Multispectral image registration method based on multiscale depth feature map fusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A review of small-target enhancement and background suppression based on visual contrast mechanisms; Li Gang et al.; Laser & Infrared; 2023-07-20; p. 9 *
Vegetation and sky probability template generation for visible-near-infrared image fusion; Tong Can, Ying Jiacheng, Shen Huiliang; Journal of Image and Graphics; 2022-12-16; p. 14 *


Similar Documents

Publication Publication Date Title
CN109447907B (en) Single image enhancement method based on full convolution neural network
CN112614077B (en) Unsupervised low-illumination image enhancement method based on generation countermeasure network
CN104217404B (en) Haze sky video image clearness processing method and its device
CN111915526A (en) Photographing method based on brightness attention mechanism low-illumination image enhancement algorithm
CN110717868B (en) Video high dynamic range inverse tone mapping model construction and mapping method and device
CN110378845A (en) A kind of image repair method under extreme condition based on convolutional neural networks
CN111402145B (en) Self-supervision low-illumination image enhancement method based on deep learning
CN112308803B (en) Self-supervision low-illumination image enhancement and denoising method based on deep learning
CN113096029A (en) High dynamic range image generation method based on multi-branch codec neural network
CN111612722A (en) Low-illumination image processing method based on simplified Unet full-convolution neural network
CN111681180A (en) Priori-driven deep learning image defogging method
Shutova et al. NTIRE 2023 challenge on night photography rendering
CN116757986A (en) Infrared and visible light image fusion method and device
CN111768326A (en) High-capacity data protection method based on GAN amplification image foreground object
CN113379861B (en) Color low-light-level image reconstruction method based on color recovery block
CN117670753B (en) Infrared image enhancement method based on depth multi-brightness mapping non-supervision fusion network
CN117611467A (en) Low-light image enhancement method capable of balancing details and brightness of different areas simultaneously
CN117391987A (en) Dim light image processing method based on multi-stage joint enhancement mechanism
CN115829868B (en) Underwater dim light image enhancement method based on illumination and noise residual image
CN116309171A (en) Method and device for enhancing monitoring image of power transmission line
CN116229081A (en) Unmanned aerial vehicle panoramic image denoising method based on attention mechanism
CN114549343A (en) Defogging method based on dual-branch residual error feature fusion
CN113222828A (en) Zero-reference-based image enhancement method for industrial Internet of things monitoring platform
Chen et al. Multiple channel adjustment based on composite backbone network for underwater image enhancement
CN116797490B (en) Lightweight turbid water body image enhancement method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant