CN115063434A - Low-light image instance segmentation method and system based on feature denoising - Google Patents

Low-light image instance segmentation method and system based on feature denoising

Info

Publication number
CN115063434A
Authority
CN
China
Prior art keywords
low
feature
image
light image
segmentation
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210516256.2A
Other languages
Chinese (zh)
Other versions
CN115063434B (en)
Inventor
付莹
陈林蔚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN202210516256.2A priority Critical patent/CN115063434B/en
Publication of CN115063434A publication Critical patent/CN115063434A/en
Application granted granted Critical
Publication of CN115063434B publication Critical patent/CN115063434B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a low-light image instance segmentation method and system based on feature denoising, belonging to the technical field of computer vision. First, a full-element physical-quantity noise model is constructed by calibrating against the characteristics of the image sensor, and a simulated low-light image dataset is synthesized. The instance segmentation deep convolutional network is then trained with a noise disturbance suppression loss function and an instance segmentation task loss function. The trained convolutional neural network performs instance segmentation on the low-light image directly, without any preprocessing. Finally, the trained detection-segmentation sub-network performs object detection and segmentation on the denoised features to obtain the final instance segmentation result. The method is simple to implement, high in performance and robust; the extra computation required to handle low light is extremely small, which helps realize low-latency, high-speed low-light instance segmentation. A high-quality simulated low-light image dataset can be synthesized from existing natural-light image datasets alone, reducing the cost of producing low-light image data.

Description

Low-light image instance segmentation method and system based on feature denoising
Technical Field
The invention relates to a method and a system for instance segmentation of images under low-light conditions, and in particular to a low-light image instance segmentation method and system based on feature denoising, belonging to the technical field of computer vision.
Background
A low-light environment is an environment with low illumination intensity, such as a scene lit at night by city lighting, moonlight or starlight. An image collected under such conditions is a low-light image. Because the light is insufficient, the image sensor collects few photons and the image signal is weak; low-light images therefore suffer from strong noise and a low signal-to-noise ratio, and scene information is severely degraded and hard to recover. A RAW image is the raw, unprocessed and uncompressed data produced when the image sensor converts the captured light signal into a digital signal. Compared with the common JPEG-format sRGB image, a RAW image retains more information and has a wider dynamic range.
Instance segmentation is a technique for extracting semantic information from an image. It identifies the category, position, size and shape of targets of interest in an image captured under natural illumination, describing them with category labels, object bounding boxes, and pixel masks or polygons. The technique is widely applied in image editing, intelligent security, autonomous driving and satellite image interpretation, and has high practical value and application potential. Mainstream instance segmentation methods are based on deep learning and deep convolutional neural networks, trained on normal-light image datasets. Because public low-light image datasets and corresponding end-to-end methods and systems are lacking, applying these methods to low-light images requires preprocessing such as denoising and enhancement. Image denoising converts a noisy image into a clean one; image enhancement converts a dark, blurred image into a brighter, clearer one. Both require complex computation, which increases the computational complexity of the whole system, and their effect on low-light images is poor, so the speed and accuracy of the final instance segmentation system can hardly meet practical requirements.
Denoising is the process of reducing the noise in a signal and improving its signal-to-noise ratio. Because low-light images carry strong noise, the deep convolutional neural network used for instance segmentation is severely disturbed when extracting features from them: features extracted in the shallow layers contain obvious high-frequency noise, and features extracted in the deep layers have a weak semantic response to the targets of interest, so the instance segmentation result cannot be extracted accurately. Adaptive feature denoising reduces the feature disturbance caused by image noise in a manner adapted to the noise level of the input features: it suppresses the high-frequency noise in shallow-layer features, retains more of the useful feature signal, and improves the semantic response of deep-layer features to the targets of interest.
Disclosure of Invention
Starting from the need for intelligent image recognition in night-time low-light scenes, and addressing the shortcomings of existing instance segmentation techniques under low-light conditions, such as high computational complexity and low performance, the invention provides a low-light image instance segmentation method and system based on feature denoising. The method realizes end-to-end, fast, high-performance instance segmentation of low-light images without complex denoising or enhancement preprocessing, reducing the computational complexity of the whole instance segmentation pipeline.
The invention is realized by adopting the following technical scheme.
A low-light image instance segmentation method based on feature denoising comprises the following steps:
Step 101: synthesize a simulated low-light image dataset.
Step 102: train the instance segmentation deep convolutional network with a noise disturbance suppression loss function and an instance segmentation task loss function.
Step 103: perform feature extraction and feature denoising on the low-light image with the trained deep convolutional neural network. No preprocessing of the low-light image is required.
Step 104: perform object detection and segmentation on the denoised features with the trained detection-segmentation sub-network to obtain the final instance segmentation result.
To achieve the above purpose, the invention further provides a low-light image instance segmentation system based on feature denoising, comprising a simulated low-light image synthesis module, a noise disturbance suppression learning module, an image feature extraction and feature denoising module, and an object detection and segmentation extraction module.
Advantageous effects
Compared with the prior art, the method and system have the following advantages:
1. The method does not depend on additional low-light image denoising or enhancement modules; training is carried out end to end, and the method is simple to implement, high in performance and robust.
2. The extra computation required to handle low light is extremely small, which helps realize low-latency, high-speed low-light instance segmentation.
3. The method reduces the cost of producing low-light image data. Data-driven deep convolutional instance segmentation needs a large amount of low-light image data, and collecting and annotating a real low-light dataset in the traditional way requires substantial manpower and material resources.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a schematic diagram of the simulated low-light image synthesis process and the noise disturbance suppression learning used during training in the method of the present invention.
FIG. 3 is a schematic diagram of details inside a down-sampling layer with adaptive feature denoising according to the method of the present invention.
FIG. 4 is a schematic diagram of a learnable low pass filtered convolution block according to the method of the present invention.
FIG. 5 is a flow chart of the system of the present invention.
Detailed Description
For a better understanding of the objects and advantages of the present invention, reference is made to the following detailed description taken in conjunction with the accompanying drawings.
A low-light image instance segmentation method based on feature denoising comprises the following steps:
Step 101: synthesize a simulated low-light image dataset.
Learning-based instance segmentation methods rely on high-quality datasets to achieve good performance, but no public low-light image dataset is currently available, and producing a real low-light dataset suitable for instance segmentation takes a long time and is expensive. The instance segmentation model is therefore trained on a synthesized, simulated low-light image dataset, which reduces the cost of producing a low-light instance segmentation dataset.
As shown in fig. 2, an ordinary natural-light RGB image is first converted into RAW image data by an inverse ISP ("unprocessing") procedure. Simulated noise injection is then used to reproduce the noise present in images captured under low-light conditions. The injected noise may be generated with a Gaussian noise model, a Gaussian-Poisson mixed noise model, a full-element physical-quantity noise model, or the like; the full-element physical-quantity noise model is preferred. That model is calibrated against the characteristics of the sensor to obtain physical noise components such as shot noise, read noise, color-cast noise and random row noise, which yields a more realistic noise simulation and helps achieve a better low-light instance segmentation result. Through these two steps, large-scale natural-illumination image data can be converted into simulated low-light image data. The proposed method is applicable to various types of low-light images (RAW images, sRGB images, etc.), RAW images being preferred.
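The two-step synthesis above can be sketched in code. This is a minimal illustration assuming the Gaussian-Poisson mixed model, one of the options named above, rather than the preferred calibrated full-element physical-quantity model; the parameters `ratio`, `shot_gain` and `read_std` are illustrative placeholders, not calibrated sensor constants.

```python
import numpy as np

def inject_low_light_noise(raw, ratio=100.0, shot_gain=0.01, read_std=0.002, rng=None):
    """Turn a clean normal-light RAW image (values in [0, 1]) into a
    simulated low-light RAW image using Gaussian-Poisson mixed noise."""
    rng = rng if rng is not None else np.random.default_rng(0)
    dark = raw / ratio                                 # darken: fewer photons reach the sensor
    shot = rng.poisson(dark / shot_gain) * shot_gain   # signal-dependent shot noise (Poisson)
    read = rng.normal(0.0, read_std, size=raw.shape)   # signal-independent read noise (Gaussian)
    return np.clip(shot + read, 0.0, 1.0).astype(np.float32)

clean = np.full((8, 8), 0.8, dtype=np.float32)   # toy single-channel RAW patch
noisy = inject_low_light_noise(clean)
```

A calibrated model would add color-cast and random row-noise terms in the same way, each drawn from a distribution fitted to the target sensor.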
Step 102: train the instance segmentation deep convolutional network with a noise disturbance suppression loss function and an instance segmentation task loss function. As shown in FIG. 2, any of the common instance segmentation models (Mask R-CNN, HTC, Cascade Mask R-CNN, YOLACT, PointRend, etc.) can be applied.
Specifically, the total loss function L(θ) is expressed as:

L(θ) = L_IS(x; θ) + α·L_IS(x′; θ) + β·L_DS(x, x′; θ)    (1)

where L_IS and L_DS denote the instance segmentation task loss function and the noise disturbance suppression loss function, respectively; x and x′ denote the simulated clean image and the simulated noisy image; θ is the model parameter; and α, β are loss weights. L_IS is adjusted according to the model used for instance segmentation; L_DS is expressed as:

L_DS(x, x′; θ) = Σ_i ‖f^(i)(x) − f^(i)(x′)‖²    (2)

where f^(i) denotes the features extracted by layer i of the instance segmentation network f. L_DS uses the clean-image features extracted from the clean image as guidance, so that the features the model extracts from the noisy image stay as close as possible to the clean-image features. This reduces the interference of the strong noise in low-light images with the network's feature extraction and helps realize robust low-light instance segmentation.
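A concrete reading of Eq. (1) and the layer-wise feature distance L_DS: the sketch below takes the distance to be a squared L2 norm between per-layer features of the clean and noisy images, which is an assumption; the exact distance and the set of layers compared depend on the chosen instance segmentation model.

```python
import numpy as np

def noise_disturbance_suppression_loss(clean_feats, noisy_feats):
    """L_DS: distance between features f_i(x) extracted from the clean image
    and f_i(x') extracted from its noisy copy, summed over layers.
    A squared L2 distance is assumed here."""
    return sum(float(np.sum((fc - fn) ** 2))
               for fc, fn in zip(clean_feats, noisy_feats))

def total_loss(l_is_clean, l_is_noisy, l_ds, alpha=1.0, beta=1.0):
    """Eq. (1): L(theta) = L_IS(x) + alpha * L_IS(x') + beta * L_DS(x, x')."""
    return l_is_clean + alpha * l_is_noisy + beta * l_ds

# Toy features from two layers of a network, for the clean and noisy inputs
f_clean = [np.zeros((2, 2)), np.ones((2, 2))]
f_noisy = [np.zeros((2, 2)), np.ones((2, 2)) + 0.5]
ds = noise_disturbance_suppression_loss(f_clean, f_noisy)   # 4 entries of 0.5**2 -> 1.0
loss = total_loss(0.7, 0.9, ds, alpha=1.0, beta=0.5)        # 0.7 + 0.9 + 0.5
```

In practice the task losses `l_is_clean` and `l_is_noisy` would come from the segmentation head (classification, box and mask terms), while L_DS is computed on intermediate feature maps.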
Throughout training, the low-light images need no preprocessing; the feature extraction network and the detection-segmentation sub-network of the instance segmentation network are trained together, end to end, in a single pass.
Step 103: perform instance segmentation on the low-light image with the trained instance segmentation deep convolutional neural network, without preprocessing the image.
An ordinary instance segmentation model can only extract features from the image; the strong noise in a low-light image introduces high-frequency noise into those features, which ultimately results in a weak response to the targets of interest.
The method first extracts features from the image. During feature extraction, an adaptive feature-denoising downsampling layer and a learnable low-pass filtering convolution block denoise the features. The denoised features greatly reduce the feature disturbance caused by the strong noise in the low-light image; the adaptive feature-denoising downsampling layer adds little extra computation, and the learnable low-pass filtering convolution block adds none, which helps realize low-latency, robust low-light image instance segmentation.
In the downsampling stage of the convolutional neural network, the adaptive feature-denoising downsampling layer takes a weighted average over local features, reducing the noise level in the features while preserving more of the target features. As shown in fig. 3, the layer adaptively predicts a different low-pass filter for each channel and each position of the feature map. The features after adaptive low-pass filtering and 2x downsampling are the denoised features:

Y_{c,i,j} = Σ_{(p,q)} W_{c,i,j}[p,q] · X_{c,2i+p,2j+q}    (3)

W_{c,i,j} = Φ(X_{Ω(c,i,j)}, GP(X))    (4)

where X and Y are the input feature map and the denoised feature map, respectively; W is the weighting predicted by the adaptive downsampling layer from the input feature map; (c, i, j) are coordinates in the channel, width and height dimensions of the feature map; s = (2i+p, 2j+q) ranges over positions around the spatial position (i, j); Φ() is the weight prediction function of the adaptive downsampling layer, which normalizes the predicted weights with a softmax to ensure the output filter is low-pass; X_{Ω(c,i,j)} denotes the local features on which the prediction of W_{c,i,j} depends; GP() is a global pooling operation; and p, q are the horizontal and vertical coordinates on the convolution kernel.
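The adaptive feature-denoising downsampling described above can be sketched numerically: for every channel and output position, a small window of the input is combined with softmax-normalized weights, so each predicted filter is low-pass (non-negative weights summing to 1). The fixed 2×2 window and the stand-in weight predictor are assumptions; in the method the weights come from the learned prediction function Φ.

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_denoise_downsample(x, predict_logits):
    """Adaptive feature-denoising 2x downsampling.

    x:  feature map of shape (C, H, W), H and W even.
    predict_logits: stand-in for the learned predictor; maps x to logits of
    shape (C, H//2, W//2, 2, 2), one 2x2 filter per channel and position.
    Softmax over each window makes the weights a convex combination,
    i.e. a position-dependent low-pass filter.
    """
    C, H, W = x.shape
    logits = predict_logits(x).reshape(C, H // 2, W // 2, 4)
    w = softmax(logits, axis=-1).reshape(C, H // 2, W // 2, 2, 2)
    # patches[c, i, j, p, q] == x[c, 2*i + p, 2*j + q]
    patches = x.reshape(C, H // 2, 2, W // 2, 2).transpose(0, 1, 3, 2, 4)
    return (w * patches).sum(axis=(-1, -2))

# With uniform logits the layer degenerates to plain 2x2 average pooling
x = np.arange(16, dtype=np.float64).reshape(1, 4, 4)
uniform = lambda t: np.zeros((1, 2, 2, 2, 2))
y = adaptive_denoise_downsample(x, uniform)
```

With a real predictor, windows dominated by noise receive flatter weights (more smoothing) while windows on edges can concentrate weight on consistent pixels, which is what lets the layer denoise without discarding target detail.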
The learnable low-pass filtering convolution block improves the robustness of an ordinary convolution layer to feature noise by explicitly adding a branch with a learnable low-pass filter. Its structure is shown in fig. 4: one branch is identical to an ordinary convolution, and the other consists of a learnable low-pass filter followed by a 1 × 1 convolution. The low-pass filter weights are learned during training and normalized by a softmax function to guarantee that the filter remains low-pass; the filter smooths the features locally to reduce their noise level, and the denoised features are merged into the ordinary convolution branch through the 1 × 1 convolution. At inference time, the learnable low-pass filtering convolution block is converted into an ordinary convolution layer by a re-parameterization technique, so no extra computation is added. Re-parameterization converts several parallel branches of a convolutional neural network into an equivalent single-branch structure by fusing their parameters. In this embodiment, the learnable low-pass filtering convolution block is re-parameterized into the parameters of a single 3 × 3 convolution layer, obtained by:

W′_{3×3}[h,t,p,q] = W_{3×3}[h,t,p,q] + W_{1×1}[h,t,1,1] · W_LLPF[1,t,p,q]    (5)

where W′_{3×3} ∈ R^{C2×C1×3×3} are the convolution layer parameters obtained by re-parameterizing the learnable low-pass filtering convolution block; C1 and C2 denote the numbers of input and output channels of the convolution layer, and R denotes the real numbers; W_{3×3} ∈ R^{C2×C1×3×3} are the parameters of the ordinary 3 × 3 convolution in the block; W_{1×1} ∈ R^{C2×C1×1×1} are the 1 × 1 convolution parameters; W_LLPF ∈ R^{1×C1×3×3} are the learnable low-pass filter parameters, the filter sharing one set of 3 × 3 weights per input channel; h and t denote the output-channel and input-channel coordinates, with h ∈ {1, 2, …, C2} and t ∈ {1, 2, …, C1}; and p, q denote the horizontal and vertical coordinates of the convolution kernel, with p, q ∈ {1, 2, 3}.
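The re-parameterization of Eq. (5) can be checked numerically: fusing the per-channel low-pass filter and the 1 × 1 convolution into the 3 × 3 kernel yields a single convolution whose output matches the two-branch block exactly. The naive convolution below exists only for verification, and random weights stand in for trained ones.

```python
import numpy as np

def conv2d(x, w):
    """Naive valid cross-correlation: x (C_in, H, W), w (C_out, C_in, k, k)."""
    co, ci, k, _ = w.shape
    H, W = x.shape[1] - k + 1, x.shape[2] - k + 1
    y = np.zeros((co, H, W))
    for o in range(co):
        for i in range(H):
            for j in range(W):
                y[o, i, j] = np.sum(w[o] * x[:, i:i + k, j:j + k])
    return y

def depthwise(x, w):
    """Per-channel 3x3 filtering with W_LLPF of shape (1, C_in, 3, 3)."""
    return np.concatenate([conv2d(x[t:t + 1], w[:, t:t + 1]) for t in range(x.shape[0])])

rng = np.random.default_rng(1)
C1, C2 = 3, 2
w3 = rng.normal(size=(C2, C1, 3, 3))               # ordinary 3x3 branch
w1 = rng.normal(size=(C2, C1, 1, 1))               # 1x1 conv after the low-pass filter
raw = rng.normal(size=(1, C1, 3, 3))
wl = np.exp(raw) / np.exp(raw).sum(axis=(2, 3), keepdims=True)  # softmax -> low-pass

x = rng.normal(size=(C1, 8, 8))
two_branch = conv2d(x, w3) + conv2d(depthwise(x, wl), w1)

# Eq. (5): W'[h,t,p,q] = W3[h,t,p,q] + W1[h,t] * W_LLPF[t,p,q]
w_merged = w3 + w1 * wl        # broadcasting fuses the two branches
merged = conv2d(x, w_merged)
```

Because the merged kernel has the same shape as the ordinary branch, inference after fusion costs exactly one 3 × 3 convolution, which is why the block adds no runtime overhead.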
Step 104: perform object detection and segmentation on the denoised features with the trained detection-segmentation sub-network to obtain the final instance segmentation result.
To achieve the purpose of the invention, the invention further provides an end-to-end low-light image instance segmentation system based on adaptive feature denoising. As shown in fig. 5, it comprises a simulated low-light image synthesis module 10, a noise disturbance suppression learning module 20, an image feature extraction and feature denoising module 30, and an object detection and segmentation extraction module 40.
The simulated low-light image synthesis module 10 establishes the simulated low-light image dataset used for training the instance segmentation model; it synthesizes the input natural-light image dataset into a simulated low-light image dataset.
The noise disturbance suppression learning module 20 guides the instance segmentation model to learn robust image features, reducing the feature disturbance caused by the strong noise in low-light images; it trains the model with noise disturbance suppression learning on the simulated low-light image dataset and outputs a trained instance segmentation model.
The image feature extraction and feature denoising module 30 reduces the noise level of the image features inside the instance segmentation network by adaptive low-pass filtering, extracting stable, clean image features from the noisy low-light image to realize robust low-light instance segmentation.
The object detection and segmentation extraction module 40 identifies and extracts the category, position, size and shape of the targets of interest from the denoised image features to obtain the final low-light image instance segmentation result.
The modules are connected as follows:
The output of the simulated low-light image synthesis module 10 is connected to the input of the noise disturbance suppression learning module 20.
The output of the noise disturbance suppression learning module 20 is connected to the input of the image feature extraction and feature denoising module 30.
The output of the image feature extraction and feature denoising module 30 is connected to the input of the object detection and segmentation extraction module 40.

Claims (8)

1. A low-light image instance segmentation method based on feature denoising, characterized by comprising the following steps:
step 101: synthesizing a simulated low-light image dataset;
step 102: training an instance segmentation deep convolutional network with a noise disturbance suppression loss function and an instance segmentation task loss function;
step 103: performing feature extraction and feature denoising on the low-light image with the trained deep convolutional neural network;
step 104: performing object detection and segmentation on the denoised features with the trained detection-segmentation sub-network to obtain the final instance segmentation result.
2. The low-light image instance segmentation method based on feature denoising according to claim 1, characterized in that in step 101, the ordinary natural-illumination RGB image is first converted into RAW image data, and simulated noise injection is then used to reproduce the noise of images captured under low-light conditions.
3. The low-light image instance segmentation method based on feature denoising according to claim 2, characterized in that the injected simulated noise uses a full-element physical-quantity noise model.
4. The low-light image instance segmentation method based on feature denoising according to claim 1, characterized in that in step 102, the total loss function L(θ) is expressed as:

L(θ) = L_IS(x; θ) + α·L_IS(x′; θ) + β·L_DS(x, x′; θ)    (1)

wherein L_IS and L_DS denote the instance segmentation task loss function and the noise disturbance suppression loss function, respectively; x and x′ denote the simulated clean image and the simulated noisy image; θ is the model parameter; α and β are loss weights;
wherein L_IS is adjusted according to the model used for instance segmentation, and L_DS is expressed as:

L_DS(x, x′; θ) = Σ_i ‖f^(i)(x) − f^(i)(x′)‖²    (2)

wherein f^(i) denotes the features extracted by layer i of the instance segmentation network f;
throughout training, the feature extraction network and the detection-segmentation sub-network of the instance segmentation network are trained together, end to end, in a single pass.
5. The low-light image instance segmentation method based on feature denoising according to claim 4, characterized in that L_DS uses the clean-image features extracted from the clean image as guidance, so that the features the model extracts from the noisy image are as close as possible to the clean-image features.
6. The low-light image instance segmentation method based on feature denoising according to claim 1, characterized in that in step 103, features are first extracted from the image, and during feature extraction an adaptive feature-denoising downsampling layer and a learnable low-pass filtering convolution block denoise the features;
the adaptive feature-denoising downsampling layer takes a weighted average over local features in the downsampling stage of the convolutional neural network, reducing the noise level in the features;
the adaptive feature-denoising downsampling layer adaptively predicts a different low-pass filter for each channel and each position of the feature map; the features after adaptive low-pass filtering and 2x downsampling are the denoised features:

Y_{c,i,j} = Σ_{(p,q)} W_{c,i,j}[p,q] · X_{c,2i+p,2j+q}    (3)

W_{c,i,j} = Φ(X_{Ω(c,i,j)}, GP(X))    (4)

wherein X and Y are the input feature map and the denoised feature map, respectively; W is the weighting predicted by the adaptive downsampling layer from the input feature map; (c, i, j) are coordinates in the channel, width and height dimensions of the feature map; s = (2i+p, 2j+q) ranges over positions around the spatial position (i, j); Φ() is the weight prediction function of the adaptive downsampling layer, which normalizes the predicted weights with a softmax to ensure the output filter is low-pass; X_{Ω(c,i,j)} denotes the local features on which the prediction of W_{c,i,j} depends; GP() is a global pooling operation; p and q are the horizontal and vertical coordinates on the convolution kernel.
7. The method according to claim 6, characterized in that the learnable low-pass filtering convolution block improves the robustness of the ordinary convolution layer to feature noise by explicitly adding a branch with a learnable low-pass filter, and comprises two branches, one identical to an ordinary convolution and the other composed of the learnable low-pass filter and a 1 × 1 convolution;
the learnable low-pass filter weights are obtained from training and normalized by a softmax function to guarantee that the filter remains low-pass; the filter smooths the features locally to reduce their noise level, and the denoised features are merged into the ordinary convolution branch through the 1 × 1 convolution; at inference time, the learnable low-pass filtering convolution block is converted into an ordinary convolution layer by a re-parameterization technique;
re-parameterization converts several parallel branches of the convolutional neural network into an equivalent single-branch structure by fusing parameters; the learnable low-pass filtering convolution block is re-parameterized into the parameters of a single 3 × 3 convolution layer, obtained by:

W′_{3×3}[h,t,p,q] = W_{3×3}[h,t,p,q] + W_{1×1}[h,t,1,1] · W_LLPF[1,t,p,q]    (5)

wherein W′_{3×3} ∈ R^{C2×C1×3×3} are the convolution layer parameters obtained by re-parameterizing the learnable low-pass filtering convolution block; C1 and C2 denote the numbers of input and output channels of the convolution layer, and R denotes the real numbers; W_{3×3} ∈ R^{C2×C1×3×3} are the parameters of the ordinary 3 × 3 convolution in the block; W_{1×1} ∈ R^{C2×C1×1×1} are the 1 × 1 convolution parameters; W_LLPF ∈ R^{1×C1×3×3} are the learnable low-pass filter parameters, the filter sharing one set of 3 × 3 weights per input channel; h and t denote the output-channel and input-channel coordinates, with h ∈ {1, 2, …, C2} and t ∈ {1, 2, …, C1}; p and q denote the horizontal and vertical coordinates of the convolution kernel, with p, q ∈ {1, 2, 3}.
8. A low-light image instance segmentation system based on feature denoising, characterized by comprising a simulated low-light image synthesis module, a noise disturbance suppression learning module, an image feature extraction and feature denoising module, and an object detection and segmentation extraction module;
the simulated low-light image synthesis module establishes the simulated low-light image dataset used for training the instance segmentation model, synthesizing the input natural-illumination image dataset into a simulated low-light image dataset;
the noise disturbance suppression learning module guides the instance segmentation model to learn robust image features, reducing the feature disturbance caused by the strong noise in low-light images; it trains the model with noise disturbance suppression learning on the simulated low-light image dataset and outputs a trained instance segmentation model;
the image feature extraction and feature denoising module reduces the noise level of the image features in the instance segmentation network by adaptive low-pass filtering, extracting stable, clean image features from the noisy low-light image;
the object detection and segmentation extraction module identifies and extracts the category, position, size and shape of the target of interest from the denoised image features to obtain the final low-light image instance segmentation result;
the modules are connected as follows:
the output of the simulated low-light image synthesis module is connected to the input of the noise disturbance suppression learning module;
the output of the noise disturbance suppression learning module is connected to the input of the image feature extraction and feature denoising module;
the output of the image feature extraction and feature denoising module is connected to the input of the object detection and segmentation extraction module.
CN202210516256.2A 2022-05-12 2022-05-12 Low-light image instance segmentation method and system based on feature denoising Active CN115063434B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210516256.2A CN115063434B (en) 2022-05-12 2022-05-12 Low-light image instance segmentation method and system based on feature denoising

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210516256.2A CN115063434B (en) 2022-05-12 2022-05-12 Low-low-light image instance segmentation method and system based on feature denoising

Publications (2)

Publication Number Publication Date
CN115063434A true CN115063434A (en) 2022-09-16
CN115063434B CN115063434B (en) 2024-06-04

Family

ID=83197878

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210516256.2A Active CN115063434B (en) 2022-05-12 2022-05-12 Low-low-light image instance segmentation method and system based on feature denoising

Country Status (1)

Country Link
CN (1) CN115063434B (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10032256B1 (en) * 2016-11-18 2018-07-24 The Florida State University Research Foundation, Inc. System and method for image processing using automatically estimated tuning parameters
CN109584248A (en) * 2018-11-20 2019-04-05 西安电子科技大学 Infrared surface object instance dividing method based on Fusion Features and dense connection network
CN111028163A (en) * 2019-11-28 2020-04-17 湖北工业大学 Convolution neural network-based combined image denoising and weak light enhancement method
CN114022732A (en) * 2021-11-03 2022-02-08 北京理工大学 Extremely dark light object detection method based on RAW image
WO2022083026A1 (en) * 2020-10-21 2022-04-28 华中科技大学 Ultrasound image denoising model establishing method and ultrasound image denoising method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116091372A (en) * 2023-01-03 2023-05-09 江南大学 Infrared and visible light image fusion method based on layer separation and re-parameterization
CN116091372B (en) * 2023-01-03 2023-08-15 江南大学 Infrared and visible light image fusion method based on layer separation and re-parameterization
CN116310356A (en) * 2023-03-23 2023-06-23 昆仑芯(北京)科技有限公司 Training method, target detection method, device and equipment of deep learning model
CN116310356B (en) * 2023-03-23 2024-03-29 昆仑芯(北京)科技有限公司 Training method, target detection method, device and equipment of deep learning model

Also Published As

Publication number Publication date
CN115063434B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN111553929B (en) Mobile phone screen defect segmentation method, device and equipment based on fusion network
CN108921799B (en) Remote sensing image thin cloud removing method based on multi-scale collaborative learning convolutional neural network
CN109685072B (en) Composite degraded image high-quality reconstruction method based on generative adversarial network
CN108230264B (en) Single image defogging method based on ResNet neural network
CN108986050B (en) Image and video enhancement method based on multi-branch convolutional neural network
CN111915530B (en) End-to-end-based haze concentration self-adaptive neural network image defogging method
CN108615226B (en) Image defogging method based on generative adversarial network
CN115063434B (en) Low-low-light image instance segmentation method and system based on feature denoising
CN108510451B (en) Method for reconstructing license plate based on double-layer convolutional neural network
CN111242862A (en) Multi-scale fusion parallel dense residual convolution neural network image denoising method
CN111489303A (en) Maritime affairs image enhancement method under low-illumination environment
CN112184604B (en) Color image enhancement method based on image fusion
CN109741340B (en) Ice cover radar image ice layer refined segmentation method based on FCN-ASPP network
CN113420794B (en) Binaryzation Faster R-CNN citrus disease and pest identification method based on deep learning
CN112419163B (en) Single image weak supervision defogging method based on priori knowledge and deep learning
CN110706239A (en) Scene segmentation method fusing full convolution neural network and improved ASPP module
CN114219722A (en) Low-illumination image enhancement method by utilizing time-frequency domain hierarchical processing
CN111340718A (en) Image defogging method based on progressive guiding strong supervision neural network
CN112070688A (en) Single image defogging method based on context-guided generative adversarial network
CN110717921A (en) Full convolution neural network semantic segmentation method of improved coding and decoding structure
CN113052776A (en) Unsupervised image defogging method based on multi-scale depth image prior
CN112950589A (en) Dark channel prior defogging algorithm of multi-scale convolution neural network
CN116452469B (en) Image defogging processing method and device based on deep learning
CN117333359A (en) Mountain-water painting image super-resolution reconstruction method based on separable convolution network
CN117196980A (en) Low-illumination image enhancement method based on illumination and scene texture attention map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant