CN111986084A - Multi-camera low-illumination image quality enhancement method based on multi-task fusion - Google Patents
Multi-camera low-illumination image quality enhancement method based on multi-task fusion
- Publication number
- CN111986084A (application CN202010765138.6A)
- Authority
- CN
- China
- Prior art keywords
- resolution
- network
- image
- low
- exposure
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
- 238000005286 illumination Methods 0.000 title claims abstract description 34
- 238000000034 method Methods 0.000 title claims abstract description 30
- 230000004927 fusion Effects 0.000 title claims abstract description 11
- 238000012549 training Methods 0.000 claims abstract description 55
- 238000004040 coloring Methods 0.000 claims abstract description 33
- 238000005457 optimization Methods 0.000 claims abstract description 4
- 238000013507 mapping Methods 0.000 claims description 8
- 230000006870 function Effects 0.000 claims description 7
- 230000003287 optical effect Effects 0.000 claims description 6
- 230000002708 enhancing effect Effects 0.000 claims description 5
- 238000005562 fading Methods 0.000 claims description 5
- 238000005070 sampling Methods 0.000 claims description 5
- 238000012937 correction Methods 0.000 claims description 3
- 238000005520 cutting process Methods 0.000 claims description 3
- 238000003384 imaging method Methods 0.000 abstract description 13
- 238000010586 diagram Methods 0.000 description 4
- 230000007547 defect Effects 0.000 description 2
- 238000009826 distribution Methods 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 230000031700 light absorption Effects 0.000 description 2
- 238000012545 processing Methods 0.000 description 2
- 238000006243 chemical reaction Methods 0.000 description 1
- 238000013527 convolutional neural network Methods 0.000 description 1
- 238000012544 monitoring process Methods 0.000 description 1
- 238000003909 pattern recognition Methods 0.000 description 1
- 230000011514 reflex Effects 0.000 description 1
- 238000001228 spectrum Methods 0.000 description 1
- 230000000007 visual effect Effects 0.000 description 1
- 230000016776 visual perception Effects 0.000 description 1
Classifications
- G06T3/4053 - Scaling of whole images or parts thereof based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
- G06T3/4076 - Super-resolution using the original low-resolution images to iteratively correct the high-resolution images
- G06F18/214 - Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 - Neural network architectures: combinations of networks
- G06T5/90 - Image enhancement or restoration: dynamic range modification of images or parts thereof
- G06T7/90 - Image analysis: determination of colour characteristics
- G06T2207/10024 - Image acquisition modality: color image
- G06T2207/20081 - Special algorithmic details: training; learning
- G06T2207/20084 - Special algorithmic details: artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Life Sciences & Earth Sciences (AREA)
- Artificial Intelligence (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- General Health & Medical Sciences (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- Biophysics (AREA)
- Mathematical Physics (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Health & Medical Sciences (AREA)
- Evolutionary Biology (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a multi-camera low-illumination image quality enhancement method based on multi-task fusion. The method uses a low-resolution under-exposed color image and a high-resolution optimally exposed grayscale image, acquired simultaneously by an imaging system under low illumination, to generate a high-resolution, optimally exposed color image. The specific steps are: (1) generate image-block pairs for training; (2) decompose the image quality enhancement task into a reference exposure compensation task, a reference coloring task and a reference super-resolution task based on the multi-camera input, construct a corresponding network model for each task, and cascade the models; (3) construct loss functions, train each network independently in sequence with an optimizer, then optimize the whole cascaded network end to end; (4) use the optimized cascaded network to enhance the quality of real image pairs acquired by the multiple cameras under low illumination, yielding high-resolution, well-exposed color images. The method makes full use of the captured image information and reconstructs the real scene efficiently, reliably and economically.
Description
Technical Field
The invention relates to the fields of computational photography and image processing, and in particular to a multi-camera low-illumination color image quality enhancement method based on multi-task fusion.
Background
Low-light imaging is an important and challenging task in applications such as autonomous driving, security surveillance and professional photography. When illumination is insufficient, for example because the exposure time is too short or the ambient light is too weak, the signal-to-noise ratio of the captured image is extremely low and its perceptual quality degrades severely. Traditional histogram equalization and gamma correction algorithms directly adjust the distribution of the image luminance channel and can improve brightness, but they cannot compensate the color saturation of the image. Image quality enhancement algorithms based on Retinex theory decompose the captured picture into an illumination component and a reflectance component that represents illumination-independent object characteristics, and recover a high-quality color image under optimal exposure by improving the illumination or reflectance component. Because such methods enhance from only a single source frame captured under low illumination, the information is severely undersampled: some colors in the reconstructed image are unrealistic and textures are blurred.
At present, multi-camera acquisition systems can economically and substantially improve multi-dimensional information acquisition capability, and are widely used in optical systems for gigapixel acquisition, high-frame-rate video acquisition and multi-dimensional spectral acquisition. Thanks to refined image processing algorithms, even relatively low-end mobile imaging devices can use a hybrid multi-camera system to produce images at the level of a single-lens reflex camera. Among auxiliary sensors for a color sensor, the single-channel black-and-white sensor is the most popular choice, as in the P10 and P20 series, because of its higher photoelectric conversion efficiency and better preservation of scene structure. However, texture details acquired by a black-and-white sensor of the same or larger scale are currently used mainly to enhance the detail imaging of the color sensor, which does not exploit the full potential of the black-and-white sensor's fine texture imaging and greatly reduces imaging utilization. In some scenarios the space available for the optical system is severely limited, so how to build the most economical and effective imaging system and achieve high-quality imaging within the limited sensor space is a problem worth exploring, and one of great significance to the consumer camera market.
Disclosure of Invention
In view of the shortcomings of existing low-illumination image quality enhancement algorithms and the imaging limitations of multi-camera systems, the invention provides a more economical and effective multi-camera low-illumination image quality enhancement method based on multi-task fusion.
To achieve this purpose, the technical scheme adopted by the invention is as follows:
a multi-camera low-illumination image quality enhancement method based on multi-task fusion is characterized in that a low-resolution secondary exposure color image and a high-resolution optimal exposure gray image which are obtained simultaneously are used for generating a high-quality color image which is optimally exposed under high resolution, and the method specifically comprises the following steps:
Step 1, generate image-block pairs for training: randomly select pictures of the same scene taken from different viewpoints with different exposure times; crop them, desaturate them and add noise to obtain the low-resolution under-exposed color images and high-resolution optimally exposed single-channel grayscale images required for training; form input image pairs as the training data set;
Step 2, construct a cascaded reference exposure compensation network, reference coloring network and reference super-resolution network for multi-camera low-illumination image quality enhancement;
Step 3, construct loss functions, train the reference exposure compensation network, the reference coloring network and the reference super-resolution network in sequence with an optimizer, and, after each network has been trained, use the training data set to optimize the whole cascaded network end to end;
Step 4, use the optimized cascaded network to enhance multi-camera picture quality under low illumination: input the low-resolution under-exposed color image and the high-resolution optimally exposed grayscale image obtained simultaneously by the multiple cameras under low illumination; obtain a low-resolution optimally exposed color image through the reference exposure compensation network; color the downsampled grayscale image through the reference coloring network; and finally interpolate and reconstruct the colored low-resolution image through the reference super-resolution network into the high-resolution, optimally exposed color image under low illumination.
Further, in step 2, the specific steps of constructing the cascaded reference exposure compensation network, reference coloring network and reference super-resolution network are:
Step 21, construct the reference exposure compensation network: first use convolution kernels with a larger radius for wide-range feature understanding of the image; then use stacked small convolution kernels and rectified linear unit layers for nonlinear fitting of local features; finally use a Sigmoid function and an additional offset layer to constrain the value range of the feature space, obtaining the exposure-compensated low-resolution color image;
Step 22, construct the reference coloring network: first use an optical flow estimation module to compute the correlation of the input image pair in a luminance feature (e.g., the luminance component of the YUV color space), then use the computed reference positions to shift color, in color space, from the color image to the grayscale image; then, for regions where the reference position cannot be estimated correctly, use stacked residual convolution modules to compensate the color information, obtaining a downsampled color image of the colored grayscale image;
Step 23, construct the reference super-resolution network: first use convolutional layers with stride greater than 1 to learn features of the low-resolution color image at different scales, then use a fully connected layer to extract its global features; fuse the local and global features of the image into color-information feature coefficients and recombine them in feature space into a three-dimensional color representation space fusing geometry and luminance; in addition, map the input high-resolution grayscale image through a convolutional layer into the luminance feature space as a guide, and combine it with a planar spatial grid to obtain the mapping between low-resolution color information and the corresponding high-resolution color information in the geometry-luminance three-dimensional space; then interpolate the low-resolution color-information feature coefficients with this mapping to obtain the color feature coefficients of the image at high resolution; finally, affinely combine the grayscale image with the color feature coefficients and add the bilinearly upsampled low-resolution color image to obtain the generated high-resolution, high-quality color image;
Step 24, cascade the reference exposure compensation network, reference coloring network and reference super-resolution network in sequence; the reference coloring network and the reference super-resolution network each take the output of the preceding network as input for subsequent image quality enhancement, and the final output of the cascaded network, i.e., the output of the reference super-resolution network, is the high-quality, high-resolution color image produced by the method.
Further, in step 3, the specific steps of training the networks are:
Step 31, train the reference exposure compensation network: take the low-resolution under-exposed color image and the low-resolution optimally exposed grayscale image from different viewpoints as the input of the reference exposure compensation network, and the low-resolution optimally exposed color image at the viewpoint of the color image as the training target;
Step 32, train the reference coloring network: take the low-resolution optimally exposed color image and the low-resolution optimally exposed grayscale image from different viewpoints as the input of the reference coloring network, and the low-resolution optimally exposed color image at the viewpoint of the grayscale image as the training target;
Step 33, train the reference super-resolution network: take the low-resolution optimally exposed color image and the high-resolution optimally exposed grayscale image of the same viewpoint as the input of the reference super-resolution network, and the high-resolution optimally exposed color image at that viewpoint as the training target;
Step 34, jointly train the cascaded network: take the low-resolution under-exposed color images and high-resolution optimally exposed grayscale images from different viewpoints as the initial input of the whole cascaded network, and the high-resolution optimally exposed color images at the viewpoints of the grayscale images as the training targets.
Under low illumination, the invention uses the imaging system to simultaneously obtain a low-resolution under-exposed color image and a high-resolution optimally exposed grayscale image, gradually reconstructs the undersampled image feature information through the cascaded reference exposure compensation network, reference coloring network and reference super-resolution network, and finally obtains a high-resolution, optimally exposed color image. It makes full use of the captured image information, restores the real scene with high quality, and improves the cost-effectiveness of the multi-camera acquisition system. Compared with existing methods, it reconstructs the characteristics of a real scene under low illumination efficiently and reliably through multi-task fusion, greatly improving the visual quality of the acquired images.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a block diagram of a module implementation of the method of the present invention.
FIG. 3 is a diagram of an embodiment of a reference exposure compensation network in the method of the present invention.
FIG. 4 is a diagram of an embodiment of the reference coloring network in the method of the present invention.
FIG. 5 is a diagram of an embodiment of the reference super-resolution network in the method of the present invention.
Detailed Description
The invention will be described in detail below with reference to the accompanying drawings and specific embodiments.
Referring to FIG. 1, the multi-camera low-illumination image quality enhancement method based on multi-task fusion of this embodiment uses a color/black-and-white dual-sensor imaging system to simultaneously acquire a low-resolution under-exposed color image and a high-resolution optimally exposed grayscale image under low illumination, and from them generates a high-resolution, optimally exposed, high-quality color image. The specific steps are as follows:
Step 1, generate image-block pairs for training: randomly select pictures of the same scene taken from different viewpoints with different exposure times, then crop, desaturate and add noise. The long-exposure pictures are desaturated and a small amount of noise is added to simulate single-channel imaging; relatively large noise is added to the short-exposure pictures to simulate the corresponding color-sensor imaging. This yields the low-resolution under-exposed color images and the high-resolution optimally exposed single-channel grayscale images required for training, which form input image pairs as the training data set.
The difference between optimal exposure and under-exposure here lies in the amount of absorbed light signal that is converted into a charge signal. A black-and-white camera converts the entire light signal directly into a charge signal, whereas a color camera, in order to acquire color information, converts only part of the light signal at each pixel element, causing a certain loss of light signal. In the training data generation of this embodiment, long-exposure color pictures are desaturated to serve as simulated black-and-white camera data. A minimal sketch of this pair-generation procedure is given below.
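For illustration, the following NumPy sketch builds one training input pair. It is a minimal sketch under stated assumptions: the desaturation rule (BT.601 luma weights), patch size, downsampling factor and noise levels are placeholders not fixed by the patent, and for simplicity both pictures are cropped at the same window even though in the method they come from different viewpoints.

```python
import numpy as np

def rgb_to_gray(img):
    # BT.601 luma as a simple desaturation rule (an assumption; the patent
    # only states that long-exposure pictures are "faded" to simulate a
    # single-channel sensor).
    return img @ np.array([0.299, 0.587, 0.114])

def make_training_pair(long_exp_rgb, short_exp_rgb, patch=256, scale=4,
                       sigma_gray=0.005, sigma_color=0.05, rng=None):
    """Build one (LR under-exposed color, HR optimally exposed gray) pair from
    HxWx3 float images in [0, 1] of the same scene taken with long and short
    exposure times (step 1 of the method)."""
    rng = rng or np.random.default_rng()
    h, w, _ = long_exp_rgb.shape
    hp = patch * scale
    y = int(rng.integers(0, h - hp + 1))
    x = int(rng.integers(0, w - hp + 1))
    # HR gray: crop the long-exposure picture, desaturate, and add a small
    # amount of noise to simulate the cleaner single-channel sensor.
    hr_gray = rgb_to_gray(long_exp_rgb[y:y + hp, x:x + hp])
    hr_gray = np.clip(hr_gray + rng.normal(0.0, sigma_gray, hr_gray.shape), 0, 1)
    # LR color: crop and downsample the short-exposure picture and add
    # relatively large noise to simulate the noisier color sensor.
    lr_color = short_exp_rgb[y:y + hp:scale, x:x + hp:scale]
    lr_color = np.clip(lr_color + rng.normal(0.0, sigma_color, lr_color.shape), 0, 1)
    return lr_color.astype(np.float32), hr_gray.astype(np.float32)
```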
Step 2, decompose the multi-camera low-illumination image quality enhancement task into a reference exposure compensation task, a reference coloring task and a reference super-resolution task based on the multi-camera input, and construct a corresponding network model for each task: two-dimensional convolutional layers, transposed convolutional layers, fully connected layers, residual modules and the like are used to build the cascaded reference exposure compensation network, reference coloring network and reference super-resolution network. The overall framework is shown in FIG. 2.
Step 21, construct the reference exposure compensation network: as shown in FIG. 3, first use a convolution kernel with a larger radius for wide-range feature understanding of the image, then use stacked small convolution kernels and rectified linear unit (ReLU) layers for nonlinear fitting of local features, and finally use a Sigmoid function and an additional offset layer to constrain the value range of the feature space, obtaining the exposure-compensated low-resolution color image. In the network composition labeled in FIG. 3, "convolutional layer, k9n64" denotes a convolutional layer with a 9×9 kernel, 64 output channels and a stride (omitted from the label) of 1. A PyTorch sketch of this structure follows.
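This is a minimal sketch, not the exact embodiment: the 4-channel input (the LR color image concatenated with the LR gray reference) and the depth of the small-kernel stack are assumptions, while the large-kernel head and the Sigmoid-plus-offset tail follow the description above.

```python
import torch
import torch.nn as nn

class RefExposureCompensation(nn.Module):
    """Sketch of the reference exposure compensation network of FIG. 3."""
    def __init__(self, n_blocks=4):
        super().__init__()
        self.head = nn.Conv2d(4, 64, kernel_size=9, padding=4)  # "k9n64": wide-range features
        body = []
        for _ in range(n_blocks):                                # stacked small kernels + ReLU
            body += [nn.Conv2d(64, 64, kernel_size=3, padding=1),
                     nn.ReLU(inplace=True)]
        self.body = nn.Sequential(*body)
        self.tail = nn.Conv2d(64, 3, kernel_size=3, padding=1)
        self.offset = nn.Parameter(torch.zeros(1, 3, 1, 1))      # additional offset layer

    def forward(self, lr_color, lr_gray):
        x = torch.cat([lr_color, lr_gray], dim=1)
        x = self.body(torch.relu(self.head(x)))
        # Sigmoid constrains the feature-space value range; the offset shifts it.
        return torch.sigmoid(self.tail(x)) + self.offset
```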
Step 22, construct the reference coloring network: as shown in FIG. 4, first compute the correlation of the input image pair in the Y channel (the luminance component of the YUV color space) using an optical flow estimation module (in this embodiment, the module of D. Sun, X. Yang, M. Liu and J. Kautz, "PWC-Net: CNNs for Optical Flow Using Pyramid, Warping, and Cost Volume," 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, 2018, pp. 8934-8943, doi: 10.1109/CVPR.2018.00931), then use the computed reference positions to shift color, in color space, from the color image to the grayscale image. Then, for regions such as occlusions where the reference position cannot be estimated correctly, compensate the color information with stacked residual convolution modules, obtaining the downsampled color image of the colored grayscale image. In the network composition labeled in FIG. 4, "convolutional layer, k9n64" denotes a convolutional layer with a 9×9 kernel, 64 output channels and a stride (omitted from the label) of 1. A sketch of this stage is given below.
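In the following sketch, the optical-flow estimator is an injected callable standing in for the cited PWC-Net (its `(gray, luma) -> flow` signature, the residual-stack depth and the channel widths are assumptions); the flow-guided warp and the residual color compensation mirror the two stages described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def rgb_to_luma(rgb):
    # Y channel of the color image (BT.601 weights, an assumption).
    w = torch.tensor([0.299, 0.587, 0.114], device=rgb.device).view(1, 3, 1, 1)
    return (rgb * w).sum(dim=1, keepdim=True)

def flow_warp(color, flow):
    # Shift the color image toward the gray view using the estimated
    # reference positions (a dense flow field of shape (B,2,H,W)).
    b, _, h, w = color.shape
    ys, xs = torch.meshgrid(torch.arange(h, device=color.device, dtype=color.dtype),
                            torch.arange(w, device=color.device, dtype=color.dtype),
                            indexing="ij")
    x = xs.unsqueeze(0) + flow[:, 0]          # absolute x reference positions
    y = ys.unsqueeze(0) + flow[:, 1]          # absolute y reference positions
    grid = torch.stack((2 * x / (w - 1) - 1,  # normalize to [-1, 1] for grid_sample
                        2 * y / (h - 1) - 1), dim=-1)
    return F.grid_sample(color, grid, align_corners=True)

class ResBlock(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        return x + self.conv2(F.relu(self.conv1(x)))

class RefColoring(nn.Module):
    """Sketch of step 22: flow-guided color shift plus residual compensation
    for occluded regions where the reference position is unreliable."""
    def __init__(self, flow_net, n_blocks=4):
        super().__init__()
        self.flow_net = flow_net                      # injected PWC-Net-like module
        self.head = nn.Conv2d(4, 64, 9, padding=4)    # warped color + gray reference
        self.body = nn.Sequential(*[ResBlock() for _ in range(n_blocks)])
        self.tail = nn.Conv2d(64, 3, 3, padding=1)

    def forward(self, lr_color, lr_gray):
        # Correlate the pair on the luminance channel to get reference positions.
        flow = self.flow_net(lr_gray, rgb_to_luma(lr_color))
        warped = flow_warp(lr_color, flow)
        feat = self.body(F.relu(self.head(torch.cat([warped, lr_gray], dim=1))))
        # Residual color compensation where the flow could not be estimated.
        return warped + self.tail(feat)
```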
Step 23, construct the reference super-resolution network: as shown in FIG. 5, first learn features of the low-resolution color image at different scales with convolutional layers of stride greater than 1, then extract its global features with a fully connected layer; fuse the local and global features into color-information feature coefficients and recombine them in feature space into a three-dimensional color representation space fusing geometry and luminance. In addition, map the input high-resolution grayscale image through a convolutional layer into the luminance feature space as a guide, and combine it with a planar spatial grid to obtain the mapping between low-resolution color information and the corresponding high-resolution color information in the geometry-luminance three-dimensional space; then interpolate the low-resolution color-information feature coefficients with this mapping to obtain the color feature coefficients of the image at high resolution. Finally, affinely combine the grayscale image with the color feature coefficients and add the bilinearly upsampled low-resolution color image to obtain the generated high-resolution, high-quality color image. In the network composition labeled in FIG. 5, "convolutional layer, k3n8s2" denotes a convolutional layer with a 3×3 kernel, 8 output channels and a stride of 2, and "fully connected layer: 256" denotes 256 output nodes. A simplified sketch of this guided upsampling follows.
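The sketch below is a simplified, bilateral-grid-style rendering of the idea, not the exact embodiment: the grid resolution, the number of luminance bins and the 3-gain/3-bias affine model are assumptions; the strided local features, the 256-node fully connected global branch, the luminance guide and the final affine combination plus bilinear base follow the description above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefSuperResolution(nn.Module):
    """Simplified sketch of the reference super-resolution network of FIG. 5."""
    def __init__(self, luma_bins=8, grid_hw=16):
        super().__init__()
        self.luma_bins = luma_bins
        self.features = nn.Sequential(                  # stride > 1: multi-scale local features
            nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(True),   # "k3n8s2"
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(True),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(True),
            nn.AdaptiveAvgPool2d(grid_hw),
        )
        self.global_fc = nn.Sequential(                 # "fully connected layer: 256"
            nn.Flatten(), nn.Linear(32 * grid_hw * grid_hw, 256),
            nn.ReLU(True), nn.Linear(256, 32),
        )
        self.to_grid = nn.Conv2d(32, 6 * luma_bins, 1)  # 3 gains + 3 biases per luma bin
        self.guide = nn.Conv2d(1, 1, 3, padding=1)      # maps HR gray into the luminance guide

    def forward(self, lr_color, hr_gray):
        b, _, H, W = hr_gray.shape
        local = self.features(lr_color)                             # (B,32,g,g)
        glob = self.global_fc(local).view(b, 32, 1, 1)              # global descriptor
        grid = self.to_grid(F.relu(local + glob))                   # fuse local + global
        grid = grid.view(b, 6, self.luma_bins, *grid.shape[-2:])    # geometry-luminance grid
        guide = torch.sigmoid(self.guide(hr_gray))                  # (B,1,H,W) in [0,1]
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, H, device=hr_gray.device),
                                torch.linspace(-1, 1, W, device=hr_gray.device),
                                indexing="ij")
        mesh = torch.stack((xs, ys), dim=-1).expand(b, H, W, 2)     # planar spatial grid
        z = (2 * guide - 1).permute(0, 2, 3, 1)                     # luminance coordinate
        coords = torch.cat([mesh, z], dim=-1).unsqueeze(1)          # (B,1,H,W,3)
        coeffs = F.grid_sample(grid, coords, align_corners=True).squeeze(2)  # (B,6,H,W)
        gain, bias = coeffs[:, :3], coeffs[:, 3:]
        base = F.interpolate(lr_color, size=(H, W), mode="bilinear",
                             align_corners=False)                   # bilinear upsampling
        # Affine combination of the gray image with the sliced color
        # coefficients, plus the upsampled LR color image as a base layer.
        return gain * hr_gray + bias + base
```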
Step 24, for the overall multi-camera low-illumination color image quality enhancement task, cascade the reference exposure compensation network, reference coloring network and reference super-resolution network in sequence; the reference coloring network and the reference super-resolution network each take the output of the preceding network as input for subsequent image quality enhancement, and the final output of the cascaded network (i.e., the output of the reference super-resolution network) is the enhanced high-quality, high-resolution color image. The sketch below wires the three stages together.
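A minimal cascade sketch, assuming the three stage sketches above (or any drop-in equivalents); downsampling the gray reference by average pooling is an assumption.

```python
import torch.nn as nn
import torch.nn.functional as F

class CascadedEnhancer(nn.Module):
    """Step 24 cascade: each later stage consumes the previous stage's output,
    and the reference super-resolution output is the final HR color image."""
    def __init__(self, exposure_net, coloring_net, sr_net, scale=4):
        super().__init__()
        self.exposure_net = exposure_net
        self.coloring_net = coloring_net
        self.sr_net = sr_net
        self.scale = scale

    def forward(self, lr_color, hr_gray):
        lr_gray = F.avg_pool2d(hr_gray, self.scale)          # gray reference at LR
        compensated = self.exposure_net(lr_color, lr_gray)   # reference exposure compensation
        colored = self.coloring_net(compensated, lr_gray)    # reference coloring
        return self.sr_net(colored, hr_gray)                 # reference super-resolution
```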
Step 3, construct loss functions, train the reference exposure compensation network, reference coloring network and reference super-resolution network independently in sequence with an optimizer, and, after each module has been trained independently, use the training data set of step 1 to optimize the whole cascaded pipeline end to end until the image quality enhancement effect is stable.
Step 31, train the reference exposure compensation network: during training, take the low-resolution under-exposed color image and the low-resolution optimally exposed grayscale image from different viewpoints as the input of the reference exposure compensation network, and the low-resolution optimally exposed color image at the viewpoint of the color image as the training target.
Step 32, train the reference coloring network: during training, take the low-resolution optimally exposed color image and the low-resolution optimally exposed grayscale image from different viewpoints as the input of the reference coloring network, and the low-resolution optimally exposed color image at the viewpoint of the grayscale image as the training target.
Step 33, train the reference super-resolution network: during training, take the low-resolution optimally exposed color image and the high-resolution optimally exposed grayscale image of the same viewpoint as the input of the reference super-resolution network, and the high-resolution optimally exposed color image at that viewpoint as the training target.
Step 34, jointly train the cascaded network: during training, take the low-resolution under-exposed color images and high-resolution optimally exposed grayscale images from different viewpoints as the initial input of the whole cascaded network, and the high-resolution optimally exposed color images at the viewpoints of the grayscale images as the training targets. The features of the training image pairs for the different tasks are summarized in Table 1, where the parenthesized parameters describe the input color image and grayscale image respectively, (v1, v2) denote the different viewpoints, sL = 256×256 and sH = 1024×1024 are the resolutions, and (lL, lH) characterize the different amounts of absorbed light. During training, loss functions such as L1, mean squared error and cosine similarity are used to optimize the cascaded network; a sketch of such a combined loss follows Table 1.
Table 1. Features of the training image pairs

Task | Viewpoints | Resolution | Light absorption | Number of pairs
---|---|---|---|---
Reference exposure compensation | (v1, v2) | (sL, sL) | (lL, lH) | 45,000
Reference coloring | (v1, v2) | (sL, sL) | (lH, lH) | 26,000
Reference super-resolution | (v2, v2) | (sL, sH) | (lH, lH) | 15,000
Low-illumination quality enhancement | (v1, v2) | (sL, sH) | (lL, lH) | 1,596
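A minimal sketch of the combined loss of step 3; the patent names the three terms, while the weights here are assumptions.

```python
import torch
import torch.nn.functional as F

def enhancement_loss(pred, target, w_l1=1.0, w_mse=1.0, w_cos=0.1):
    """L1 + mean-squared-error + cosine-similarity loss on (B,3,H,W) images."""
    l1 = F.l1_loss(pred, target)
    mse = F.mse_loss(pred, target)
    # Cosine similarity between per-pixel RGB vectors penalizes errors in
    # color direction independently of brightness (0 when they match).
    cos = 1.0 - F.cosine_similarity(pred, target, dim=1, eps=1e-6).mean()
    return w_l1 * l1 + w_mse * mse + w_cos * cos

# End-to-end fine-tuning of the cascade after the three stages have been
# pre-trained independently (steps 31-33), e.g.:
#   opt = torch.optim.Adam(net.parameters(), lr=1e-4)
#   for lr_color, hr_gray, hr_color in loader:
#       opt.zero_grad()
#       enhancement_loss(net(lr_color, hr_gray), hr_color).backward()
#       opt.step()
```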
Step 4, use the trained model to enhance multi-camera picture quality under low illumination: input the low-resolution under-exposed color image and the high-resolution optimally exposed grayscale image obtained by the multiple cameras under low illumination; obtain a low-resolution optimally exposed color image through the reference exposure compensation network; color the downsampled grayscale image through the reference coloring network; and finally interpolate and reconstruct the colored low-resolution image through the reference super-resolution network into the high-resolution, optimally exposed color image, thereby acquiring a high-quality color image under low illumination. A hypothetical invocation with the sketches above is shown below.
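```python
import torch

# Hypothetical inference call; pwc_flow is an externally provided pretrained
# optical-flow callable standing in for the cited PWC-Net. Shapes follow
# Table 1: lr_color is 3x256x256 and hr_gray is 1x1024x1024.
net = CascadedEnhancer(RefExposureCompensation(),
                       RefColoring(flow_net=pwc_flow),
                       RefSuperResolution()).eval()
with torch.no_grad():
    hr_color = net(lr_color.unsqueeze(0), hr_gray.unsqueeze(0)).clamp(0, 1)
print(hr_color.shape)  # torch.Size([1, 3, 1024, 1024])
```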
Claims (3)
1. A multi-camera low-illumination image quality enhancement method based on multi-task fusion, characterized in that a low-resolution under-exposed color image and a high-resolution optimally exposed grayscale image, obtained simultaneously, are used to generate a high-quality, optimally exposed color image at high resolution, the method comprising the following steps:
step 1, generating image-block pairs for training: randomly selecting pictures of the same scene taken from different viewpoints with different exposure times, cropping them, desaturating them and adding noise to obtain the low-resolution under-exposed color images and high-resolution optimally exposed single-channel grayscale images required for training, and forming input image pairs as a training data set;
step 2, constructing a cascaded reference exposure compensation network, reference coloring network and reference super-resolution network for multi-camera low-illumination image quality enhancement;
step 3, constructing loss functions, training the reference exposure compensation network, the reference coloring network and the reference super-resolution network in sequence with an optimizer, and, after each network has been trained, using the training data set to optimize the whole cascaded network end to end;
step 4, using the optimized cascaded network to enhance multi-camera picture quality under low illumination: inputting the low-resolution under-exposed color image and the high-resolution optimally exposed grayscale image obtained simultaneously by the multiple cameras under low illumination, obtaining a low-resolution optimally exposed color image through the reference exposure compensation network, coloring the downsampled grayscale image through the reference coloring network, and finally interpolating and reconstructing the colored low-resolution image through the reference super-resolution network into the high-resolution, optimally exposed color image under low illumination.
2. The multi-camera low-illumination image quality enhancement method based on multi-task fusion as claimed in claim 1, characterized in that, in step 2, the specific steps of constructing the cascaded reference exposure compensation network, reference coloring network and reference super-resolution network comprise:
step 21, constructing the reference exposure compensation network: first using convolution kernels with a larger radius for wide-range feature understanding of the image, then using stacked small convolution kernels and rectified linear unit layers for nonlinear fitting of local features, and finally using a Sigmoid function and an additional offset layer to constrain the value range of the feature space, obtaining the exposure-compensated low-resolution color image;
step 22, constructing the reference coloring network: first using an optical flow estimation module to compute the correlation of the input image pair in a luminance feature, then using the computed reference positions to shift color, in color space, from the color image to the grayscale image; then, for regions where the reference position cannot be estimated correctly, using stacked residual convolution modules to compensate the color information, obtaining a downsampled color image of the colored grayscale image;
step 23, constructing the reference super-resolution network: first using convolutional layers with stride greater than 1 to learn features of the low-resolution color image at different scales, then using a fully connected layer to extract its global features, fusing the local and global features of the image into color-information feature coefficients, and recombining them in feature space into a three-dimensional color representation space fusing geometry and luminance; in addition, mapping the input high-resolution grayscale image through a convolutional layer into the luminance feature space as a guide, combining it with a planar spatial grid to obtain the mapping between low-resolution color information and the corresponding high-resolution color information in the geometry-luminance three-dimensional space, and then interpolating the low-resolution color-information feature coefficients with this mapping to obtain the color feature coefficients of the image at high resolution; and finally performing an affine combination of the grayscale image with the color feature coefficients and adding the bilinearly upsampled low-resolution color image to obtain the generated high-resolution, high-quality color image;
step 24, cascading the reference exposure compensation network, reference coloring network and reference super-resolution network in sequence, the reference coloring network and the reference super-resolution network each taking the output of the preceding network as input for subsequent image quality enhancement, the final output of the cascaded network, i.e., the output of the reference super-resolution network, being the high-quality, high-resolution color image produced by the method.
3. The multi-camera low-illumination image quality enhancement method based on multi-task fusion as claimed in claim 1, characterized in that, in step 3, the specific steps of training the networks comprise:
step 31, training the reference exposure compensation network: taking the low-resolution under-exposed color image and the low-resolution optimally exposed grayscale image from different viewpoints as the input of the reference exposure compensation network, and the low-resolution optimally exposed color image at the viewpoint of the color image as the training target;
step 32, training the reference coloring network: taking the low-resolution optimally exposed color image and the low-resolution optimally exposed grayscale image from different viewpoints as the input of the reference coloring network, and the low-resolution optimally exposed color image at the viewpoint of the grayscale image as the training target;
step 33, training the reference super-resolution network: taking the low-resolution optimally exposed color image and the high-resolution optimally exposed grayscale image of the same viewpoint as the input of the reference super-resolution network, and the high-resolution optimally exposed color image at that viewpoint as the training target;
step 34, jointly training the cascaded network: taking the low-resolution under-exposed color images and high-resolution optimally exposed grayscale images from different viewpoints as the initial input of the whole cascaded network, and the high-resolution optimally exposed color images at the viewpoints of the grayscale images as the training targets.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010765138.6A CN111986084B (en) | 2020-08-03 | 2020-08-03 | Multi-camera low-illumination image quality enhancement method based on multi-task fusion |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111986084A (en) | 2020-11-24
CN111986084B CN111986084B (en) | 2023-12-12 |
Family
ID=73445956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010765138.6A Active CN111986084B (en) | 2020-08-03 | 2020-08-03 | Multi-camera low-illumination image quality enhancement method based on multi-task fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111986084B (en) |
Patent Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080219581A1 (en) * | 2007-03-05 | 2008-09-11 | Fotonation Vision Limited | Image Processing Method and Apparatus |
US20100183071A1 (en) * | 2009-01-19 | 2010-07-22 | Segall Christopher A | Methods and Systems for Enhanced Dynamic Range Images and Video from Multiple Exposures |
US20160044252A1 (en) * | 2013-03-14 | 2016-02-11 | Pelican Imaging Corporation | Systems and Methods for Reducing Motion Blur in Images or Video in Ultra Low Light with Array Cameras |
US20140340482A1 (en) * | 2013-03-24 | 2014-11-20 | Vutara, Inc. | Three Dimensional Microscopy Imaging |
US20200098144A1 (en) * | 2017-05-19 | 2020-03-26 | Google Llc | Transforming grayscale images into color images using deep neural networks |
CN107563971A * | 2017-08-12 | 2018-01-09 | 四川精视科技有限公司 | A true-color high-definition night-vision imaging method |
CN107833183A * | 2017-11-29 | 2018-03-23 | 安徽工业大学 | A method for simultaneous super-resolution and colorization of satellite images based on a multi-task deep neural network |
US20190043178A1 * | 2018-07-10 | 2019-02-07 | Intel Corporation | Low-light imaging using trained convolutional neural networks |
CN109658367A * | 2018-11-14 | 2019-04-19 | 国网新疆电力有限公司信息通信公司 | Image fusion method based on color transfer |
CN110060208A * | 2019-04-22 | 2019-07-26 | 中国科学技术大学 | A method for improving the reconstruction performance of super-resolution algorithms |
CN110660038A * | 2019-09-09 | 2020-01-07 | 山东工商学院 | Multispectral image and panchromatic image fusion method based on a generative adversarial network |
Non-Patent Citations (1)
Title |
---|
GUO Peiyao et al., "Multi-camera systems: imaging enhancement and applications," Laser & Optoelectronics Progress *
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112884668A (en) * | 2021-02-22 | 2021-06-01 | 大连理工大学 | Lightweight low-light image enhancement method based on multiple scales |
CN113159019A (en) * | 2021-03-08 | 2021-07-23 | 北京理工大学 | Dark light video enhancement method based on optical flow transformation |
CN113159019B (en) * | 2021-03-08 | 2022-11-08 | 北京理工大学 | Dim light video enhancement method based on optical flow transformation |
CN113256528A (en) * | 2021-06-03 | 2021-08-13 | 中国人民解放军国防科技大学 | Low-illumination video enhancement method based on multi-scale cascade depth residual error network |
CN113256528B (en) * | 2021-06-03 | 2022-05-27 | 中国人民解放军国防科技大学 | Low-illumination video enhancement method based on multi-scale cascade depth residual error network |
CN113537246A * | 2021-08-12 | 2021-10-22 | 浙江大学 | Simultaneous colorization and super-resolution of grayscale images based on adversarial learning |
CN114549746B (en) * | 2022-01-28 | 2023-03-07 | 电子科技大学 | High-precision true color three-dimensional reconstruction method |
CN114549746A (en) * | 2022-01-28 | 2022-05-27 | 电子科技大学 | High-precision true color three-dimensional reconstruction method |
CN114913085A (en) * | 2022-05-05 | 2022-08-16 | 福州大学 | Two-way convolution low-illumination image enhancement method based on gray level improvement |
CN115564671A (en) * | 2022-09-23 | 2023-01-03 | 哈尔滨工业大学 | Self-supervision image restoration system based on long and short exposure image pair |
CN115564671B (en) * | 2022-09-23 | 2024-05-03 | 哈尔滨工业大学 | Self-supervision image restoration system based on long and short exposure image pairs |
WO2024098351A1 (en) * | 2022-11-11 | 2024-05-16 | Lenovo (Beijing) Limited | Imaging system and method for high resolution imaging of a subject |
CN116091341A (en) * | 2022-12-15 | 2023-05-09 | 南京信息工程大学 | Exposure difference enhancement method and device for low-light image |
CN116091341B (en) * | 2022-12-15 | 2024-04-02 | 南京信息工程大学 | Exposure difference enhancement method and device for low-light image |
CN116823973A (en) * | 2023-08-25 | 2023-09-29 | 湖南快乐阳光互动娱乐传媒有限公司 | Black-white video coloring method, black-white video coloring device and computer readable medium |
CN116823973B (en) * | 2023-08-25 | 2023-11-21 | 湖南快乐阳光互动娱乐传媒有限公司 | Black-white video coloring method, black-white video coloring device and computer readable medium |
Also Published As
Publication number | Publication date |
---|---|
CN111986084B (en) | 2023-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111986084B (en) | Multi-camera low-illumination image quality enhancement method based on multi-task fusion | |
CN111127336B (en) | Image signal processing method based on self-adaptive selection module | |
CN113658057B (en) | Swin converter low-light-level image enhancement method | |
CN110225260B (en) | Three-dimensional high dynamic range imaging method based on generation countermeasure network | |
Wang et al. | MAGAN: Unsupervised low-light image enhancement guided by mixed-attention | |
CN112465727A (en) | Low-illumination image enhancement method without normal illumination reference based on HSV color space and Retinex theory | |
CN113284061B (en) | Underwater image enhancement method based on gradient network | |
CN113822830B (en) | Multi-exposure image fusion method based on depth perception enhancement | |
CN113344773B (en) | Single picture reconstruction HDR method based on multi-level dual feedback | |
CN111724317A (en) | Method for constructing Raw domain video denoising supervision data set | |
CN113096029A (en) | High dynamic range image generation method based on multi-branch codec neural network | |
CN114998141B (en) | Space environment high dynamic range imaging method based on multi-branch network | |
CN112508812A (en) | Image color cast correction method, model training method, device and equipment | |
WO2023086194A1 (en) | High dynamic range view synthesis from noisy raw images | |
Huang et al. | Color correction and restoration based on multi-scale recursive network for underwater optical image | |
CN115035011B (en) | Low-illumination image enhancement method of self-adaption RetinexNet under fusion strategy | |
CN115170915A (en) | Infrared and visible light image fusion method based on end-to-end attention network | |
CN115115516A (en) | Real-world video super-resolution algorithm based on Raw domain | |
CN116152128A (en) | High dynamic range multi-exposure image fusion model and method based on attention mechanism | |
CN117408924A (en) | Low-light image enhancement method based on multiple semantic feature fusion network | |
CN115311149A (en) | Image denoising method, model, computer-readable storage medium and terminal device | |
Fu et al. | Raw image based over-exposure correction using channel-guidance strategy | |
Li et al. | Rendering nighttime image via cascaded color and brightness compensation | |
Suda et al. | Deep snapshot hdr imaging using multi-exposure color filter array | |
CN116245968A (en) | Method for generating HDR image based on LDR image of transducer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |