CN111372006A - High dynamic range imaging method and system for mobile terminal - Google Patents

High dynamic range imaging method and system for mobile terminal

Info

Publication number
CN111372006A
CN111372006A (application CN202010139138.5A)
Authority
CN
China
Prior art keywords
dynamic range
high dynamic
image
range image
neural network
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010139138.5A
Other languages
Chinese (zh)
Other versions
CN111372006B (en)
Inventor
吴国滔 (Wu Guotao)
张伟 (Zhang Wei)
宋然 (Song Ran)
张倩 (Zhang Qian)
顾建军 (Gu Jianjun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University
Priority to CN202010139138.5A
Publication of CN111372006A
Application granted
Publication of CN111372006B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention provides a high dynamic range imaging method and system oriented to the mobile end. A high dynamic range image is processed to generate a corresponding low dynamic range image, and the low dynamic range image is input into a neural network model based on an encoder-decoder structure. The loss between the output of the neural network model and the corresponding high dynamic range image is computed in the logarithmic domain, and the model is optimized with a gradient descent method. The optimized neural network model is then ported to the mobile terminal, where it is regarded as a high-order mapping function and completes the end-to-end mapping from low dynamic range images to high dynamic range images. The practicability of the single-frame high dynamic range imaging method is effectively improved.

Description

High dynamic range imaging method and system for mobile terminal
Technical Field
The disclosure belongs to the technical field of imaging, and relates to a high dynamic range imaging method and system for a mobile terminal.
Background
The statements in this section merely provide background information related to the present disclosure and may not necessarily constitute prior art.
High dynamic range imaging is an imaging technique that can capture the true luminance information of a recorded scene. Dynamic range refers to the ratio of maximum to minimum luminance in a scene, and real scenes generally have a wide dynamic range. For example, the dynamic range of natural environments can reach 10^8:1, while the dynamic range that the most common shooting device in daily life, the smartphone, can capture is only about 10^2:1. As a result, the captured image cannot record the full dynamic range of the real scene, so bright and dark areas of the scene appear overexposed and underexposed, greatly reducing the visibility of the image and the room for post-processing.
At present, algorithm research on high dynamic range imaging at home and abroad mainly comprises multi-frame synthesis algorithms and single-frame reconstruction algorithms. A single-frame reconstruction algorithm uses a mapping function to expand the dynamic range of a single-frame image. The method requires only a single captured frame and has the advantages of being largely free of artifacts and convenient to use. Clearly, the mapping function is the key factor in the performance of a single-frame high dynamic range imaging algorithm, so finding a better-performing mapping function has become a research hotspot.
The deep neural network has good function model fitting capability, and can learn complex function mapping relation from complex image input in a supervision training mode, so that the deep neural network becomes an effective means for improving the performance of a single-frame reconstruction algorithm. However, conventional deep neural networks tend to be large in scale, are computationally demanding, and are difficult to deploy on mobile devices with limited computational resources.
Disclosure of Invention
In order to solve the above problems, the present disclosure provides a mobile-end-oriented high dynamic range imaging method and system, which effectively simplify the scale of a network, enable a network model to run on a mobile device, and effectively improve the imaging effect and the practicability.
According to some embodiments, the following technical scheme is adopted in the disclosure:
a high dynamic range imaging method facing a mobile terminal comprises the following steps:
processing the high dynamic range image to generate a corresponding low dynamic range image; inputting the low dynamic range image into a neural network model based on an encoder-decoder structure; computing the loss between the output of the neural network model and the corresponding high dynamic range image in the logarithmic domain; optimizing the neural network model with a gradient descent method; and porting the optimized neural network model to the mobile terminal, where it is regarded as a high-order mapping function and completes the end-to-end mapping from the low dynamic range image to the high dynamic range image.
As an alternative embodiment, in the process of processing the high dynamic range image, the high dynamic range image is obtained, a clustering algorithm is run on a camera response curve set to select several of the most representative response curves, and these response curves are applied to the high dynamic range image while saturated pixel values are clipped, obtaining the low dynamic range image.
As an alternative embodiment, the method of acquiring a high dynamic range image includes downloading publicly available high dynamic range images, capturing them with a camera, or synthesizing them by bracketed exposure.
As an alternative embodiment, before applying the camera response curve, a scaling factor is determined and the high dynamic range image is divided by it so that a certain percentage of pixel values are saturated; one of the obtained camera response curves is then randomly selected and applied to the processed high dynamic range image, and the saturated pixels in the result are clipped to obtain the corresponding low dynamic range image.
As an alternative embodiment, the neural network model comprises two parts, an encoder and a decoder. An input image enters the encoder, and as the depth increases the encoder extracts features of the input image at different levels, from low-dimensional to high-dimensional. Features at each level serve as the next-level input of the encoder and are also passed through skip connections to the same level of the decoder to preserve the positional information of the features, avoiding image blurring.
As a further embodiment, the encoder is configured to implement feature extraction and down-sampling. In feature extraction, the input passes through a 3x3 depthwise separable convolution with stride 1 to extract features, then a 1x1 convolution mixes features across channels, and finally a residual connection is applied to mitigate gradient vanishing while strengthening the reuse of learned features. In down-sampling, a 1x1 Conv first raises the depth of the features to avoid the damage the ReLU function does to low-dimensional features, then a 3x3 DW Conv with stride 2 skips some pixels to expand the receptive field, and finally a 1x1 Conv reduces the depth of the features, which are output directly to the next layer and to the corresponding part of the decoder.
As an alternative embodiment, the decoder is configured to up-sample the input feature map by nearest-neighbor interpolation, then perform feature extraction with a DW Conv of stride 1, concatenate the result with the up-sampled result along the depth dimension, and finally use a 1x1 convolution to learn to combine the feature map of depth 2n into a feature map of depth n, thereby avoiding artifacts at the edges of the image.
As an alternative embodiment, the neural network model only performs dynamic range extension on the overexposed part of the low dynamic range image in the process from the low dynamic range image to the high dynamic range image.
As an alternative embodiment, the neural network model is optimized using a gradient descent method with losses in the logarithmic domain.
A mobile-end oriented high dynamic range imaging system, comprising:
the low dynamic range image acquisition module, configured to acquire a low dynamic range image exposed close to the right and output it to the neural network forward inference module;
a neural network forward inference module, which provides an interface for converting the input data into the input required by the neural network and an interface for the network's forward inference, and outputs the result to the image display module;
and the image display module, configured to tone-map the output so that it can be displayed.
A computer readable storage medium having stored therein a plurality of instructions adapted to be loaded by a processor of a terminal device and to execute a mobile-end oriented high dynamic range imaging method.
A terminal device comprising a processor and a computer readable storage medium, the processor being configured to implement instructions; the computer readable storage medium stores instructions adapted to be loaded by a processor and to perform a method of mobile-end oriented high dynamic range imaging.
Compared with the prior art, the beneficial effect of this disclosure is:
the neural network model comprises an encoder and a decoder, wherein the encoder is used for extracting features of different scales from an input image, the decoder is used for reconstructing the image by using the obtained features, and the part with the equivalent feature size between the encoder and the decoder adopts short links, so that the training speed and the training precision can be improved; and simultaneously, the two parts of the encoder and the decoder are respectively optimized in light weight, so that the neural network can run on mobile terminal equipment, the effect which is comparable to that of a large neural network is achieved, and the practicability of the single-frame high dynamic range imaging method is effectively improved.
Drawings
The accompanying drawings, which are included to provide a further understanding of the disclosure, illustrate embodiments of the disclosure and together with the description serve to explain the disclosure and are not to limit the disclosure.
FIG. 1 is a flow chart of the present embodiment;
FIG. 2 is a general structure of a lightweight neural network model;
FIGS. 3(a) and 3(b) are encoder units of a network model;
fig. 4 shows a decoder unit of the network model.
Detailed Description
the present disclosure is further described with reference to the following drawings and examples.
It should be noted that the following detailed description is exemplary and is intended to provide further explanation of the disclosure. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this disclosure belongs.
It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit example embodiments according to the present disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should further be understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, steps, operations, devices, components, and/or combinations thereof.
One or more embodiments provide a mobile-end single-frame high dynamic range imaging method based on a lightweight neural network: a locally overexposed low dynamic range (LDR) image is generated from a captured high dynamic range (HDR) image through a known camera response function (CRF), and the LDR-HDR image pair is used as the network input and the reference ground truth, respectively, to train the neural network. After training is completed, when an LDR image exposed to the right (with local overexposure) is input, an HDR image with an extended dynamic range and restored overexposed regions is obtained.
Referring to fig. 1, the single-frame high dynamic range imaging method includes the following steps:
s101, collecting HDR images, running a clustering algorithm in a camera response curve set to select five most representative response curves, and applying the response curves to the HDR images and simultaneously editing saturated pixel values to obtain LDR images.
Specifically, public HDR images are searched for and downloaded from the internet; they can also be captured with a high-latitude camera or obtained by exposure synthesis, and are typically stored as HDR files in 32-bit floating point format. Typical camera response curves are then selected by running the k-means clustering algorithm on a public camera response curve dataset, and the five most representative curves are kept for later use. To ensure the effectiveness of the method and speed up training convergence, the LDR image generated from an HDR image should not contain excessive overexposed regions; therefore, before applying a camera response curve, a scaling factor must be determined, and the HDR image is divided by this factor so that a certain percentage of pixel values are saturated.
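The curve-selection step can be sketched as follows. This is a minimal illustration only: the public CRF dataset is replaced by synthetic gamma-style curves, and the plain k-means routine stands in for whatever implementation was actually used.

```python
import numpy as np

def kmeans(curves, k=5, iters=50, seed=0):
    """Plain k-means over sampled response curves (each row is one CRF)."""
    rng = np.random.default_rng(seed)
    centers = curves[rng.choice(len(curves), k, replace=False)]  # copy via fancy indexing
    for _ in range(iters):
        # assign each curve to the nearest center (Euclidean distance)
        d = ((curves[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        # move each center to the mean of its cluster
        for j in range(k):
            if (labels == j).any():
                centers[j] = curves[labels == j].mean(0)
    return centers, labels

# Stand-in for a public CRF dataset: gamma curves sampled at 64 points.
x = np.linspace(0.0, 1.0, 64)
crfs = np.stack([x ** g for g in np.linspace(0.3, 3.0, 200)])

centers, labels = kmeans(crfs, k=5)
print(centers.shape)  # the five representative curves kept for later use
```

Each of the five cluster centers is itself a monotone curve on [0, 1] and can be applied to a scaled HDR image as a response function.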
In this embodiment, 3372 HDR images with different resolutions are collected. The counts of the different pixel values are accumulated through the image histogram; when the cumulative count reaches 85% to 90% of the image's total pixels, the current pixel value is recorded as the above-mentioned scaling factor. Dividing the HDR image by this factor makes the pixel values lying between 0 and 1 account for 85% to 90% of the image's pixels. One of the camera response curves obtained above is then randomly selected and applied to the processed HDR image, and the saturated pixels in the result are clipped to obtain the corresponding LDR image.
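The scaling-and-clipping pipeline above can be sketched in a few lines. The quantile plays the role of the histogram-derived scaling factor, and a simple gamma curve is used as an illustrative stand-in for a clustered CRF (the patent applies one of the five selected curves instead).

```python
import numpy as np

def ldr_from_hdr(hdr, crf_gamma=1.0 / 2.2, keep_frac=0.875):
    """Generate a locally overexposed LDR image from an HDR image.

    keep_frac (~0.85-0.90) is the fraction of pixels that should remain
    unsaturated after dividing by the scaling factor.
    """
    scale = np.quantile(hdr, keep_frac)       # histogram-style scaling factor
    scaled = hdr / scale                      # ~12.5% of pixels now exceed 1
    ldr = np.clip(scaled, 0.0, 1.0) ** crf_gamma  # apply CRF, clip saturation
    return ldr

rng = np.random.default_rng(0)
hdr = rng.exponential(scale=1.0, size=(64, 64, 3)).astype(np.float32)
ldr = ldr_from_hdr(hdr)
print(float((ldr >= 1.0).mean()))  # fraction of saturated pixels, ~0.125
```

For a monotone CRF mapping [0, 1] to [0, 1], clipping before or after the curve gives the same result, so the order here matches the description.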
S102, designing a training lightweight neural network, wherein the network adopts an encoder-decoder structure, and the network can be regarded as a high-order mapping function of LDR to HDR.
Referring to fig. 2, the lightweight neural network includes two parts, an encoder and a decoder. The input image enters the encoder, and as the depth increases, the encoder extracts features of the input image at different levels, from low-dimensional to high-dimensional. Features at each level serve two purposes: one is as the next-layer input of the encoder, and the other is to be passed through skip connections to the same level of the decoder to preserve the positional information of the features and avoid image blur.
The encoder part mainly comprises two modules, responsible respectively for feature extraction at a constant feature map size (fig. 3(a)) and for down-sampling (fig. 3(b)). In the feature extraction module, the input passes through a 3x3 channel-by-channel convolution (DW Conv) with stride 1 to extract features, then a 1x1 convolution mixes features across channels, and finally a residual connection is adopted to mitigate gradient vanishing and strengthen the reuse of learned features. In the down-sampling module, a 1x1 Conv first raises the depth of the features to avoid the damage the ReLU function does to low-dimensional features, then a 3x3 DW Conv with stride 2 skips some pixels to enlarge the receptive field; compared with using a stride-1 convolution followed by pooling, this costs only 1/4 of the computation. Finally a 1x1 Conv reduces the depth of the features, which are output directly to the next layer and to the corresponding part of the decoder.
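The cost arguments above are easy to verify by counting multiply-accumulates. The sketch below compares a standard 3x3 convolution with a depthwise-separable pair (3x3 DW Conv + 1x1 Conv), and checks the 1/4 claim for stride-2 convolution; the layer sizes are illustrative, not taken from the patent.

```python
def conv_macs(h, w, cin, cout, k=3, stride=1):
    """Multiply-accumulates of a standard k x k convolution."""
    return (h // stride) * (w // stride) * k * k * cin * cout

def dwsep_macs(h, w, cin, cout, k=3, stride=1):
    """Depthwise k x k convolution followed by a 1 x 1 pointwise convolution."""
    ho, wo = h // stride, w // stride
    return ho * wo * k * k * cin + ho * wo * cin * cout

std = conv_macs(128, 128, 32, 64)   # standard 3x3 conv
sep = dwsep_macs(128, 128, 32, 64)  # depthwise separable equivalent
print(sep / std)                    # = 1/9 + 1/64, roughly 1/8 of the cost

# A stride-2 conv evaluates a quarter of the output positions of a
# stride-1 conv, hence the 1/4 computation mentioned in the text.
print(conv_macs(128, 128, 32, 64, stride=2) / std)
```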
In this embodiment, the encoder is a modified MobileNetV2 network. Compared with the original network: the encoder removes all batch normalization layers, avoiding the negative influence of feature re-standardization on the network; the encoder uses the ReLU activation function instead of ReLU6, allowing a larger feature range and improving the quality of generated images; the encoder uses DW Conv instead of a traditional Conv in the first convolution layer to further reduce computational cost, while an additional full-resolution convolution layer is used to better preserve the high-frequency information of the image and avoid blurred edges in the generated image; the encoder removes the fully connected layer of MobileNetV2 and replaces it with a newly created convolutional layer responsible for the feature transition between encoder and decoder.
The decoder part consists of a single module (fig. 4): the input feature map is first up-sampled by nearest-neighbor interpolation, feature extraction is then completed by a DW Conv with stride 1, the result is concatenated (concat) with the up-sampled result along the depth dimension, and finally a 1x1 convolution learns to combine the feature map of depth 2n into a feature map of depth n, thereby avoiding artifacts at the edge parts of the image.
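The decoder unit can be sketched in NumPy as follows. The 3x3 DW Conv is replaced here by a toy uniform averaging kernel, and the weights are random; the point is only the data flow: upsample, depthwise-convolve, concatenate to depth 2n, combine back to depth n with a 1x1 convolution (a per-pixel matrix multiply).

```python
import numpy as np

def nn_upsample(x):
    """Nearest-neighbor 2x upsampling of an (H, W, C) feature map."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def dw_conv3x3(x):
    """Toy stride-1 depthwise 3x3 conv (uniform kernel, zero padding)."""
    p = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    h, w = x.shape[:2]
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def decoder_unit(low, w1x1):
    """Upsample, extract features, concat along depth, combine 2n -> n."""
    up = nn_upsample(low)                      # (2H, 2W, n), artifact-free
    feat = dw_conv3x3(up)                      # stand-in for the 3x3 DW Conv
    cat = np.concatenate([up, feat], axis=-1)  # (2H, 2W, 2n)
    return cat @ w1x1                          # 1x1 conv == per-pixel matmul

n = 8
rng = np.random.default_rng(0)
low = rng.normal(size=(16, 16, n))
w1x1 = rng.normal(size=(2 * n, n))  # learned in practice, random here
out = decoder_unit(low, w1x1)
print(out.shape)  # (32, 32, 8)
```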
In this embodiment, the conventional interpolate-then-convolve approach requires a special padding operation in the convolution, specifically reflection padding (also called mirror padding), to avoid artifacts at the image edges. In the mobile-end scenario, however, reflection padding takes longer than conventional zero padding: intuitively, reflection padding must first read the values inside the edge of the feature map and then fill them outside the edge, while zero padding simply fills zeros outside the edge. To solve the edge-artifact problem while still using the less time-consuming zero padding, this embodiment proposes the decoder module shown in fig. 4; its main idea is to use a 1x1 convolution to learn to combine the artifact-free up-sampled feature map with the 3x3 DW Conv feature map that contains artifacts, thereby eliminating the edge artifacts.
In this embodiment, to reduce the amount of computation, the lightweight neural network uses only 3x3 DW Conv and 1x1 standard convolutions. The input of the network is the LDR image from S101, and its output is an HDR image. Since going directly from LDR to HDR is a typical ill-posed problem, the network would be hard to converge; therefore the network is only responsible for extending the dynamic range of the overexposed parts of the LDR image, which is achieved with a mask whose mathematical expression is:
mask_i = max(0, α_i − (1 − thd)) / thd, clipped to [0, 1]

where mask_i denotes the mask of the i-th input image, α_i denotes the largest pixel value among the RGB channels at each position of the i-th input image, and thd is a threshold: when the input image is normalized to [0, 1], pixels with values greater than 1 − thd are considered overexposed. In this embodiment, thd is set to 0.05. The final processing result is expressed as follows:
final = (1 − mask_i) · LDR + mask_i · f(LDR)
where f(LDR) represents the result obtained after the LDR image is input into the lightweight neural network; this result is blended with the non-overexposed part of the original LDR image to obtain the final output.
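The masking and blending step can be sketched as follows. The soft ramp from 0 to 1 over the top thd of the value range is one plausible reading of the mask equation (the original formula is given only as an image in the patent), and the network output is faked with a constant for illustration.

```python
import numpy as np

def blend_hdr(ldr, net_out, thd=0.05):
    """Blend the network output into the LDR via the overexposure mask."""
    alpha = ldr.max(axis=-1, keepdims=True)  # largest value over RGB channels
    # Mask is 0 below 1 - thd and ramps to 1 at fully saturated pixels.
    mask = np.clip((alpha - (1.0 - thd)) / thd, 0.0, 1.0)
    return (1.0 - mask) * ldr + mask * net_out

ldr = np.full((4, 4, 3), 0.5)
ldr[0, 0] = 1.0                       # one overexposed pixel
net_out = np.full((4, 4, 3), 10.0)    # pretend dynamic-range-expanded values
out = blend_hdr(ldr, net_out)
print(out[0, 0, 0], out[1, 1, 0])     # 10.0 at the saturated pixel, 0.5 elsewhere
```

Only the overexposed pixel takes the network's expanded value; well-exposed pixels pass through unchanged, matching the claim that the network only extends the dynamic range of overexposed parts.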
S103, solving loss in a logarithmic domain, and optimizing the neural network by adopting a gradient descent method.
Specifically, depending on the ambient lighting conditions, HDR pixel values range from zero to hundreds of thousands. If a loss such as the commonly used L2 loss is computed directly between the network output and the HDR image, the result fluctuates strongly and the network oscillates during backpropagation, making convergence difficult. To solve this problem, the loss is computed in the logarithmic domain.
In this embodiment, the mathematical expression of the method is:
loss = mask · |log(predict + ε) − log(gt + ε)|
where mask is as described in S102, predict is the output of the neural network given the LDR input, gt is the HDR image corresponding to the input LDR image, and ε is a small constant that avoids mathematical errors when a pixel value is zero.
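A direct NumPy sketch of this loss shows why the log domain tames the huge value range: a uniform 10% relative error costs roughly log(1.1) regardless of whether the pixel is dim or extremely bright. The test values below are illustrative.

```python
import numpy as np

def log_domain_loss(predict, gt, mask, eps=1e-5):
    """Masked L1 loss in the log domain, per the formula in S103."""
    return np.mean(mask * np.abs(np.log(predict + eps) - np.log(gt + eps)))

rng = np.random.default_rng(0)
gt = rng.uniform(0.0, 1e5, size=(32, 32, 3))  # HDR values span a huge range
predict = gt * 1.1                            # a uniform 10% relative error...
mask = np.ones_like(gt)
print(log_domain_loss(predict, gt, mask))     # ...costs about log(1.1) ~ 0.095
```

In linear-domain L2, the same 10% error would be dominated entirely by the brightest pixels, which is the oscillation problem described above.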
S104, deploying the neural network on the mobile device, inputting an image exposed close to the right, completing end-to-end LDR-to-HDR reconstruction, and finally tone-mapping the output result and saving and displaying it on the device.
Specifically, after training is completed, the network weights and a topology map are obtained; to deploy them on the mobile terminal device, the weights and topology map must be converted into a format the mobile terminal can parse. Because the latitude of a phone's CMOS sensor is limited, exposing an image to the right improves visibility in dark areas while bright areas may become overexposed; these overexposed parts are repaired by the neural network, and the final HDR image is obtained through the mask.
In this embodiment, the final network weights are loaded on the desktop, and the tf.lite.TFLiteConverter.from_session tool in TensorFlow is used to convert directly from the session to the format required by the mobile terminal, avoiding tedious operations such as graph freezing. To match user habits, this embodiment captures with the original camera program, allowing the user to control exposure time and gain so that the image is exposed to the right; the mobile-end single-frame high dynamic range imaging method of this embodiment is then integrated into an app, the image is loaded in the app at use time, and after processing by the method of this embodiment it is saved and displayed on the device.
The resulting mobile-end high dynamic range imaging system comprises:
the low dynamic range image acquisition module, configured to acquire a low dynamic range image exposed close to the right and output it to the neural network forward inference module;
a neural network forward inference module, which provides an interface for converting the input data into the input required by the neural network and an interface for the network's forward inference, and outputs the result to the image display module;
and the image display module, configured to tone-map the output so that it can be displayed.
As will be appreciated by one skilled in the art, embodiments of the present disclosure may be provided as a method, system, or computer program product. Accordingly, the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present disclosure may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and so forth) having computer-usable program code embodied therein.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above description is only a preferred embodiment of the present disclosure and is not intended to limit the present disclosure, and various modifications and changes may be made to the present disclosure by those skilled in the art. Any modification, equivalent replacement, improvement and the like made within the spirit and principle of the present disclosure should be included in the protection scope of the present disclosure.
Although the present disclosure has been described with reference to specific embodiments, it should be understood that the scope of the present disclosure is not limited thereto, and those skilled in the art will appreciate that various modifications and changes can be made without departing from the spirit and scope of the present disclosure.

Claims (10)

1. A high dynamic range imaging method facing a mobile terminal is characterized in that: the method comprises the following steps:
processing the high dynamic range image to generate a corresponding low dynamic range image; inputting the low dynamic range image into a neural network model based on an encoder-decoder structure; computing the loss between the output of the neural network model and the corresponding high dynamic range image in the logarithmic domain; optimizing the neural network model with a gradient descent method; and porting the optimized neural network model to the mobile terminal, where it is regarded as a high-order mapping function and completes the end-to-end mapping from the low dynamic range image to the high dynamic range image.
2. The mobile-end-oriented high dynamic range imaging method according to claim 1, wherein: in the process of processing the high dynamic range image, the high dynamic range image is acquired, a clustering algorithm is run on a camera response curve set to select several of the most representative response curves, and these response curves are applied to the high dynamic range image while saturated pixel values are clipped, obtaining the low dynamic range image.
3. The mobile-end-oriented high dynamic range imaging method according to claim 1, wherein: the method of acquiring the high dynamic range image includes downloading publicly available high dynamic range images, capturing them with a camera, or synthesizing them by bracketed exposure.
4. The mobile-end-oriented high dynamic range imaging method according to claim 1, wherein: before applying the camera response curve, a scaling factor is determined and the high dynamic range image is divided by it so that a certain percentage of pixel values are saturated; one of the obtained camera response curves is then randomly selected and applied to the processed high dynamic range image, and the saturated pixels in the result are clipped to obtain the corresponding low dynamic range image.
5. The mobile-end-oriented high dynamic range imaging method according to claim 1, wherein: the neural network model comprises two parts, an encoder and a decoder; an input image enters the encoder, and as the depth increases the encoder extracts features of the input image at different levels, from low-dimensional to high-dimensional; features at each level serve as the next-level input of the encoder and are also passed through skip connections to the same level of the decoder to preserve the positional information of the features, avoiding image blurring.
6. The mobile-end-oriented high dynamic range imaging method according to claim 5, wherein: the encoder is configured to implement feature extraction and down-sampling; in feature extraction, the input passes through a 3x3 depthwise separable convolution with stride 1 to extract features, then a 1x1 convolution mixes features across channels, and finally a residual connection is adopted to mitigate gradient vanishing and strengthen the reuse of learned features; in down-sampling, a 1x1 Conv first raises the depth of the features to avoid the damage the ReLU function does to low-dimensional features, then a 3x3 DW Conv with stride 2 skips a certain number of pixels to enlarge the receptive field, and finally a 1x1 Conv reduces the depth of the features, which are output directly to the next layer and to the corresponding part of the decoder;
or, alternatively,
the decoder is configured to upsample the input feature map by nearest-neighbour interpolation, complete feature extraction with a depthwise convolution with a stride of 1, concatenate the result with the upsampled feature map along the depth dimension, and finally combine the resulting feature map of depth 2n into a feature map of depth n using a 1x1 convolution, thereby avoiding artifacts at the edges of the image.
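The encoder downsampling block and the decoder upsampling block described in this claim can be sketched in plain numpy. This is a shape-level illustration only: random weights stand in for learned ones, and the expansion factor of 4 and the channel counts in the demo calls are assumptions, not values fixed by the patent.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def conv1x1(x, out_ch):
    # Pointwise convolution: mixes information across channels per pixel.
    h, w, c = x.shape
    wgt = np.random.randn(c, out_ch) / np.sqrt(c)
    return (x.reshape(-1, c) @ wgt).reshape(h, w, out_ch)

def dwconv3x3(x, stride=1):
    # Depthwise convolution: one 3x3 kernel per channel, padding 1.
    h, w, c = x.shape
    k = np.random.randn(3, 3, c) / 3.0
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    oh, ow = (h - 1) // stride + 1, (w - 1) // stride + 1
    out = np.zeros((oh, ow, c))
    for i in range(oh):
        for j in range(ow):
            patch = pad[i * stride:i * stride + 3, j * stride:j * stride + 3]
            out[i, j] = (patch * k).sum(axis=(0, 1))
    return out

def encoder_downsample(x, expand=4, out_ch=None):
    c = x.shape[-1]
    out_ch = out_ch or 2 * c
    h = relu(conv1x1(x, c * expand))   # raise depth first, so ReLU only
                                       # acts on high-dimensional features
    h = relu(dwconv3x3(h, stride=2))   # stride-2 DW conv skips pixels,
                                       # enlarging the receptive field
    return conv1x1(h, out_ch)          # reduce depth; fed to the next layer
                                       # and the matching decoder level

def decoder_upsample(x):
    n = x.shape[-1]
    up = x.repeat(2, axis=0).repeat(2, axis=1)    # nearest-neighbour interp.
    feat = relu(dwconv3x3(up, stride=1))          # stride-1 DW extraction
    cat = np.concatenate([feat, up], axis=-1)     # depth-2n concatenation
    return conv1x1(cat, n)                        # 1x1 conv back to depth n

enc_out = encoder_downsample(np.random.rand(32, 32, 8))   # -> (16, 16, 16)
dec_out = decoder_upsample(np.random.rand(8, 8, 16))      # -> (16, 16, 16)
```

The expand-then-project shape of the downsampling block matches the claim's stated motive: the 1x1 expansion keeps ReLU away from low-dimensional features, and the final 1x1 projection carries no activation.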
7. The mobile-end-oriented high dynamic range imaging method according to claim 1, wherein: in the conversion from the low dynamic range image to the high dynamic range image, the neural network model expands the dynamic range of only the overexposed portion of the low dynamic range image.
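One common way to realise this restriction is to blend the network's prediction into the input only where pixels approach saturation, keeping well-exposed pixels untouched. The patent does not specify the blending rule; the soft mask and the 0.95 threshold below are assumptions made for the sketch.

```python
import numpy as np

def expand_overexposed(ldr, pred_hdr, threshold=0.95):
    """Keep well-exposed pixels from the LDR input; take the network's
    expanded values only in (near-)saturated regions."""
    # Soft mask: 0 for well-exposed pixels, ramping to 1 at saturation.
    mask = np.clip((ldr - threshold) / (1.0 - threshold), 0.0, 1.0)
    return (1.0 - mask) * ldr + mask * pred_hdr

ldr = np.array([0.20, 0.60, 1.00])     # last pixel is overexposed
pred = np.array([0.25, 0.55, 3.00])    # hypothetical network prediction
out = expand_overexposed(ldr, pred)    # -> [0.20, 0.60, 3.00]
```

The soft ramp avoids a hard seam at the mask boundary: pixels just below saturation receive a weighted mixture rather than an abrupt switch between the two sources.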
8. A mobile-end-oriented high dynamic range imaging system, characterized by comprising:
a low dynamic range image acquisition module, configured to acquire a low dynamic range image close to correct exposure and output it to the neural network forward inference module;
a neural network forward inference module, which provides an interface converting the input data into the input format required by the neural network and an interface for neural network forward inference, and outputs the result to the image display module;
and an image display module, configured to tone-map the output so that it can be displayed.
9. A computer-readable storage medium, characterized in that: a plurality of instructions are stored therein, the instructions being adapted to be loaded by a processor of a terminal device and to execute the mobile-end-oriented high dynamic range imaging method.
10. A terminal device, characterized by comprising a processor and a computer-readable storage medium, the processor being configured to implement the instructions; the computer-readable storage medium storing a plurality of instructions adapted to be loaded by the processor and to execute the mobile-end-oriented high dynamic range imaging method.
CN202010139138.5A 2020-03-03 2020-03-03 High dynamic range imaging method and system for mobile terminal Active CN111372006B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010139138.5A CN111372006B (en) 2020-03-03 2020-03-03 High dynamic range imaging method and system for mobile terminal

Publications (2)

Publication Number Publication Date
CN111372006A true CN111372006A (en) 2020-07-03
CN111372006B CN111372006B (en) 2021-05-07

Family

ID=71211683

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010139138.5A Active CN111372006B (en) 2020-03-03 2020-03-03 High dynamic range imaging method and system for mobile terminal

Country Status (1)

Country Link
CN (1) CN111372006B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105894484A (en) * 2016-03-30 2016-08-24 山东大学 HDR reconstructing algorithm based on histogram normalization and superpixel segmentation
US20180293711A1 (en) * 2017-04-06 2018-10-11 Disney Enterprises, Inc. Kernel-predicting convolutional neural networks for denoising
CN108805836A (en) * 2018-05-31 2018-11-13 大连理工大学 Method for correcting image based on the reciprocating HDR transformation of depth
US20190164268A1 (en) * 2017-11-27 2019-05-30 Nvidia Corporation Deep-learning method for separating reflection and transmission images visible at a semi-reflective surface in a computer image of a real-world scene
US20190228510A1 (en) * 2018-01-24 2019-07-25 Samsung Electronics Co., Ltd. Electronic apparatus and controlling method of thereof
CN110135501A (en) * 2019-05-17 2019-08-16 电子科技大学 High dynamic range images evidence collecting method based on neural network framework
CN110163815A (en) * 2019-04-22 2019-08-23 桂林电子科技大学 Low-light (level) restoring method based on multistage variation self-encoding encoder
CN110197463A (en) * 2019-04-25 2019-09-03 深圳大学 High dynamic range image tone mapping method and its system based on deep learning
CN110428362A (en) * 2019-07-29 2019-11-08 深圳市商汤科技有限公司 Image HDR conversion method and device, storage medium
CN110728633A (en) * 2019-09-06 2020-01-24 上海交通大学 Multi-exposure high-dynamic-range inverse tone mapping model construction method and device
CN110832541A (en) * 2017-06-28 2020-02-21 华为技术有限公司 Image processing apparatus and method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
李雪奥: "Research and Implementation of a Multi-Exposure Image Fusion Method Based on Convolutional Neural Networks", China Master's Theses Full-text Database, Information Science and Technology *
魏妍妍: "Research on Low-Illumination Image Enhancement and Super-Resolution Reconstruction Algorithms", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113096029A (en) * 2021-03-05 2021-07-09 电子科技大学 High dynamic range image generation method based on multi-branch codec neural network
WO2022266955A1 (en) * 2021-06-24 2022-12-29 Oppo广东移动通信有限公司 Image decoding method and apparatus, image processing method and apparatus, and device
CN115297254A (en) * 2022-07-04 2022-11-04 北京航空航天大学 Portable high-dynamic imaging fusion system under high-radiation condition
CN115297254B (en) * 2022-07-04 2024-03-29 北京航空航天大学 Portable high dynamic imaging fusion system under high radiation condition

Also Published As

Publication number Publication date
CN111372006B (en) 2021-05-07

Similar Documents

Publication Publication Date Title
CN113454981B (en) Techniques for multi-exposure fusion of multiple image frames based on convolutional neural networks and for deblurring the multiple image frames
CN111372006B (en) High dynamic range imaging method and system for mobile terminal
CN112001863B (en) Underexposure image recovery method based on deep learning
CN113450290B (en) Low-illumination image enhancement method and system based on image inpainting technology
CN112602088B (en) Method, system and computer readable medium for improving quality of low light images
CN112991227B (en) Weak light image enhancement method and device based on U-net + + network
CN112053308B (en) Image deblurring method and device, computer equipment and storage medium
CN116051428B (en) Deep learning-based combined denoising and superdivision low-illumination image enhancement method
CN112837245A (en) Dynamic scene deblurring method based on multi-mode fusion
US11887218B2 (en) Image optimization method, apparatus, device and storage medium
CN112164011A (en) Motion image deblurring method based on self-adaptive residual error and recursive cross attention
CN113674159A (en) Image processing method and device, electronic equipment and readable storage medium
Ke et al. Perceptual multi-exposure image fusion with overall image quality index and local saturation
WO2023202200A1 (en) Method for reconstructing hdr images, terminal, and electronic device
Lv et al. Low-light image enhancement via deep Retinex decomposition and bilateral learning
Que et al. Densely connected convolutional networks for multi-exposure fusion
Rasheed et al. LSR: Lightening super-resolution deep network for low-light image enhancement
Chen et al. End-to-end single image enhancement based on a dual network cascade model
CN113902658A (en) RGB image-to-hyperspectral image reconstruction method based on dense multiscale network
CN117152182A (en) Ultralow-illumination network camera image processing method and device and electronic equipment
CN111953888B (en) Dim light imaging method and device, computer readable storage medium and terminal equipment
CN116152128A (en) High dynamic range multi-exposure image fusion model and method based on attention mechanism
CN113810597B (en) Rapid image and scene rendering method based on semi-predictive filtering
CN111383188A (en) Image processing method, system and terminal equipment
CN111383171B (en) Picture processing method, system and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant