CN113852759A - Image enhancement method and shooting device - Google Patents

Image enhancement method and shooting device

Info

Publication number
CN113852759A
CN113852759A (application CN202111124068.7A; granted as CN113852759B)
Authority
CN
China
Prior art keywords
image
raw domain
preset
gain
denoising
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111124068.7A
Other languages
Chinese (zh)
Other versions
CN113852759B (en)
Inventor
Sun Lihu (孙利虎)
Yang Xiaodong (杨晓冬)
Liu Guansong (刘关松)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Haowei Technology Wuhan Co ltd
Original Assignee
Haowei Technology Wuhan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Haowei Technology Wuhan Co ltd filed Critical Haowei Technology Wuhan Co ltd
Priority to CN202111124068.7A priority Critical patent/CN113852759B/en
Publication of CN113852759A publication Critical patent/CN113852759A/en
Application granted granted Critical
Publication of CN113852759B publication Critical patent/CN113852759B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H04N23/951 — Computational photography systems, e.g. light-field imaging systems, using two or more images to influence resolution, frame rate or aspect ratio
    • G06N3/045 — Combinations of networks
    • G06N3/08 — Learning methods
    • G06T5/70 — Denoising; Smoothing
    • G06T5/90 — Dynamic range modification of images or parts thereof
    • H04N23/12 — Cameras or camera modules for generating image signals from different wavelengths with one sensor only
    • H04N23/80 — Camera processing pipelines; Components thereof
    • H04N23/81 — Camera processing pipelines for suppressing or minimising disturbance in the image signal generation
    • H04N25/60 — Noise processing, e.g. detecting, correcting, reducing or removing noise
    • G06T2207/10024 — Color image
    • G06T2207/20081 — Training; Learning
    • G06T2207/20084 — Artificial neural networks [ANN]
    • Y02T10/40 — Engine management systems


Abstract

The invention provides an image enhancement method and a shooting device. The image enhancement method comprises the following steps: acquiring an original Raw domain image; acquiring a gain coefficient, and boosting the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image; and obtaining a denoising Raw domain image based on a denoising model, wherein the input parameters of the denoising model comprise an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised. With this configuration, on the one hand, the gain coefficient is added as an independent input parameter, improving the precision of the denoising model; on the other hand, the Raw domain image is processed directly, which, compared with schemes that operate on converted formats such as RGB, addresses the core of the problem and achieves a better effect. The method can thus alleviate the prior-art problems of insufficiently clear final images and heavy noise caused by low-light environments.

Description

Image enhancement method and shooting device
Technical Field
The present invention relates to the field of image signal processing technologies, and in particular, to an image enhancement method and a shooting device.
Background
In low-light environments with weak or no light (at night, in dark corners, etc.), the imaging quality of a camera faces great challenges due to the limited light-sensing capability of the camera hardware. To address imaging in low-light environments, two approaches are common.
The first is to increase exposure and gain. While this greatly improves imaging detail, it also introduces a great deal of noise and easily produces negative effects such as blurring and artifacts.
The second is to increase the light-sensing capability of the hardware, which greatly increases hardware cost. Note that improving hardware quality and improving the algorithm do not conflict: even when imaging clarity is improved through better hardware, it can be improved further through a better algorithm.
In short, in the prior art, the final image in a low-light environment is not clear enough and contains considerable noise.
Disclosure of Invention
The invention provides an image enhancement method and a shooting device to solve the prior-art problems that, in low-light environments, the final image is not clear enough and contains many noise points.
To solve the above technical problem, the present invention provides an image enhancement method comprising the following steps: acquiring an original Raw domain image; acquiring a gain coefficient, and boosting the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image; and obtaining a denoising Raw domain image based on a denoising model, wherein the input parameters of the denoising model comprise an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised.
Optionally, the image enhancement method further includes converting the denoising Raw domain image into a color image in a preset format.
Optionally, the denoising model is obtained by a model training method comprising the following steps: acquiring Raw domain images under at least two preset illumination intensities to obtain raw data used to construct a training set; and superimposing at least two preset noise levels under each preset illumination intensity, wherein the preset noise levels correspond to preset gain coefficients.
Optionally, the preset illumination intensities include 1 lux, 5 lux, 10 lux, 20 lux, 50 lux, and 100 lux.
Optionally, the preset noise levels include the noise levels corresponding to gain coefficients of 1, 2, 4, 8, 16, 32, and 64 times.
Optionally, the step of acquiring Raw domain images under at least two preset illumination intensities to obtain the raw data includes: shooting a preset number of consecutive Raw domain frames, wherein one of the frames is used to construct an input image and the per-pixel average of all the frames is used to construct a target image.
Optionally, the model training method includes deriving the training set from the raw data through preset operations, the preset operations comprising at least one of cropping, screening, rotation expansion, and flip expansion.
Optionally, the model training method includes: setting the denoising model as a convolutional neural network with a pyramid structure; setting the feature extraction network of the denoising model to fuse residual learning on the basis of preset convolution calculations; and setting the image enhancement network of the denoising model to add sub-pixel convolution and a global connection on the basis of the preset convolution calculations.
Optionally, the model training method includes training the denoising model in a channel-separated manner.
To solve the above technical problem, the present invention further provides a shooting device comprising: an image sensor for acquiring an original Raw domain image; an input device for obtaining the gain coefficient; and a processor for boosting the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image, and for obtaining a denoising Raw domain image based on a denoising model, wherein the input parameters of the denoising model comprise an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised.
Compared with the prior art, the image enhancement method and shooting device provided by the invention add the gain coefficient as an independent input parameter, improving the precision of the denoising model, and process the Raw domain image directly; compared with schemes that operate on converted formats such as RGB, this addresses the core of the problem and achieves a better effect, thus alleviating the prior-art problems of insufficiently clear final images and heavy noise in low-light environments.
Drawings
It will be appreciated by those skilled in the art that the drawings are provided for a better understanding of the invention and do not constitute any limitation to the scope of the invention. Wherein:
FIG. 1 is a flow chart of an image enhancement method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart diagram illustrating a model training method according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of channel split mode training according to an embodiment of the present invention;
FIG. 4a is a first test Raw domain image;
FIG. 4b is the denoising Raw domain image obtained after the image shown in FIG. 4a is processed by the image enhancement method according to an embodiment of the present invention;
FIG. 5a is an RGB image obtained by directly converting a second test Raw domain image;
FIG. 5b is the output result obtained after the image enhancement method according to an embodiment of the present invention processes the Raw domain image corresponding to the RGB image shown in FIG. 5a;
FIG. 5c is the output of a first prior-art method after processing the RGB image of FIG. 5a;
FIG. 5d is the output of a second prior-art method after processing the RGB image of FIG. 5a;
FIG. 6a is an RGB image obtained by directly converting a third test Raw domain image;
FIG. 6b is the output result obtained after the image enhancement method according to an embodiment of the present invention processes the Raw domain image corresponding to the RGB image shown in FIG. 6a;
FIG. 6c is the output of the first prior-art method after processing the RGB image of FIG. 6a;
FIG. 6d is the output of the second prior-art method after processing the RGB image of FIG. 6a.
Detailed Description
To make the objects, advantages, and features of the present invention clearer, the invention is described in more detail below with reference to specific embodiments illustrated in the appended drawings. Note that the drawings are greatly simplified and not to scale; they are intended only to facilitate and clarify the explanation of the embodiments of the present invention. Furthermore, the structures shown in the drawings are often parts of actual structures, and individual drawings may emphasize different aspects and use different scales.
As used in this application, the singular forms "a", "an", and "the" include plural referents; the term "or" is generally employed in a sense including "and/or"; "a" and "an" generally mean "at least one"; and "at least two" generally means "two or more". The terms "first", "second", and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or an implicit number of technical features; thus, features defined as "first", "second", or "third" may explicitly or implicitly include one or at least two of such features. "One end" and "the other end", as well as "proximal end" and "distal end", generally refer to the corresponding two parts, including not only the end points. The terms "mounted", "connected", and "coupled" should be understood broadly: for example, as a fixed connection, a detachable connection, or an integral part; as a mechanical or electrical connection; and as a direct connection or an indirect connection through intervening media or an internal relationship between two elements. Furthermore, the disposition of one element with respect to another generally means only that there is a connection, coupling, fit, or driving relationship between the two elements, which may be direct or indirect through intermediate elements, and does not indicate or imply any particular spatial relationship between them; that is, an element may be inside, outside, above, below, or to one side of another element, unless the content clearly indicates otherwise. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The core idea of the invention is to provide an image enhancement method and a shooting device, so as to solve the problems of the prior art that the final imaging picture is not clear enough and has more noise points due to the low-light environment.
The following description refers to the accompanying drawings.
Reference is made to fig. 1 to fig. 6d, which are described in the Drawings section above.
The inventors analyzed the methods of the prior art and found that existing image enhancement schemes have two drawbacks. First, they usually operate on RGB images and improve imaging by increasing image contrast and correcting uneven local illumination, but they offer no good solution to defects in image color and detail or to the noise caused by high gain. Second, they do not consider the substantial differences between images at different noise levels, and no denoising method is specifically designed for the different noise levels.
Based on the above understanding, the inventors devised an image enhancement method, as shown in fig. 1, which includes: S110, acquiring an original Raw domain image; S130, acquiring a gain coefficient and boosting the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image; and S140, obtaining a denoising Raw domain image based on a denoising model, wherein the input parameters of the denoising model comprise an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised.
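As an illustrative sketch only (not the patent's actual implementation), steps S110 to S140 can be outlined in numpy; `denoise_model` is a hypothetical placeholder for the trained network, and the 10-bit white level of 1023 is an assumed example value:

```python
import numpy as np

def apply_gain(raw, gain, white_level=1023):
    # S130 (sketch): boost pixel brightness by the gain coefficient,
    # clipping at an assumed 10-bit white level of 1023.
    return np.clip(raw.astype(np.float32) * gain, 0, white_level)

def denoise_model(image, gain):
    # Hypothetical placeholder for the trained network; the real model
    # takes both the gained image and the gain coefficient as inputs.
    return image

def enhance(raw, gain):
    gained = apply_gain(raw, gain)       # S130: gain Raw domain image
    return denoise_model(gained, gain)   # S140: denoising Raw domain image

raw = np.full((4, 4), 60, dtype=np.uint16)  # toy dark Raw frame (S110)
out = enhance(raw, gain=8)
print(out.max())  # 480.0
```

The essential point the sketch captures is that the gain coefficient flows into the model as its own input rather than being baked into the image alone.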
In step S130, the gain coefficient is obtained from an external input, which may be user input or input from another program or algorithm.
A Raw domain image is the unprocessed raw image obtained by the image sensor. However, the Chinese term for "raw image" does not refer to exactly the same content as "Raw"; to avoid unnecessary misunderstanding, the term "Raw" is retained throughout this application, and those skilled in the art will readily understand the format and source of the image data it describes.
First, the gain coefficient is configured as an independent input parameter of the denoising model. As is commonly understood, the choice of input parameters can fundamentally shape a neural network model: the more reasonably the input parameters are designed and the types divided, the better the model can fit, in principle, the true input-output relationship of the object under study, which speeds up training and improves model precision. Second, the denoising model is designed for the Raw domain image; compared with algorithms or models built on images converted to RGB or other formats, it makes better use of the original information carried by the Raw domain image, so more detail can be retained during denoising and the denoised image is clearer, a point also verified by the effect comparisons shown later in this specification. The image enhancement method can therefore solve the prior-art problems of unclear images and heavy noise in low-light environments, and the training method by which the denoising model is obtained naturally shares these benefits.
The image enhancement method further comprises: S120, correcting the black level of the original Raw data to further improve the clarity of the final image.
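Black level correction conventionally subtracts a per-sensor offset from the Raw data; a minimal sketch of S120, where the black level value of 64 is an assumed example rather than a figure from the patent:

```python
import numpy as np

def correct_black_level(raw, black_level=64):
    # S120 (sketch): subtract the sensor's black level offset and clamp
    # at zero so dark pixels do not go negative.
    return np.clip(raw.astype(np.int32) - black_level, 0, None)

frame = np.array([[70, 64], [100, 50]])
print(correct_black_level(frame))  # [[ 6  0] [36  0]]
```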
To make the effect easier for the user to observe, the image enhancement method further comprises: S150, converting the denoising Raw domain image into a color image in a preset format. In this embodiment the preset format is RGB; in other embodiments it may be YUV, CMYK, HSB, or another format.
The training method of the denoising model can be arranged according to the description above (the key points being that the gain coefficient serves as an independent input and the images are in Raw format); a preferred scheme is as follows.
As shown in fig. 2, the denoising model is obtained based on a model training method; the model training method comprises the following steps:
s10, acquiring Raw domain images under at least two preset illumination intensities to obtain original data, wherein the original data are used for constructing a training set; and superposing at least two preset noise levels under each preset illumination intensity, wherein the preset noise levels correspond to preset gain coefficients.
In one embodiment, the preset illumination intensities include 1 lux, 5 lux, 10 lux, 20 lux, 50 lux, and 100 lux.
The preset noise levels include the noise levels corresponding to gain coefficients of 1, 2, 4, 8, 16, 32, and 64 times.
It should be understood that the preset illumination intensities and preset noise levels are combined pairwise, that is, a picture at each preset noise level is collected under each preset illumination intensity. During acquisition, the shooting device and the photographed scene are kept relatively still by a fixing mechanism across the different preset illumination intensities and noise levels. The specific method of superimposing a preset noise level can follow the prior art and is not described here.
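The pairwise combination of acquisition conditions described above can be sketched as follows (illustrative only; the lists are the embodiment's example values):

```python
from itertools import product

ILLUMINATIONS_LUX = [1, 5, 10, 20, 50, 100]   # preset illumination intensities
GAINS = [1, 2, 4, 8, 16, 32, 64]              # gains defining preset noise levels

# Every (illumination, gain) pair is captured: 6 x 7 = 42 acquisition conditions.
conditions = list(product(ILLUMINATIONS_LUX, GAINS))
print(len(conditions))  # 42
```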
It should be understood that, in other embodiments, the preset illumination intensities and preset noise levels may be chosen differently; preferably, the preset illumination intensity is less than or equal to 200 lux.
The step of obtaining the raw data comprises: shooting a preset number of consecutive Raw domain frames, wherein one of the frames is used to construct an input image and the per-pixel average of all the frames is used to construct a target image. In one embodiment, the preset number of consecutive Raw domain frames is 64. The manner of selecting the frame used to construct the input image can be set according to actual needs and is not described here.
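The burst-averaging construction of input/target pairs can be sketched as follows; `build_pair` and the choice of the first frame as the input are illustrative assumptions (the patent leaves the selection rule open):

```python
import numpy as np

def build_pair(burst):
    # From a burst of consecutive Raw frames of the same static scene,
    # take one frame as the noisy input and the per-pixel mean of all
    # frames as the target; averaging cancels zero-mean noise.
    burst = np.asarray(burst, dtype=np.float64)
    input_image = burst[0]              # assumed selection rule
    target_image = burst.mean(axis=0)
    return input_image, target_image

rng = np.random.default_rng(0)
clean = np.full((8, 8), 100.0)
burst = clean + rng.normal(0, 10, size=(64, 8, 8))  # 64 noisy frames
inp, tgt = build_pair(burst)
# the averaged target is much closer to the clean scene than a single frame
print(np.abs(tgt - clean).mean() < np.abs(inp - clean).mean())  # True
```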
With continued reference to fig. 2, the model training method includes: S20, deriving the training set from the raw data through preset operations, the preset operations comprising at least one of cropping, screening, rotation expansion, and flip expansion. In one embodiment, the preset operations include: cropping the raw data into intermediate images of size 128 x 128; screening based on the gradient of each intermediate image and removing images lacking texture, that is, removing intermediate images whose gradient does not meet a preset condition; and flipping, rotating, and flip-rotating the screened intermediate images to expand the sample size and obtain the training set. In other embodiments the preset operations may be configured differently according to actual needs, and the process of pairing images of the same scene shot under different working conditions into training input images and training output images can likewise be set according to actual needs; neither is described here.
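In outline, the crop/screen/augment pipeline of S20 might look like the following; the gradient threshold and the helper names are assumptions for illustration, since the patent does not specify the screening condition:

```python
import numpy as np

PATCH = 128

def crop_patches(image, size=PATCH):
    # Cut the Raw frame into non-overlapping size x size patches.
    h, w = image.shape
    return [image[i:i + size, j:j + size]
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def has_texture(patch, threshold=1.0):
    # Screen by gradient magnitude: drop flat, texture-less patches
    # (the threshold is an assumed example value).
    gy, gx = np.gradient(patch.astype(np.float64))
    return np.hypot(gx, gy).mean() > threshold

def augment(patch):
    # Expand the sample set by flips and 90-degree rotations.
    out = []
    for k in range(4):
        rot = np.rot90(patch, k)
        out += [rot, np.fliplr(rot)]
    return out  # 8 variants per surviving patch

image = np.tile(np.arange(0, 512, 2.0), (256, 1))   # textured toy frame
patches = [p for p in crop_patches(image) if has_texture(p)]
samples = [v for p in patches for v in augment(p)]
print(len(patches), len(samples))  # 4 32
```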
With continued reference to fig. 2, the model training method includes: S30, setting the denoising model as a convolutional neural network with a pyramid structure; S40, setting the feature extraction network of the denoising model to fuse residual learning on the basis of preset convolution calculations, in order to extract image features at different scales; and S50, setting the image enhancement network of the denoising model to add sub-pixel convolution and a global connection on the basis of the preset convolution calculations, in order to preserve the high-frequency details of the image. The preset convolution calculation is a conventional convolution calculation.
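Sub-pixel convolution, as mentioned in S50, is commonly realized by a convolution followed by a channel-to-space rearrangement (often called pixel shuffle). A numpy sketch of just the rearrangement step, assuming the common (C·r², H, W) channel-first layout:

```python
import numpy as np

def pixel_shuffle(x, r):
    # Rearrange a (C*r^2, H, W) tensor into (C, H*r, W*r), trading channels
    # for spatial resolution so upsampling is learned by the preceding
    # convolution rather than produced by interpolation.
    c_r2, h, w = x.shape
    c = c_r2 // (r * r)
    x = x.reshape(c, r, r, h, w)
    x = x.transpose(0, 3, 1, 4, 2)    # -> (C, H, r, W, r)
    return x.reshape(c, h * r, w * r)

x = np.arange(16.0).reshape(4, 2, 2)  # 4 channels of 2x2 -> 1 channel of 4x4
y = pixel_shuffle(x, 2)
print(y.shape)  # (1, 4, 4)
```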
The model training method comprises: S60, training the denoising model in a channel-separated manner, a process that can be understood for both working conditions from fig. 3. Under the training condition, a training input image is first split into input sub-images of four channels, namely R, G1, G2, and B, where G1 and G2 both represent G channels and differ only in position; the corresponding training output image is likewise split into output sub-images of four channels, and the training goal of the network model is to make the sub-image output for each input sub-image as close as possible to the corresponding output sub-image. Under the subsequent inference condition, the image to be denoised is first split into sub-images of four channels, the sub-images are fed into the trained network model, denoised sub-images of the four channels are output, and the denoised sub-images are merged into a complete denoising Raw domain image. In fig. 3, an unsplit image has size 2N x 2N and each split sub-image has size N x N, where N is an integer greater than 1; in one embodiment, N is 64.
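The channel split and merge shown in fig. 3 can be sketched as follows, assuming an RGGB Bayer layout (the patent does not fix the mosaic pattern, so the layout here is an illustrative assumption):

```python
import numpy as np

def split_bayer(raw):
    # Split a 2N x 2N RGGB Bayer mosaic into four N x N channel planes.
    return {
        "R":  raw[0::2, 0::2],
        "G1": raw[0::2, 1::2],
        "G2": raw[1::2, 0::2],
        "B":  raw[1::2, 1::2],
    }

def merge_bayer(planes):
    # Recombine the four denoised channel planes into one Raw mosaic.
    n = planes["R"].shape[0]
    raw = np.empty((2 * n, 2 * n), dtype=planes["R"].dtype)
    raw[0::2, 0::2] = planes["R"]
    raw[0::2, 1::2] = planes["G1"]
    raw[1::2, 0::2] = planes["G2"]
    raw[1::2, 1::2] = planes["B"]
    return raw

mosaic = np.arange(16).reshape(4, 4)
planes = split_bayer(mosaic)
print(np.array_equal(merge_bayer(planes), mosaic))  # True
```

Splitting before the network and merging after it is exactly the round trip fig. 3 depicts, so the split/merge pair must be lossless, as the final check confirms.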
With this arrangement, the data of the four channels are separated independently, avoiding mutual interference among them, so a more accurate model can be obtained.
In one embodiment, to handle different noise levels, an optimal model is obtained by sequentially fusing the different noise levels into a global training set, as follows:
first, 1/2 of the training data at each noise level is randomly extracted (1/2 being a preset proportion that may be replaced in other embodiments) to form a global training set, from which 3,080,000 training samples are obtained and a base model is trained;
then, the remaining training data are added in order of gain coefficient from small to large to fine-tune the base model in detail, finally yielding the desired denoising model.
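The two-stage fusion strategy can be outlined as follows; `make_schedule`, the per-gain subsets, and the toy sizes are illustrative assumptions, not the patent's implementation:

```python
import random

GAINS = [1, 2, 4, 8, 16, 32, 64]

def make_schedule(subsets_by_gain, base_fraction=0.5, seed=0):
    # Half of every noise level's data forms the global set used to train
    # a base model; the remaining halves are appended stage by stage,
    # ordered from small gain to large, for fine-tuning.
    rng = random.Random(seed)
    global_set, stages = [], []
    for gain in sorted(subsets_by_gain):       # small gain first
        samples = list(subsets_by_gain[gain])
        rng.shuffle(samples)
        k = int(len(samples) * base_fraction)
        global_set += samples[:k]              # into the base training set
        stages.append((gain, samples[k:]))     # fine-tuning stage
    return global_set, stages

data = {g: [f"g{g}_{i}" for i in range(4)] for g in GAINS}
base, stages = make_schedule(data)
print(len(base), [g for g, _ in stages])  # 14 [1, 2, 4, 8, 16, 32, 64]
```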
This embodiment also provides a shooting device, comprising: an image sensor for acquiring an original Raw domain image; an input device for obtaining the gain coefficient; and a processor for boosting the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image, and for obtaining a denoising Raw domain image based on a denoising model, wherein the input parameters of the denoising model comprise an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised. The other components of the shooting device, and the connection and communication modes between components, can be set by those skilled in the art according to actual needs and are not described here.
Because the shooting device obtains its denoising model through the model training method of this embodiment, it has the advantage of high imaging clarity in low-light environments.
To verify the effectiveness of the image enhancement method, this embodiment further provides a first, a second, and a third test Raw domain image.
Fig. 4a shows the first test Raw domain image, and fig. 4b shows the denoising Raw domain image obtained by processing it with the image enhancement method. As the result shows, the image enhancement method greatly improves the clarity of the denoising Raw domain image, which in turn further improves the clarity of the RGB image in a low-light environment.
The second and third test Raw domain images are not displayed directly; for comparison, fig. 5a and fig. 6a respectively show the RGB images obtained by directly converting them. Fig. 5b shows the RGB image obtained after the second test Raw domain image is processed by this embodiment, while fig. 5c and fig. 5d show the outputs of the control methods: fig. 5c by method one and fig. 5d by method two. Method one is the image enhancement method for dark-light environments disclosed in patent application No. 201811639590.7 (its implementation steps can be understood from that application); the inventors' analysis finds it limited by its global image basis, severe noise, and color shift. Method two is the statistics-based night color image enhancement method disclosed in patent application No. 201410072449.9; the inventors' analysis finds it limited by high complexity, color defects, and loss of detail. Comparing fig. 5b, fig. 5c, and fig. 5d shows that the image enhancement method of this embodiment yields higher clarity and a better effect than the other two methods. The corresponding analysis of fig. 6b, fig. 6c, and fig. 6d can be understood with reference to this paragraph; these figures likewise show that the image enhancement method of this embodiment performs better.
In the image enhancement method and photographing apparatus provided by this embodiment, the image enhancement method includes the steps of: acquiring an original Raw domain image; acquiring a gain coefficient, and boosting the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image; and obtaining a denoised Raw domain image based on a denoising model, where the input parameters of the denoising model include an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised. With this configuration, on the one hand, adding the gain coefficient as an independent input parameter improves the accuracy of the denoising model; on the other hand, taking the Raw domain image as the key processing object addresses the core of the problem and can achieve better results than schemes that process data in formats such as RGB. This solves the prior-art problems of unclear final images and heavy noise in low-light environments.
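The three steps summarized above can be sketched in a few lines of Python. The function names, the 10-bit white level, and the stand-in model are illustrative assumptions, not part of the patent; in the disclosed method the denoising step is performed by the trained convolutional network:

```python
def apply_gain(raw, gain, white_level=1023):
    """Boost the brightness of every Raw-domain pixel by the gain
    coefficient, clipping to an assumed 10-bit sensor white level."""
    return [[min(int(p * gain), white_level) for p in row] for row in raw]

def enhance(raw, gain, denoise_model):
    """Sketch of the claimed flow: gain the Raw image, then denoise it
    with the gain coefficient passed to the model as an independent
    input parameter. `denoise_model` is any callable taking
    (image, gain); here it stands in for the trained denoising CNN."""
    gained = apply_gain(raw, gain)
    return denoise_model(gained, gain)

# Exercise the flow with a trivial identity model:
raw_patch = [[10, 20], [200, 40]]  # a dark 2x2 Raw patch
out = enhance(raw_patch, gain=8.0, denoise_model=lambda img, g: img)
print(out)  # [[80, 160], [1023, 320]]
```

Note that the gain coefficient is forwarded to the model unchanged, which is what lets the network adapt its denoising strength to the gain-dependent noise level.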
The above description covers only the preferred embodiments of the present invention and is not intended to limit its scope; any variations and modifications made by those skilled in the art according to the above disclosure fall within the scope of the present invention.

Claims (10)

1. An image enhancement method, comprising the steps of:
acquiring an original Raw domain image;
acquiring a gain coefficient, and boosting the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image; and,
obtaining a denoised Raw domain image based on a denoising model, wherein input parameters of the denoising model comprise an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised.
2. The image enhancement method of claim 1, further comprising: converting the denoised Raw domain image into a color image in a preset format.
3. The image enhancement method according to claim 1, wherein the denoising model is obtained based on a model training method; the model training method comprises the following steps:
acquiring Raw domain images under at least two preset illumination intensities to obtain raw data, wherein the raw data is used to construct a training set; and superimposing at least two preset noise levels under each preset illumination intensity, wherein each preset noise level corresponds to a preset gain coefficient.
4. The image enhancement method of claim 3, wherein the preset illumination intensities comprise 1 lux, 5 lux, 10 lux, 20 lux, 50 lux, and 100 lux.
5. The image enhancement method of claim 3, wherein the preset noise levels comprise the noise levels corresponding to gain coefficients of 1, 2, 4, 8, 16, 32, and 64 times.
6. The image enhancement method of claim 3, wherein the step of acquiring Raw domain images under at least two preset illumination intensities to obtain the raw data comprises:
capturing a preset number of consecutive Raw domain image frames, wherein one frame of the consecutive frames is used to construct an input image, and the average of the consecutive frames is used to construct a target image.
7. The image enhancement method according to any one of claims 3 to 6, wherein the model training method comprises:
obtaining the training set from the raw data based on preset operations, wherein the preset operations comprise at least one of cropping, screening, rotation expansion, and flip expansion.
8. The image enhancement method of claim 7, wherein the model training method comprises:
the denoising model is set to be a convolutional neural network based on a pyramid structure;
the feature extraction network of the denoising model is set to fuse residual learning on the basis of a preset convolution calculation; and,
the image enhancement network of the denoising model is set to add sub-pixel convolution and global connection on the basis of the preset convolution calculation.
9. The image enhancement method of claim 7, wherein the model training method comprises: training the denoising model based on a channel separation mode.
10. A shooting device, comprising:
an image sensor, configured to acquire an original Raw domain image;
an input device, configured to obtain a gain coefficient; and,
a processor, configured to boost the brightness values of the pixels of the original Raw domain image based on the gain coefficient to obtain a gain Raw domain image, and to obtain a denoised Raw domain image based on a denoising model, wherein input parameters of the denoising model comprise an image to be denoised and the gain coefficient, and the gain Raw domain image is configured as the image to be denoised.
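The training-data construction in claims 3 to 6 can be sketched as follows: the target image is the per-pixel mean of consecutive Raw frames, and the input image is one frame with a gain-dependent noise level superimposed. The Gaussian noise model with sigma proportional to the gain coefficient is an illustrative assumption only; the claims state just that each preset noise level corresponds to a preset gain coefficient:

```python
import random

PRESET_LUX = [1, 5, 10, 20, 50, 100]     # claim 4: preset illumination intensities
PRESET_GAINS = [1, 2, 4, 8, 16, 32, 64]  # claim 5: gains indexing the noise levels

def frame_average(frames):
    """Target image: per-pixel mean of the consecutive Raw frames (claim 6);
    averaging suppresses zero-mean noise while keeping the scene content."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def make_training_pair(frames, gain, rng):
    """One (input, target, gain) training triple."""
    target = frame_average(frames)
    base = frames[0]                 # one frame builds the input image (claim 6)
    sigma = 0.5 * gain               # hypothetical gain-to-noise-level mapping
    noisy = [[p + rng.gauss(0.0, sigma) for p in row] for row in base]
    return noisy, target, gain

rng = random.Random(7)
frames = [[[100, 102], [98, 100]],
          [[102, 98], [100, 100]]]
noisy, target, gain = make_training_pair(frames, gain=8, rng=rng)
print(target)  # [[101.0, 100.0], [99.0, 100.0]]
```

Per claim 7, the resulting raw pairs would then be expanded by cropping, screening, rotation, and flipping before the network of claim 8 is trained on them.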
CN202111124068.7A 2021-09-24 2021-09-24 Image enhancement method and shooting device Active CN113852759B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111124068.7A CN113852759B (en) 2021-09-24 2021-09-24 Image enhancement method and shooting device


Publications (2)

Publication Number Publication Date
CN113852759A true CN113852759A (en) 2021-12-28
CN113852759B CN113852759B (en) 2023-04-18

Family

ID=78979406

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111124068.7A Active CN113852759B (en) 2021-09-24 2021-09-24 Image enhancement method and shooting device

Country Status (1)

Country Link
CN (1) CN113852759B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494063A (en) * 2022-01-25 2022-05-13 电子科技大学 Night traffic image enhancement method based on biological vision mechanism
CN117351216A (en) * 2023-12-05 2024-01-05 成都宜图智享信息科技有限公司 Image self-adaptive denoising method based on supervised deep learning

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109410127A (en) * 2018-09-17 2019-03-01 西安电子科技大学 A kind of image de-noising method based on deep learning and multi-scale image enhancing
US20190378247A1 (en) * 2018-06-07 2019-12-12 Beijing Kuangshi Technology Co., Ltd. Image processing method, electronic device and non-transitory computer-readable recording medium
CN110610463A (en) * 2019-08-07 2019-12-24 深圳大学 Image enhancement method and device
CN111261183A (en) * 2018-12-03 2020-06-09 珠海格力电器股份有限公司 Method and device for denoising voice
CN111275653A (en) * 2020-02-28 2020-06-12 北京松果电子有限公司 Image denoising method and device
CN111325679A (en) * 2020-01-08 2020-06-23 深圳深知未来智能有限公司 Method for enhancing dark light image from Raw to Raw
CN111861902A (en) * 2020-06-10 2020-10-30 天津大学 Deep learning-based Raw domain video denoising method
US20210035273A1 (en) * 2019-07-30 2021-02-04 Nvidia Corporation Enhanced high-dynamic-range imaging and tone mapping
CN112381743A (en) * 2020-12-01 2021-02-19 影石创新科技股份有限公司 Image processing method, device, equipment and storage medium
CN112581379A (en) * 2019-09-30 2021-03-30 华为技术有限公司 Image enhancement method and device
CN113256508A (en) * 2021-04-09 2021-08-13 浙江工业大学 Improved wavelet transform and convolution neural network image denoising method
JP2021140663A (en) * 2020-03-09 2021-09-16 キヤノン株式会社 Image processing method, image processing device, image processing program, and recording medium


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114494063A (en) * 2022-01-25 2022-05-13 电子科技大学 Night traffic image enhancement method based on biological vision mechanism
CN114494063B (en) * 2022-01-25 2023-04-07 电子科技大学 Night traffic image enhancement method based on biological vision mechanism
CN117351216A (en) * 2023-12-05 2024-01-05 成都宜图智享信息科技有限公司 Image self-adaptive denoising method based on supervised deep learning
CN117351216B (en) * 2023-12-05 2024-02-02 成都宜图智享信息科技有限公司 Image self-adaptive denoising method based on supervised deep learning

Also Published As

Publication number Publication date
CN113852759B (en) 2023-04-18

Similar Documents

Publication Publication Date Title
CN113852759B (en) Image enhancement method and shooting device
US11625815B2 (en) Image processor and method
JP4375322B2 (en) Image processing apparatus, image processing method, program thereof, and computer-readable recording medium recording the program
US8018504B2 (en) Reduction of position dependent noise in a digital image
CN110246087B (en) System and method for removing image chroma noise by referring to multi-resolution of multiple channels
US10778917B2 (en) Joint dictionary generation method for image processing, interlace-based high dynamic range imaging apparatus using joint dictionaries and image processing method of the same
CN110930301B (en) Image processing method, device, storage medium and electronic equipment
CN104429056A (en) Image processing method, image processing device, imaging device, and image processing program
CN111353948A (en) Image noise reduction method, device and equipment
CN109118437B (en) Method and storage medium capable of processing muddy water image in real time
US7602967B2 (en) Method of improving image quality
US20210390658A1 (en) Image processing apparatus and method
CN110517206B (en) Method and device for eliminating color moire
CN114862698A (en) Method and device for correcting real overexposure image based on channel guidance
CN113379609B (en) Image processing method, storage medium and terminal equipment
WO2012008116A1 (en) Image processing apparatus, image processing method, and program
US9373158B2 (en) Method for reducing image artifacts produced by a CMOS camera
KR20090117617A (en) Image processing apparatus, method, and program
CN113554567B (en) Robust ghost-removing system and method based on wavelet transformation
CN115809966A (en) Low-illumination image enhancement method and system
CN114549386A (en) Multi-exposure image fusion method based on self-adaptive illumination consistency
CN113379608A (en) Image processing method, storage medium and terminal equipment
CN113379611B (en) Image processing model generation method, processing method, storage medium and terminal
CN114051125B (en) Image processing method and system
CN116744129A (en) Image mole pattern removing method based on focusing-virtual focusing double cameras

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant