CN110166706B - Image processing method, image processing apparatus, electronic device, and storage medium - Google Patents


Info

Publication number
CN110166706B
CN110166706B (application CN201910509574.4A)
Authority
CN
China
Prior art keywords
image
determining
noise reduction
exposure
original images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910509574.4A
Other languages
Chinese (zh)
Other versions
CN110166706A (en)
Inventor
林泉佑
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910509574.4A
Publication of CN110166706A
Application granted
Publication of CN110166706B
Legal status: Active
Anticipated expiration


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/61 Control of cameras or camera modules based on recognised objects
    • H04N23/611 Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/741 Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/81 Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application discloses an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method comprises the following steps: capturing a preview picture, and determining the dynamic range of the preview picture and its degree of picture movement relative to the most recently captured picture; determining an evaluation value from the dynamic range and the picture movement degree; if the evaluation value is greater than or equal to a first threshold, determining to capture multiple frames of original images in an exposure bracketing mode; determining a plurality of noise reduction models corresponding to the multiple frames of original images; denoising the multiple frames of original images with the plurality of noise reduction models, and synthesizing a target night-scene image from the denoised frames.

Description

Image processing method, image processing apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
With the continuous development of intelligent terminal technology, more and more users prefer to take photos with the camera function of an intelligent terminal. As photographing becomes an everyday need, better satisfying users' photographing requirements has become a main direction of development, for example meeting users' demand for clear photographs across a variety of night and daytime scenes.
In the related art, in a night scene or dim-light environment, several frames with the same exposure value and several underexposed frames are captured; multi-frame noise reduction is applied to the equally exposed frames, which are then fused with the underexposed frames by high-dynamic-range (HDR) fusion to obtain a high-dynamic, clean night-scene image. However, noise behaves very differently at different exposure values, and applying spatial-domain noise reduction to the regions of discontinuous noise produced by fusing frames of different exposure values, in order to suppress that discontinuity, may lose detail in some luminance ranges.
Disclosure of Invention
The present application aims to solve at least one of the technical problems in the related art to some extent.
Therefore, a first objective of the present application is to provide an image processing method that guarantees a high-dynamic, clean image while reducing discontinuous noise, retaining more image detail, and preserving image sharpness.
A second object of the present application is to provide an image processing apparatus.
A third object of the present application is to provide an electronic device.
A fourth object of the present application is to propose a computer readable storage medium.
To achieve the above object, an embodiment of a first aspect of the present application provides an image processing method, including: capturing a preview picture, and determining the dynamic range of the preview picture and the degree of picture movement relative to the most recently captured picture; determining an evaluation value S_f from the dynamic range S_d and the picture movement degree S_m, where S_f is proportional to S_d(1−S_m); if the evaluation value S_f is greater than or equal to a first threshold, determining to capture multiple frames of original images in an exposure bracketing mode; determining a plurality of noise reduction models corresponding to the multiple frames of original images; and denoising the multiple frames of original images with the plurality of noise reduction models, and synthesizing the denoised frames to obtain a target night-scene image.
With the image processing method of the embodiments of the application, a preview picture is captured, and the dynamic range of the preview picture and its degree of picture movement relative to the most recently captured picture are determined; an evaluation value S_f is determined from the dynamic range S_d and the picture movement degree S_m, where S_f is proportional to S_d(1−S_m); if S_f is greater than or equal to a first threshold, multiple frames of original images are captured in an exposure bracketing mode; a plurality of noise reduction models corresponding to the frames are determined; the frames are denoised with those models, and the denoised frames are synthesized into a target night-scene image. The method determines the optimal exposure mode based on the dynamic range and picture movement degree of the preview picture and specifies how images are denoised and fused in the bracketing mode, so that cleaner images enter the fusion; while guaranteeing a high-dynamic, clean result, it reduces discontinuous noise, retains more image detail, and preserves image sharpness.
To achieve the above object, a second aspect of the present application provides an image processing apparatus, comprising: a preview picture capture module for capturing a preview picture; a first determining module for determining the dynamic range of the preview picture and the degree of picture movement relative to the most recently captured picture; a second determining module for determining an evaluation value S_f from the dynamic range S_d and the picture movement degree S_m, where S_f is proportional to S_d(1−S_m); an original image capture module for determining to capture multiple frames of original images in an exposure bracketing mode when the evaluation value S_f is greater than or equal to a first threshold; a noise reduction model determining module for determining a plurality of noise reduction models corresponding to the multiple frames of original images; a noise reduction module for denoising the multiple frames of original images with the plurality of noise reduction models; and an image synthesis module for synthesizing a target night-scene image from the denoised frames.
With the image processing apparatus of the embodiments of the application, a preview picture is captured, and the dynamic range of the preview picture and its degree of picture movement relative to the most recently captured picture are determined; an evaluation value S_f is determined from the dynamic range S_d and the picture movement degree S_m, where S_f is proportional to S_d(1−S_m); if S_f is greater than or equal to a first threshold, multiple frames of original images are captured in an exposure bracketing mode; a plurality of noise reduction models corresponding to the frames are determined; the frames are denoised with those models, and the denoised frames are synthesized into a target night-scene image. The apparatus determines the optimal exposure mode based on the dynamic range and picture movement degree of the preview picture and specifies how images are denoised and fused in the bracketing mode, so that cleaner images enter the fusion; while guaranteeing a high-dynamic, clean result, it reduces discontinuous noise, retains more image detail, and preserves image sharpness.
To achieve the above object, a third aspect of the present application provides an electronic device, including an image sensor, a memory, a processor, and a computer program stored in the memory and executable on the processor; the processor reads the executable program code stored in the memory to run a program corresponding to the executable program code, so as to implement the image processing method described in the embodiment of the first aspect of the present application.
To achieve the above object, a fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the image processing method according to the first aspect of the present application.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
FIG. 1 is a schematic flow chart diagram of an image processing method according to a first embodiment of the present application;
FIG. 2 is a schematic flow chart diagram of an image processing method according to a second embodiment of the present application;
FIG. 3 is a schematic flow chart of an image processing method according to a third embodiment of the present application;
FIG. 4 is a schematic structural diagram of an image processing apparatus according to the first embodiment of the present application;
FIG. 5 is a schematic structural diagram of an electronic device provided in accordance with an embodiment of the present application;
FIG. 6 is a schematic diagram of an electronic device provided in accordance with one embodiment of the present application;
FIG. 7 is a schematic diagram of an image processing circuit provided in accordance with one embodiment of the present application.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An image processing method, an apparatus, an electronic device, and a computer-readable storage medium according to embodiments of the present application are described below with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart diagram of an image processing method according to an embodiment of the present application.
Step 101, capturing a preview picture, and determining the dynamic range of the preview picture and its degree of picture movement relative to the most recently captured picture.
In the embodiment of the application, the preview picture can be the image displayed on the photographing interface of the imaging device. While the imaging device is capturing images, the preview picture can be shown on the preview interface of the electronic device in response to the user's photographing operation, so that the user can clearly see the imaging effect of each frame during capture.
The dynamic range refers to a range from the brightest area to the darkest area in the image frame.
In the embodiment of the application, the areas of overexposed regions (for example, luminance greater than 220) and overly dark regions (for example, luminance less than 30) in the preview picture are counted, the ratio of their total area to the total area of the preview picture is computed, and the ratio is normalized to a score between 0 and 1. This score is taken as the dynamic range and denoted S_d; the larger S_d is, the higher the dynamic range of the picture.
In the embodiment of the application, the area of moving regions in the preview picture is counted, the ratio of their total area to the total area of the preview picture is computed, and the ratio is normalized to a score between 0 and 1, denoted S_m, which serves as the degree of movement of the preview picture relative to the most recently captured picture.
Step 102, determining an evaluation value S_f according to the dynamic range S_d and the picture movement degree S_m, where S_f is proportional to S_d(1−S_m).
Specifically, S_f is proportional to S_d(1−S_m); that is, the larger the dynamic range, the larger the evaluation value, and the larger the picture movement degree, the smaller the evaluation value.
In the embodiment of the present application, after determining the dynamic range of the preview picture and its picture movement degree relative to the most recently captured picture, the evaluation value may be determined with the following formula:

S_f = α · S_d · (1 − S_m)

where S_d is the dynamic range, S_m is the picture movement degree, S_f is the evaluation value, and α is an adjustment coefficient.
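As a minimal sketch of the scoring described above: the luminance thresholds 220 and 30 come from the text, while the function names and the coefficient name `alpha` are assumptions for illustration.

```python
def dynamic_range_score(pixels, bright_thresh=220, dark_thresh=30):
    """S_d: fraction of pixels that are overexposed or overly dark, in [0, 1]."""
    extreme = sum(1 for p in pixels if p > bright_thresh or p < dark_thresh)
    return extreme / len(pixels)

def evaluation_value(s_d, s_m, alpha=1.0):
    """S_f = alpha * S_d * (1 - S_m): grows with dynamic range, shrinks with motion."""
    return alpha * s_d * (1.0 - s_m)
```

Because the motion term enters as (1 − S_m), a perfectly still scene (S_m = 0) keeps the full dynamic-range score, while a fully moving scene (S_m = 1) drives the evaluation value to zero regardless of dynamic range.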
Step 103, if the evaluation value S_f is greater than or equal to the first threshold, determining to capture multiple frames of original images in an exposure bracketing mode.
Specifically, exposure bracketing refers to capturing a plurality of images at different exposure amounts according to preset settings. For example, three images are taken: one normally exposed, one underexposed, and one further underexposed.
In the embodiment of the present application, when the dynamic range of the preview picture is large or the picture movement is small, the evaluation value S_f is large; when S_f is greater than or equal to the first threshold, multiple frames of original images can be captured in the exposure bracketing mode. If a multi-frame underexposure mode were used instead, all frames would be captured underexposed, which could result in poor imaging of dark regions in the captured frames. The bracketing mode therefore guarantees that the captured frames include normally exposed images, so that subjects in dim regions are captured clearly and more image detail is available when the frames are synthesized into a high-dynamic-range image.
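The mode decision can be sketched as a simple threshold test; the patent does not specify the first threshold, so the value 0.3 below is an assumed placeholder.

```python
def choose_capture_mode(s_f, first_threshold=0.3):
    """Pick exposure bracketing when the evaluation value reaches the first
    threshold; otherwise fall back to multi-frame underexposure capture."""
    if s_f >= first_threshold:
        return "exposure_bracketing"
    return "multi_frame_underexposure"
```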
Step 104, determining a plurality of noise reduction models corresponding to the multiple frames of original images.
Optionally, determining an exposure value corresponding to each frame of original image; and determining a plurality of noise reduction models corresponding to the plurality of frames of original images according to the exposure value corresponding to each frame of original image.
That is, when the evaluation value S_f is greater than or equal to the first threshold, it is determined to capture multiple frames of original images in the bracketing mode, for example one normally exposed frame, one underexposed frame, and one further underexposed frame. The exposure values of the underexposed frames can be determined from the dynamic range S_d, and the maximum underexposure compensation may be −6. To further improve the noise consistency and sharpness of the night-scene image, for this exposure mode, noise models matched to the sensitivities corresponding to the different exposure values can be used to denoise the original images at those exposure values separately.
It can be understood that, to obtain a better artificial-intelligence noise reduction effect when a noise reduction model is selected for denoising, as shown in FIG. 2, the noise reduction model is trained in advance on a training sample set to improve its ability to recognize noise characteristics. The specific steps are as follows:
step 201, a training sample set is obtained, wherein the training sample set includes sample images shot under each exposure value.
In the embodiment of the present application, images taken by an imaging apparatus at each exposure value may be used as sample images.
Step 202, selecting a target sample image shot under the same exposure value from the training sample set.
Step 203, dividing the target sample images into a plurality of groups according to the sensitivity used when shooting, and training a noise reduction model for each group, where the noise reduction model learns the mapping between sensitivity and the noise characteristics of the target sample images.
Step 204, determining, from the noise reduction models of each group, the noise reduction model matching the corresponding exposure value, according to the accuracy of each model and the sensitivity of the target sample images used in training.
Furthermore, after the noise reduction models for each group are trained, the noise reduction effect of each model is evaluated to obtain its accuracy. A model matching each exposure value is then determined from the models of each group according to model accuracy and the sensitivity of the target sample images used in training, so that the high-dynamic-range image can be denoised with that model and image quality improved.
As a possible implementation of the embodiment of the present application, when determining the noise reduction model matching a given exposure value, models whose accuracy is greater than a threshold may first be selected from the models of each group as candidates. Among the candidates, the model trained on target sample images with the highest sensitivity is then chosen as the model matching that exposure value.
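The candidate-filtering rule just described can be sketched as follows; the record layout (dicts with `iso` and `accuracy` keys) and the accuracy threshold 0.9 are illustrative assumptions, not values from the patent.

```python
def select_noise_model(models, accuracy_threshold=0.9):
    """From the per-ISO-group models trained for one exposure value, keep
    those whose accuracy exceeds the threshold, then pick the candidate
    trained on the highest sensitivity. Returns None if no model qualifies."""
    candidates = [m for m in models if m["accuracy"] > accuracy_threshold]
    if not candidates:
        return None
    return max(candidates, key=lambda m: m["iso"])
```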
Step 105, denoising the multiple frames of original images with the plurality of noise reduction models, and synthesizing the denoised frames to obtain a target night-scene image.
In the embodiment of the application, after a plurality of noise reduction models corresponding to a plurality of frames of original images are determined, the plurality of noise reduction models are adopted to reduce noise of the plurality of frames of original images, and the plurality of noise reduced frames of original images are input into an HDR synthesis model to obtain the synthesis weight of each region in the corresponding original images; and synthesizing the denoised multi-frame original images in regions according to the synthesis weight so as to obtain a target night scene image. It should be noted that, the HDR synthesis model has learned to obtain a mapping relationship between the features of each region in the original image and the synthesis weight; wherein the features are used to characterize the exposure and the image brightness of the corresponding region.
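The region-wise synthesis can be illustrated with a toy weighted fusion; the region values and weights below are placeholders rather than the output of a real HDR synthesis model.

```python
def fuse_regions(frames, weights):
    """Fuse denoised frames region by region.

    frames:  list of frames, each a list of per-region values.
    weights: per-frame, per-region synthesis weights; for each region the
             weights across frames are assumed to sum to 1.
    """
    n_regions = len(frames[0])
    fused = []
    for r in range(n_regions):
        # Weighted sum of the co-located region across all frames.
        fused.append(sum(frames[f][r] * weights[f][r] for f in range(len(frames))))
    return fused
```

Per-region rather than global weights let the model favor the underexposed frame in highlight regions and the normally exposed frame in shadow regions of the same image.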
Based on the above embodiment, after it is determined in step 103 that multiple frames of original images are to be captured in the exposure bracketing mode, as shown in FIG. 3, the reference exposure duration may be compensated according to the exposure compensation mode to determine the compensation exposure duration for each frame of original image. The specific steps are as follows:
step 301, determining a corresponding exposure compensation mode according to a surrounding exposure mode; the exposure compensation mode is used for indicating the frame number of the original image and the exposure compensation level corresponding to each frame of the original image.
In this embodiment, the number of image frames to be captured may differ depending on the exposure values used in the bracketing mode, and different exposure compensation levels need to be adopted for different numbers of original image frames.
Step 302, determining the reference sensitivity according to the picture movement degree S_m.
It is understood that, during capture of multiple frames of original images, movement of the preview picture relative to the most recently captured picture is caused by shake of the imaging device. Moreover, the picture movement degree is positively correlated with the shake degree of the imaging device. Therefore, in this embodiment, the reference sensitivity can be determined according to the shake degree of the imaging device capturing the frames.
In this embodiment, capturing multiple frames of original images at a low reference sensitivity reduces image noise, while capturing multiple low-sensitivity frames and synthesizing them into a high-dynamic-range image improves the dynamic range and overall brightness of the night-scene photograph. Controlling the sensitivity thus effectively suppresses image noise and improves the quality of night-scene shots.
It can be understood that the sensitivity used for capture affects the overall shooting time; if the shooting time is too long, the shake of a handheld imaging device worsens, which degrades image quality. Therefore, the reference sensitivity for each frame to be captured can be adjusted according to the current shake degree of the imaging device so that the shooting duration stays within a suitable range.
Specifically, if the current shake degree of the imaging device is small, the reference sensitivity for each frame to be captured can be appropriately lowered so that noise in each frame is effectively suppressed and the quality of the captured image improves; if the current shake degree is large, the reference sensitivity can be appropriately raised to shorten the shooting time.
For example, if it is determined that the current degree of shake of the imaging apparatus is "no shake", the reference sensitivity may be determined to be a smaller value to obtain an image of higher quality as much as possible, such as determining the reference sensitivity to be 100; if it is determined that the current shake degree of the imaging apparatus is "slight shake", the reference sensitivity may be determined to be a larger value to reduce the shooting time period, for example, the reference sensitivity is determined to be 200; if the current shake degree of the imaging device is determined to be "small shake", the reference sensitivity may be further increased to reduce the shooting time duration, for example, the reference sensitivity is determined to be 220; if it is determined that the current shake degree of the imaging apparatus is "large shake", it may be determined that the current shake degree is too large, and at this time, the reference sensitivity may be further increased to reduce the shooting time period, for example, the reference sensitivity is determined to be 250.
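The example tiers above can be expressed as a lookup table; the tier names and ISO values are the ones quoted in the text.

```python
# Shake-degree tiers and reference sensitivities from the example above.
SHAKE_TO_REFERENCE_ISO = {
    "no shake": 100,      # best quality: lowest noise
    "slight shake": 200,  # raise ISO to shorten exposure
    "small shake": 220,
    "large shake": 250,   # shortest shooting time
}

def reference_sensitivity(shake_degree):
    """Look up the reference ISO for a given shake tier."""
    return SHAKE_TO_REFERENCE_ISO[shake_degree]
```

In practice the mapping would be preset according to actual needs, as the following paragraphs note; the table form simply makes the monotone relationship (more shake, higher ISO) explicit.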
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, when the degree of shake of the imaging apparatus changes, an optimal solution can be obtained by adjusting the reference sensitivity. The mapping relation between the jitter degree of the imaging equipment and the reference sensitivity corresponding to each frame of image to be acquired can be preset according to actual needs.
Note that, when the reference sensitivity corresponding to the degree of shake is adjusted in accordance with the degree of shake of the imaging apparatus, if the current reference sensitivity is just adapted to the degree of shake, the result of the adjustment is that the reference sensitivity remains unchanged. This also falls within the scope of "adjustment" in the embodiments of the present application.
In addition, in one possible application scenario, the camera module of the imaging apparatus contains multiple lenses, and different lenses may use different sensitivities in the same shooting environment. For a capture performed by one of those lenses, the reference sensitivity adjusted in this step should be the same throughout that capture; that is, the same reference sensitivity is used for all of the frames it captures.
In addition, in the embodiment of the present application, the reference sensitivity is not limited to be adjusted only according to the shake degree of the imaging device, and may also be determined comprehensively according to a plurality of parameters such as the shake degree and the luminance information of the shooting scene, which is not limited herein.
Step 303, determining a reference exposure time length according to the brightness information of the shooting scene and the set reference sensitivity.
The exposure duration refers to the time of light passing through the lens.
In this embodiment of the application, the luminance information of the shot scene may be obtained by photometry with a photometry module in the imaging device, or obtained by the luminance information in the preview picture, which is not limited herein. The brightness information usually takes the illuminance of the shot scene as a brightness measurement index, and those skilled in the art can know that other indexes can be used for brightness measurement, which are all within the scope of the present embodiment.
Specifically, an automatic exposure control (AEC) algorithm can be used to determine the exposure corresponding to the current luminance information, and a reference exposure duration is then determined for each frame to be captured according to the luminance information of the shooting scene and the reference sensitivity.
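The relationship between exposure, sensitivity, and duration can be sketched as follows; `target_exposure` is an abstract stand-in for the AEC output, in arbitrary units, and is an assumption for illustration.

```python
def reference_exposure_duration(target_exposure, reference_iso):
    """Duration such that duration * ISO matches the AEC's target exposure:
    halving the sensitivity doubles the required exposure duration."""
    return target_exposure / reference_iso
```

This is why the shake-based ISO adjustment in step 302 controls shooting time: a higher reference sensitivity directly shortens each frame's exposure duration for the same metered scene.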
Step 304, compensating the reference exposure duration according to the exposure compensation mode, and determining the compensation exposure duration corresponding to each frame of original image.
In the embodiment of the application, when the imaging device uses different exposure modes to capture the multiple frames of original images, the preset exposure compensation values of the frames to be captured differ. A mapping between the shake degree of the imaging device and the exposure compensation values can be preset, so that the preset exposure compensation value for each frame to be captured is determined from the current shake degree of the imaging device.
For example, when the shake degree of the imaging device is "no shake", the exposure compensation values for the frames to be captured are preset to range from −6 to +2 with a step of 0.5 between adjacent values; when the shake degree is "slight shake", the range is preset to −5 to +1 with a step of 1; and so on.
As another possible implementation form, it is detected whether the preview picture of the imaging device contains a human face. The night scene modes suitable for the current shooting scene differ depending on whether the preview picture contains a face, and the preset exposure compensation values determined for each frame to be acquired differ accordingly.
As another possible implementation manner, for the same shake degree, different exposure compensation values may be determined for each frame to be acquired according to whether the preview picture contains a human face. Thus, the same shake degree may correspond to a plurality of exposure compensation values. For example, when the shake degree of the imaging device is "slight shake", the preset exposure compensation values for each frame to be acquired cover two cases: the preview picture containing a human face and the preview picture not containing one.
In the night scene mode, when the image to be acquired contains a face, the illumination intensity of the face region is usually low, so the determined reference exposure is higher than it would be if no face were present. If too many overexposed frames are still acquired in this situation, the face region is easily overexposed and the imaging effect of the acquired image suffers, so the corresponding exposure compensation mode needs a lower exposure compensation range. Therefore, for the same shake degree, the compensation values used when the preview picture contains a face differ from those used when it does not; once the current shake degree of the imaging device and whether the preview picture contains a face have been determined, a preset exposure compensation value consistent with the current actual situation can be selected.
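One way to read the mapping described above, shake degree plus face presence selecting an EV range, is a simple lookup table. All names below are assumptions for illustration; the two no-face rows match the example values in the text, while the face rows (which cap the upper EV to protect face regions) are invented to show the shape of the table:

```python
# Hypothetical lookup table: (low EV, high EV, step).  The no-face rows
# match the examples in the text; the face rows are assumed, capping the
# upper EV so that face regions are less likely to be overexposed.
EV_TABLE = {
    ("no_shake", False):     (-6.0, 2.0, 0.5),
    ("no_shake", True):      (-6.0, 1.0, 0.5),
    ("slight_shake", False): (-5.0, 1.0, 1.0),
    ("slight_shake", True):  (-5.0, 0.0, 1.0),
}

def compensation_values(shake, has_face):
    """Return the preset EV list for a (shake degree, face) pair."""
    lo, hi, step = EV_TABLE[(shake, has_face)]
    n = int(round((hi - lo) / step)) + 1
    return [lo + i * step for i in range(n)]
```

The table also fixes the frame count implicitly: the length of the EV list is the number of original images to acquire.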
In the embodiment of the present application, after the reference sensitivity and the compensated exposure duration corresponding to each frame of original image are determined, the imaging device is controlled to acquire images accordingly; this is not described in further detail here.
According to the image processing method of the embodiment of the application, a preview picture is acquired, and the dynamic range of the preview picture and its picture movement degree relative to the most recently acquired picture are determined; an evaluation value S_f is determined according to the dynamic range S_d and the picture movement degree S_m, where S_f is directly proportional to S_d(1 - S_m); if the evaluation value S_f is greater than or equal to a first threshold, multiple frames of original images are acquired in a surround exposure (bracketing) mode; a plurality of noise reduction models corresponding to the multiple frames of original images are determined; the multiple frames of original images are denoised with the plurality of noise reduction models, and a target night scene image is synthesized from the denoised frames. The method determines the optimal exposure mode based on the dynamic range and picture movement degree of the preview picture and specifies how images are denoised and fused in the surround exposure mode, ensuring that cleaner images enter the fusion step; this reduces discontinuous noise and retains more image detail while keeping the image highly dynamic and clean, ensuring image clarity.
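The mode decision summarized above can be sketched as follows, taking the proportionality constant as 1 and an illustrative threshold (the patent fixes neither value):

```python
def choose_exposure_mode(s_d, s_m, threshold=0.5):
    """Decide the capture mode from the preview picture.

    s_d (dynamic range) and s_m (picture movement degree) are assumed
    normalized to [0, 1]; the evaluation value S_f is proportional to
    S_d * (1 - S_m), with the constant of proportionality taken as 1.
    The threshold is illustrative, not a value given in the patent.
    """
    s_f = s_d * (1 - s_m)
    return "surround_exposure" if s_f >= threshold else "single_exposure"

# high dynamic range, little movement: favors bracketed capture
mode = choose_exposure_mode(0.9, 0.1)  # "surround_exposure"
```

The (1 - S_m) factor encodes the intuition that bracketing only pays off when the scene is static enough for the frames to align.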
In accordance with the image processing methods provided in the foregoing embodiments, an embodiment of the present application further provides an image processing apparatus. Since the apparatus corresponds to the methods provided in the foregoing embodiments, the method embodiments described above are also applicable to it and are not described in detail again here. Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application. As shown in fig. 4, the image processing apparatus 400 includes: a preview picture acquisition module 410, a first determining module 420, a second determining module 430, a raw image acquisition module 440, a noise reduction model determining module 450, a noise reduction module 460 and an image synthesis module 470.
Specifically, the preview screen capturing module 410 is configured to capture a preview screen.
The first determining module 420 is configured to determine a dynamic range of the preview screen and a screen movement degree relative to a most recently captured screen.
The second determining module 430 is configured to determine an evaluation value S_f according to the dynamic range S_d and the picture movement degree S_m, where S_f is directly proportional to S_d(1 - S_m).
The raw image acquisition module 440 is configured to determine that multiple frames of original images are acquired in a surround exposure mode when the evaluation value S_f is greater than or equal to the first threshold.
The denoising model determining module 450 is configured to determine a plurality of denoising models corresponding to the plurality of frames of original images. As an example, the noise reduction model determination module 450 is specifically configured to: determining an exposure value corresponding to each frame of original image; and determining a plurality of noise reduction models corresponding to the plurality of frames of original images according to the exposure value corresponding to each frame of original image.
In one embodiment of the present application, the noise reduction model is pre-trained by: acquiring a training sample set, wherein the training sample set comprises sample images shot under each exposure value; selecting a target sample image shot under the same exposure value from the training sample set; dividing the target sample graph into a plurality of groups according to the sensitivity adopted during shooting, and training a noise reduction model corresponding to each group, wherein the noise reduction model is learned to obtain a mapping relation between the sensitivity and the noise characteristics of the target sample graph; and determining the noise reduction model matched with the corresponding exposure value from the noise reduction models corresponding to each group according to the accuracy of the noise reduction model and the sensitivity of the target sample graph adopted by training.
In the embodiment of the present application, according to the accuracy of the noise reduction model and the sensitivity of the target sample map adopted for training, a specific implementation process of determining the noise reduction model matched with the corresponding exposure value from the noise reduction models corresponding to each group may be as follows: determining candidate noise reduction models with accuracy greater than a threshold value from the noise reduction models corresponding to each group; and taking the candidate noise reduction model with the maximum sensitivity of the target sample graph adopted by training as the noise reduction model matched with the corresponding exposure value.
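The two-stage selection just described (filter by accuracy, then prefer the highest training sensitivity) can be sketched as follows; the dict fields and the accuracy threshold are illustrative assumptions:

```python
def select_noise_model(models, acc_threshold=0.9):
    """Pick the noise reduction model matched to one exposure value.

    models: candidate models trained for that exposure value, each
    described by an 'accuracy' and the 'iso' (sensitivity) of its
    training samples.  Field names and the threshold are illustrative.
    Step 1: keep models whose accuracy exceeds the threshold.
    Step 2: of those, prefer the one trained at the highest sensitivity.
    """
    candidates = [m for m in models if m["accuracy"] > acc_threshold]
    return max(candidates, key=lambda m: m["iso"])

models = [
    {"accuracy": 0.95, "iso": 800},
    {"accuracy": 0.85, "iso": 3200},   # accurate-enough filter rejects this
    {"accuracy": 0.92, "iso": 1600},
]
best = select_noise_model(models)      # the ISO 1600 model
```

Preferring the highest sensitivity among sufficiently accurate candidates biases the choice toward models that have seen the strongest noise during training.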
And a denoising module 460, configured to denoise the multiple frames of original images by using multiple denoising models.
And the image synthesis module 470 is configured to synthesize a target night scene image according to the de-noised multiple frames of original images. As an example, the image composition module 470 is specifically configured to: inputting the noise-reduced multi-frame original images into a high-dynamic synthesis model to obtain the synthesis weight of each region in the corresponding original images; and synthesizing the denoised multi-frame original images in regions according to the synthesis weight so as to obtain a target night scene image. The high dynamic synthesis model learns the mapping relation between the characteristics of each region in the original image and the synthesis weight; the features are used to characterize the exposure and image brightness of the corresponding region.
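Given per-region synthesis weights such as those produced by the high-dynamic synthesis model, the region-wise synthesis reduces to a weighted sum; a minimal NumPy stand-in (the learned model itself is not sketched, and the weights-sum-to-one property is an assumption made explicit here):

```python
import numpy as np

def fuse_regions(frames, weights):
    """Blend denoised frames into one image using per-pixel weights.

    frames:  (N, H, W) denoised original images.
    weights: (N, H, W) synthesis weights, assumed to sum to 1 across
             the N frames at every pixel (a property the learned model
             would need to guarantee; asserted here for illustration).
    """
    frames = np.asarray(frames, dtype=np.float64)
    weights = np.asarray(weights, dtype=np.float64)
    assert np.allclose(weights.sum(axis=0), 1.0)
    return (frames * weights).sum(axis=0)
```

In practice the weights would favor the underexposed frames in highlight regions and the overexposed frames in shadow regions, which is what the learned mapping from region features (exposure, brightness) to weights provides.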
In one embodiment of the present application, the image processing apparatus may further include: the device comprises an exposure compensation mode determining module, a reference exposure duration determining module and a compensation exposure duration determining module. The exposure compensation mode determining module is used for determining a corresponding exposure compensation mode according to the surrounding exposure mode after determining that a plurality of frames of original images are collected in the surrounding exposure mode; the exposure compensation mode is used for indicating the frame number of the original image and the exposure compensation level corresponding to each frame of the original image. The reference exposure time length determining module is used for determining the reference exposure time length according to the brightness information of the shooting scene and the set reference sensitivity. And the compensation exposure duration determining module is used for compensating the reference exposure duration according to the exposure compensation mode and determining the compensation exposure duration corresponding to each frame of original image.
In an embodiment of the present application, the raw image acquisition module 440 is specifically configured to acquire images according to the reference sensitivity and the compensated exposure duration corresponding to each frame of original image.
In an embodiment of the application, the reference exposure duration determining module is further configured to determine the reference sensitivity according to the picture movement degree S_m before the reference exposure duration is determined according to the brightness information of the shooting scene and the set reference sensitivity.
According to the image processing apparatus, a preview picture is acquired, and the dynamic range of the preview picture and its picture movement degree relative to the most recently acquired picture are determined; an evaluation value S_f is determined according to the dynamic range S_d and the picture movement degree S_m, where S_f is directly proportional to S_d(1 - S_m); if the evaluation value S_f is greater than or equal to the first threshold, multiple frames of original images are acquired in the surround exposure mode; a plurality of noise reduction models corresponding to the multiple frames of original images are determined; the multiple frames of original images are denoised with the plurality of noise reduction models, and a target night scene image is synthesized from the denoised frames. The apparatus determines the optimal exposure mode based on the dynamic range and picture movement degree of the preview picture and specifies how images are denoised and fused in the surround exposure mode, ensuring that cleaner images enter the fusion step; this reduces discontinuous noise and retains more image detail while keeping the image highly dynamic and clean, ensuring image clarity.
In order to implement the foregoing embodiments, an embodiment of the present application further provides an electronic device 500 which, referring to fig. 5, includes: an image sensor 510, a processor 520, a memory 530, and a computer program stored on the memory 530 and executable on the processor 520. The image sensor 510 is electrically connected to the processor 520, and the processor 520 implements the image processing method described in the above embodiments when executing the program.
As one possible scenario, processor 520 may include: an Image Signal Processor (ISP) processor, and a Graphics Processing Unit (GPU) connected to the ISP processor.
As an example, please refer to fig. 6, which is a schematic diagram of an electronic device according to an embodiment of the present application, based on the electronic device shown in fig. 5. The electronic device 500 includes the memory 530 (comprising the non-volatile memory 60 and the internal memory 62) and the processor 520. The memory 530 stores computer readable instructions which, when executed, cause the processor 520 to perform the image processing method of any of the above embodiments.
As shown in fig. 6, the electronic device 500 includes a processor 520, a non-volatile memory 60, an internal memory 62, a display screen 63, and an input device 64 connected via a system bus 61. The non-volatile memory 60 of the electronic device 500 stores an operating system and computer readable instructions. The computer readable instructions can be executed by the processor 520 to implement the image processing method of the embodiments of the present application. The processor 520 provides the computing and control capabilities that support the operation of the entire electronic device 500. The internal memory 62 of the electronic device 500 provides an environment for executing the computer readable instructions in the non-volatile memory 60. The display screen 63 of the electronic device 500 may be a liquid crystal display or an electronic ink display, and the input device 64 may be a touch layer covering the display screen 63, a button, trackball or touch pad arranged on the housing of the electronic device 500, or an external keyboard, touch pad or mouse. The electronic device 500 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, or smart glasses). Those skilled in the art will appreciate that the configuration shown in fig. 6 is merely a schematic diagram of the portion of the configuration relevant to the present application and does not limit the electronic device 500 to which the present application is applied; a particular electronic device 500 may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
To implement the foregoing embodiments, the present application further provides an image processing circuit. Please refer to fig. 7, which is a schematic diagram of an image processing circuit according to an embodiment of the present application. As shown in fig. 7, the image processing circuit 70 includes an image signal processing (ISP) processor 71 (the ISP processor 71 serving as the processor 520) and a graphics processor (GPU).
The image data captured by the camera 73 is first processed by the ISP processor 71, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the camera 73. The camera 73 may include one or more lenses 732 and an image sensor 734. The image sensor 734 may include an array of color filters (e.g., Bayer filters), and may acquire the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 71. The sensor 74 (e.g., a gyroscope) may provide acquired image processing parameters (e.g., anti-shake parameters) to the ISP processor 71 based on the sensor 74 interface type. The sensor 74 interface may be a SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination thereof.
In addition, the image sensor 734 may also send raw image data to the sensor 74; the sensor 74 may provide the raw image data to the ISP processor 71 based on the sensor 74 interface type, or store the raw image data in the image memory 75.
The ISP processor 71 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 71 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 71 may also receive image data from the image memory 75. For example, the sensor 74 interface sends raw image data to the image memory 75, and the raw image data in the image memory 75 is then provided to the ISP processor 71 for processing. The image memory 75 may be the memory 530, a portion of the memory 530, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from image sensor 734 interface or from sensor 74 interface or from image memory 75, ISP processor 71 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 75 for additional processing before being displayed. The ISP processor 71 receives the processed data from the image memory 75 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 71 may be output to display 77 (display 77 may include display screen 63) for viewing by a user and/or further processing by a graphics engine or GPU. Further, the output of the ISP processor 71 may also be sent to an image memory 75, and the display 77 may read image data from the image memory 75. In one embodiment, image memory 75 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 71 may be sent to an encoder/decoder 76 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 77 device. The encoder/decoder 76 may be implemented by a CPU or GPU or coprocessor.
The statistical data determined by the ISP processor 71 may be sent to the control logic 72 unit. For example, the statistical data may include image sensor 734 statistical information such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 732 shading correction, and the like. Control logic 72 may include a processing element and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters for camera 73 and control parameters for ISP processor 71 based on the received statistical data. For example, the control parameters of camera 73 may include sensor 74 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 732 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 732 shading correction parameters.
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as described in the above embodiments.
It will be understood by those skilled in the art that all or part of the steps carried by the method for implementing the above embodiments may be implemented by hardware related to instructions of a program, which may be stored in a computer readable storage medium, and when the program is executed, the program includes one or a combination of the steps of the method embodiments.
In addition, functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (12)

1. An image processing method, characterized in that it comprises the steps of:
collecting a preview picture, and determining the dynamic range of the preview picture and the picture movement degree relative to the recently collected picture;
determining an evaluation value S_f according to the dynamic range S_d and the picture movement degree S_m; wherein S_f is directly proportional to S_d(1 - S_m), and the dynamic range S_d and the picture movement degree S_m are normalized to values between 0 and 1;
if the evaluation value S_f is greater than or equal to a first threshold, determining to acquire multiple frames of original images in a surround exposure mode;
determining a plurality of noise reduction models corresponding to the plurality of frames of original images;
and denoising the multi-frame original images by adopting the plurality of denoising models, and synthesizing to obtain a target night scene image according to the denoised multi-frame original images.
2. The method according to claim 1, wherein after determining to capture multiple frames of original images in a bracketing mode, the method further comprises:
determining a corresponding exposure compensation mode according to the surrounding exposure mode; the exposure compensation mode is used for indicating the frame number of the original image and the exposure compensation level corresponding to each frame of the original image;
determining a reference exposure time length according to the brightness information of the shooting scene and the set reference sensitivity;
and compensating the reference exposure duration according to the exposure compensation mode, and determining the compensation exposure duration corresponding to each frame of original image.
3. The image processing method according to claim 2, wherein the acquiring multiple frames of original images comprises:
and acquiring images according to the reference sensitivity and the compensation exposure duration corresponding to each frame of original image.
4. The image processing method according to claim 2, wherein before determining the reference exposure time period based on the luminance information of the shooting scene and the set reference sensitivity, further comprising:
determining the reference sensitivity according to the picture movement degree S_m.
5. The method according to claim 1, wherein the determining a plurality of noise reduction models corresponding to the plurality of frames of original images comprises:
determining an exposure value corresponding to each frame of original image;
and determining a plurality of noise reduction models corresponding to the plurality of frames of original images according to the exposure values corresponding to the frames of original images.
6. The image processing method according to claim 5, wherein the noise reduction model is pre-trained by:
acquiring a training sample set, wherein the training sample set comprises sample graphs shot under each exposure value;
selecting a target sample image shot under the same exposure value from the training sample set;
dividing the target sample graph into a plurality of groups according to the sensitivity adopted during shooting, and training a noise reduction model corresponding to each group, wherein the noise reduction model learns to obtain the mapping relation between the sensitivity and the noise characteristics of the target sample graph;
and determining the noise reduction model matched with the corresponding exposure value from the noise reduction models corresponding to each group according to the accuracy of the noise reduction model and the sensitivity of the target sample graph adopted by training.
7. The image processing method according to claim 6, wherein determining a noise reduction model matching the corresponding exposure value from the noise reduction models corresponding to the respective groups according to the accuracy of the noise reduction model and the sensitivity of the target sample map used for training comprises:
determining candidate noise reduction models with accuracy greater than a threshold value from the noise reduction models corresponding to each group;
and taking the candidate noise reduction model with the maximum sensitivity of the target sample graph adopted by training as the noise reduction model matched with the corresponding exposure value.
8. The image processing method according to claim 1, wherein the synthesizing of the target night scene image according to the de-noised multi-frame original images comprises:
inputting the noise-reduced multi-frame original images into a high-dynamic synthesis model to obtain the synthesis weight of each region in the corresponding original images;
and synthesizing the noise-reduced multi-frame original images in regions according to the synthesis weight to obtain the target night scene image.
9. The image processing method according to claim 8, wherein the high-dynamic synthesis model has learned a mapping relationship between the features and the synthesis weights of the regions in the original image; the features are used to characterize the exposure and image brightness of the corresponding region.
10. An image processing apparatus, characterized in that the apparatus comprises:
the preview picture acquisition module is used for acquiring a preview picture;
the first determining module is used for determining the dynamic range of the preview picture and the picture moving degree relative to the recently acquired picture;
a second determining module, configured to determine an evaluation value S_f according to the dynamic range S_d and the picture movement degree S_m; wherein S_f is directly proportional to S_d(1 - S_m), and the dynamic range S_d and the picture movement degree S_m are normalized to values between 0 and 1;
a raw image acquisition module, configured to determine that multiple frames of original images are acquired in a surround exposure mode when the evaluation value S_f is greater than or equal to the first threshold;
the noise reduction model determining module is used for determining a plurality of noise reduction models corresponding to the multi-frame original images;
the noise reduction module is used for reducing the noise of the multi-frame original image by adopting the plurality of noise reduction models;
and the image synthesis module is used for synthesizing a target night scene image according to the multi-frame original images subjected to noise reduction.
11. An electronic device comprising an image sensor, a memory, a processor, and a computer program stored on the memory and executable on the processor, the image sensor being electrically connected to the processor, the processor implementing the image processing method according to any one of claims 1 to 9 when executing the program.
12. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out an image processing method according to any one of claims 1 to 9.
CN201910509574.4A 2019-06-13 2019-06-13 Image processing method, image processing apparatus, electronic device, and storage medium Active CN110166706B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910509574.4A CN110166706B (en) 2019-06-13 2019-06-13 Image processing method, image processing apparatus, electronic device, and storage medium


Publications (2)

Publication Number Publication Date
CN110166706A CN110166706A (en) 2019-08-23
CN110166706B true CN110166706B (en) 2020-09-04

Family

ID=67628836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910509574.4A Active CN110166706B (en) 2019-06-13 2019-06-13 Image processing method, image processing apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN110166706B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110572585B (en) * 2019-08-26 2021-03-23 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN110677557B (en) * 2019-10-28 2022-04-22 Oppo广东移动通信有限公司 Image processing method, image processing device, storage medium and electronic equipment
CN114727028B (en) * 2019-12-30 2024-04-16 深圳市道通智能航空技术股份有限公司 Image exposure method and device and unmanned aerial vehicle
CN113077378B (en) * 2021-03-31 2024-02-09 重庆长安汽车股份有限公司 Image processing and target identification method based on vehicle-mounted camera
CN113298735A (en) * 2021-06-22 2021-08-24 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN113409219B (en) * 2021-06-28 2022-11-25 展讯通信(上海)有限公司 Method and device for improving HDR image quality
CN116193264B (en) * 2023-04-21 2023-06-23 中国传媒大学 Camera adjusting method and system based on exposure parameters

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3801126B2 (en) * 2002-09-25 2006-07-26 ソニー株式会社 Imaging apparatus, image output method of imaging apparatus, and computer program
JP6159105B2 (en) * 2013-03-06 2017-07-05 キヤノン株式会社 Imaging apparatus and control method thereof
US9485435B2 (en) * 2014-04-22 2016-11-01 Shih-Chieh Huang Device for synthesizing high dynamic range image based on per-pixel exposure mapping and method thereof
KR20160138685A (en) * 2015-05-26 2016-12-06 에스케이하이닉스 주식회사 Apparatus For Generating Low Complexity High Dynamic Range Image, and Method Thereof
CN109005367B (en) * 2018-10-15 2020-10-13 Oppo广东移动通信有限公司 High dynamic range image generation method, mobile terminal and storage medium

Also Published As

Publication number Publication date
CN110166706A (en) 2019-08-23


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant