CN110166711B - Image processing method, image processing apparatus, electronic device, and storage medium - Google Patents

Image processing method, image processing apparatus, electronic device, and storage medium

Info

Publication number
CN110166711B
CN110166711B
Authority
CN
China
Prior art keywords
image
exposure
frame
brightness
mode
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910509592.2A
Other languages
Chinese (zh)
Other versions
CN110166711A (en)
Inventor
晏秀梅 (Yan Xiumei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201910509592.2A priority Critical patent/CN110166711B/en
Publication of CN110166711A publication Critical patent/CN110166711A/en
Application granted granted Critical
Publication of CN110166711B publication Critical patent/CN110166711B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/61 - Control of cameras or camera modules based on recognised objects
    • H04N23/611 - Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/741 - Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 - Circuitry for compensating brightness variation in the scene
    • H04N23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H04N23/81 - Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image processing method, an image processing apparatus, an electronic device, and a storage medium. The method includes: switching to a night scene mode in response to a user operation; acquiring a preview image in the night scene mode; and, if a non-night scene is identified from the acquired preview image, adjusting the brightness of a captured image acquired in the night scene mode to reduce its brightness. In this method, when the user operates in a non-night scene, the captured image is still acquired in the night scene mode and its brightness is then adjusted, so that the captured image retains more detail in both highlight and dark areas and its imaging effect is improved.

Description

Image processing method, image processing apparatus, electronic device, and storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of science and technology, camera technology has matured, and in daily work and life taking pictures with the built-in cameras of intelligent mobile terminals (such as smart phones and tablet computers) has become commonplace. As photographing becomes routine, better satisfying users' photographing requirements has become a main direction of development, for example producing clear photographs across multiple scenes, both at night and in the daytime.
At present, when shooting an image in the daytime or in a bright scene, if the user sets the shooting mode of the mobile terminal to the night scene mode, the system automatically falls back to the common mode according to the result of scene detection, so the image the user intended to shoot in the night scene mode does not achieve the expected effect.
Disclosure of Invention
The present application is directed to solving, at least to some extent, one of the technical problems in the related art.
The application provides an image processing method and apparatus, an electronic device, and a storage medium, so that the electronic device can be controlled to shoot images in the night scene mode in the daytime or in a bright scene, allowing the captured image to retain more detail in highlight and dark areas and improving its imaging effect.
An embodiment of a first aspect of the present application provides an image processing method, including:
switching to a night scene mode in response to a user operation;
acquiring a preview image in the night scene mode;
and, if a non-night scene is identified from the acquired preview image, adjusting the brightness of a captured image acquired in the night scene mode to reduce the brightness of the captured image.
In the image processing method of this embodiment, the device switches to the night scene mode in response to a user operation, acquires a preview image in the night scene mode, and, if a non-night scene is identified from the acquired preview image, adjusts the brightness of the captured image acquired in the night scene mode to reduce its brightness. In this method, when the user operates in a non-night scene, the captured image is still acquired in the night scene mode and its brightness is adjusted, so that the captured image retains more detail in highlight and dark areas and its imaging effect is improved.
An embodiment of a second aspect of the present application provides an image processing apparatus, including:
the switching module is used for responding to user operation and switching to a night scene mode;
the acquisition module is used for acquiring a preview image in the night scene mode;
and the adjusting module is used for adjusting the brightness of the captured image acquired in the night scene mode to reduce its brightness if a non-night scene is identified from the acquired preview image.
The image processing apparatus of this embodiment switches to the night scene mode in response to a user operation, acquires a preview image in the night scene mode, and, if a non-night scene is identified from the acquired preview image, adjusts the brightness of the captured image acquired in the night scene mode to reduce its brightness. In this way, when the user operates in a non-night scene, the captured image is still acquired in the night scene mode and its brightness is adjusted, so that the captured image retains more detail in highlight and dark areas and its imaging effect is improved.
An embodiment of a third aspect of the present application provides an electronic device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and when the processor executes the computer program, the image processing method described in the foregoing embodiment is implemented.
A fourth aspect of the present application provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the image processing method as described in the above embodiments.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of a first image processing method according to an embodiment of the present application;
fig. 2 is a schematic flowchart of a second image processing method according to an embodiment of the present application;
fig. 3 is a schematic flowchart of a third image processing method according to an embodiment of the present application;
fig. 4 is a schematic flowchart of a fourth image processing method according to an embodiment of the present application;
fig. 5 is a schematic flowchart of a fifth image processing method according to an embodiment of the present application;
fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 8 is a schematic diagram of an electronic device according to an embodiment of the present disclosure;
fig. 9 is a schematic diagram of an image processing circuit according to an embodiment of the present disclosure.
Detailed Description
Reference will now be made in detail to embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
An image processing method, an apparatus, an electronic device, and a storage medium according to embodiments of the present application are described below with reference to the drawings.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present application.
The image processing method is applied to an electronic device, which may be a hardware device equipped with an operating system and an imaging device, such as a mobile phone, a tablet computer, a personal digital assistant, or a wearable device.
As shown in fig. 1, the image processing method includes the steps of:
and step 101, responding to user operation, and switching to a night scene mode.
In this embodiment of the application, the user sets the camera of the electronic device to the night scene mode by an operation, and the electronic device responds by switching the shooting mode of the imaging device to the night scene mode so as to shoot an image with night-scene characteristics. In this way, even in the daytime or in a bright scene, the image is acquired in the night scene mode, so the image retains more detail in highlight and dark areas and its imaging effect is improved.
For example, in a bright daytime scene, the electronic device sets the imaging device to the night scene mode in response to a user operation and, relying on optical principles, renders the strong daylight as weak light so as to create a night-time lighting effect in the resulting image.
And 102, acquiring a preview image in a night scene mode.
The preview image is an image displayed on a photographing interface of the imaging device.
In this embodiment of the application, after the imaging device of the electronic device has switched to the night scene mode, the preview interface can be displayed in response to the user's shooting operation while the imaging device acquires images in the night scene mode. The preview image is shown on the preview interface of the electronic device, and the preview image acquired by the imaging device is obtained, so that the user can clearly see the imaging effect of each frame during image acquisition.
And 103, if the non-night scene is identified according to the acquired preview image, adjusting the brightness of the acquired shooting image in the night scene mode to reduce the brightness of the shooting image.
In the embodiment of the application, whether a shooting scene for acquiring the preview image is a night scene or a non-night scene can be identified according to the preview image acquired by the electronic equipment in the night scene mode.
As a possible implementation manner, since the image contents of the preview images in different shooting scenes are different, the image content of the preview image of the current shooting scene may be identified to determine whether the current shooting scene belongs to a non-night scene.
As an example, if the image content of the preview image includes sunlight or the like, or if the ambient brightness values across the areas of the preview image conform to the brightness distribution characteristic of images taken in a non-night environment, the current shooting scene can be determined to be a non-night scene.
As another possible implementation manner, whether the current shooting scene is a non-night scene may be determined according to the sensitivity of the acquired preview image. Specifically, when the sensitivity of the acquired preview image is less than or equal to the sensitivity threshold, the current shooting scene is determined to be a non-night scene. The sensitivity threshold is determined according to whether a human face exists in the preview image.
As another possible implementation manner, after a preview image acquired by the electronic device in the night view mode is acquired, image feature extraction is performed on the preview image, the extracted image feature is input into the recognition model, and it is determined that the current shooting scene belongs to a non-night view scene according to a scene type output by the recognition model, where the recognition model has learned to obtain a corresponding relationship between the image feature and the scene type.
As another possible implementation, when a user operation for switching to a non-night scene is detected, the ambient brightness is detected to obtain brightness information; as one possibility, the current ambient brightness may be measured by a light-metering module built into the electronic device to determine the brightness information of the current environment. Whether the current shooting scene belongs to a non-night scene is then determined from this brightness information. For example, the brightness may be measured by a brightness index Lux_index, where a larger value represents a lower scene brightness. The obtained brightness information is compared with a preset brightness value; if it is larger than the preset value, the current shooting scene is determined to be a night scene. Conversely, if the obtained brightness information is smaller than the preset value, the current shooting scene is determined to be a non-night scene.
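As a rough illustration of the sensitivity- and brightness-based checks above, the following sketch combines them; the threshold values, the Lux_index convention (larger means darker), and the function name are assumptions for illustration, not values given in the patent.

```python
def is_non_night_scene(iso, lux_index, has_face,
                       iso_threshold_face=800, iso_threshold_no_face=1000,
                       lux_threshold=400):
    """Return True when the preview frame appears to come from a non-night scene.

    iso        -- sensitivity used for the preview frame
    lux_index  -- metered brightness index; larger values mean a darker scene
    has_face   -- whether a face was detected in the preview frame
    All threshold values are illustrative placeholders.
    """
    # Sensitivity check: a low preview ISO suggests a bright (non-night) scene.
    iso_threshold = iso_threshold_face if has_face else iso_threshold_no_face
    if iso <= iso_threshold:
        return True
    # Brightness check: a small lux_index means a bright scene.
    return lux_index < lux_threshold

# e.g. is_non_night_scene(200, 250, has_face=False) -> True
```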
In this embodiment of the application, after the current shooting scene is identified as a non-night scene from the acquired preview image, the brightness of the captured image acquired in the night scene mode is adjusted to reduce its brightness. In this way, an image with the look of a night scene is obtained even though it was shot in the daytime or in a bright environment.
As a possible implementation manner, when the brightness of the captured image acquired in the night view mode is adjusted, the captured image may be input into the corresponding brightness adjustment model according to whether the captured image includes a human face, so as to obtain the captured image with reduced brightness.
It should be noted that, in a bright scene, the electronic device is controlled to use the night scene mode to collect non-night-scene images that do not contain a face and non-night-scene images that do contain a face; these serve as separate training sample sets for training brightness adjustment models, yielding one brightness adjustment model for captured images containing a face and one for captured images without a face.
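A minimal sketch of how the face-dependent model selection described above might be wired up; the function and the idea of passing the two pre-trained models as callables are assumptions, and no specific model architecture is implied.

```python
def reduce_brightness(captured_image, has_face, face_model, no_face_model):
    """Pick the brightness-adjustment model by face presence and apply it.

    face_model / no_face_model are assumed to be callables trained offline on
    night-mode shots of bright scenes with and without faces respectively.
    """
    model = face_model if has_face else no_face_model
    return model(captured_image)  # returns the brightness-reduced image
```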
In the image processing method of this embodiment, the device switches to the night scene mode in response to a user operation, acquires a preview image in the night scene mode, and, if a non-night scene is identified from the acquired preview image, adjusts the brightness of the captured image acquired in the night scene mode to reduce its brightness. In this method, when the user operates in a non-night scene, the captured image is still acquired in the night scene mode and its brightness is adjusted, so that the captured image retains more detail in highlight and dark areas and its imaging effect is improved.
On the basis of the embodiment shown in fig. 1, as a possible implementation manner, the exposure compensation mode for acquiring multiple frames of original images can be determined by identifying whether a preview image acquired in the night view mode contains a human face, and then the multiple frames of original images are acquired in the determined exposure compensation mode and synthesized to obtain a shot image. The above process is described in detail with reference to fig. 2, and fig. 2 is a flowchart illustrating a second image processing method according to an embodiment of the present application.
As shown in fig. 2, the following steps may be further included after step 102:
step 201, identifying whether the preview image contains a human face.
In the embodiment of the application, the face area identification is carried out on the preview image so as to identify whether the preview image contains a face or not.
As a possible implementation, a face detection algorithm, for example one based on skin colour or on facial features, may be used to identify the face region in the preview image; if a face region exists, a face identification frame is displayed around it, and otherwise no frame is displayed. Whether the preview image contains a face is then determined from the recognition result.
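One possible way to realise the face check, shown here with OpenCV's stock Haar-cascade detector purely as an illustration; the patent does not prescribe a particular detector.

```python
import cv2

def preview_contains_face(preview_bgr):
    """Return True if at least one face region is found in the preview frame."""
    gray = cv2.cvtColor(preview_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```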
Step 202, if the face is included, exposing to obtain a plurality of frames of original images according to the first exposure compensation mode.
And step 203, exposing to obtain a plurality of frames of original images according to a second exposure compensation mode if the human face is not included.
Wherein, the exposure compensation grade value upper limit of each frame of original image in the first exposure compensation mode is smaller than the exposure compensation grade value upper limit of each frame of original image in the second exposure compensation mode.
It should be noted that, when the preview image currently acquired by the electronic device is detected to contain a face, the light-metering module of the electronic device may automatically meter mainly on the face area and determine the reference exposure from that metering result. In the night scene mode, however, the illuminance of the face area is usually low, so the reference exposure determined in this way is higher than the one determined when no face is present; if too many over-exposed frames are still acquired in this case, the face area is easily over-exposed and the target image suffers. Therefore, for the same degree of shake, the exposure compensation mode used when the preview image contains a face needs a lower exposure compensation range than the one used when it does not.
In a possible implementation form of the embodiment of the application, for the same shaking degree, different exposure compensation strategies may be adopted according to whether a preview image currently acquired by the electronic device includes a human face. Therefore, for the same degree of shaking, it is possible to correspond to a plurality of exposure compensation modes. For example, the electronic device has a "slight shake" and the corresponding preset exposure compensation mode has a first mode and a second mode, where the EV values corresponding to the first mode are [0, -2, -4, -6] and the EV values corresponding to the second mode are [ +1, 0, -3, -6 ]. After the current shaking degree of the electronic equipment is determined and whether the preview image currently acquired by the electronic equipment contains the human face or not is determined, the preset exposure compensation mode which is consistent with the current actual situation can be determined.
For example, assuming that the current shake degree of the electronic device is "slight shake", the corresponding preset exposure compensation modes include a first exposure compensation mode and a second exposure compensation mode, wherein each EV value corresponding to the first exposure compensation mode is [0, -2, -4, -6], each EV value corresponding to the second exposure compensation mode is [ +1, 0, -3, -6], and it can be seen that the exposure compensation range of the first exposure compensation mode is smaller than that of the second mode. If the fact that the image acquired by the electronic equipment currently contains the face is detected, determining that the preset exposure compensation mode is the first exposure compensation mode, namely that each EV value is [0, -2, -4, -6 ]; if the fact that the face is not included in the image currently acquired by the electronic equipment is detected, the preset exposure compensation mode is determined to be a second exposure compensation mode, namely each EV value is [ +1, 0, -3, -6 ].
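A sketch of selecting the per-frame EV list from the shake degree and the face result, using the EV values from the example above; the dictionary keys and any further entries would be illustrative assumptions.

```python
# EV lists taken from the example above; other shake-degree entries would be
# added analogously and are not specified by the patent.
EV_MODES = {
    ("slight shake", True):  [0, -2, -4, -6],   # first mode: face present
    ("slight shake", False): [+1, 0, -3, -6],   # second mode: no face
}

def select_ev_mode(shake_degree, has_face):
    """Return the list of per-frame EV offsets for the current conditions."""
    return EV_MODES[(shake_degree, has_face)]

# e.g. select_ev_mode("slight shake", has_face=True) -> [0, -2, -4, -6]
```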
In the embodiment of the application, after the exposure compensation mode is determined according to whether the preview image contains the human face or not, the imaging equipment is controlled to be exposed according to the corresponding exposure compensation mode to obtain a plurality of frames of original images.
In the embodiments of the present application, an original image is the unprocessed RAW image acquired by the image sensor of the electronic device, that is, the image obtained by converting the light signal captured by the image sensor into a digital signal. A RAW image records the raw information collected by the camera sensor, together with metadata generated at capture time, such as the sensitivity, shutter speed, aperture value, and white balance settings.
And 204, synthesizing to obtain a shot image according to the multi-frame original image.
As a possible implementation, the collected frames can be combined by high-dynamic synthesis to obtain the captured image; that is, pictures of the same scene taken at different exposures are merged into a High Dynamic Range (HDR) image. Compared with an ordinary image, an HDR image provides a wider dynamic range and more image detail: from Low Dynamic Range (LDR) images taken at different exposure times, the LDR content with the best detail at each exposure is used to synthesize the final HDR image, which better reflects the visual impression of the real environment.
Specifically, the shot image is synthesized by extracting picture information in a plurality of frames of original images and superimposing the corresponding picture information.
It should be noted that, because the original frames are shot under different exposure parameters, they contain picture information at different brightness levels: for the same scene, different original images may be over-exposed, under-exposed, or properly exposed. After the original images are synthesized, every part of the captured image is exposed as properly as possible and is closer to the actual scene.
In the image processing method of this embodiment, whether the preview image contains a face is identified; if it does, multiple original frames are exposed according to the first exposure compensation mode, and if it does not, according to the second exposure compensation mode; the captured image is then synthesized from these frames. By adapting the per-frame exposure compensation to whether a face is present, the dynamic range and overall brightness of the image captured in the night scene mode are improved, noise in the captured image is effectively suppressed, its quality is raised, and the user experience is improved.
Based on the embodiment shown in fig. 2, as a possible implementation manner, when multiple frames of original images are obtained by exposure according to the determined exposure compensation mode, the reference exposure amount may be determined according to the brightness information of the preview image, the reference exposure duration may be determined according to the set reference sensitivity, the reference exposure duration may be further compensated according to the exposure compensation mode, and the compensation exposure duration corresponding to each frame of original image is determined, so as to obtain the corresponding original image by exposure. The above process is described in detail with reference to fig. 3, and fig. 3 is a flowchart illustrating a third image processing method according to an embodiment of the present application. As shown in fig. 3, the image processing method specifically includes the following steps:
step 301, determining a reference exposure amount according to the brightness information of the preview image.
The exposure amount refers to how much a photosensitive device in the electronic equipment receives light within an exposure time, and the exposure amount is related to an aperture, the exposure time and sensitivity. Wherein, the aperture, namely the clear aperture, determines the quantity of light passing in unit time; the exposure duration refers to the time when light passes through the lens; the sensitivity, also called ISO value, is an index for measuring the sensitivity of the negative film to light, and is used for representing the photosensitive speed of the photosensitive element, and the higher the ISO value is, the stronger the photosensitive capability of the photosensitive element is.
Specifically, a preview image of a current shooting scene is acquired through an image sensor, and the ambient light brightness of each area of the preview image is further obtained through measurement of a photosensitive device, so that the reference exposure amount is determined according to the brightness information of the preview image. In the case where the aperture is fixed, the reference exposure amount may specifically include a reference exposure time period and a reference sensitivity.
In the embodiment of the present application, the reference exposure amount refers to an exposure amount that is determined to be suitable for luminance information of a current environment after luminance information of a current shooting scene is obtained by performing photometry on a preview image, and a value of the reference exposure amount may be a product of reference sensitivity and reference exposure duration.
In step 302, a reference exposure time period required to reach the reference exposure amount is determined according to the set reference sensitivity.
In the embodiment of the present application, the reference sensitivity may be a sensitivity that is set according to a frame shaking degree of the preview image and is suitable for a current shaking degree; the reference sensitivity corresponding to the current shake degree may be set according to the current shake degree of the image sensor that captures the preview image, and is not limited herein. The reference sensitivity may range from 100ISO to 200 ISO.
For example, if it is determined that the image sensor for capturing the preview image has a "shake-free" degree, the reference sensitivity may be determined to be a smaller value to obtain an image with a higher quality as much as possible, such as a reference sensitivity of 100 ISO; if the shake degree of the image sensor for acquiring the preview image is determined to be "slight shake", the reference sensitivity may be determined to be a larger value to reduce the shooting time length, for example, the reference sensitivity is determined to be 120 ISO; if the shaking degree of the image sensor for acquiring the preview image is determined to be small shaking, the reference sensitivity can be further increased to reduce the shooting time length, for example, the reference sensitivity is determined to be 180 ISO; if the shake degree of the image sensor for acquiring the preview image is determined to be "large shake", it may be determined that the current shake degree is too large, and at this time, the reference sensitivity may be further increased to reduce the shooting time duration, for example, the reference sensitivity is determined to be 200 ISO.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, when the shake degree of the image sensor for acquiring the preview image is changed, the reference sensitivity may be changed to obtain an optimal solution. The mapping relation between the jitter degree of the image sensor for acquiring the preview image and the reference sensitivity corresponding to each frame of image to be acquired can be preset according to actual needs.
Note that, when the reference sensitivity corresponding to the degree of shake is adjusted in accordance with the degree of shake of the imaging apparatus, if the current reference sensitivity is just adapted to the degree of shake, the result of the adjustment is that the reference sensitivity remains unchanged. This also falls within the scope of "adjustment" in the embodiments of the present application.
In addition, in one possible application scenario the camera module of the imaging device is composed of multiple lenses, and different lenses may correspond to different sensitivities in the same shooting environment. For a shooting process performed with one of those lenses, the reference sensitivity adjusted in this step is the same for all frames, that is, the same reference sensitivity is used to capture the multiple frames.
In addition, in the embodiment of the present application, the reference sensitivity is not limited to be adjusted only according to the shake degree of the imaging device, and may also be determined comprehensively according to a plurality of parameters such as the shake degree and the luminance information of the shooting scene, which is not limited herein.
In the embodiment of the application, the picture shaking degree of the preview image and the shaking degree of the image sensor for collecting the preview image are in a positive correlation, and the implementation process of setting the reference sensitivity according to the picture shaking degree of the preview image is referred to in the above process, which is not described herein again.
In this embodiment, the value of the reference exposure may be a product of the reference sensitivity and the reference exposure time. Therefore, after the reference exposure amount is determined according to the brightness of the shooting scene and the reference sensitivity is determined according to the shake degree, the reference exposure time period required for reaching the reference exposure amount can be determined according to the reference exposure amount and the reference sensitivity.
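A sketch of this computation, assuming the shake-degree-to-sensitivity mapping from the prose example above and the reference exposure expressed as the ISO-times-duration product; the names and units are illustrative.

```python
SHAKE_TO_ISO = {          # illustrative mapping following the example above
    "no shake":     100,
    "slight shake": 120,
    "small shake":  180,
    "large shake":  200,
}

def reference_exposure_duration(reference_exposure, shake_degree):
    """reference_exposure is the metered ISO*ms product for the scene."""
    reference_iso = SHAKE_TO_ISO[shake_degree]
    return reference_exposure / reference_iso   # reference duration in ms

# e.g. reference_exposure_duration(12000, "slight shake") -> 100.0 ms at 120 ISO
```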
And 303, compensating the reference exposure duration according to the exposure compensation level corresponding to each frame of original image to obtain the exposure duration of each frame of original image.
Exposure compensation brightens or darkens the image frame by means of the aperture and shutter speed. That is, in aperture-priority mode, increasing the exposure compensation is actually achieved by lowering the shutter speed, and decreasing it by raising the shutter speed. In shutter-priority mode, increasing the exposure compensation is actually achieved by opening up the aperture (up to the maximum aperture the lens can reach).
In the embodiment of the application, the exposure compensation grade corresponding to each acquired frame of original image can be set according to the brightness of the image in the preview picture, and the reference exposure duration is compensated according to the set exposure compensation grade, so that the exposure duration of each frame of original image is obtained.
For example, if the light-to-dark ratio of the image in the preview picture is 1:1, no exposure compensation is needed; if the light-to-dark ratio is 1:2, exposure compensation of -0.3 is applied; if it is 2:1, exposure compensation of +0.3 is applied; if it is 1:3, exposure compensation of -0.3 is applied. In short, the larger the light-to-dark imbalance, the larger the exposure compensation value, which of course cannot exceed the camera's exposure compensation range.
And step 304, exposing to obtain a corresponding original image according to the reference sensitivity and the exposure time of each frame of original image.
In the embodiment of the application, the reference sensitivity of each frame of original image is determined, the reference exposure duration is compensated according to the exposure compensation level corresponding to each frame of original image, and after the exposure duration of each frame of original image is obtained, the imaging equipment is controlled to perform exposure to obtain the corresponding original image.
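A sketch of the per-frame compensation in steps 303 and 304, assuming the usual convention that each EV step doubles or halves the exposure (the patent does not spell out the exact compensation formula).

```python
def per_frame_exposure_durations(reference_duration_ms, ev_offsets):
    """Compensate the reference duration for each frame's EV offset.

    Assumes +1 EV doubles and -1 EV halves the exposure; the sensitivity is
    held at the reference value for every frame.
    """
    return [reference_duration_ms * (2.0 ** ev) for ev in ev_offsets]

# e.g. per_frame_exposure_durations(100, [0, -2, -4, -6])
#      -> [100.0, 25.0, 6.25, 1.5625] (ms)
```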
The image processing method of the embodiment of the application determines the reference exposure amount according to the brightness information of the preview image, determines the reference exposure time required by reaching the reference exposure amount according to the set reference sensitivity, compensates the reference exposure time according to the exposure compensation grade corresponding to each frame of original image to obtain the exposure time of each frame of original image, and exposes to obtain the corresponding original image according to the reference sensitivity and the exposure time of each frame of original image. Therefore, the dynamic range and the overall brightness of the shot image in the night scene mode are improved, the noise in the image is effectively inhibited, the ghost caused by handheld shaking is inhibited, the quality of the shot image is improved, and the user experience is improved.
Based on the embodiment shown in fig. 3, in another possible scenario, the reference exposure duration is compensated according to the exposure compensation level corresponding to each frame of original image, and after the exposure duration of each frame of original image is obtained, the exposure duration of each frame of original image is compared with the set lower limit of duration, so as to adjust the exposure duration less than the lower limit of duration according to the lower limit of exposure duration. The above process is described in detail with reference to fig. 4, where fig. 4 is a schematic flow chart of a fourth night-scene image processing method provided in the embodiment of the present application, and as shown in fig. 4, after step 303, the following steps are further included:
step 401, comparing the exposure time of each frame of original image with a set time lower limit.
As a possible case, the exposure time length lower limit may be determined according to the degree of shake of the electronic device. Specifically, in order to determine the degree of shaking, displacement information may be collected according to a displacement sensor provided in the electronic device, and then, the degree of shaking of the electronic device may be determined according to the collected displacement information of the electronic device. Further, the determined shaking degree of the electronic equipment is compared with a preset shaking threshold value to determine the lower limit of the exposure time.
In the embodiment of the application, after the reference exposure duration is compensated according to the set exposure compensation level to obtain the exposure duration of each frame of original image, the exposure duration of each frame of original image is compared with the set duration lower limit, so that the exposure duration smaller than the duration lower limit is adjusted according to the duration lower limit. Wherein the lower limit of the time length is greater than or equal to 10 ms.
Step 402, if there is an original image with an exposure duration less than the lower limit of the duration, increasing the exposure duration to the lower limit of the duration.
In this embodiment of the application, if the exposure duration corresponding to an original image to be acquired is less than the lower duration limit, the noise in that image may be too large to eliminate. Therefore, when the exposure duration corresponding to a frame of original image to be acquired is less than the lower duration limit, it is increased to the lower duration limit.
In step 403, the ratio of the exposure time after the increase to the exposure time before the increase is determined.
For example, if the preset lower limit of the duration is equal to 10ms, the exposure duration before the exposure duration is increased is 8ms, and the exposure duration corresponding to the original image is increased to the preset lower limit of the duration 10ms, it may be determined that the ratio of the exposure duration after the exposure duration is increased to that before the exposure duration is increased is 10/8.
And step 404, updating the corresponding exposure duration or reference sensitivity of the original images of the other frames with the exposure duration not less than the lower limit of the duration according to the ratio.
Specifically, for the remaining frames of original images whose exposure duration is not less than the lower limit of the duration, after determining the ratio of the exposure duration of the original image less than the lower limit of the duration after being increased to before being increased, multiplying the ratio by the sensitivity or the exposure duration of the remaining frames of original images before being updated, and taking the product as the sensitivity or the exposure duration of the remaining frames of original images after being updated.
As an example, suppose there are 4 original frames to be captured whose exposure durations are not less than the lower limit, their preset sensitivities are all 100 ISO as determined from the shake degree of the electronic device, and their exposure durations are 100 ms, 200 ms, 400 ms and 800 ms respectively. If the frame that fell below the lower limit had its exposure duration raised from 1.5 ms to 10 ms, the ratio of the raised to the original duration is 10 ms/1.5 ms, that is 20/3, so the exposure durations of the 4 frames to be acquired are expanded to 20/3 times the original 100 ms, 200 ms, 400 ms and 800 ms.
The sensitivity is updated in the same way, simply replacing the exposure duration with the sensitivity. Note that only one of the exposure duration and the sensitivity may be updated according to the ratio between the raised and original exposure duration of the under-limit frame; if both need to be updated at the same time, the ratio must first be apportioned by weight. For example, giving the exposure duration and the sensitivity equal weight, if the ratio between the raised and original exposure duration of the under-limit frame is R, the exposure duration is expanded to R/2 times its original value and the sensitivity to R/2 times its original value.
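A sketch of the clamp-and-rescale logic of steps 401 to 404, assuming a single frame falls below the lower limit as in the example above; the choice between updating the other frames' durations or their sensitivity is exposed as a flag, and all names are illustrative.

```python
def clamp_and_rescale(durations_ms, reference_iso, lower_limit_ms=10.0,
                      update="duration"):
    """Raise under-limit exposure durations to the lower limit and scale the
    remaining frames (or their sensitivity) by the same ratio.

    Assumes a single frame falls below the limit, as in the 1.5 ms -> 10 ms
    example; with several such frames a per-frame policy would be needed.
    """
    ratio = 1.0
    clamped = set()
    new_durations = list(durations_ms)
    for i, d in enumerate(durations_ms):
        if d < lower_limit_ms:
            ratio = lower_limit_ms / d          # e.g. 10 / 1.5 = 20/3
            new_durations[i] = lower_limit_ms
            clamped.add(i)
    if update == "duration":
        new_durations = [d if i in clamped else d * ratio
                         for i, d in enumerate(new_durations)]
        return new_durations, reference_iso
    # Otherwise leave the other durations unchanged and raise the sensitivity
    # used for them instead.
    return new_durations, reference_iso * ratio
```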
In this embodiment of the application, the exposure duration of each frame of original image is compared with the set lower duration limit; if an original image has an exposure duration less than the lower limit, its exposure duration is increased to the lower limit, the ratio of the increased to the original exposure duration is determined, and for the remaining frames whose exposure duration is not less than the lower limit, the corresponding exposure duration or reference sensitivity is updated according to that ratio. In this way the exposure durations for acquiring the original images are determined, the reference sensitivity and exposure duration of each frame are updated according to the lower exposure-duration limit, and exposure control and imaging are finally performed with the updated per-frame exposure durations and sensitivities, improving the dynamic range and overall brightness of the image captured in the night scene mode, effectively suppressing its noise, raising the quality of the night-scene image, and improving the user experience.
On the basis of the embodiment shown in fig. 2, in a possible scenario, the multiple frames of original images acquired by exposure may include at least two first images with the same exposure and at least one second image with an exposure lower than that of the first image, and then the high dynamic range image is synthesized according to the at least two acquired first images and the at least one second image. The above process is described in detail with reference to fig. 5, and fig. 5 is a flowchart illustrating a fifth image processing method according to an embodiment of the present application. As shown in fig. 5, the method specifically includes the following steps:
step 501, according to at least two frames of first images, generating a first task for multi-frame noise reduction to obtain a synthesized noise-reduced image.
Multi-frame noise reduction acquires multiple frames through the image sensor, identifies the pixels that behave as noise across the different frames, and obtains a clean, pure image after weighted synthesis.
In the embodiment of the application, in order to reduce noise in an image acquired in a night view mode, a first task for multi-frame noise reduction to obtain a synthesized noise-reduced image may be generated according to at least two first images, and then a processor of an electronic device executes the first task to perform multi-frame noise reduction on the at least two first images to obtain the synthesized noise-reduced image.
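A minimal stand-in for the first task; a plain mean over aligned, equally exposed frames is used here instead of the weighted, noise-aware fusion described in the text.

```python
import numpy as np

def multi_frame_denoise(frames):
    """Fuse equally exposed, pre-aligned frames by averaging.

    frames is a list of HxWxC uint8 arrays; the plain mean is an illustrative
    simplification of the weighted synthesis described above.
    """
    stack = np.stack([f.astype(np.float32) for f in frames], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)
```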
Step 502, a second task for determining high dynamic synthesis weight information is generated according to a target image selected from at least two frames of first images and at least one frame of second image.
In the embodiment of the application, the definition of the at least two first images can be judged according to the definition of the image, the at least two first images are further screened, and the image with the highest definition is selected as the target image. And generating a second task for determining high dynamic synthesis weight information according to the target image and at least one frame of second image.
Step 503, executing the first task and executing the second task in parallel.
In this embodiment of the application, in the process of shooting the image in the night scene mode, a first task for multi-frame noise reduction to obtain the composite noise-reduced image is generated from the at least two first images, and a second task for determining the high-dynamic synthesis weight information is generated from the target image selected from the at least two first images and the at least one second image. The first task and the second task may be distributed to different processors of the electronic device, such as the Central Processing Unit (CPU), the Graphics Processing Unit (GPU), and the Digital Signal Processor (DSP), to be executed in parallel and so increase the image processing speed.
For example, the first task may be assigned to the CPU of the electronic device, which executes it to perform multi-frame noise reduction on the at least two first images and obtain the composite noise-reduced image. Specifically, when the electronic device shoots through the image sensor, at least two first images are collected; the number and positions of noise points are calculated and screened across these frames, pixels at noise positions are replaced with values from frames free of noise at those positions, and a clean composite noise-reduced image is obtained through repeated weighting and replacement. By having the CPU execute the first task, dark-area detail in the image is handled gently, and more image detail is retained while noise is reduced.
Meanwhile, the second task can be distributed to a DSP of the electronic device, and the DSP executes the second task to determine high dynamic synthesis weight information according to a target image selected from at least two frames of first images and at least one frame of second image.
Specifically, the at least one frame of second image and the target image are subjected to high-dynamic synthesis to determine the weight occupied by the at least one frame of second image and the target image in different areas in the synthesized image. Since the target image is the image with the highest definition in the first images of at least two frames, and the information of the image is retained to the maximum extent, the weight of the target image can be used as the weight of the synthesized noise-reduced image. And generating high dynamic synthesis weight information according to the weight of the synthesized noise reduction image and the weight of at least one frame of second image.
And step 504, synthesizing at least one frame of second image and the synthesized noise-reduced image according to the high dynamic synthesis weight information determined by the second task to obtain a high dynamic range image.
In this embodiment of the application, the at least one second image and the composite noise-reduced image are synthesized according to the high-dynamic synthesis weight information determined by the second task to obtain the high dynamic range image. For example, if the composite noise-reduced image is obtained by multi-frame noise reduction of several EV0 frames, it may be over-exposed in high-brightness regions while its medium- and low-brightness regions are properly exposed; the EV value of the at least one second image is usually negative, so its high-brightness regions are properly exposed while its medium- and low-brightness regions may be under-exposed. By blending the parts of the different images that correspond to the same region according to the weight information, every region of the result can be properly exposed, improving imaging quality.
It should be noted that, since the noise of the image has been effectively reduced in the synthesized noise-reduced image, and the information of the image is retained to the maximum extent, after the high-dynamic synthesis is performed on at least one frame of second image, the obtained high-dynamic range image contains more picture information, and is closer to the actual scene.
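A sketch of running the two tasks in parallel and blending the results; it reuses the multi_frame_denoise helper from the earlier sketch, and the luminance-threshold weight map is a simple heuristic standing in for the high-dynamic synthesis weights, not the patent's actual weighting.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

def estimate_weights(target_image, second_image, highlight_threshold=220):
    """Per-pixel weight for the under-exposed frame: 1 where the target frame
    is near clipping, 0 elsewhere (an illustrative heuristic)."""
    gray = target_image.astype(np.float32).mean(axis=2)
    return (gray >= highlight_threshold).astype(np.float32)

def fuse_hdr(first_images, second_image):
    """Run the two tasks in parallel, then blend the results."""
    with ThreadPoolExecutor(max_workers=2) as pool:
        # First task: multi-frame noise reduction (multi_frame_denoise from
        # the earlier sketch).
        denoise_job = pool.submit(multi_frame_denoise, first_images)
        # Second task: weight estimation from a target frame (here simply
        # first_images[0], standing in for the sharpest frame) and the
        # under-exposed second image.
        weight_job = pool.submit(estimate_weights, first_images[0], second_image)
        denoised = denoise_job.result()
        weights = weight_job.result()
    # Blend: the under-exposed frame dominates in highlights, the denoised
    # composite elsewhere.
    w = weights[..., None]
    out = w * second_image.astype(np.float32) + (1.0 - w) * denoised.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```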
According to the image processing method, a first task used for multi-frame noise reduction to obtain a synthesized noise-reduced image is generated according to at least two frames of first images, a second task used for determining high-dynamic synthesis weight information is generated according to a target image selected from the at least two frames of first images and at least one frame of second image, the first task and the second task are executed in parallel, and at least one frame of second image and the synthesized noise-reduced image are synthesized according to the high-dynamic synthesis weight information determined by the second task to obtain the high-dynamic range image. Therefore, the first task and the second task are executed in parallel, so that the noise reduction image synthesis and the high dynamic synthesis weight information generation are processed in parallel, the image processing time is shortened, the image processing speed is increased, the imaging speed is increased, and the photographing experience of a user is improved.
In order to implement the above embodiments, the present application also provides an image processing apparatus.
Fig. 6 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 6, the image processing apparatus 100 includes: a switching module 110, an acquisition module 120, and an adjustment module 130.
And a switching module 110, configured to switch to a night scene mode in response to a user operation.
The acquisition module 120 is configured to acquire the preview image in the night scene mode.
The adjusting module 130 is configured to adjust the brightness of the captured image acquired in the night view mode to reduce the brightness of the captured image if the captured preview image is identified as a non-night view scene.
As a possible implementation manner, the image processing apparatus 100 further includes:
and the identification module is used for identifying the image content of the preview image.
And the first determining module is used for determining the scene as the non-night scene according to the image content.
As another possible implementation manner, the image processing apparatus 100 further includes:
and the second determining module is used for determining the sensitivity of the preview image.
And the third determining module is used for determining the non-night scene if the light sensitivity of the preview image is less than or equal to the light sensitivity threshold.
As another possible implementation, the sensitivity threshold is determined according to whether a human face exists in the preview image.
As another possible implementation manner, the image processing apparatus 100 further includes:
and the second identification module is used for identifying whether the preview image contains a human face.
The processing module is used for exposing to obtain a plurality of frames of original images according to a first exposure compensation mode if the face is included; if the face is not included, exposing to obtain a plurality of frames of original images according to a second exposure compensation mode; wherein, the exposure compensation grade value upper limit of each frame of original image in the first exposure compensation mode is smaller than the exposure compensation grade value upper limit of each frame of original image in the second exposure compensation mode.
And the synthesis module is used for synthesizing the shot image according to the multi-frame original image.
As another possible implementation manner, the processing module may further include:
a first determination unit configured to determine a reference exposure amount based on the luminance information of the preview image.
A second determination unit configured to determine a reference exposure time period required to reach the reference exposure amount, based on the set reference sensitivity.
And the compensation unit is used for compensating the reference exposure duration according to the exposure compensation grade corresponding to each frame of original image to obtain the exposure duration of each frame of original image.
And the exposure unit is used for exposing to obtain a corresponding original image according to the reference sensitivity and the exposure duration of each frame of original image.
As another possible implementation, the reference sensitivity is determined according to the degree of shaking.
As another possible implementation manner, the original images of the frames comprise at least two first images with the same exposure and at least one second image with the exposure lower than that of the first image; a synthesis module specifically configured to:
generating a first task for multi-frame noise reduction to obtain a synthesized noise-reduced image according to at least two frames of first images; generating a second task for determining high dynamic synthesis weight information according to a target image selected from at least two frames of first images and at least one frame of second image; executing the first task and the second task in parallel; and synthesizing at least one frame of second image and the synthesized noise reduction image according to the high dynamic synthesis weight information determined by the second task to obtain a high dynamic range image.
As another possible implementation manner, the adjusting module 130 is specifically configured to:
and inputting the shot image into a corresponding brightness adjustment model according to whether the shot image contains the human face or not to obtain the shot image with reduced brightness.
It should be noted that the foregoing explanation of the embodiment of the image processing method is also applicable to the image processing apparatus of this embodiment, and is not repeated here.
The image processing apparatus of the embodiment of the application switches to the night scene mode in response to a user operation, collects a preview image in the night scene mode, and, if a non-night scene is identified according to the collected preview image, adjusts the brightness of the shot image collected in the night scene mode to reduce its brightness. In this way, even when the user triggers the night scene mode in a non-night scene and the shot image is collected in the night scene mode, the brightness of the shot image is adjusted so that more details are retained in both the highlight areas and the dark areas, which improves the imaging effect of the shot image.
In order to implement the above embodiments, the present application also proposes an electronic device, which includes a memory, a processor, and a computer program stored on the memory and executable on the processor; when the processor executes the program, the image processing method as described in the above embodiments is implemented.
As an example, the present application also proposes an electronic device 200, see fig. 7, comprising an image sensor 210 and a processor 220, the image sensor 210 being electrically connected to the processor 220, and the processor 220 executing the program to implement the image processing method as described in the above embodiments.
As one possible scenario, the processor 220 may include an image signal processing (ISP) processor and a graphics processing unit (GPU) connected to the ISP processor.
As an example, please refer to fig. 8, which, on the basis of the electronic device illustrated in fig. 7, is a schematic diagram of an electronic device according to an embodiment of the present application. The memory 230 of the electronic device 200 includes the non-volatile memory 80 and the internal memory 82. The memory 230 stores computer readable instructions which, when executed by the processor 220, cause the processor 220 to perform the image processing method of any of the above embodiments.
As shown in fig. 8, the electronic device 200 includes a processor 220, a non-volatile memory 80, an internal memory 82, a display screen 83, and an input device 84, which are connected via a system bus 81. The non-volatile memory 80 of the electronic device 200 stores an operating system and computer readable instructions. The computer readable instructions can be executed by the processor 220 to implement the image processing method of the embodiments of the present application. The processor 220 is used to provide computing and control capabilities that support the operation of the entire electronic device 200. The internal memory 82 of the electronic device 200 provides an environment for the execution of the computer readable instructions stored in the non-volatile memory 80. The display screen 83 of the electronic device 200 may be a liquid crystal display or an electronic ink display, and the input device 84 may be a touch layer covering the display screen 83, a button, a trackball or a touch pad arranged on the housing of the electronic device 200, or an external keyboard, touch pad or mouse. The electronic device 200 may be a mobile phone, a tablet computer, a notebook computer, a personal digital assistant, or a wearable device (e.g., a smart bracelet, a smart watch, a smart helmet, or smart glasses). Those skilled in the art will appreciate that the structure shown in fig. 8 is merely a schematic diagram of the portion of the structure related to the present application and does not constitute a limitation on the electronic device 200 to which the present application is applied; a particular electronic device 200 may include more or fewer components than shown in the drawings, combine certain components, or have a different arrangement of components.
To implement the foregoing embodiments, an image processing circuit is further provided in the present application; please refer to fig. 9, which is a schematic diagram of an image processing circuit according to an embodiment of the present application. As shown in fig. 9, the image processing circuit 90 includes an image signal processing (ISP) processor 91 (the ISP processor 91 serves as the processor 220) and a graphics processor GPU.
The image data captured by the camera 93 is first processed by the ISP processor 91, which analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the camera 93. The camera 93 may include one or more lenses 932 and an image sensor 934. The image sensor 934 may include a color filter array (e.g., a Bayer filter array); the image sensor 934 may acquire the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that may be processed by the ISP processor 91. The sensor 94 (e.g., a gyroscope) may provide parameters for processing the acquired image (e.g., anti-shake parameters) to the ISP processor 91 based on the type of the sensor 94 interface. The sensor 94 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination thereof.
In addition, the image sensor 934 may also send the raw image data to the sensor 94; the sensor 94 may then provide the raw image data to the ISP processor 91 based on the type of the sensor 94 interface, or the sensor 94 may store the raw image data in the image memory 95.
The ISP processor 91 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 91 may perform one or more image processing operations on the raw image data and gather statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
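As a simple illustration of pixel-wise handling at different bit depths, the sketch below normalizes raw samples and gathers a couple of basic statistics of the kind used for exposure control; the statistic names and the clipping threshold are assumptions for this example only.

```python
# Illustrative sketch: normalize raw pixel data of a given bit depth and
# gather simple statistics. The chosen statistics and the 0.98 clipping
# threshold are assumed placeholders.
import numpy as np

def normalize_raw(raw: np.ndarray, bit_depth: int) -> np.ndarray:
    """Scale raw sensor values with the given bit depth (8/10/12/14) to [0, 1]."""
    if bit_depth not in (8, 10, 12, 14):
        raise ValueError("unsupported bit depth")
    return raw.astype(np.float32) / float((1 << bit_depth) - 1)

def basic_statistics(raw: np.ndarray, bit_depth: int) -> dict:
    normalized = normalize_raw(raw, bit_depth)
    return {
        "mean": float(normalized.mean()),                    # overall brightness
        "clipped_ratio": float((normalized > 0.98).mean()),  # near-saturated pixels
    }
```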
The ISP processor 91 may also receive image data from the image memory 95. For example, the sensor 94 interface sends raw image data to the image memory 95, and the raw image data in the image memory 95 is then provided to the ISP processor 91 for processing. The image memory 95 may be the memory 330, a portion of the memory 330, a storage device, or a separate dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 934 interface, from the sensor 94 interface, or from the image memory 95, the ISP processor 91 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 95 for additional processing before being displayed. The ISP processor 91 receives the processed data from the image memory 95 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by the ISP processor 91 may be output to the display 97 (the display 97 may include the display screen 83) for viewing by a user and/or further processed by a graphics engine or GPU. Further, the output of the ISP processor 91 may also be sent to the image memory 95, and the display 97 may read image data from the image memory 95. In one embodiment, the image memory 95 may be configured to implement one or more frame buffers. Further, the output of the ISP processor 91 may be transmitted to an encoder/decoder 96 for encoding/decoding the image data. The encoded image data may be saved and decompressed before being displayed on the display 97. The encoder/decoder 96 may be implemented by a CPU, a GPU, or a coprocessor.
The statistical data determined by the ISP processor 91 may be sent to the control logic 92 unit. For example, the statistical data may include image sensor 934 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 932 shading correction, and the like. The control logic 92 may include a processing element and/or microcontroller that executes one or more routines (e.g., firmware) that determine control parameters of the camera 93 and control parameters of the ISP processor 91 based on the received statistical data. For example, the control parameters of camera 93 may include sensor 94 control parameters (e.g., gain, integration time for exposure control, anti-shake parameters, etc.), camera flash control parameters, lens 932 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 932 shading correction parameters.
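For illustration, the feedback role of the control logic could be sketched as below: given a brightness statistic from the ISP processor, the integration time and gain are nudged toward a target. The target value, the caps, and the split between integration time and gain are assumptions made for this example.

```python
# Illustrative sketch: adjust exposure control parameters from an ISP
# brightness statistic. Target brightness, time cap and gain range are
# assumed placeholders.

def update_exposure_control(stats_mean: float,
                            integration_time: float,
                            gain: float,
                            target_mean: float = 0.45) -> tuple[float, float]:
    """Return new (integration_time, gain) nudged toward the target brightness."""
    if stats_mean <= 0.0:
        return integration_time, gain
    ratio = target_mean / stats_mean
    # Prefer adjusting integration time; fall back to gain once time is capped.
    new_time = min(integration_time * ratio, 0.05)           # cap at 50 ms (assumed)
    residual = ratio * integration_time / new_time
    new_gain = max(1.0, min(gain * residual, 16.0))          # clamp gain (assumed range)
    return new_time, new_gain
```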
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method as described in the above embodiments.
It will be understood by those skilled in the art that all or part of the steps of the above method embodiments may be implemented by a program instructing related hardware; the program may be stored in a computer readable storage medium and, when executed, performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in the embodiments of the present application may be integrated into one processing module, or each unit may exist alone physically, or two or more units may be integrated into one module. The integrated module may be implemented in the form of hardware, or in the form of a software functional module. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may also be stored in a computer readable storage medium.
The storage medium mentioned above may be a read-only memory, a magnetic or optical disk, etc. Although embodiments of the present application have been shown and described above, it is understood that the above embodiments are exemplary and should not be construed as limiting the present application, and that variations, modifications, substitutions and alterations may be made to the above embodiments by those of ordinary skill in the art within the scope of the present application.

Claims (9)

1. An image processing method, characterized in that it comprises the steps of:
responding to user operation, and switching to a night scene mode;
acquiring a preview image in the night scene mode;
if the non-night scene is identified according to the acquired preview image, adjusting the brightness of the shot image acquired in the night scene mode to reduce the brightness of the shot image;
after the preview image is acquired in the night scene mode, the method further comprises:
identifying whether the preview image contains a human face;
if a human face is included, exposing to obtain a plurality of frames of original images according to a first exposure compensation mode;
if no human face is included, exposing to obtain a plurality of frames of original images according to a second exposure compensation mode; wherein the upper limit of the exposure compensation level value of each frame of original image in the first exposure compensation mode is smaller than that of each frame of original image in the second exposure compensation mode;
synthesizing to obtain the shot image according to the multiple frames of original images;
wherein the exposing to obtain a plurality of frames of original images comprises:
determining a reference exposure amount according to the brightness information of the preview image;
determining the reference exposure duration required for reaching the reference exposure amount according to the set reference sensitivity;
compensating the reference exposure duration according to the exposure compensation level corresponding to each frame of original image to obtain the exposure duration of each frame of original image;
exposing to obtain a corresponding original image according to the reference sensitivity and the exposure duration of each frame of original image;
the reference sensitivity is determined according to the degree of shaking and brightness information of a shooting scene.
2. The image processing method according to claim 1, wherein before adjusting the brightness of the shot image acquired in the night scene mode if a non-night scene is identified according to the acquired preview image, the method further comprises:
identifying image content of the preview image;
and determining the scene as a non-night scene according to the image content.
3. The image processing method according to claim 1, wherein before adjusting the brightness of the shot image acquired in the night scene mode if a non-night scene is identified according to the acquired preview image, the method further comprises:
determining a sensitivity of the preview image;
and if the sensitivity of the preview image is less than or equal to a sensitivity threshold, determining the scene as a non-night scene.
4. The image processing method according to claim 3, wherein the sensitivity threshold is determined according to whether or not a human face exists in the preview image.
5. The image processing method according to claim 1, wherein the plurality of frames of original images comprise at least two frames of first images with the same exposure and at least one frame of second image with an exposure lower than that of the first images;
the synthesizing of the shot image according to the multiple frames of original images comprises:
generating a first task for multi-frame noise reduction to obtain a synthesized noise-reduced image according to the at least two frames of first images;
generating a second task for determining high dynamic synthesis weight information according to the target image selected from the at least two frames of first images and the at least one frame of second image;
executing the first task and executing the second task in parallel;
and synthesizing the at least one frame of second image and the synthesized noise-reduced image according to the high dynamic synthesis weight information determined by the second task to obtain a high dynamic range image.
6. The image processing method according to any one of claims 1 to 4, wherein the adjusting the brightness of the shot image acquired in the night scene mode to reduce the brightness of the shot image comprises:
inputting the shot image into a corresponding brightness adjustment model according to whether the shot image contains a human face, to obtain the shot image with reduced brightness.
7. An image processing apparatus, characterized in that the apparatus comprises:
the switching module is used for responding to user operation and switching to a night scene mode;
the acquisition module is used for acquiring the preview image in the night scene mode;
the adjusting module is used for adjusting the brightness of the shot image collected in the night scene mode to reduce the brightness of the shot image if a non-night scene is identified according to the collected preview image;
the processing module is used for identifying whether the preview image contains a human face; exposing to obtain a plurality of frames of original images according to a first exposure compensation mode if a human face is included; exposing to obtain a plurality of frames of original images according to a second exposure compensation mode if no human face is included, wherein the upper limit of the exposure compensation level value of each frame of original image in the first exposure compensation mode is smaller than that of each frame of original image in the second exposure compensation mode; and synthesizing to obtain the shot image according to the multiple frames of original images;
the processing module is specifically used for determining a reference exposure amount according to the brightness information of the preview image; determining the reference exposure duration required for reaching the reference exposure amount according to the set reference sensitivity; compensating the reference exposure duration according to the exposure compensation level corresponding to each frame of original image to obtain the exposure duration of each frame of original image; and exposing to obtain the corresponding original image according to the reference sensitivity and the exposure duration of each frame of original image;
the reference sensitivity is determined according to the degree of shaking and brightness information of a shooting scene.
8. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method according to any one of claims 1 to 6 when executing the program.
9. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the image processing method of any one of claims 1 to 6.
CN201910509592.2A 2019-06-13 2019-06-13 Image processing method, image processing apparatus, electronic device, and storage medium Active CN110166711B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910509592.2A CN110166711B (en) 2019-06-13 2019-06-13 Image processing method, image processing apparatus, electronic device, and storage medium

Publications (2)

Publication Number Publication Date
CN110166711A (en) 2019-08-23
CN110166711B (en) 2021-07-13

Family

ID=67628853

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910509592.2A Active CN110166711B (en) 2019-06-13 2019-06-13 Image processing method, image processing apparatus, electronic device, and storage medium

Country Status (1)

Country Link
CN (1) CN110166711B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112532855B (en) * 2019-09-17 2022-04-29 华为技术有限公司 Image processing method and device
CN110611750B (en) * 2019-10-31 2022-03-22 北京迈格威科技有限公司 Night scene high dynamic range image generation method and device and electronic equipment
CN110958401B (en) * 2019-12-16 2022-08-23 北京迈格威科技有限公司 Super night scene image color correction method and device and electronic equipment
CN116055855B (en) * 2022-07-28 2023-10-31 荣耀终端有限公司 Image processing method and related device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101277394A (en) * 2007-02-19 2008-10-01 精工爱普生株式会社 Information processing method, information processing apparatus and program
CN101778220A (en) * 2010-03-01 2010-07-14 华为终端有限公司 Method for automatically switching over night scene mode and image pickup device
CN107220956A * 2017-04-18 2017-09-29 天津大学 An HDR image fusion method for LDR images with different exposures
CN107517351A * 2017-10-18 2017-12-26 广东小天才科技有限公司 A method and device for switching exposure modes
CN108989700A (en) * 2018-08-13 2018-12-11 Oppo广东移动通信有限公司 Image formation control method, device, electronic equipment and computer readable storage medium
CN109005366A (en) * 2018-08-22 2018-12-14 Oppo广东移动通信有限公司 Camera module night scene image pickup processing method, device, electronic equipment and storage medium

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101046567A (en) * 2004-02-13 2007-10-03 钰瀚科技股份有限公司 LCD brightness compensating method and device
EP1959668A3 (en) * 2007-02-19 2009-04-22 Seiko Epson Corporation Information processing method, information processing apparatus, and program
JP2012119858A (en) * 2010-11-30 2012-06-21 Aof Imaging Technology Ltd Imaging device, imaging method, and program
CN108900782B (en) * 2018-08-22 2020-01-24 Oppo广东移动通信有限公司 Exposure control method, exposure control device and electronic equipment

Also Published As

Publication number Publication date
CN110166711A (en) 2019-08-23

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant