CN109218627B - Image processing method, image processing device, electronic equipment and storage medium - Google Patents

Image processing method, image processing device, electronic equipment and storage medium

Info

Publication number
CN109218627B
CN109218627B
Authority
CN
China
Prior art keywords
image
exposure
frame image
target
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811087069.7A
Other languages
Chinese (zh)
Other versions
CN109218627A (en)
Inventor
李小朋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201811087069.7A priority Critical patent/CN109218627B/en
Publication of CN109218627A publication Critical patent/CN109218627A/en
Application granted granted Critical
Publication of CN109218627B publication Critical patent/CN109218627B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The application provides an image processing method and device, electronic equipment and a storage medium, and belongs to the technical field of imaging. The method comprises the following steps: determining the current target exposure according to the illuminance of the current shooting scene; determining a target aperture value according to the difference between the target exposure and a preset exposure; adjusting the size of the aperture in the camera module according to the target aperture value; sequentially collecting multiple frames of images according to a preset exposure compensation mode once the aperture in the camera module reaches the target aperture value; and synthesizing the collected frames to generate a target image. The image processing method therefore improves the dynamic range, overall brightness and quality of the captured image, keeps the exposure duration of shooting at the preset value even when the illuminance of the scene differs, and improves the user experience.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present application relates to the field of imaging technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the development of science and technology, intelligent mobile terminals (such as smart phones and tablet computers) are increasingly popular. Most smart phones and tablet computers have built-in cameras, and as the processing capability of mobile terminals and camera technology have advanced, the built-in cameras have become more and more capable and the quality of the captured images higher and higher. Mobile terminals are simple to operate and easy to carry, and taking pictures with smart phones, tablet computers and other mobile terminals has become part of everyday life.
While intelligent mobile terminals make everyday photography convenient, users' expectations for image quality keep rising. However, most users lack the professional knowledge needed to set appropriate shooting parameters for a given scene, so it is difficult for them to capture images as good as those from a professional camera, and the quality of the captured images is especially poor in special scenes such as rain, backlighting and night scenes.
In the related art, when shooting a night scene, the poor illumination conditions mean that the quality of the captured image has to be improved through an exposure compensation strategy; as a result, the exposure duration of shooting varies with the ambient illuminance, which degrades the user experience.
Disclosure of Invention
The image processing method, the image processing device, the electronic device and the storage medium of the present application are intended to solve the problem in the related art that, when shooting a night scene, the poor illumination conditions require the quality of the captured image to be improved through an exposure compensation strategy, so that the exposure duration of shooting differs as the ambient illuminance differs, which affects the user experience.
An embodiment of an aspect of the present application provides an image processing method, including: determining the current target exposure according to the illuminance of the current shooting scene; determining a target aperture value according to the difference value between the target exposure and a preset exposure; adjusting the size of an aperture in the camera module according to the target aperture value; sequentially collecting multiple frames of images according to a preset exposure compensation mode when the size of an aperture in the camera module reaches the target aperture value; and synthesizing the collected multi-frame images to generate a target image.
Another embodiment of the present application provides an image processing apparatus, including: the first determination module is used for determining the current target exposure according to the illuminance of the current shooting scene; the second determination module is used for determining a target aperture value according to the difference value of the target exposure amount and a preset exposure amount; the adjusting module is used for adjusting the size of the aperture in the camera module according to the target aperture value; the acquisition module is used for sequentially acquiring multi-frame images according to a preset exposure compensation mode when the size of the aperture in the camera module reaches the target aperture value; and the synthesis module is used for synthesizing the collected multi-frame images to generate a target image.
An embodiment of another aspect of the present application provides an electronic device, which includes a camera module, a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the image processing method described above when executing the program.
In yet another aspect, the present application provides a computer-readable storage medium, on which a computer program is stored, wherein the computer program is executed by a processor to implement the image processing method as described above.
In another aspect of the present application, a computer program is provided, which is executed by a processor to implement the image processing method according to the embodiment of the present application.
According to the image processing method, the image processing apparatus, the electronic device, the computer-readable storage medium and the computer program, the current target exposure is determined according to the illuminance of the current shooting scene, the target aperture value is determined according to the difference between the target exposure and a preset exposure, the size of the aperture in the camera module is then adjusted to the target aperture value, and multiple frames of images are subsequently collected in sequence according to a preset exposure compensation mode and synthesized to generate a target image. By adjusting the size of the aperture in the camera module according to the illuminance of the current shooting scene and then collecting and synthesizing multiple frames according to the preset exposure compensation mode, the dynamic range and overall brightness of the captured image are improved and its quality is raised, and the exposure duration of shooting stays at the preset value even when the illuminance of the scene differs, which improves the user experience.
Additional aspects and advantages of the present application will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the present application.
Drawings
The foregoing and/or additional aspects and advantages of the present application will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure;
fig. 2 is a schematic flowchart of another image processing method according to an embodiment of the present disclosure;
fig. 3 is a schematic flowchart of another image processing method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
Reference will now be made in detail to the embodiments of the present application, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the like or similar elements throughout. The embodiments described below with reference to the drawings are exemplary and intended to be used for explaining the present application and should not be construed as limiting the present application.
In view of the problem in the related art that, when shooting a night scene, the poor illumination conditions require the quality of the captured image to be improved through an exposure compensation strategy, so that the exposure duration of shooting differs as the ambient illuminance differs and the user experience is affected, the embodiments of the present application provide an image processing method.
The image processing method provided by the embodiments of the present application determines the current target exposure according to the illuminance of the current shooting scene, determines a target aperture value according to the difference between the target exposure and a preset exposure, adjusts the size of the aperture in the camera module to the target aperture value, and then sequentially collects multiple frames of images according to a preset exposure compensation mode and synthesizes them to generate a target image. By adjusting the size of the aperture in the camera module according to the illuminance of the current shooting scene and then collecting and synthesizing multiple frames according to the preset exposure compensation mode, the dynamic range and overall brightness of the captured image are improved and its quality is raised, and the exposure duration of shooting stays at the preset value even when the illuminance of the scene differs, which improves the user experience.
The image processing method, apparatus, electronic device, storage medium, and computer program provided by the present application are described in detail below with reference to the accompanying drawings.
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present disclosure.
As shown in fig. 1, the image processing method includes the following steps:
step 101, determining the current target exposure according to the illumination of the current shooting scene.
In the embodiment of the present application, a photometric module in the camera module may be used to measure the illuminance of the current shooting scene, and an Automatic Exposure Control (AEC) algorithm may be used to determine the current target exposure.
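The text does not specify the AEC computation itself; purely as an illustration, the sketch below assumes a simple model in which the target exposure scales inversely with the metered illuminance relative to a reference point. All names and constants are hypothetical and not part of the patent.

```python
# Illustrative sketch only: a simple auto-exposure (AEC) rule in which the
# target exposure scales inversely with the metered scene illuminance.
# The patent states only that AEC derives the target exposure from the metered
# illuminance; the model and all names below are assumptions.

REFERENCE_EXPOSURE = 1.0        # assumed exposure that is correct at the reference illuminance
REFERENCE_ILLUMINANCE = 100.0   # lux at which REFERENCE_EXPOSURE is correct (assumed)

def target_exposure_from_illuminance(scene_illuminance_lux: float) -> float:
    """Return the target exposure for the current scene.

    Darker scenes (lower illuminance) need proportionally more exposure to
    reach the same image brightness, so in this simplified model the exposure
    scales inversely with the metered illuminance.
    """
    if scene_illuminance_lux <= 0:
        raise ValueError("illuminance must be positive")
    return REFERENCE_EXPOSURE * (REFERENCE_ILLUMINANCE / scene_illuminance_lux)
```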
And 102, determining a target aperture value according to the difference value of the target exposure amount and a preset exposure amount.
The preset exposure amount is a preset reference exposure amount at the time of shooting.
The aperture is the device, usually located in the lens, that controls how much light passes through the lens onto the light-sensing surface in the camera body. Its size is expressed by the aperture coefficient (F value); the full-stop scale is: F/1.0, F/1.4, F/2.0, F/2.8, F/4.0, F/5.6, F/8.0, F/11, F/16, F/22, F/32, F/44, F/64. Adjacent stops differ by a factor of about 1.4 (an approximation of 1.414, the square root of 2): between two adjacent stops, the diameter of the light opening differs by a factor of 1.4, its area differs by a factor of two, the brightness of the image differs by a factor of two, and the time required to maintain the same exposure also differs by a factor of two. The aperture determines the amount of light entering the lens: the smaller the number after F, the larger the aperture and the more light is admitted; conversely, the larger the number, the less light. With the exposure time unchanged, a smaller F value means a larger aperture, more light and a brighter picture, while a larger F value means a smaller aperture and a darker picture.
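The full-stop scale described above can be expressed compactly; the sketch below is only an illustration of that relationship (each stop multiplies the F-number by about the square root of 2 and halves the admitted light), not code from the patent.

```python
import math

# Sketch of the full-stop aperture scale described above: each step multiplies
# the F-number by about sqrt(2) (~1.414), which halves the aperture area and
# therefore the light admitted per unit time.

def f_number_at_stop(stop_index: int, base_f_number: float = 1.0) -> float:
    """F-number that is `stop_index` full stops smaller (dimmer) than the base."""
    return base_f_number * (math.sqrt(2) ** stop_index)

def relative_light(f_number: float, reference_f_number: float = 1.0) -> float:
    """Light admitted relative to the reference aperture (area ratio ~ 1/N^2)."""
    return (reference_f_number / f_number) ** 2

full_stop_scale = [round(f_number_at_stop(i), 1) for i in range(10)]
# -> approximately [1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0, 11.3, 16.0, 22.6]
# (the conventional marked values 5.6, 11, 16 and 22 are rounded forms of these)
```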
In the embodiment of the application, during actual shooting, the target exposure is determined according to the illuminance of the current shooting scene, and the size of the aperture can then be adjusted according to the difference between the target exposure and the preset exposure, so that the amount of light entering the camera module in the current shooting scene is changed and the target exposure becomes the same as, or close to, the preset exposure.
It should be noted that, in order to ensure that the target aperture value does not exceed the adjustable range of the aperture in the camera module, a plurality of exposure values can be preset according to different ranges of the illuminance of the shooting scene. That is, in a possible implementation form of the embodiment of the present application, before the step 102, the method may further include:
and determining the preset exposure according to the illuminance of the current shooting scene, wherein the preset exposure is a target exposure corresponding to the illuminance of the current shooting scene measured by the camera module under a preset aperture value.
In the embodiment of the present application, the aperture value refers to the F value of the aperture. The preset aperture value can be the middle value of the adjustable range of the aperture in the camera module, so that when the target exposure amount is inconsistent with the preset exposure amount, the target aperture value is ensured to be in the adjustable range of the aperture. For example, if the adjustable range of the aperture in the camera module is F/1.0-F/64, F/8.0 can be determined as the predetermined aperture value.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. When the device is actually used, the aperture value is preset, and can be preset according to actual needs or experience, and the embodiment of the application does not limit the aperture value.
It should be noted that, in the embodiment of the present application, a plurality of exposure values may be preset, and a light intensity threshold of a shooting scene may be preset, so as to determine a preset exposure corresponding to the current shooting scene according to a relationship between the light intensity of the current shooting scene and the threshold.
For example, assume that there are A, B, C preset exposures, and a > B > C, and a first threshold and a second threshold of the illumination of the shooting scene are preset, and the first threshold is smaller than the second threshold. If the illuminance of the current shooting scene is smaller than the first threshold, determining that the preset exposure value corresponding to the current shooting scene is A; if the illuminance of the current shooting scene is greater than the first threshold and less than the second threshold, determining that the preset exposure value corresponding to the current shooting scene is B; if the illuminance of the current shooting scene is greater than the second threshold, it may be determined that the preset exposure value corresponding to the current shooting scene is C.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. In actual use, the exposure value, the number of the illuminance threshold values and the specific numerical value can be preset according to actual needs or experience so as to divide the illuminance range more finely.
In the embodiment of the application, after the preset exposure corresponding to the current shooting scene is determined, the target aperture value can be determined according to the difference between the target exposure and the preset exposure. Specifically, if the target exposure is 2 times the preset exposure, the target aperture value is reduced by one stop relative to the preset aperture value; if the target exposure is 4 times the preset exposure, the target aperture value is reduced by two stops relative to the preset aperture value; if the target exposure is 0.5 times the preset exposure, the target aperture value is increased by one stop relative to the preset aperture value; and so on.
For example, if the preset aperture value is F/8.0 and the target exposure amount of the current shooting scene is 4 times of the preset exposure amount, the target aperture value is determined to be F/4.0.
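Putting the preceding pieces together, the sketch below illustrates one possible reading of steps 101 and 102: a preset exposure is selected by illuminance thresholds, and the target aperture value is shifted by one stop per factor of two between the target and preset exposures. The threshold and exposure constants are hypothetical, since the patent leaves the concrete numbers open.

```python
import math

# Illustrative sketch of steps 101-102 as described above, with hypothetical
# threshold and exposure values. One factor of two between the target and the
# preset exposure corresponds to one full aperture stop.

FIRST_THRESHOLD = 10.0     # lux, assumed
SECOND_THRESHOLD = 100.0   # lux, assumed
PRESET_EXPOSURE_A = 4.0    # assumed relative exposures with A > B > C
PRESET_EXPOSURE_B = 2.0
PRESET_EXPOSURE_C = 1.0
PRESET_F_NUMBER = 8.0      # mid-point of the assumed adjustable range F/1.0-F/64

def preset_exposure(illuminance_lux: float) -> float:
    """Pick the preset exposure for the scene, as in the A/B/C example above."""
    if illuminance_lux < FIRST_THRESHOLD:
        return PRESET_EXPOSURE_A
    if illuminance_lux < SECOND_THRESHOLD:
        return PRESET_EXPOSURE_B
    return PRESET_EXPOSURE_C

def target_f_number(target_exposure: float, preset: float,
                    preset_f: float = PRESET_F_NUMBER) -> float:
    """Open or close the aperture by one stop per factor of two between exposures.

    For example, a target exposure of 4x the preset at a preset aperture of F/8.0
    gives two stops more light, i.e. F/4.0.
    """
    stops = math.log2(target_exposure / preset)
    return preset_f / (math.sqrt(2) ** stops)
```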
And 103, adjusting the size of the aperture in the camera module according to the target aperture value.
And step 104, sequentially collecting multiple frames of images according to a preset exposure compensation mode when the size of the aperture in the camera module reaches the target aperture value.
In the embodiment of the application, after the target aperture value is determined, if the size of the aperture in the current camera module is inconsistent with the target aperture value, the size of the aperture in the camera module needs to be adjusted to the target aperture value, and then, multi-frame images can be sequentially collected according to a preset exposure compensation mode.
It should be noted that in a possible implementation form of the embodiment of the present application, a dynamic range and an overall brightness of a captured image are improved by a manner of respectively capturing multiple frames of images with different exposure amounts and synthesizing the captured multiple frames of images to generate a target image, so as to improve quality of the captured image. The number of the images to be collected can be preset in advance according to actual needs, and the number of the preset images to be collected can be one group or multiple groups. If the number of the preset images to be acquired is multiple groups, the number of the images to be acquired can be determined in real time according to the specific situation of the current shooting scene.
For example, the number of images to be captured may be determined according to the shake degree of the camera module. It can be understood that the number of captured images affects the overall shooting duration; if the shooting takes too long, the shake of a handheld camera module becomes worse and the image quality suffers. The number of images to be captured can therefore be determined according to the current shake degree of the camera module so that the shooting duration stays within a suitable range. Specifically, if the current shake degree of the camera module is small, more frames can be captured to improve the quality of the shot; if the current shake degree is large, fewer frames can be captured to shorten the shooting duration.
In the embodiment of the present application, the preset exposure compensation mode refers to a combination of exposure values (EV for short) preset for each frame of image to be captured. In its original definition, an exposure value does not denote a single exact number but "all combinations of camera aperture and exposure time that give the same exposure". Sensitivity, aperture and exposure time together determine the exposure of the camera, and different combinations of these parameters can produce equal exposures, i.e. the same EV value; for example, at the same sensitivity, a combination of a 1/125 second exposure time and an F/11 aperture gives the same exposure, and hence the same EV value, as a combination of a 1/250 second exposure time and an F/8.0 aperture. An EV value of 0 corresponds to the exposure obtained with a sensitivity of 100, an aperture coefficient of F/1.0 and an exposure time of 1 second. Increasing the exposure by one step, i.e. doubling the exposure time, doubling the sensitivity, or opening the aperture by one stop, increases the EV value by 1; in other words, the exposure corresponding to 1 EV is twice the exposure corresponding to 0 EV. Table 1 shows the correspondence between the EV value and the exposure time, aperture and sensitivity when each is varied individually.
TABLE 1 (the table itself appears only as an image in the original publication)
In the digital photography era, in-camera metering has become very capable. EV is now often used to denote one step on the exposure scale, and many cameras allow exposure compensation to be set, usually expressed in EV. In this usage, EV refers to the difference between the exposure indicated by the camera's metering and the actual exposure; for example, an exposure compensation of +1 EV means one step more exposure than the metered value, i.e. the actual exposure is twice the exposure corresponding to the camera's metering data.
In the embodiment of the present application, in the preset exposure compensation mode, the EV value corresponding to the determined preset exposure may be set to 0; +1 EV then indicates one step more exposure, i.e. 2 times the preset exposure, +2 EV indicates two steps more, i.e. 4 times the preset exposure, -1 EV indicates one step less, i.e. 0.5 times the preset exposure, and so on.
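The convention above can be summarised in one line; the following sketch is only an illustration of that arithmetic, not code from the patent.

```python
# Sketch of the exposure-compensation convention described above: 0 EV denotes
# the preset exposure, each +1 EV doubles it and each -1 EV halves it.

def exposure_from_ev(ev_offset: float, preset_exposure: float = 1.0) -> float:
    """Exposure corresponding to an EV offset relative to the preset exposure."""
    return preset_exposure * (2.0 ** ev_offset)

assert exposure_from_ev(+1) == 2.0   # +1 EV -> 2x the preset exposure
assert exposure_from_ev(+2) == 4.0   # +2 EV -> 4x
assert exposure_from_ev(-1) == 0.5   # -1 EV -> 0.5x
```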
Further, there can be more than one preset exposure compensation mode; in actual use, the exposure compensation mode that suits the current conditions can be determined according to the real-time state of the camera module. That is, in a possible implementation form of the embodiment of the present application, before the step 104, the method may further include:
and determining the preset exposure compensation mode according to the current shaking degree of the camera module.
In the embodiment of the application, the current shaking degree of the mobile phone, that is, the current shaking degree of the camera module, can be determined by acquiring the current gyroscope (Gyro-sensor) information of the electronic device.
A gyroscope, also called an angular velocity sensor, measures the angular velocity of rotation when a device is deflected or tilted. In electronic equipment the gyroscope measures rotation and deflection well, so the actual motion of the user can be analysed and judged accurately. The gyroscope information (gyro information) of the electronic device may include motion information in the three dimensions of three-dimensional space, which can be expressed as the X-axis, Y-axis and Z-axis directions, the three axes being mutually perpendicular.
It should be noted that, in a possible implementation form of the embodiment of the present application, the current shake degree of the camera module may be determined according to the current gyro information of the electronic device. The larger the absolute value of gyro motion of the electronic apparatus in three directions is, the larger the degree of shake of the camera module is. Specifically, absolute value thresholds of gyro motion in three directions may be preset, and the current shake degree of the camera module may be determined according to a relationship between the sum of the acquired absolute values of gyro motion in the three directions and the preset threshold.
For example, it is assumed that the preset thresholds are a third threshold a, a fourth threshold B, and a fifth threshold C, where a < B < C, and the sum of absolute values of gyro motion in three directions currently acquired is S. If S is less than A, determining that the current shaking degree of the camera module is 'no shaking'; if A < S < B, the current shaking degree of the camera module can be determined to be 'slight shaking'; if B < S < C, the current shaking degree of the camera module can be determined to be 'small shaking'; if S > C, the current shaking degree of the camera module can be determined to be large shaking.
It should be noted that the above examples are only illustrative and should not be construed as limiting the present application. During actual use, the number of the threshold values and the specific numerical values of the threshold values can be preset according to actual needs, and the mapping relation between gyro information and the jitter degree of the camera module can be preset according to the relation between the gyro information and the threshold values.
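As an illustration of the classification just described, the sketch below sums the absolute gyro readings on the three axes and compares the sum against three thresholds; the threshold values themselves are hypothetical, since the patent leaves them open.

```python
# Sketch of the shake classification described above: sum the absolute gyro
# readings on the X, Y and Z axes and compare the sum against three thresholds
# A < B < C. The concrete threshold values are assumptions.

THIRD_THRESHOLD_A = 0.1    # rad/s, assumed
FOURTH_THRESHOLD_B = 0.5   # rad/s, assumed
FIFTH_THRESHOLD_C = 2.0    # rad/s, assumed

def shake_degree(gyro_x: float, gyro_y: float, gyro_z: float) -> str:
    """Classify the current shake degree of the camera module from gyro readings."""
    s = abs(gyro_x) + abs(gyro_y) + abs(gyro_z)
    if s < THIRD_THRESHOLD_A:
        return "no shake"
    if s < FOURTH_THRESHOLD_B:
        return "slight shake"
    if s < FIFTH_THRESHOLD_C:
        return "small shake"
    return "large shake"
```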
It can be understood that different current shake degrees of the camera module can lead to different numbers of images to be captured, and different numbers of images to be captured require different exposure compensation modes. Therefore, in a possible implementation form of the embodiment of the application, a mapping between the shake degree of the camera module and the exposure compensation mode can be preset, so that the preset exposure compensation mode corresponding to the current number of images to be captured is determined according to the current shake degree of the camera module.
For example, when the shake degree of the camera module is "no shake", the EV range of the corresponding exposure compensation mode may be preset to -6 to +2 with a difference of 0.5 between adjacent EV values; when the shake degree is "slight shake", the EV range may be preset to -5 to +1 with a difference of 1 between adjacent EV values; and so on.
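A minimal sketch of such a mapping follows, expanding the example ranges into explicit EV lists; the entries for "small shake" and "large shake" are not given in the text and are purely assumed placeholders.

```python
# Sketch of the mapping from shake degree to a preset exposure-compensation
# mode, expanding the example ranges above into explicit EV sequences.
# Only the "no shake" and "slight shake" ranges are stated in the text; the
# other two entries are placeholders added for illustration.

def ev_sequence(start: float, stop: float, step: float) -> list[float]:
    """EV values from `start` to `stop` inclusive, in increments of `step`."""
    n = int(round((stop - start) / step)) + 1
    return [round(start + i * step, 2) for i in range(n)]

EXPOSURE_COMPENSATION_MODES = {
    "no shake":     ev_sequence(-6.0, 2.0, 0.5),   # -6 .. +2 EV, step 0.5
    "slight shake": ev_sequence(-5.0, 1.0, 1.0),   # -5 .. +1 EV, step 1
    "small shake":  ev_sequence(-4.0, 1.0, 1.0),   # assumed placeholder
    "large shake":  ev_sequence(-3.0, 0.0, 1.0),   # assumed placeholder
}
```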
In the embodiment of the application, after the preset exposure compensation mode corresponding to the current shooting scene is determined, multiple frames of images can be sequentially collected according to the target aperture value and the preset exposure compensation mode.
And 105, synthesizing the collected multi-frame images to generate a target image.
In the embodiment of the application, after the multi-frame images are collected, the multi-frame images can be synthesized to generate the target image.
The image processing method provided by the embodiment of the application determines the current target exposure according to the illuminance of the current shooting scene, determines the target aperture value according to the difference between the target exposure and the preset exposure, adjusts the size of the aperture in the camera module to the target aperture value, and then sequentially collects multiple frames of images according to the preset exposure compensation mode and synthesizes them to generate a target image. By adjusting the size of the aperture in the camera module according to the illuminance of the current shooting scene and then collecting and synthesizing multiple frames according to the preset exposure compensation mode, the dynamic range and overall brightness of the captured image are improved and its quality is raised, and the exposure duration of shooting stays at the preset value even when the illuminance of the scene differs, which improves the user experience.
In a possible implementation form of the application, after the preset exposure compensation mode corresponding to the current shooting scene is determined, the exposure time of each frame of image to be captured is determined according to parameters such as the sensitivity and the preset exposure, so that multiple frames with different exposure times are captured and the quality of the shot image is improved.
Another image processing method provided in the embodiment of the present application is further described below with reference to fig. 2.
Fig. 2 is a schematic flowchart of another image processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the image processing method includes the following steps:
step 201, determining the corresponding sensitivity of the multi-frame image according to the current jitter degree of the camera module.
Sensitivity, also called the ISO value, is an index of how sensitive film is to light. A film with lower sensitivity needs a longer exposure time to achieve the same image as a film with higher sensitivity. The sensitivity of a digital camera is an analogous index: the ISO of a digital camera can be adjusted by changing the sensitivity of the photosensitive device or by combining photosensitive points, i.e. the ISO can be raised by increasing the light sensitivity of the photosensitive device or by combining several adjacent photosensitive points. It should be noted that in both digital and film photography, using a relatively high sensitivity to shorten the exposure time generally introduces more noise and thus reduces image quality.
In the embodiment of the application, the lowest sensitivity suited to the current shake degree can be determined according to the current shake degree of the camera module; multiple frames are then captured at that sensitivity and synthesized to generate the target image. This improves the dynamic range and overall brightness of the shot, and controlling the sensitivity effectively suppresses noise in the image and improves the quality of the captured image.
For example, if it is determined that the current shake degree of the camera module is "no shake", it may be determined that the camera module may be in a tripod shooting mode, and at this time, the reference sensitivity may be determined to be a smaller value, so as to obtain an image with higher quality as much as possible, for example, the reference sensitivity is determined to be 100; if the current shake degree of the camera module is determined to be 'slight shake', the camera module can be determined to be possibly in a handheld shooting mode at present, and the reference sensitivity can be determined to be a larger value so as to reduce the shooting time length, for example, the reference sensitivity is determined to be 200; if the current shake degree of the camera module is determined to be small shake, the camera module can be determined to be possibly in a handheld shooting mode, and the reference sensitivity can be further increased to reduce the shooting time length, for example, the reference sensitivity is determined to be 220; if the current shake degree of the camera module is determined to be "large shake", it may be determined that the current shake degree is too large, and at this time, the reference sensitivity may be further increased to reduce the shooting time duration, for example, the reference sensitivity is determined to be 250.
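The example values above can be collected into a simple lookup table; the sketch below merely restates them and is not an implementation from the patent.

```python
# Sketch collecting the example reference-sensitivity choices above into a
# lookup table: the steadier the camera module, the lower the ISO, trading
# a longer capture time for less noise.

REFERENCE_ISO_BY_SHAKE = {
    "no shake":     100,   # tripod-like: prioritise image quality
    "slight shake": 200,   # handheld: shorten the total capture time
    "small shake":  220,
    "large shake":  250,
}

def reference_sensitivity(shake: str) -> int:
    """Return the reference ISO for the given shake degree."""
    return REFERENCE_ISO_BY_SHAKE[shake]
```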
Step 202, determining a reference exposure time according to the preset exposure and the corresponding sensitivity of the multi-frame image.
The exposure amount is related to the aperture, sensitivity, and exposure time. Therefore, after the corresponding light sensitivity of the multi-frame image to be collected is determined, the reference exposure time length can be determined according to the preset exposure amount, the preset aperture value and the light sensitivity.
Step 203, determining the exposure duration corresponding to each frame of image in the multiple frames of images according to the preset exposure compensation mode and the reference exposure duration.
In the embodiment of the application, after the reference exposure time length is determined, the exposure time length of each frame of the image to be acquired can be determined according to the preset exposure compensation mode and the reference exposure time length.
Specifically, if the exposure compensation mode corresponding to the image to be acquired is +1EV, the exposure duration of the image to be acquired is 2 times of the reference duration; if the exposure compensation mode corresponding to the image to be acquired is-1 EV, the exposure time length of the image to be acquired is 0.5 times of the reference time length, and so on.
For example, assuming that the number of images to be captured is 7 frames, the EV values of the corresponding preset exposure compensation mode may be [+1, +1, +1, +1, 0, -3, -6], and if the reference exposure time determined from the preset exposure and the sensitivity is 100 milliseconds, the exposure times of the frames to be captured are 200 milliseconds, 100 milliseconds, 12.5 milliseconds, and 6.25 milliseconds.
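As an illustration of steps 202 and 203, the sketch below derives per-frame exposure durations from a reference duration and an EV list. The proportionality used to obtain the reference duration (exposure proportional to duration times ISO at a fixed aperture) and the example EV list are assumptions made for the sketch, not values fixed by the patent.

```python
# Sketch of steps 202-203: derive a reference exposure duration from the preset
# exposure and the chosen ISO, then scale it per frame by 2**EV. The concrete
# proportionality in `reference_duration_ms` (exposure ~ duration x ISO at a
# fixed aperture) is an assumption for illustration.

def reference_duration_ms(preset_exposure: float, iso: int,
                          exposure_per_iso100_ms: float = 100.0) -> float:
    """Duration that realises the preset exposure at the given ISO (assumed model)."""
    return exposure_per_iso100_ms * preset_exposure * (100.0 / iso)

def frame_durations_ms(ev_mode: list[float], reference_ms: float) -> list[float]:
    """Per-frame exposure durations: +1 EV doubles the reference, -1 EV halves it."""
    return [reference_ms * (2.0 ** ev) for ev in ev_mode]

# With a hypothetical 7-frame mode and a 100 ms reference duration:
durations = frame_durations_ms([+1, +1, +1, +1, 0, -3, -4], 100.0)
# -> [200.0, 200.0, 200.0, 200.0, 100.0, 12.5, 6.25] milliseconds
```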
And 204, sequentially collecting multiple frames of images according to the exposure time corresponding to each frame of image and the sensitivity corresponding to the multiple frames of images.
Step 205, synthesizing the collected multi-frame images to generate a target image.
It can be understood that after the exposure time of each frame of image to be acquired is determined, multiple frames of images can be sequentially acquired and subjected to synthesis processing according to the sensitivity, the target aperture value and the exposure time of each frame of image to be acquired, so as to generate a target image.
The image processing method provided by the embodiment of the application determines the sensitivity corresponding to the multiple frames according to the current shake degree of the camera module, determines a reference exposure time according to the preset exposure and that sensitivity, then determines the exposure time of each frame according to the preset exposure compensation mode and the reference exposure time, and finally captures the frames in sequence according to those exposure times and that sensitivity and synthesizes them to generate the target image. Thus, by determining the sensitivity of the frames to be captured from the current shake degree of the camera module, determining the exposure time of each frame from the preset exposure and the preset exposure compensation mode, and synthesizing images shot with several different exposure times, the dynamic range and overall brightness of the captured image are improved, noise in the captured image is effectively suppressed, the quality of the night-scene image is improved, and the user experience is improved.
In a possible implementation form of the method, the brightness information of the captured images can be synthesized while the multiple frames are being captured; after all the frames have been captured, the non-brightness information of all the captured frames is synthesized and then combined with the synthesized brightness information, which shortens the data processing time and therefore the shooting time.
Another image processing method provided in the embodiment of the present application is further described below with reference to fig. 3.
Fig. 3 is a flowchart illustrating another image processing method according to an embodiment of the present application.
As shown in fig. 3, the image processing method includes the steps of:
step 301, determining the corresponding sensitivity of the multi-frame image according to the current shake degree of the camera module.
Step 302, determining a reference exposure time length according to the preset exposure and the corresponding sensitivity of the multiple frames of images, and determining an exposure time length corresponding to each frame of image according to the preset exposure compensation mode and the reference exposure time length.
The detailed implementation process and principle of the steps 301-302 can refer to the detailed description of the above embodiments, and are not described herein again.
And 303, acquiring a first frame of image according to the corresponding sensitivity of the multiple frames of images and the exposure time of the first frame of image to be acquired, and displaying the first frame of image on a preview picture.
And step 304, acquiring a second frame of image according to the corresponding sensitivity of the multiple frames of images and the exposure time of the second frame of image to be acquired.
Step 305, adjusting the brightness information of the first frame image displayed on the preview screen according to the metadata of the second frame image and the metadata of the first frame image.
The metadata here is the raw data produced when the image sensor in the camera module captures the light signal and converts it into a digital signal.
In the embodiment of the application, after the exposure time of each frame of image to be acquired is determined, each frame of image to be acquired can be acquired according to the corresponding sensitivity of multiple frames of images and the exposure time of each frame of image to be acquired. And synthesizing the brightness information of the metadata of the currently captured image and the previously captured image while capturing the image.
Specifically, after the first frame is captured it is displayed in the preview screen; after the second frame is captured, the brightness information in the metadata of the first and second frames is extracted and synthesized, and the synthesized brightness information is then used to adjust the brightness information of the first frame displayed in the preview screen. Similarly, after the third frame is captured, the metadata of the image currently displayed in the preview screen and the metadata of the third frame are extracted, their brightness information is synthesized, and the synthesized brightness information is again used to adjust the brightness of the image displayed in the preview screen, and so on until all the frames to be captured have been captured.
Further, when the brightness information of each frame is synthesized, a different weight value can be set for each frame according to the current illuminance, so that the visual effect of the captured image is optimal. That is, in a possible implementation form of the embodiment of the present application, step 305 may include:
determining weighted values corresponding to the first frame image and the second frame image respectively according to the illuminance of the current shooting scene, the exposure duration of the first frame image and the exposure duration of the second frame image;
determining brightness information after the second frame image and the first frame image are synthesized according to the weight values respectively corresponding to the first frame image and the second frame image, and the metadata of the first frame image and the metadata of the second frame image;
and adjusting the brightness information of the first frame image displayed on the preview picture by using the synthesized brightness information.
It should be noted that, in the embodiment of the present application, a weight value corresponding to each frame of image to be acquired may be determined according to the illuminance of the current shooting scene and the exposure compensation mode of each frame of image to be acquired, so as to synthesize the luminance information of each frame of image to be acquired. The exposure compensation mode of the image to be acquired can be determined according to the corresponding exposure duration, that is, the longer the exposure duration, the larger the EV grade corresponding to the image to be acquired. Therefore, in a possible implementation form of the embodiment of the application, the weight values corresponding to the frames of images to be acquired respectively can be determined according to the illuminance of the current shooting scene and the exposure duration of the frames of images to be acquired.
Specifically, if the illuminance of the current shooting scene is low, the weight of the frame with the longer exposure time can be set to a larger value and the weight of the frame with the shorter exposure time to a smaller value, so as to improve the overall brightness and the dark-area detail of the image; if the illuminance of the current shooting scene is high, the weight of the frame with the longer exposure time can be set to a smaller value and the weight of the frame with the shorter exposure time to a larger value, so as to improve the dark-area detail while preventing the highlight areas from being overexposed.
It should be noted that, when determining the weight value corresponding to each frame of image to be acquired according to the exposure duration of each frame of image to be acquired according to the illuminance of the current shooting scene, it is further required to ensure that the range of the finally synthesized luminance information is between 0 and 255 to determine the constraint relationship between the weight values.
It can be understood that after the weight value corresponding to each frame of image to be acquired is determined, the brightness information of the metadata of the currently acquired image and the brightness information of the image displayed on the preview picture can be synthesized in real time according to the weight value, and the brightness information of the image displayed on the preview picture is adjusted by using the synthesized brightness information until all the images to be acquired are acquired and the brightness information is synthesized.
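A minimal sketch of this weighted, progressive luminance synthesis is given below; the concrete weight values and the illuminance boundary are assumptions, since the text only states the qualitative rule and the 0-255 constraint.

```python
import numpy as np

# Sketch of the progressive luminance synthesis described above: after each new
# frame is captured, its luminance plane is blended with the luminance of the
# image currently shown in the preview. The weight rule follows the qualitative
# description (low scene illuminance favours the longer exposure, high
# illuminance the shorter one); the weights sum to 1 and the result is clipped
# so the synthesized luminance stays in the 0-255 range.

ILLUMINANCE_SPLIT_LUX = 50.0  # assumed boundary between "low" and "high" light

def frame_weights(illuminance_lux: float,
                  duration_preview_ms: float,
                  duration_new_ms: float) -> tuple[float, float]:
    """Return (weight for the preview frame, weight for the newly captured frame)."""
    if illuminance_lux < ILLUMINANCE_SPLIT_LUX:
        long_w, short_w = 0.7, 0.3   # dark scene: favour the longer exposure
    else:
        long_w, short_w = 0.3, 0.7   # bright scene: protect the highlights
    if duration_new_ms >= duration_preview_ms:
        return short_w, long_w       # the new frame is the longer exposure
    return long_w, short_w

def blend_luminance(preview_luma: np.ndarray, new_luma: np.ndarray,
                    w_preview: float, w_new: float) -> np.ndarray:
    """Weighted combination of two luminance planes, clipped to 0-255."""
    out = w_preview * preview_luma.astype(np.float32) + w_new * new_luma.astype(np.float32)
    return np.clip(out, 0, 255).astype(np.uint8)
```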
And step 306, synthesizing non-brightness information in the metadata of the collected multi-frame images to generate an initial target image.
Step 307, updating the brightness information of the initial target image according to the brightness information of the image currently displayed on the preview picture to generate the target image.
In the embodiment of the application, after the acquisition of multiple frames of images to be acquired is finished, non-brightness information in metadata of the acquired multiple frames of images can be synthesized to generate an initial target image, and then the brightness of the initial target image is updated according to the brightness information of the image currently displayed on the preview picture to generate the target image.
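A rough sketch of steps 306 and 307 follows, assuming a YUV-like representation and a plain per-pixel average of the chroma planes; the patent does not fix the colour space or the combination rule for the non-brightness information, so both are assumptions made here for illustration.

```python
import numpy as np

# Sketch of steps 306-307 under an assumed YUV-like representation: combine the
# non-luminance (chroma) information of all captured frames into an initial
# target image, then attach the luminance already accumulated on the preview.
# The per-pixel average used for the chroma planes is an assumption.

def synthesize_target_image(chroma_planes: list[np.ndarray],
                            accumulated_luma: np.ndarray) -> dict[str, np.ndarray]:
    """Combine the chroma of all captured frames, then attach the synthesized luma.

    `chroma_planes` holds one HxWx2 (U, V) array per captured frame;
    `accumulated_luma` is the luminance already built up on the preview image.
    """
    initial_chroma = np.mean(np.stack(chroma_planes, axis=0), axis=0)  # initial target image (chroma part)
    return {
        "luma": accumulated_luma,                 # updated from the preview image
        "chroma": initial_chroma.astype(np.uint8),
    }
```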
The image processing method provided by the embodiment of the application determines the current target exposure and the weight value of each frame to be captured according to the illuminance of the current shooting scene, determines the target aperture value according to the difference between the target exposure and the preset exposure, adjusts the size of the aperture in the camera module to the target aperture value, then captures multiple frames in sequence according to the preset exposure compensation mode while synthesizing the brightness information of the captured frames in real time according to the weight values, subsequently synthesizes the non-brightness information in the metadata of the captured frames to generate an initial target image, and updates the brightness information of the initial target image with the brightness information of the image currently displayed in the preview screen to generate the target image. Thus, by determining the exposure time and weight of each frame from the illuminance of the current shooting scene and the preset exposure compensation mode, shooting several images with different exposure times, synthesizing the brightness information of the frames in real time according to the weights, and only then synthesizing their non-brightness information, the quality of the captured image is further improved, the data processing time is shortened, and the user experience is improved.
In order to implement the above embodiments, the present application also provides an image processing apparatus.
Fig. 4 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application.
As shown in fig. 4, the image processing apparatus 40 includes:
a first determining module 41, configured to determine a current target exposure amount according to the illuminance of the current shooting scene;
the second determination module 42 is used for determining a target aperture value according to the difference value of the target exposure amount and a preset exposure amount;
the adjusting module 43 is configured to adjust the size of the aperture in the camera module according to the target aperture value;
the acquisition module 44 is configured to sequentially acquire multiple frames of images according to a preset exposure compensation mode when the size of the aperture in the camera module reaches the target aperture value;
and a synthesizing module 45, configured to perform synthesizing processing on the acquired multiple frames of images to generate a target image.
In practical use, the image processing apparatus provided in the embodiment of the present application may be configured in any electronic device to execute the foregoing image processing method.
The image processing apparatus provided by the embodiment of the application determines the current target exposure according to the illuminance of the current shooting scene, determines the target aperture value according to the difference between the target exposure and the preset exposure, then adjusts the size of the aperture in the camera module to the target aperture value, and then captures multiple frames in sequence according to the preset exposure compensation mode and synthesizes them to generate the target image. By adjusting the aperture in the camera module according to the illuminance of the current shooting scene and then capturing and synthesizing multiple frames according to the preset exposure compensation mode, the dynamic range and overall brightness of the captured image are improved and its quality is raised, and the exposure duration of shooting stays at the preset value even when the illuminance of the scene differs, which improves the user experience.
In one possible implementation form of the present application, the image processing apparatus 40 is specifically configured to:
and determining the preset exposure according to the illuminance of the current shooting scene, wherein the preset exposure is a target exposure corresponding to the illuminance of the current shooting scene measured by the camera module under a preset aperture value.
Further, in another possible implementation form of the present application, the image processing apparatus 40 is further configured to:
and determining the preset exposure compensation mode according to the current shaking degree of the camera module.
In a possible implementation form of the present application, the above-mentioned acquisition module 44 is specifically configured to:
determining the corresponding sensitivity of a plurality of frames of images according to the current jitter degree of the camera module;
determining a reference exposure time length according to the preset exposure and the corresponding sensitivity of the multi-frame image;
determining the exposure duration corresponding to each frame of image in the multi-frame image according to the preset exposure compensation mode and the reference exposure duration;
and sequentially collecting the multiple frames of images according to the exposure time corresponding to each frame of image and the sensitivity corresponding to the multiple frames of images.
Further, in another possible implementation form of the present application, the acquiring module 44 is further configured to:
acquiring a first frame of image according to the corresponding sensitivity of the multi-frame image and the exposure time of a first frame of image to be acquired, and displaying the first frame of image on a preview picture;
acquiring a second frame of image according to the corresponding sensitivity of the multi-frame image and the exposure time of a second frame of image to be acquired;
and adjusting the brightness information of the first frame image displayed on the preview picture according to the metadata of the second frame image and the metadata of the first frame image.
Further, in another possible implementation form of the present application, the above-mentioned acquisition module 44 is further configured to:
determining weighted values corresponding to the first frame image and the second frame image respectively according to the illuminance of the current shooting scene, the exposure duration of the first frame image and the exposure duration of the second frame image;
determining brightness information after the second frame image and the first frame image are synthesized according to the weight values respectively corresponding to the first frame image and the second frame image, and the metadata of the first frame image and the metadata of the second frame image;
and adjusting the brightness information of the first frame image displayed on the preview picture by using the synthesized brightness information.
In a possible implementation form of the present application, the synthesis module 45 is specifically configured to:
synthesizing non-brightness information in metadata of the collected multi-frame images to generate an initial target image;
and updating the brightness information of the initial target image according to the brightness information of the image currently displayed on the preview picture so as to generate the target image.
It should be noted that the foregoing explanation of the embodiments of the image processing method shown in fig. 1, fig. 2, and fig. 3 also applies to the image processing apparatus 40 of this embodiment, and details thereof are not repeated here.
The image processing apparatus provided by the embodiment of the application determines the current target exposure and the weight value of each frame to be captured according to the illuminance of the current shooting scene, determines the target aperture value according to the difference between the target exposure and the preset exposure, then adjusts the size of the aperture in the camera module to the target aperture value, captures multiple frames in sequence according to the preset exposure compensation mode while synthesizing the brightness information of the captured frames in real time according to the weight values, subsequently synthesizes the non-brightness information in the metadata of the captured frames to generate an initial target image, and updates the brightness information of the initial target image with the brightness information of the image currently displayed in the preview screen to generate the target image. Thus, by determining the exposure time and weight of each frame from the illuminance of the current shooting scene and the preset exposure compensation mode, shooting several images with different exposure times, synthesizing the brightness information of the frames in real time according to the weights, and only then synthesizing their non-brightness information, the quality of the captured image is further improved, the data processing time is shortened, and the user experience is improved.
In order to implement the above embodiments, the present application further provides an electronic device.
Fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
As shown in fig. 5, the electronic device 200 includes:
a memory 210, a processor 220, and a bus 230 connecting different components (including the memory 210 and the processor 220). The memory 210 stores a computer program which, when executed by the processor 220, implements the image processing method according to the embodiments of the present application.
Bus 230 represents one or more of any of several types of bus structures, including a memory bus or memory controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures. By way of example, such architectures include, but are not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Electronic device 200 typically includes a variety of electronic device readable media. Such media may be any available media that is accessible by electronic device 200 and includes both volatile and nonvolatile media, removable and non-removable media.
Memory 210 may also include computer system readable media in the form of volatile memory, such as Random Access Memory (RAM)240 and/or cache memory 250. The electronic device 200 may further include other removable/non-removable, volatile/nonvolatile computer system storage media. By way of example only, storage system 260 may be used to read from and write to non-removable, nonvolatile magnetic media (not shown in FIG. 5, commonly referred to as a "hard drive"). Although not shown in FIG. 5, a magnetic disk drive for reading from and writing to a removable, nonvolatile magnetic disk (e.g., a "floppy disk") and an optical disk drive for reading from or writing to a removable, nonvolatile optical disk (e.g., a CD-ROM, DVD-ROM, or other optical media) may be provided. In these cases, each drive may be connected to bus 230 by one or more data media interfaces. Memory 210 may include at least one program product having a set (e.g., at least one) of program modules that are configured to carry out the functions of embodiments of the application.
A program/utility 280 having a set (at least one) of program modules 270 may be stored in, for example, the memory 210. Such program modules 270 include, but are not limited to, an operating system, one or more application programs, other program modules, and program data; each of these examples, or some combination thereof, may comprise an implementation of a network environment. The program modules 270 generally perform the functions and/or methodologies of the embodiments described herein.
Electronic device 200 may also communicate with one or more external devices 290 (e.g., keyboard, pointing device, display 291, etc.), with one or more devices that enable a user to interact with electronic device 200, and/or with any devices (e.g., network card, modem, etc.) that enable electronic device 200 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 292. Also, the electronic device 200 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network such as the Internet) via the network adapter 293. As shown, the network adapter 293 communicates with the other modules of the electronic device 200 via the bus 230. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 200, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
The processor 220 executes various functional applications and data processing by executing programs stored in the memory 210.
It should be noted that, for the implementation process and the technical principle of the electronic device of the embodiment, reference is made to the foregoing explanation of the image processing method of the embodiment of the present application, and details are not described here again.
The electronic device provided by the embodiment of the application can execute the image processing method: in the night scene shooting mode it detects the current shaking degree of the camera module, determines the number of images to be collected and the reference sensitivity corresponding to each frame of image to be collected according to the current shaking degree, determines the exposure duration corresponding to each frame of image to be collected according to the illuminance of the current shooting scene and the corresponding reference sensitivity, then collects multiple frames of images in sequence according to the reference sensitivity and exposure duration corresponding to each frame, and synthesizes the collected multi-frame images to generate the target image. In this way, the number of images to be collected and the reference sensitivity are determined according to the current shaking degree of the camera module, and the exposure duration corresponding to each frame of image to be collected is determined according to the illuminance of the current shooting scene; by synthesizing images shot with multiple different exposure durations, the dynamic range and overall brightness of images shot in the night scene mode are improved, noise in the shot image is effectively suppressed, the ghosting and blurring caused by handheld shake are suppressed, the quality of images shot at night is improved, and the user experience is improved.
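The following Python sketch shows one plausible way such shake-dependent capture parameters could be chosen; the jitter thresholds, frame counts, sensitivities, and the exposure formula are invented for the example and are not taken from this application.

```python
def night_capture_plan(shake_magnitude, illuminance, preset_exposure=1.0):
    """Pick a frame count, reference ISO, and per-frame exposure for night mode.

    shake_magnitude: assumed gyroscope-derived jitter score (higher = shakier).
    illuminance:     scene illuminance in lux.
    preset_exposure: total exposure budget in relative units (assumption).
    """
    # More shake -> more, shorter frames at a higher reference sensitivity.
    if shake_magnitude > 0.5:
        frame_count, reference_iso = 8, 1600
    elif shake_magnitude > 0.2:
        frame_count, reference_iso = 6, 800
    else:
        frame_count, reference_iso = 4, 400

    # Darker scenes get a longer base exposure; a higher ISO shortens it proportionally.
    base_exposure = preset_exposure * max(1.0, 100.0 / max(illuminance, 1.0))
    per_frame_exposure = base_exposure / (reference_iso / 100.0)
    return frame_count, reference_iso, per_frame_exposure
```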
In order to implement the above embodiments, the present application also proposes a computer-readable storage medium.
The computer-readable storage medium stores a computer program which, when executed by a processor, implements the image processing method according to the embodiments of the present application.
In order to implement the foregoing embodiments, an embodiment of a further aspect of the present application provides a computer program which, when executed by a processor, implements the image processing method according to the embodiments of the present application.
In an alternative implementation, the embodiments may be implemented in any combination of one or more computer-readable media. The computer readable medium may be a computer readable signal medium or a computer readable storage medium. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the context of this document, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
A computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take any of a variety of forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Computer program code for carrying out operations for aspects of the present invention may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, C++ or the like, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the consumer electronic device, partly on the consumer electronic device, as a stand-alone software package, partly on the consumer electronic device and partly on a remote electronic device, or entirely on the remote electronic device or server. In the case of remote electronic devices, the remote electronic devices may be connected to the consumer electronic device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external electronic device (e.g., through the Internet using an Internet service provider).
Other embodiments of the present application will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the application and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the application being indicated by the following claims.
It will be understood that the present application is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the application is limited only by the appended claims.

Claims (7)

1. An image processing method, comprising:
determining the current target exposure according to the illuminance of the current shooting scene;
determining a target aperture value according to the difference value between the target exposure and a preset exposure;
adjusting the size of an aperture in the camera module according to the target aperture value;
when the size of the aperture in the camera module reaches the target aperture value, determining the corresponding sensitivity of the multi-frame image according to the current jitter degree of the camera module;
determining a reference exposure time length according to a preset exposure and the corresponding sensitivity of the multi-frame image;
determining the exposure duration corresponding to each frame of image according to a preset exposure compensation mode and the reference exposure duration;
sequentially collecting multiple frames of images according to the exposure time corresponding to each frame of image and the sensitivity corresponding to the multiple frames of images: acquiring a first frame image according to the corresponding sensitivity of the multiple frames of images and the corresponding exposure duration of the first frame image, and displaying the first frame image on a preview picture; acquiring a second frame image according to the corresponding sensitivity of the multiple frames of images and the corresponding exposure time of the second frame image; determining weighted values corresponding to the first frame image and the second frame image respectively according to the illuminance of the current shooting scene, the exposure duration corresponding to the first frame image and the exposure duration corresponding to the second frame image; determining brightness information after the second frame image and the first frame image are synthesized according to the weight values respectively corresponding to the first frame image and the second frame image, the metadata of the first frame image and the metadata of the second frame image; adjusting the brightness information of the first frame image displayed on the preview picture by using the synthesized brightness information;
and synthesizing the multi-frame images to generate a target image.
2. The method of claim 1, wherein prior to determining a target aperture value based on a difference between the target exposure and a preset exposure, further comprising:
and determining the preset exposure according to the illuminance of the current shooting scene, wherein the preset exposure is a target exposure corresponding to the illuminance of the current shooting scene measured by the camera module under a preset aperture value.
3. The method of claim 1, wherein before sequentially acquiring a plurality of frames of images according to a preset exposure compensation mode, further comprising:
and determining the preset exposure compensation mode according to the current shaking degree of the camera module.
4. The method according to claim 1, wherein the synthesizing the plurality of frames of images to generate the target image comprises:
synthesizing non-brightness information in the metadata of the multi-frame image to generate an initial target image;
and updating the brightness information of the initial target image according to the brightness information of the image currently displayed on the preview picture so as to generate the target image.
5. An image processing apparatus characterized by comprising:
the first determination module is used for determining the current target exposure according to the illuminance of the current shooting scene;
the second determination module is used for determining a target aperture value according to the difference value of the target exposure amount and a preset exposure amount;
the adjusting module is used for adjusting the size of the aperture in the camera module according to the target aperture value;
the acquisition module is used for determining the corresponding sensitivity of the multi-frame image according to the current jitter degree of the camera module when the size of the aperture in the camera module reaches the target aperture value; determining a reference exposure time length according to a preset exposure and the corresponding sensitivity of the multi-frame image; determining the exposure duration corresponding to each frame of image according to a preset exposure compensation mode and the reference exposure duration; sequentially collecting multiple frames of images according to the exposure time corresponding to each frame of image and the sensitivity corresponding to the multiple frames of images; the step of sequentially collecting the multiple frames of images according to the exposure time corresponding to each frame of image and the sensitivity corresponding to the multiple frames of images comprises the following steps: acquiring a first frame image according to the corresponding sensitivity of the multiple frames of images and the corresponding exposure duration of the first frame image, and displaying the first frame image on a preview picture; acquiring a second frame image according to the corresponding sensitivity of the multiple frames of images and the corresponding exposure time of the second frame image; determining weighted values corresponding to the first frame image and the second frame image respectively according to the illuminance of the current shooting scene, the exposure duration corresponding to the first frame image and the exposure duration corresponding to the second frame image; determining brightness information after the second frame image and the first frame image are synthesized according to the weight values respectively corresponding to the first frame image and the second frame image, the metadata of the first frame image and the metadata of the second frame image; adjusting the brightness information of the first frame image displayed on the preview picture by using the synthesized brightness information;
and the synthesis module is used for synthesizing the collected multi-frame images to generate a target image.
6. An electronic device, comprising: a camera module, a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the image processing method according to any one of claims 1 to 4 when executing the computer program.
7. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the image processing method according to any one of claims 1 to 4.
CN201811087069.7A 2018-09-18 2018-09-18 Image processing method, image processing device, electronic equipment and storage medium Active CN109218627B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811087069.7A CN109218627B (en) 2018-09-18 2018-09-18 Image processing method, image processing device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811087069.7A CN109218627B (en) 2018-09-18 2018-09-18 Image processing method, image processing device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109218627A CN109218627A (en) 2019-01-15
CN109218627B true CN109218627B (en) 2021-04-09

Family

ID=64984934

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811087069.7A Active CN109218627B (en) 2018-09-18 2018-09-18 Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109218627B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109729274B (en) * 2019-01-30 2021-03-09 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium
CN110443766B (en) * 2019-08-06 2022-05-31 厦门美图之家科技有限公司 Image processing method and device, electronic equipment and readable storage medium
CN112422807A (en) * 2019-08-23 2021-02-26 上海光启智城网络科技有限公司 Method for adjusting depth of field range
CN110830697A (en) * 2019-11-27 2020-02-21 Oppo广东移动通信有限公司 Control method, electronic device, and storage medium
CN111047723B (en) * 2019-12-12 2021-01-05 杭州昊恒科技有限公司 City wisdom behavior analysis system based on image processing
CN111970463B (en) * 2020-08-24 2022-05-03 浙江大华技术股份有限公司 Aperture correction method and apparatus, storage medium, and electronic apparatus
WO2022061934A1 (en) * 2020-09-28 2022-03-31 深圳市大疆创新科技有限公司 Image processing method and device, system, platform, and computer readable storage medium
CN112183346A (en) * 2020-09-28 2021-01-05 浙江大华技术股份有限公司 Scene judgment method and device and electronic device
CN114520881B (en) * 2020-11-18 2023-08-18 成都极米科技股份有限公司 Exposure parameter adjustment method, device, computer equipment and readable storage medium
CN113452924A (en) * 2021-06-30 2021-09-28 广州极飞科技股份有限公司 Camera diaphragm control method, device, equipment and storage medium
CN117135470B (en) * 2023-02-23 2024-06-14 荣耀终端有限公司 Shooting method, electronic equipment and storage medium
CN117257204B (en) * 2023-09-19 2024-05-07 深圳海业医疗科技有限公司 Endoscope control assembly control method and system

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4530961B2 (en) * 2005-06-30 2010-08-25 オリンパスイメージング株式会社 Electronic image stabilization device
CN101261422B (en) * 2007-03-08 2010-04-14 亚洲光学股份有限公司 Automatic exposure correction method and system
JP5163031B2 (en) * 2007-09-26 2013-03-13 株式会社ニコン Electronic camera
KR101097017B1 (en) * 2007-11-19 2011-12-20 후지쯔 가부시끼가이샤 Imaging device, imaging method, and computer readable medium
CN102104737A (en) * 2009-12-21 2011-06-22 展讯通信(上海)有限公司 Method and system for imaging high dynamic range image
JP5495841B2 (en) * 2010-02-22 2014-05-21 オリンパスイメージング株式会社 Camera and camera control method

Also Published As

Publication number Publication date
CN109218627A (en) 2019-01-15

Similar Documents

Publication Publication Date Title
CN109218628B (en) Image processing method, image processing device, electronic equipment and storage medium
CN109218627B (en) Image processing method, image processing device, electronic equipment and storage medium
CN109005366B (en) Night scene shooting processing method and device for camera module, electronic equipment and storage medium
CN109348089B (en) Night scene image processing method and device, electronic equipment and storage medium
CN110445988B (en) Image processing method, image processing device, storage medium and electronic equipment
CN108900782B (en) Exposure control method, exposure control device and electronic equipment
WO2020207262A1 (en) Image processing method and apparatus based on multiple frames of images, and electronic device
CN109194882B (en) Image processing method, image processing device, electronic equipment and storage medium
CN109729274B (en) Image processing method, image processing device, electronic equipment and storage medium
WO2020034737A1 (en) Imaging control method, apparatus, electronic device, and computer-readable storage medium
CN109361853B (en) Image processing method, image processing device, electronic equipment and storage medium
CN110191291B (en) Image processing method and device based on multi-frame images
CN110166708B (en) Night scene image processing method and device, electronic equipment and storage medium
CN110248106B (en) Image noise reduction method and device, electronic equipment and storage medium
WO2020207261A1 (en) Image processing method and apparatus based on multiple frames of images, and electronic device
CN109919116B (en) Scene recognition method and device, electronic equipment and storage medium
CN109005369B (en) Exposure control method, exposure control device, electronic apparatus, and computer-readable storage medium
CN109151333B (en) Exposure control method, exposure control device and electronic equipment
CN110264420B (en) Image processing method and device based on multi-frame images
CN109618102B (en) Focusing processing method and device, electronic equipment and storage medium
CN110971833B (en) Image processing method and device, electronic equipment and storage medium
CN110971812B (en) Shooting method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant