WO2021185374A1 - Method for capturing images and electronic device - Google Patents

Method for capturing images and electronic device

Info

Publication number
WO2021185374A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
image
electronic device
exposure
area
Prior art date
Application number
PCT/CN2021/082090
Other languages
English (en)
French (fr)
Inventor
秦超
张运超
武小宇
敖欢欢
苗磊
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2021185374A1


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N23/67 Focus control based on electronic image sensor signals
    • H04N23/68 Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/682 Vibration or motion blur correction
    • H04N23/70 Circuitry for compensating brightness variation in the scene
    • H04N23/73 Circuitry for compensating brightness variation in the scene by influencing the exposure time

Definitions

  • the embodiments of the present application relate to the field of terminal technology and image processing technology, and in particular to a method and electronic device for capturing images.
  • a camera is installed in most electronic devices and has the function of taking images.
  • multiple cameras can be installed in the mobile phone, such as at least two of a main camera, a telephoto camera, a wide-angle camera, an infrared camera, a depth camera, or a black and white camera.
  • the mobile phone can use different cameras to capture images in different shooting scenarios to ensure the image quality of the captured images.
  • a mobile phone can use a telephoto camera to shoot objects far away from the mobile phone.
  • the mobile phone can adopt the main camera to shoot the subject in a dark light scene.
  • a mobile phone can use a wide-angle camera to shoot larger objects (such as buildings or landscapes).
  • each camera has its own disadvantages in other scenes.
  • This disadvantage may affect the image quality of captured images.
  • because the focal length of the telephoto camera is long, its light input is small; therefore, if the telephoto camera is used to shoot subjects far away from the phone in a dark light scene, insufficient light input may affect the image quality.
  • although the main camera has a large light input and high resolution, its focal length is short; therefore, if the main camera is used to photograph subjects far away from the mobile phone, the captured image may lack sharpness, which affects the image quality.
  • the present application provides a method and electronic device for shooting images. Multiple cameras can work together to improve the quality of images obtained by shooting.
  • the present application provides a method for capturing an image, and the method can be applied to an electronic device including multiple cameras.
  • the electronic device may include a first camera and a second camera. The first camera and the second camera are different cameras.
  • the electronic device can detect the preset operation.
  • the first camera of the electronic device can capture a first image
  • the electronic device can display the first image.
  • the second camera of the electronic device can capture the second image, but the electronic device does not display the second image.
  • the electronic device may display the first image collected by the first camera (referred to as the preview camera) as a preview image, instead of displaying the second image collected by the second camera (referred to as the auxiliary camera).
  • the above-mentioned second image includes a first area, and the first area is an area corresponding to the field of view of the first camera. Then, the electronic device can recognize the second image, and detect that the image of the preset object is included in the first area of the second image.
  • the aforementioned preset object includes at least one of the following: human face, human body, plant, animal, building, or text.
  • the electronic device can determine the exposure value of the second area.
  • the second area is the area where the image of the preset object in the first image is located. If the exposure value of the second area is less than the first exposure threshold, the electronic device can adjust the exposure parameters of the first camera to make the exposure value equal to or greater than the first exposure threshold.
  • the first camera of the electronic device may use the adjusted exposure parameters to collect the first preview image, and the electronic device may display the first preview image.
  • the electronic device may save a third image, which is captured by the first camera using the adjusted exposure parameters. Specifically, the third image may be obtained based on one or more frames of the first preview image collected by the first camera.
  • when the electronic device uses the preview camera (i.e., the first camera) to capture images, it can exploit the advantages that other cameras (called auxiliary cameras, such as the second camera) have over the preview camera, controlling the auxiliary camera and the preview camera to work together so as to improve the image quality of the images captured by the preview camera. That is to say, in the method of the present application, the electronic device can take advantage of each camera and control multiple cameras to work together to improve the image quality of the captured image.
  • the above-mentioned exposure parameter may include at least one of exposure time, number of photographing frames, and ISO sensitivity. That is, the electronic device can adjust at least one of the exposure time, the number of photographed frames, and the ISO sensitivity, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • At least one exposure parameter such as the exposure time, the number of photographed frames, or the ISO sensitivity can be adjusted to achieve the purpose of updating the exposure value.
  • the longer the exposure time, the greater the exposure value; the greater the number of photographed frames, the greater the exposure value; and the higher the ISO sensitivity, the greater the exposure value.
  • any of the operations "increase the exposure time", "increase the number of photographing frames", and "increase the ISO sensitivity" can achieve the purpose of increasing the above-mentioned exposure value.
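The monotonic relationships above can be sketched in a toy model. The multiplicative formula, function names, and adjustment step sizes below are illustrative assumptions; the patent does not give a concrete exposure formula.

```python
# Toy model of the exposure relationships described above: the
# exposure value grows with exposure time, frame count, and ISO.
# The multiplicative formula and step sizes are assumptions.

def exposure_value(exposure_time_ms, frame_count, iso):
    """Simplified exposure value, monotonically increasing in each parameter."""
    return exposure_time_ms * frame_count * (iso / 100.0)

def raise_to_threshold(threshold, exposure_time_ms, frame_count, iso,
                       subject_moving):
    """Increase parameters until the exposure value meets the threshold.

    For a still subject, lengthen the exposure time; for a moving
    subject, add frames instead, since a long exposure would blur motion.
    """
    while exposure_value(exposure_time_ms, frame_count, iso) < threshold:
        if subject_moving:
            frame_count += 1
        else:
            exposure_time_ms *= 1.5
    return exposure_time_ms, frame_count, iso
```

The still-versus-moving branch mirrors the distinction the surrounding bullets draw between adjusting exposure time and adjusting the number of photographing frames.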
  • the exposure parameter adjusted by the electronic device when the preset object is stationary is different from the exposure parameter adjusted by the electronic device when the preset object is moving.
  • the electronic device can adjust the exposure time of the first camera to achieve the purpose of increasing the exposure value.
  • the above electronic device adjusting the exposure parameter of the first camera to make the exposure value equal to or greater than the first exposure threshold may include: if the preset object is still, the electronic device adjusts the exposure time of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the electronic device may adjust the exposure time and ISO sensitivity of the first camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the electronic device can adjust the number of photographing frames of the first camera to achieve the purpose of increasing the exposure value.
  • the above electronic device adjusts the exposure parameter of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the electronic device may adjust the number of photographing frames and the ISO sensitivity of the first camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the electronic device saving the third image may include: the electronic device performs optical image stabilization (OIS) anti-shake on a frame of the first preview image collected by the first camera, to obtain and save the third image.
  • OIS (optical image stabilization) compensates for shake within the shutter time (i.e., the exposure time); EIS (electronic image stabilization) compensates for shake computationally across captured frames.
  • when the preset object is moving, the electronic device saving the third image in response to the user's photographing operation may include: the electronic device performs OIS anti-shake and EIS anti-shake fusion on the multiple frames of the first preview image collected by the first camera, to obtain and save the third image.
  • the anti-shake operation performed by the electronic device on the preview image collected by the first camera may include OIS anti-shake and EIS anti-shake. In this way, the image quality of moving objects captured by the first camera can be improved.
  • the electronic device saving the third image may include: in response to the photographing operation, the electronic device performs OIS anti-shake on the multiple frames of the first preview image collected by the first camera, and performs multiple The image in the moving area of the first preview image of the frame is subjected to EIS anti-shake fusion to obtain and save the third image.
  • the electronic device may perform OIS anti-shake on the multi-frame preview image collected by the first camera, and perform EIS anti-shake fusion on the images of the moving area of the multi-frame preview image, to obtain and save the third image. That is to say, when the electronic device obtains the third image based on the multi-frame preview image, for the still area it only needs to use the still-area image from any one frame of the multi-frame preview image; whereas for the moving area, it can perform image fusion on the moving-area images of the multi-frame preview image.
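The still-versus-moving fusion described above can be sketched as follows. The per-pixel averaging and the 2D-list frame representation are illustrative assumptions, not the patent's actual fusion algorithm; a real pipeline would operate on OIS-stabilized, aligned sensor frames.

```python
# Sketch of the fusion step: keep the still region from a single
# frame, and fuse (here, average) only the moving region across
# frames. Frames are modeled as equal-sized 2D lists of pixel values.

def fuse_frames(frames, moving_mask):
    """frames: list of equal-sized 2D pixel grids.
    moving_mask[y][x] is True where motion was detected."""
    base = frames[0]
    h, w = len(base), len(base[0])
    # Start from the first frame: still-area pixels come from one frame only.
    fused = [[base[y][x] for x in range(w)] for y in range(h)]
    for y in range(h):
        for x in range(w):
            if moving_mask[y][x]:
                # Moving-area pixels are fused across all frames.
                fused[y][x] = sum(f[y][x] for f in frames) / len(frames)
    return fused
```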
  • the above method further includes: the electronic device determines whether the exposure value of the second area is greater than a second exposure threshold.
  • the second exposure threshold is greater than the above-mentioned first exposure threshold. If the electronic device determines that the exposure value of the second area is greater than the second exposure threshold, the electronic device adjusts the exposure parameter of the first camera so that the exposure value of the second area is equal to or less than the second exposure threshold.
  • the electronic device can adjust the exposure parameters of the camera to reduce the exposure value of the second area. In this way, the image quality of the captured image can be improved.
  • the electronic device may not activate the second camera in response to the foregoing preset operation.
  • the electronic device may request the user to confirm whether to enter the smart shooting mode.
  • in the smart shooting mode, the electronic device uses the second camera to assist the first camera in capturing images. If the user chooses to enter the smart shooting mode, the electronic device can activate the second camera to assist the first camera in capturing images.
  • in response to the preset operation, the second camera of the electronic device collecting the second image may include: the electronic device displays a first user interface, and the first user interface is used to request the user to confirm whether to use the second camera to assist the first camera in capturing images. In response to the user's first operation on the first user interface, the second camera of the electronic device captures the second image.
  • the electronic device can request the user to confirm on the first user interface whether to use the second camera to assist the first camera in capturing images; if the user chooses to do so, the electronic device will activate the second camera to assist the first camera in capturing images.
  • the electronic device can activate the second camera to assist the first camera in capturing images according to the user's wishes. In this way, the user experience during the interaction between the electronic device and the user can be improved.
  • in response to a second operation of the user on the first user interface, the second camera of the electronic device does not collect images. In other words, if the user chooses not to use the second camera to assist the first camera in taking images, the main camera of the electronic device will not assist the telephoto camera in taking images.
  • the foregoing first user interface may further include a first preview image.
  • the first preview image may be an effect preview image obtained by using the second camera to assist the first camera in shooting.
  • the electronic device may display the effect preview image obtained by using the second camera to assist the first camera to shoot on the first user interface for the user, so that the user can choose whether to enter the smart shooting mode according to the effect preview image.
  • the electronic device may also provide the user with the above-mentioned image effect preview function in other manners.
  • the method of the present application further includes: in response to a third operation of the user on the first user interface, the electronic device displays a second user interface, where the third operation is used to trigger the electronic device to display the first preview image collected by the first camera, and the second user interface includes the first preview image; in response to the user's fourth operation on the second user interface, the second camera of the electronic device captures the second image.
  • the fourth operation is used to trigger the electronic device to use the second camera to assist the first camera in capturing images.
  • the electronic device can provide the user with a preview function of the first preview image. In this way, it is convenient for the user to decide whether to control the electronic device to use the second camera to assist the first camera in taking the image according to the image effect of the first preview image.
  • the foregoing first user interface includes a first control
  • the third operation is a user's click operation on the first control.
  • the above-mentioned third operation is a preset gesture.
  • the above-mentioned first camera is a telephoto camera
  • the second camera is a main camera.
  • the above preset operation is a zoom operation.
  • the light input of the main camera is greater than the light input of the telephoto camera.
  • the main camera can be used as the auxiliary camera.
  • the electronic device can take advantage of the large light input of the main camera to detect the position of the preset object (that is, the second area) from the first image collected by the telephoto camera.
  • when the image quality of the first image is poor and the preset object cannot be clearly distinguished in the first image, the reason is that the position of the preset object in the first image (such as the second area) has a low exposure value. Therefore, the electronic device can detect and adjust the exposure parameters of the telephoto camera to increase this exposure value. In this way, after the exposure value is increased, the telephoto camera can capture images with higher image quality (such as image c).
  • the second camera of the electronic device collecting the second image includes: in response to the preset operation, the ambient light sensor of the electronic device detects the ambient light brightness, and the electronic device determines a first ambient light brightness value; if the first ambient light brightness value is lower than the first brightness threshold, the second camera of the electronic device captures the second image.
  • if the first ambient light brightness value is lower than the first brightness threshold, it means that the electronic device is in a dark light scene.
  • the first camera may affect the quality of the captured image due to insufficient light input and other reasons.
  • using the second camera to assist the first camera in capturing images can improve the image quality of the captured images.
  • the above-mentioned first camera is a color camera
  • the second camera is a black and white camera.
  • the light input of the black-and-white camera is greater than the light input of the color camera.
  • the color camera includes at least any one of a main camera, a telephoto camera, or a wide-angle camera.
  • the black-and-white camera has the advantage of a large amount of light.
  • the black-and-white camera is used as an auxiliary camera (that is, the second camera) to assist the color camera, so as to improve the image quality of the image captured by the color camera.
  • the above-mentioned first camera is a visible light camera
  • the second camera is an infrared camera.
  • the infrared camera has the ability to perceive visible light and infrared light
  • the visible light camera has the ability to perceive visible light, but does not have the ability to perceive infrared light.
  • the above-mentioned visible light camera may be any camera such as a telephoto camera, a wide-angle camera, a main camera, or a black and white camera.
  • when the electronic device uses the visible light camera as the preview camera (i.e., the first camera) to collect images in a dark scene, in order to prevent weak visible light from degrading the image quality, the electronic device can exploit the infrared camera's ability to perceive infrared light, using the infrared camera as an auxiliary camera (i.e., the second camera) to assist the visible light camera, so as to improve the image quality of the image captured by the visible light camera.
  • the above-mentioned first camera is a telephoto camera
  • the second camera is an infrared camera or a main camera.
  • the preset operation is a zoom operation, and the zoom operation is used to trigger the electronic device to start the telephoto camera.
  • the light input of the main camera is greater than the light input of the telephoto camera.
  • Infrared cameras have the ability to perceive visible light and infrared light
  • telephoto cameras have the ability to perceive visible light, but do not have the ability to perceive infrared light.
  • the second camera of the electronic device collecting the second image includes: in response to the preset operation, the ambient light sensor of the electronic device detects the ambient light brightness, and the electronic device determines a second ambient light brightness value. If the second ambient light brightness value is lower than the first brightness threshold and also lower than the second brightness threshold, the infrared camera of the electronic device collects the second image, and the second camera is the infrared camera, where the second brightness threshold is less than the first brightness threshold; if the second ambient light brightness value is lower than the first brightness threshold but higher than or equal to the second brightness threshold, the main camera of the electronic device captures the second image, and the second camera is the main camera.
  • the main camera or the infrared camera can be selected as the auxiliary camera according to the ambient light brightness to assist the telephoto camera in taking pictures, so as to improve the image quality of the image captured by the telephoto camera.
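The threshold-based selection above can be sketched as a small decision function. The function name and the return labels are illustrative assumptions; the patent only specifies the two brightness thresholds and their ordering.

```python
# Sketch of the auxiliary-camera selection described above:
# below the second (lower) threshold use the infrared camera,
# between the thresholds use the main camera, otherwise none.

def select_auxiliary_camera(brightness, first_threshold, second_threshold):
    """Pick an auxiliary camera for the telephoto preview camera.

    second_threshold must be less than first_threshold, as stated above.
    """
    if second_threshold >= first_threshold:
        raise ValueError("second threshold must be below the first")
    if brightness >= first_threshold:
        return None  # bright scene: no auxiliary camera needed
    if brightness < second_threshold:
        return "infrared"  # very dark: infrared sensing helps most
    return "main"  # moderately dark: larger light input of the main camera
```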
  • the above-mentioned first camera is a color camera
  • the second camera is a depth camera.
  • the depth camera has the ability to obtain the depth information of the object, and the depth information is used to identify the contour of the preset object.
  • when an electronic device uses a color camera as the preview camera to collect images, it may not be able to clearly capture the outline of the preset object because the color of the subject (such as the aforementioned preset object) is close to the background color.
  • the depth camera can collect the depth information of the preset object, and the depth information can be used to detect the contour of the preset object. Therefore, in this embodiment, when the electronic device uses a color camera as the preview camera (i.e., the first camera) to collect images, the depth camera can be used as an auxiliary camera (i.e., the second camera) to assist the color camera, so as to improve the image quality of the image captured by the color camera.
  • in response to the preset operation, the second camera of the electronic device collecting the second image includes: the electronic device determines the red green blue (RGB) value of each pixel in the first image; if the electronic device determines that the first image meets the first preset condition, the depth camera of the electronic device acquires the second image.
  • the first preset condition is that the first image includes a third area, and the difference in RGB values of multiple pixels in the third area is less than a preset RGB threshold.
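A check for this first preset condition might look as follows. The per-channel max-minus-min metric is an assumption for illustration; the patent does not specify how the RGB difference is measured.

```python
# Sketch of the first preset condition: a region whose pixels have
# nearly identical RGB values (subject color close to the background),
# which would trigger the depth camera as the auxiliary camera.

def region_is_color_uniform(region_pixels, rgb_threshold):
    """region_pixels: list of (r, g, b) tuples from the third area.

    Returns True if every channel varies by less than rgb_threshold.
    """
    for channel in range(3):
        values = [p[channel] for p in region_pixels]
        if max(values) - min(values) >= rgb_threshold:
            return False
    return True
```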
  • the above-mentioned first camera is a black and white camera, and the second camera is a color camera.
  • the advantages of the color camera compared to the black and white camera are: the color camera has the ability to collect color images; the color camera includes at least any one of a main camera, a telephoto camera, or a wide-angle camera.
  • the color camera can collect color images.
  • the images collected by the black-and-white camera can only show different levels of gray, and cannot show the true colors of the subject. Therefore, using a black-and-white camera to take pictures may affect the image quality because the photographed objects (such as the above-mentioned preset objects) include colors that are similar and not easily distinguishable by grayscale.
  • when the electronic device uses a black-and-white camera as the preview camera (i.e., the first camera) to collect images, it can take advantage of the color camera's ability to capture the true colors of the subject, using the color camera as the auxiliary camera (i.e., the second camera) to assist the black-and-white camera, so as to improve the image quality of the image captured by the black-and-white camera.
  • the second camera of the electronic device collecting the second image includes: in response to the preset operation, the electronic device determines the gray value of each pixel in the first image; if the electronic device determines that the first image meets the second preset condition, the color camera of the electronic device captures the second image.
  • the second preset condition refers to that: the first image includes a fourth area, and the difference in gray values of multiple pixels in the fourth area is less than the preset gray threshold.
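The second preset condition is the grayscale analogue of the RGB condition. As before, the max-minus-min metric is an illustrative assumption.

```python
# Sketch of the second preset condition: a region of the black-and-white
# preview whose gray values are nearly identical, so grayscale alone
# cannot distinguish the subject, triggering the color auxiliary camera.

def region_is_gray_uniform(gray_values, gray_threshold):
    """gray_values: grayscale pixel values from the fourth area."""
    return max(gray_values) - min(gray_values) < gray_threshold
```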
  • the above method further includes: the electronic device determines, according to the position of the image of the preset object in the first area of the first image, the second area where the image of the preset object in the first image is located.
  • the electronic device may save the correspondence between the field of view of the first camera and the field of view of the second camera.
  • according to the position of the image of the preset object in the first area, and in combination with the correspondence between the field of view of the first camera and the field of view of the second camera, the electronic device may determine the second area in the first image where the preset object is located.
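The field-of-view mapping can be sketched with a simple axis-aligned crop-and-scale model; this model and the parameter names are assumptions, since a real device would calibrate the correspondence per camera pair.

```python
# Sketch of mapping a detected region from the auxiliary camera's frame
# (the second image) to the preview camera's frame (the first image)
# using a stored field-of-view correspondence.

def map_region(box, fov_offset, fov_scale):
    """box: (x, y, w, h) of the detection in the second image.
    fov_offset: (dx, dy) of the first camera's FOV inside the second image.
    fov_scale: pixel scale factor between the two images."""
    x, y, w, h = box
    dx, dy = fov_offset
    return ((x - dx) * fov_scale, (y - dy) * fov_scale,
            w * fov_scale, h * fov_scale)
```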
  • the first camera is a telephoto camera
  • the second camera is a main camera
  • the preset operation is a zoom operation.
  • the main camera can be used as an auxiliary camera to assist the telephoto camera to capture images.
  • the ambient light sensor of the electronic device in response to a preset operation, can detect the brightness of the ambient light.
  • the electronic device can determine the third ambient light brightness value. If the third ambient light brightness value is lower than the first brightness threshold, it means that the electronic device is in a dark light scene, and the second camera (ie, the main camera) of the electronic device can collect the second image. In other words, in a dark scene, the main camera of the electronic device can assist the telephoto camera to capture images. Among them, the light input of the main camera is greater than the light input of the telephoto camera. In this way, even if the light input of the telephoto camera is small, with the advantage of the large light input of the main camera, the electronic device can also capture images with higher image quality.
  • the electronic device can adjust different exposure parameters of the telephoto camera when the preset object is still or moving, so as to achieve the purpose of increasing the exposure value.
  • the electronic device can adjust the exposure time of the first camera, or adjust the exposure time and ISO sensitivity, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the electronic device can adjust the number of photo frames of the first camera, or adjust the number of photo frames and ISO sensitivity of the first camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the electronic device can adaptively adjust different exposure parameters of the telephoto camera according to the motion state (such as still or moving) of the preset object. In this way, the efficiency of the electronic device in adjusting the exposure parameters and increasing the exposure value can be improved.
  • the motion state of the preset object is different, and the anti-shake method used by the electronic device to generate the third image may be different.
  • OIS compensates for shake within the shutter time (i.e., the exposure time), while EIS compensates for shake across the exposures of multiple frames.
  • when the preset object is still, the electronic device can perform OIS anti-shake on a frame of the first preview image collected by the first camera; when the preset object is moving, the electronic device can perform OIS anti-shake and EIS anti-shake on the multiple frames of the first preview image collected by the first camera. In this way, the image quality of the image captured by the electronic device can be further improved.
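The choice of anti-shake pipeline by motion state can be summarized in a small decision function; the dictionary keys and labels are illustrative assumptions.

```python
# Sketch of choosing the anti-shake strategy by the preset object's
# motion state, as described above: a still subject needs one frame
# with OIS alone, a moving subject needs multiple frames with OIS
# plus EIS fusion.

def anti_shake_plan(preset_object_moving):
    """Return which stabilization steps to apply and how many frames."""
    if preset_object_moving:
        return {"frames": "multiple", "ois": True, "eis": True}
    return {"frames": "single", "ois": True, "eis": False}
```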
  • the present application provides an electronic device that includes a first collection module, a second collection module, and a display module.
  • the electronic device also includes a processing module and a storage module. Wherein, the above-mentioned first collection module is different from the second collection module.
  • the aforementioned processing module is used to detect preset operations.
  • the above-mentioned first acquisition module is configured to acquire a first image in response to a preset operation detected by the processing module.
  • the above-mentioned display module is used to display the first image.
  • the above-mentioned second acquisition module is used to acquire a second image. Wherein, the above-mentioned display module does not display the second image.
  • the second image includes a first area, and the first area is an area corresponding to the field of view of the first acquisition module.
  • the above-mentioned processing module is also used to detect the image including the preset object in the first area; and is also used to determine the exposure value of the second area.
  • the second area is the area where the image of the preset object in the first image is located.
  • the above processing module is also used to determine if the exposure value of the second area is less than the first exposure threshold, adjust the exposure parameters of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the above-mentioned first acquisition module is also used to acquire the first preview image by adopting the adjusted exposure parameters.
  • the above-mentioned display module is also used to display the first preview image.
  • the above-mentioned first acquisition module is further configured to take a third image with the adjusted exposure parameter in response to the user's photographing operation.
  • the above-mentioned storage module is used to save the third image.
  • the aforementioned preset object includes at least one of the following: human face, human body, plant, animal, building, or text.
  • the above-mentioned exposure parameters include at least one of exposure time, number of photographing frames, and ISO sensitivity.
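The patent does not fix how the exposure value of a region is computed. As an illustrative sketch (an assumption, not part of the claimed method), the exposure value of the second area can be approximated by the mean luminance of its pixels:

```python
import numpy as np

def region_exposure_value(image, region):
    """Estimate the exposure value of a region as its mean luminance.

    image  -- H x W x 3 uint8 RGB array (the first image)
    region -- (x, y, w, h) bounding box of the second area

    The patent does not define the exposure-value computation; mean
    luminance with ITU-R BT.601 weights is used here as a stand-in.
    """
    x, y, w, h = region
    patch = image[y:y + h, x:x + w].astype(np.float32)
    # Weighted sum of the RGB channels gives per-pixel luminance.
    luma = 0.299 * patch[..., 0] + 0.587 * patch[..., 1] + 0.114 * patch[..., 2]
    return float(luma.mean())
```

A region can then be flagged as under- or over-exposed by comparing this value against the first and second exposure thresholds.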
  • the above-mentioned processing module is configured to adjust the exposure parameters of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold, including: the processing module is configured to: if the preset object is still, adjust the exposure time of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold; or, if the preset object is still, adjust the exposure time and ISO sensitivity of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the above-mentioned processing module is further configured to perform OIS anti-shake on a frame of the first preview image collected by the first collection module in response to a photographing operation to obtain a third image.
  • the above-mentioned processing module is configured to adjust the exposure parameters of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold, including: the processing module is configured to: if the preset object is moving, adjust the number of photographing frames of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold; or, if the preset object is moving, adjust the number of photographing frames and the ISO sensitivity of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the above-mentioned processing module is further configured to perform OIS anti-shake and EIS anti-shake fusion on the multi-frame first preview image collected by the first collection module in response to the photographing operation, to obtain The third image.
  • the above-mentioned processing module is further configured to perform OIS anti-shake on the multiple frames of the first preview image collected by the first acquisition module in response to a photographing operation, and to perform EIS anti-shake fusion on the images in the moving areas of the multiple frames of the first preview image, to obtain the third image.
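The multi-frame anti-shake fusion described above can be sketched in simplified form: estimate a global translation for each frame, align the frames, and average them to reduce noise accumulated over the short per-frame exposures. The translation-only motion model and the phase-correlation estimator are illustrative assumptions; real EIS also compensates rotation and rolling-shutter distortion, which are omitted here:

```python
import numpy as np

def estimate_shift(ref, frame):
    # Phase correlation: estimate the integer (dy, dx) translation of
    # `frame` relative to `ref`. Translation-only model.
    f1 = np.fft.fft2(ref)
    f2 = np.fft.fft2(frame)
    cross = f1 * np.conj(f2)
    cross /= np.abs(cross) + 1e-9       # normalized cross-power spectrum
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the image size to negative offsets.
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def fuse_frames(frames):
    # Align every frame to the first one and average the stack.
    acc = frames[0].astype(np.float32).copy()
    for frame in frames[1:]:
        dy, dx = estimate_shift(frames[0], frame)
        acc += np.roll(frame.astype(np.float32), (dy, dx), axis=(0, 1))
    return (acc / len(frames)).astype(np.uint8)
```

In an actual device the alignment would typically run on warped, OIS-stabilized frames delivered by the camera pipeline rather than raw arrays.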
  • the above-mentioned processing module is further used to determine whether the exposure value of the second area is greater than the second exposure threshold; if the processing module determines that the exposure value of the second area is greater than the second exposure threshold, the processing module is also used to adjust the exposure parameters of the first acquisition module so that the exposure value of the second area is equal to or less than the second exposure threshold.
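The adjustment policy in the bullets above can be summarized in a sketch: for a still object, lengthen the exposure time (optionally with ISO); for a moving object, raise the number of photographing frames instead, since a longer single exposure would blur; when over-exposed, scale the exposure down. The proportional scaling rule below is an assumption for illustration; the patent specifies which parameters change, not by how much:

```python
import math

def adjust_exposure(params, exposure_value, object_moving,
                    low_threshold, high_threshold):
    """Adjust first-camera exposure parameters toward the target range.

    params is a dict with 'exposure_time', 'frame_count' and 'iso'.
    The proportional-correction policy is an illustrative assumption.
    """
    if exposure_value < low_threshold:
        gain = low_threshold / max(exposure_value, 1e-6)
        if object_moving:
            # Moving object: capture more frames rather than exposing longer.
            params['frame_count'] = math.ceil(params['frame_count'] * gain)
        else:
            # Still object: a longer exposure time is safe.
            params['exposure_time'] *= gain
    elif exposure_value > high_threshold:
        # Over-exposed: scale the exposure time down.
        params['exposure_time'] *= high_threshold / exposure_value
    return params
```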
  • the above-mentioned display module is further configured to display a first user interface in response to a preset operation, and the first user interface is used to request the user to confirm whether to use the second collection module to assist the first collection module in capturing images.
  • the above-mentioned processing module is also used to detect the first operation of the user on the first user interface.
  • the above-mentioned second acquisition module is further configured to acquire a second image in response to the first operation.
  • the above-mentioned processing module is further configured to detect a second operation of the user on the first user interface. Wherein, the second acquisition module does not acquire an image in response to the second operation.
  • the above-mentioned first user interface further includes a first preview image.
  • the above-mentioned processing module is further configured to detect a third operation of the user on the first user interface.
  • the above-mentioned display module is further configured to display the second user interface in response to the third operation.
  • the second user interface includes a first preview image.
  • the first preview image is collected by the first collection module.
  • the above-mentioned processing module is also used to detect the fourth operation of the user on the second user interface.
  • the above-mentioned second acquisition module is further configured to acquire a second image in response to the fourth operation.
  • the first user interface includes a first control
  • the third operation is a user's click operation on the first control.
  • the above-mentioned third operation is a preset gesture.
  • first collection module and the second collection module may be different.
  • various possible implementation manners of the first acquisition module and the second acquisition module can be referred to the descriptions in the following possible design manners, which are not repeated here.
  • the above-mentioned first collection module is a telephoto camera, and the second collection module is a main camera or an infrared camera.
  • the first acquisition module is a color camera
  • the second acquisition module is a black and white camera.
  • the first collection module is a visible light camera
  • the second collection module is an infrared camera.
  • the first acquisition module is a color camera
  • the second acquisition module is a depth camera.
  • the first acquisition module is a black and white camera
  • the second acquisition module is a color camera.
  • the color camera includes at least any one of a main camera, a telephoto camera, or a wide-angle camera.
  • the above electronic device further includes a sensor module.
  • the sensor module is used to detect the brightness of the ambient light in response to a preset operation.
  • the above-mentioned processing module is also used to determine the first ambient light brightness value.
  • the processing module is also used to determine whether the first ambient light brightness value is lower than the first brightness threshold value. If the processing module determines that the first ambient light brightness value is lower than the first brightness threshold value, the above-mentioned second collection module is also used to collect a second image.
  • the above-mentioned first collection module is a telephoto camera
  • the second collection module is an infrared camera or a main camera.
  • the above preset operation is a zoom operation.
  • the above electronic device also includes a sensor module.
  • the above-mentioned sensor module is used to detect the brightness of the ambient light in response to a preset operation.
  • the above processing module is also used to determine the second ambient light brightness value.
  • the processing module is also used to determine whether the second ambient light brightness value is lower than the first brightness threshold and the second brightness threshold. If the processing module determines that the second ambient light brightness value is lower than the first brightness threshold and the second brightness threshold, the second acquisition module is also used to acquire a second image; the second acquisition module is an infrared camera.
  • the above-mentioned processing module is also used to determine whether the second ambient light brightness value is lower than the first brightness threshold value and greater than or equal to the second brightness threshold value. If the processing module determines that the second ambient light brightness value is lower than the first brightness threshold value and greater than or equal to the second brightness threshold value, the above-mentioned second collection module is also used to collect the second image; the second collection module is the main camera. Wherein, the above-mentioned second brightness threshold is smaller than the first brightness threshold.
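The threshold logic above amounts to a simple selection rule: below the second (lower) brightness threshold, the infrared camera assists; between the two thresholds, the main camera assists. A sketch with hypothetical threshold values (the patent does not fix concrete brightness numbers or units):

```python
def pick_auxiliary_camera(ambient_brightness, first_threshold, second_threshold):
    """Choose the auxiliary (second) camera for a telephoto preview.

    second_threshold < first_threshold. Returns None when the scene is
    bright enough that no assistance is needed. Threshold semantics
    follow the design above; concrete values are an assumption.
    """
    assert second_threshold < first_threshold
    if ambient_brightness >= first_threshold:
        return None                 # enough light: telephoto works alone
    if ambient_brightness >= second_threshold:
        return 'main_camera'        # dim scene: main camera still usable
    return 'infrared_camera'        # dark scene: rely on infrared light
```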
  • the above-mentioned first acquisition module is a color camera
  • the second acquisition module is a depth camera.
  • the above-mentioned processing module is further configured to determine the RGB value of the pixel in the first image in response to a preset operation.
  • the above-mentioned processing module is also used to determine whether the first image satisfies the first preset condition. If the processing module determines that the first image satisfies the first preset condition, the above-mentioned second acquisition module is also used to acquire a second image.
  • the first preset condition means that the first image includes a third area, and the difference in the RGB values of multiple pixels in the third area is less than the preset RGB threshold.
  • the above-mentioned first acquisition module is a black and white camera, and the second acquisition module is a color camera.
  • the processing module is also used to determine the gray value of the pixel in the first image in response to a preset operation.
  • the above-mentioned processing module is also used to determine whether the first image satisfies the second preset condition. If the processing module determines that the first image satisfies the second preset condition, the above-mentioned second acquisition module is also used to acquire the second image.
  • the second preset condition means that the first image includes a fourth area, and the difference in the gray values of multiple pixels in the fourth area is less than the preset gray threshold.
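Both preset conditions test for a region whose pixel values barely differ: RGB differences below an RGB threshold for the color case, gray-value differences below a gray threshold for the black-and-white case. A sketch that scans fixed-size blocks (the block size, scanning scheme, and threshold value are illustrative assumptions):

```python
import numpy as np

def has_flat_region(image, block=16, threshold=8):
    """Check whether the image contains a low-variation (flat) region.

    Works for an H x W grayscale array (second preset condition) or an
    H x W x 3 RGB array (first preset condition): a block qualifies
    when every channel's max-min spread is below the threshold.
    """
    h, w = image.shape[:2]
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = image[y:y + block, x:x + block]
            # Largest per-channel value spread inside the block.
            spread = patch.max(axis=(0, 1)) - patch.min(axis=(0, 1))
            if np.max(spread) < threshold:
                return True
    return False
```

When such a flat region is found, the preview camera alone lacks the contrast (or color) information to expose it well, which is when the design above enlists the second camera.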
  • the above-mentioned processing module is further configured to, before determining the exposure value of the second area, determine the second area in the first image where the image of the preset object is located, according to the position of the image of the preset object in the first area.
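Determining the second area from the preset object's position in the first area implies a coordinate mapping between the two cameras' images. A minimal sketch, assuming the cameras are aligned so that a linear scale-and-offset mapping suffices (a real device would also need per-unit calibration and parallax correction):

```python
def map_region_to_first_image(bbox, first_area, first_image_size):
    """Map a bounding box detected in the second image into the
    coordinate system of the first image.

    bbox and first_area are (x, y, w, h) in second-image pixels, where
    first_area is the part of the second image covered by the first
    camera's field of view. first_image_size is (width, height) of the
    first image. Linear mapping with no parallax is an assumption.
    """
    bx, by, bw, bh = bbox
    ax, ay, aw, ah = first_area
    fw, fh = first_image_size
    sx, sy = fw / aw, fh / ah      # scale from first area to first image
    return ((bx - ax) * sx, (by - ay) * sy, bw * sx, bh * sy)
```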
  • the above-mentioned first acquisition module is a telephoto camera
  • the second acquisition module is a main camera
  • the preset operation is a zoom operation.
  • the above electronic device also includes a sensor module.
  • the sensor module is used to detect the brightness of the ambient light in response to a preset operation.
  • the above-mentioned processing module is also used to determine the third ambient light brightness value.
  • the processing module is further configured to determine whether the third ambient light brightness value is lower than the first brightness threshold value. If the processing module determines that the third ambient light brightness value is lower than the first brightness threshold value, the above-mentioned second collection module is also used to collect a second image.
  • the processing module is used to adjust the exposure parameters of the first acquisition module so that the exposure value of the second area is equal to or greater than the first exposure threshold, including: the processing module is used to: if the preset object is still, adjust the exposure time of the first acquisition module, or adjust the exposure time and ISO sensitivity of the first acquisition module, so that the exposure value of the second area is equal to or greater than the first exposure threshold; if the preset object is moving, adjust the number of photographing frames of the first acquisition module, or adjust the number of photographing frames and the ISO sensitivity of the first acquisition module, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the processing module is also used to, in response to the photographing operation: if the preset object is still, perform OIS anti-shake on a frame of the first preview image collected by the first collection module to obtain the third image; if the preset object is moving, perform OIS anti-shake on the multiple frames of the first preview image collected by the first collection module to obtain the third image.
  • the above-mentioned first acquisition module and the second acquisition module may be the same.
  • this application provides an electronic device including one or more touch screens, one or more storage modules, and one or more processing modules; wherein the one or more storage modules store one or more programs; when the one or more processing modules execute the one or more programs, the electronic device is caused to implement the method described in the first aspect and any one of its possible design manners.
  • the present application provides an electronic device that includes a first camera, a second camera, and a display screen.
  • the electronic device also includes a processor and a memory.
  • the second camera is different from the first camera.
  • the memory, the display screen, the first camera and the second camera are coupled with the processor.
  • the above-mentioned processor is used to detect a preset operation.
  • the above-mentioned first camera is used to collect a first image in response to a preset operation.
  • the above-mentioned display screen is used to display the first image.
  • the above-mentioned second camera is used to collect a second image.
  • the above-mentioned display screen does not display the second image, the second image includes the first area, and the first area is the area corresponding to the field of view of the first camera.
  • the above-mentioned processor is further configured to detect an image including a preset object in the first area.
  • the preset object includes at least one of the following: human face, human body, plant, animal, building, or text.
  • the above-mentioned processor is further configured to determine the exposure value of the second area, where the second area is the area where the image of the preset object in the first image is located.
  • the above-mentioned processor is further configured to determine whether the exposure value of the second area is less than the first exposure threshold; if so, the processor adjusts the exposure parameters of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the above-mentioned first camera is also used to collect a first preview image using the adjusted exposure parameters.
  • the above-mentioned display screen is also used to display the first preview image.
  • the above-mentioned first camera is also used for taking a third image with the adjusted exposure parameter in response to the user's photographing operation.
  • the aforementioned memory is used to store the third image.
  • the above-mentioned exposure parameter includes at least one of exposure time, number of photographing frames, and ISO sensitivity.
  • the above-mentioned processor is configured to adjust the exposure parameters of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold, including: the processor is configured to: if the preset object is still, adjust the exposure time of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold; or, if the preset object is still, adjust the exposure time and ISO sensitivity of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the above-mentioned processor is further configured to perform OIS anti-shake on a frame of the first preview image collected by the first camera in response to the photographing operation to obtain the third image.
  • the above-mentioned processor is configured to adjust the exposure parameters of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold, including: the processor is configured to: if the preset object is moving, adjust the number of photographing frames of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold; or, if the preset object is moving, adjust the number of photographing frames and the ISO sensitivity of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the above-mentioned processor is further configured to perform OIS anti-shake and EIS anti-shake fusion on the multi-frame first preview image collected by the first camera in response to the photographing operation, to obtain the first Three images.
  • the above-mentioned processor is further configured to perform OIS anti-shake on the multiple frames of the first preview image collected by the first camera in response to the photographing operation, and to perform EIS anti-shake fusion on the images in the moving areas of the multiple frames of the first preview image, to obtain the third image.
  • the above-mentioned processor is further configured to determine whether the exposure value of the second area is greater than the second exposure threshold. If the processor determines that the exposure value of the second area is greater than the second exposure threshold, the processor is further configured to adjust the exposure parameter of the first camera so that the exposure value of the second area is equal to or less than the second exposure threshold.
  • the above-mentioned display screen is also used to display a first user interface in response to a preset operation, and the first user interface is used to request the user to confirm whether to use the second camera to assist the first camera in capturing images.
  • the above-mentioned processor is also configured to detect the first operation of the user on the first user interface.
  • the above-mentioned second camera is also used to collect a second image in response to the first operation.
  • the above-mentioned processor is further configured to detect a second operation of the user on the first user interface. Wherein, in response to the second operation, the second camera does not collect images.
  • the above-mentioned first user interface further includes a first preview image.
  • the above-mentioned processor is further configured to detect a third operation of the user on the first user interface.
  • the above-mentioned display screen is also used to display the second user interface in response to the third operation.
  • the second user interface includes a first preview image.
  • the first preview image is collected by the first camera.
  • the above-mentioned processor is further configured to detect a fourth operation of the user on the second user interface.
  • the above-mentioned second camera is also used to collect a second image in response to the fourth operation.
  • the above-mentioned first user interface includes a first control
  • the third operation is a user's click operation on the first control.
  • the third operation is a preset gesture.
  • the above-mentioned first camera is a telephoto camera
  • the second camera is a main camera or an infrared camera.
  • the first camera is a color camera
  • the second camera is a black and white camera.
  • the first camera is a visible light camera
  • the second camera is an infrared camera.
  • the first camera is a color camera
  • the second camera is a depth camera.
  • the first camera is a black and white camera
  • the second camera is a color camera.
  • the color camera includes at least any one of a main camera, a telephoto camera, or a wide-angle camera.
  • the above electronic device further includes an ambient light sensor.
  • the ambient light sensor is used to detect the brightness of the ambient light in response to a preset operation.
  • the processor is also used to determine the first ambient light brightness value.
  • the above-mentioned processor is further configured to determine whether the first ambient light brightness value is lower than the first brightness threshold value. If the processor determines that the first ambient light brightness value is lower than the first brightness threshold value, the second camera is also used to collect a second image.
  • the above-mentioned first camera is a telephoto camera
  • the second camera is an infrared camera or a main camera.
  • the preset operation is a zoom operation.
  • the electronic device also includes an ambient light sensor.
  • the ambient light sensor is used to detect the brightness of the ambient light in response to a preset operation.
  • the processor is also used to determine the second ambient light brightness value.
  • the processor is further configured to determine whether the second ambient light brightness value is lower than the first brightness threshold and the second brightness threshold. If the processor determines that the second ambient light brightness value is lower than the first brightness threshold and the second brightness threshold, the second camera is also used to collect a second image.
  • the second camera is an infrared camera.
  • the foregoing processor is further configured to determine whether the second ambient light brightness value is lower than the first brightness threshold value and greater than or equal to the second brightness threshold value. If the processor determines that the second ambient light brightness value is lower than the first brightness threshold value and greater than or equal to the second brightness threshold value, the second camera is also used to collect a second image. The second camera is the main camera. Wherein, the second brightness threshold is less than the first brightness threshold.
  • the above-mentioned first camera is a color camera
  • the second camera is a depth camera.
  • the above-mentioned processor is further configured to determine the RGB value of the pixel in the first image in response to a preset operation.
  • the processor is further configured to determine whether the first image meets the first preset condition. If the processor determines that the first image meets the first preset condition, the above-mentioned second camera is also used to collect a second image.
  • the first preset condition means that the first image includes a third area, and the difference in the RGB values of multiple pixels in the third area is less than the preset RGB threshold.
  • the above-mentioned first camera is a black and white camera, and the second camera is a color camera.
  • the above-mentioned processor is further configured to determine the gray value of the pixel in the first image in response to a preset operation.
  • the above-mentioned processor is further configured to determine whether the first image satisfies the second preset condition. If the processor determines that the first image satisfies the second preset condition, the above-mentioned second camera is also used to collect a second image.
  • the second preset condition means that the first image includes a fourth area, and the difference in the gray values of multiple pixels in the fourth area is less than the preset gray threshold.
  • the above-mentioned processor is further configured to, before determining the exposure value of the second area, determine the second area in the first image where the image of the preset object is located, according to the position of the image of the preset object in the first area.
  • the aforementioned first camera is a telephoto camera
  • the second camera is a main camera
  • the preset operation is a zoom operation.
  • the above electronic device also includes an ambient light sensor.
  • the ambient light sensor is used to detect the brightness of the ambient light.
  • the aforementioned processor is further configured to determine the third ambient light brightness value.
  • the processor is further configured to determine whether the third ambient light brightness value is lower than the first brightness threshold. If the processor determines that the third ambient light brightness value is lower than the first brightness threshold value, the aforementioned second camera is also used to collect the second image.
  • the above-mentioned processor is configured to adjust the exposure parameters of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold, including: the processor is configured to: if the preset object is still, adjust the exposure time of the first camera, or adjust the exposure time and ISO sensitivity of the first camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold; if the preset object is moving, adjust the number of photographing frames of the first camera, or adjust the number of photographing frames and the ISO sensitivity of the first camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the above-mentioned processor is further configured to, in response to the photographing operation: if the preset object is still, perform OIS anti-shake on a frame of the first preview image collected by the first camera to obtain the third image; if the preset object is moving, perform OIS anti-shake on the multiple frames of the first preview image collected by the first camera to obtain the third image.
  • the present application provides an electronic device, including one or more touch screens, one or more memories, and one or more processors; wherein the one or more memories store one or more programs; when When the one or more processors execute the one or more programs, the electronic device implements the method described in the first aspect and any one of its possible design manners.
  • the memory is also used to save the image taken by the first camera.
  • the memory can also be used to buffer the images collected by the second camera.
  • an embodiment of the present application provides a computer storage medium, the computer storage medium including computer instructions, which, when run on an electronic device, cause the electronic device to execute the method described in the first aspect and any one of its possible design manners.
  • embodiments of the present application provide a computer program product, which, when run on a computer, causes the computer to execute the method described in the first aspect and any one of its possible design manners.
  • for the beneficial effects that can be achieved by the electronic device described in any one of the possible design manners of the second to fifth aspects provided above, the computer storage medium described in the sixth aspect, and the computer program product described in the seventh aspect, refer to the beneficial effects in the first aspect and any one of its possible design manners, which will not be repeated here.
  • FIG. 1 is a schematic diagram of the hardware structure of an electronic device provided by an embodiment of the application.
  • FIG. 2 is a schematic block diagram of a method for capturing an image provided by an embodiment of the application
  • FIG. 3 is a flowchart of a method for capturing an image provided by an embodiment of the application
  • FIG. 4 is a schematic diagram of an example of a display interface of a mobile phone provided by an embodiment of the application.
  • FIG. 5 is a schematic diagram of an example of a display interface of another mobile phone provided by an embodiment of the application.
  • FIG. 6 is a schematic diagram of an example of a first image and a second image provided by an embodiment of this application;
  • FIG. 7 is a schematic diagram of an example of a field of view of a camera provided by an embodiment of the application.
  • FIG. 8 is a schematic diagram of an example of the field of view of another camera provided by an embodiment of the application.
  • FIG. 9 is a schematic diagram of an example of an image of a preset object in a second image provided by an embodiment of this application.
  • FIG. 10 is a schematic diagram of an example of a display interface of another mobile phone provided by an embodiment of the application.
  • FIG. 11 is a schematic diagram of an example of a first image provided by an embodiment of this application.
  • FIG. 12 is a flowchart of another method for photographing an image provided by an embodiment of the application.
  • FIG. 13 is a flowchart of another method for photographing an image provided by an embodiment of the application.
  • FIG. 14 is a schematic diagram of an example of a display interface of another mobile phone provided by an embodiment of the application.
  • FIG. 15A is a schematic diagram of an example of a display interface of another mobile phone provided by an embodiment of the application.
  • FIG. 15B is a schematic diagram of an example of a display interface of another mobile phone provided by an embodiment of the application.
  • FIG. 16 is a flowchart of another method for photographing an image provided by an embodiment of the application.
  • FIG. 17 is a flowchart of another method for capturing images according to an embodiment of the application.
  • FIG. 18 is a flowchart of another method for capturing images according to an embodiment of the application.
  • FIG. 19 is a flowchart of another method for shooting an image provided by an embodiment of the application.
  • FIG. 21 is a schematic structural diagram of a chip system provided by an embodiment of the application.
  • the terms “first” and “second” are only used for descriptive purposes, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • first camera and the second camera refer to different cameras.
  • the features defined with “first” and “second” may explicitly or implicitly include one or more of these features.
  • plural means two or more.
  • the embodiment of the present application provides a method for capturing an image, and the method may be applied to an electronic device including multiple cameras.
  • the aforementioned multiple cameras may include at least two types of cameras, such as a main camera, a telephoto camera, a wide-angle camera, an infrared camera, a depth camera, or a black and white camera.
  • each camera has its own advantages and disadvantages in different scenarios.
  • the following describes the characteristics (ie advantages and disadvantages) and applicable scenarios of the cameras involved in the embodiments of the present application.
  • the main camera has the characteristics of a large amount of light, high resolution, and a centered field of view.
  • the main camera is generally used as the default camera of an electronic device (such as a mobile phone). That is to say, in response to the user's operation of starting the "camera" application, the electronic device (such as a mobile phone) can start the main camera by default, and display the image collected by the main camera on the preview interface.
  • the viewfinder range of a camera is determined by the camera's field of view (FOV).
  • the telephoto camera has a longer focal length and is suitable for shooting objects far away from the mobile phone (that is, distant objects). However, the amount of light entering the telephoto camera is small. Using a telephoto camera to shoot images in low light scenes may affect the image quality due to insufficient light input. Moreover, the telephoto camera has a small field of view, which is not suitable for shooting images of larger scenes, that is, it is not suitable for shooting larger subjects (such as buildings or scenery, etc.).
  • the wide-angle camera has a larger field of view and can be suitable for shooting larger subjects (such as buildings or landscapes). However, the resolution of the wide-angle camera is low. In addition, the subject presented in the image captured by the wide-angle camera is easily distorted, that is, the image of the subject is easily deformed.
  • The infrared camera has the characteristic of a large spectral range. For example, an infrared camera can perceive not only visible light but also infrared light. In dark scenes (that is, when visible light is weak), the infrared camera's ability to perceive infrared light can be exploited to capture images, which can improve the image quality. However, the resolution of infrared cameras is low.
  • Depth camera: for example, time-of-flight (ToF) cameras and structured light cameras are both depth cameras.
  • the depth camera is a ToF camera as an example.
  • the ToF camera has the characteristic of accurately acquiring the depth information of the subject.
  • the ToF camera can be used in scenes such as face recognition. However, the resolution of the ToF camera is low.
  • Black and white camera: because the black-and-white camera has no color filter, it has a larger light input than a color camera. However, the images collected by the black-and-white camera can only show different levels of gray and cannot show the true colors of the subject. It should be noted that the above-mentioned main camera, telephoto camera, and wide-angle camera are all color cameras.
  • when the electronic device uses the preview camera to capture images, it can exploit the advantages that other cameras (called auxiliary cameras) have over the preview camera, controlling the auxiliary camera and the preview camera to work together to improve the image quality of the images captured by the preview camera.
  • in other words, the electronic device can exploit the advantages of each camera and control multiple cameras to work together to improve the image quality of the captured image.
  • the above-mentioned preview camera is a camera used to collect (or photograph) the preview image displayed by the electronic device. That is to say, the preview image displayed by the electronic device in the process of taking the image (or photo) is collected by the above-mentioned preview camera.
  • any of the aforementioned main camera, telephoto camera, wide-angle camera, or black and white camera can be used as the preview camera of the electronic device.
  • Any of the aforementioned infrared cameras, depth cameras, main cameras, telephoto cameras, wide-angle cameras, or black-and-white cameras can be used as auxiliary cameras of the electronic device.
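  The preview/auxiliary pairing described above can be sketched as a simple lookup. This is a hypothetical illustration only: the camera names, scene labels, and pairing table are assumptions drawn from the scenarios in the text, not the patent's actual implementation.

```python
# Hypothetical sketch of preview/auxiliary camera pairing.
# The pairing table reflects examples from the text:
#   telephoto preview in a dark scene -> main camera assists (larger light input)
#   main (color) preview in a dark scene -> black-and-white camera assists

PREVIEW_CAPABLE = {"main", "telephoto", "wide_angle", "black_white"}
AUX_CAPABLE = {"infrared", "depth", "main", "telephoto", "wide_angle", "black_white"}

AUX_FOR_SCENE = {
    ("telephoto", "dark"): "main",
    ("main", "dark"): "black_white",
}

def pick_auxiliary(preview_camera, scene):
    """Return an auxiliary camera suited to the preview camera and scene, or None."""
    if preview_camera not in PREVIEW_CAPABLE:
        raise ValueError(f"{preview_camera} cannot be a preview camera")
    aux = AUX_FOR_SCENE.get((preview_camera, scene))
    # The auxiliary camera must differ from the preview camera (first camera
    # and second camera refer to different cameras).
    if aux is not None and aux in AUX_CAPABLE and aux != preview_camera:
        return aux
    return None
```

  In a bright scene no auxiliary camera is needed, so the function simply returns `None`.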
  • the light input of the main camera is greater than the light input of the telephoto camera.
  • Electronic equipment may use a telephoto camera to collect images in a dark scene (that is, the telephoto camera is used as a preview camera).
  • in order to avoid affecting the image quality due to the insufficient light input of the telephoto camera, the main camera can be used as an auxiliary camera, taking advantage of its large light input, to assist the telephoto camera and improve the image quality of the images captured by the telephoto camera.
  • the light input of the black-and-white camera is greater than the light input of the color camera.
  • the electronic device may use a color camera to collect images in a dark scene (that is, the color camera is used as a preview camera).
  • the black and white camera can be used as an auxiliary camera, taking advantage of its large light input, to assist the color camera and improve the image quality of the images captured by the color camera.
  • an infrared camera has the ability to perceive visible light and infrared light; a visible light camera has the ability to perceive visible light, but does not have the ability to perceive infrared light. In dark scenes (such as evening, late night, or dark room), the intensity of visible light is low. The visible light camera cannot perceive light or perceive weak light, so it cannot collect a clear image of the preset object. The infrared light camera can perceive the infrared light emitted by a person or animal (that is, a preset object) with a temperature in the field of view, so that an image of the preset object can be collected.
  • the electronic device can use a visible light camera as the preview camera (ie, the first camera) to collect images in dark scenes.
  • taking advantage of the infrared camera's ability to perceive infrared light, the infrared camera can be used as an auxiliary camera (that is, the second camera) to assist the visible light camera, so as to improve the image quality of the image captured by the visible light camera.
  • a depth camera has the ability to acquire depth information of the preset object, and the depth information is used to identify the contour of the preset object.
  • when a color camera is used as a preview camera to capture images, it may not be possible to clearly capture the outline of the preset object, because the color of the photographed object (such as the aforementioned preset object) is close to the background color.
  • the depth camera can collect the depth information of the preset object, and the depth information can be used to detect the contour of the preset object.
  • when the electronic device uses a color camera as a preview camera to collect images, the depth camera can be used as an auxiliary camera to assist the color camera, so as to improve the image quality of the image captured by the color camera.
  • a color camera can collect color images.
  • the images collected by the black-and-white camera can only show different levels of gray, and cannot show the true colors of the subject. Therefore, using a black-and-white camera to take pictures may affect the image quality because the photographed objects (such as the above-mentioned preset objects) include colors that are similar and not easily distinguishable by grayscale.
  • the color camera can be used as an auxiliary camera to assist the black-and-white camera, so as to improve the image quality of the image captured by the black-and-white camera.
  • the electronic equipment in the embodiments of the present application may be a mobile phone, a tablet computer, a wearable device (such as a smart watch), a smart TV, a camera, a personal computer (PC), a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, or another device including the above-mentioned cameras.
  • the embodiment of the present application does not impose special restrictions on the specific form of the electronic device.
  • an electronic device 100 (such as a mobile phone) may include: a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, and power management Module 141, battery 142, antenna 1, antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, motor 191, indicator 192, camera 193, display 194, subscriber identification module (SIM) card interface 195, etc.
  • the sensor module 180 may include pressure sensors, gyroscope sensors, air pressure sensors, magnetic sensors, acceleration sensors, distance sensors, proximity light sensors, fingerprint sensors, temperature sensors, touch sensors, ambient light sensors, and bone conduction sensors.
  • the ambient light sensor in the embodiment of the present application may be used to detect the brightness of the ambient light.
  • the ambient light brightness collected by the ambient light sensor can be used for the electronic device 100 to determine whether the electronic device 100 is in a dark light scene. In other words, the ambient light brightness collected by the ambient light sensor can be used by the electronic device 100 to determine whether the electronic device 100 needs to activate the auxiliary camera to assist the preview camera to take pictures.
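  The dark-scene decision described above can be sketched as a threshold check on the ambient light sensor reading. The lux threshold and the set of cameras that benefit from assistance are illustrative assumptions; the patent does not specify concrete values.

```python
# Minimal sketch of the dark-scene check driven by the ambient light sensor.
# DARK_SCENE_LUX_THRESHOLD is an assumed illustrative value.

DARK_SCENE_LUX_THRESHOLD = 50.0  # assumption: below ~50 lux counts as dark

def is_dark_scene(ambient_lux):
    """Decide whether the device is in a dark-light scene."""
    return ambient_lux < DARK_SCENE_LUX_THRESHOLD

def should_start_auxiliary(ambient_lux, preview_camera):
    # Start the auxiliary camera only when the scene is dark and the preview
    # camera is one whose light input is a known weakness (assumed set).
    return is_dark_scene(ambient_lux) and preview_camera in {"telephoto", "main"}
```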
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than shown, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the different processing units may be independent devices or integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate operation control signals according to the instruction operation code and timing signals to complete the control of fetching instructions and executing instructions.
  • a memory may also be provided in the processor 110 to store instructions and data.
  • the memory in the processor 110 is a cache memory.
  • the memory can store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instruction or data again, it can call it directly from the memory. This avoids repeated accesses, reduces the waiting time of the processor 110, and improves the efficiency of the system.
  • the processor 110 may include one or more interfaces.
  • the charging management module 140 is used to receive charging input from the charger.
  • the power management module 141 is used to connect the battery 142, the charging management module 140 and the processor 110.
  • the power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 and the charging management module 140 may also be provided in the same device.
  • the wireless communication function of the electronic device 100 can be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • the antenna 1 of the electronic device 100 is coupled with the mobile communication module 150
  • the antenna 2 is coupled with the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 100 may send the above-mentioned first account and login password to other devices through wireless communication technology.
  • the electronic device 100 implements a display function through a GPU, a display screen 194, an application processor, and the like.
  • the GPU is an image processing microprocessor, which is connected to the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display screen 194 includes a display panel.
  • the display screen 194 may be used to display an image collected by a preview camera (ie, a preview image).
  • the display screen can also be used to display various interactive interfaces between the electronic device 100 and the user, such as an interface for requesting the user to confirm whether to enter the smart shooting mode.
  • the smart shooting mode described in the embodiment of the present application refers to a mode in which the electronic device 100 starts the auxiliary camera to assist the preview camera to take pictures when the electronic device 100 uses the preview camera to collect images.
  • the electronic device 100 can implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, and an application processor.
  • the ISP is used to process the data fed back from the camera 193.
  • the camera 193 is used to capture still images, moving images or videos.
  • the electronic device 100 may include N cameras 193, where N is a positive integer greater than 2.
  • the N cameras 193 may include at least two types of cameras, such as a main camera, a telephoto camera, a wide-angle camera, an infrared camera, a depth camera, or a black and white camera.
  • any camera such as a main camera, a telephoto camera, a wide-angle camera, or a black and white camera can be used as a preview camera (that is, the first camera) of the electronic device 100.
  • Any of the aforementioned infrared cameras, depth cameras, main cameras, telephoto cameras, wide-angle cameras, or black-and-white cameras can be used as an auxiliary camera (ie, a second camera) of the electronic device 100.
  • the preview camera is different from the auxiliary camera.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by running instructions stored in the internal memory 121.
  • the processor 110 may execute instructions stored in the internal memory 121, and the internal memory 121 may include a program storage area and a data storage area.
  • the storage program area can store an operating system, an application program (such as a sound playback function, an image playback function, etc.) required by at least one function, and the like.
  • the data storage area can store data (such as audio data, phone book, etc.) created during the use of the electronic device 100.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), and the like.
  • the electronic device 100 can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. For example, music playback, recording, etc.
  • the button 190 includes a power-on button, a volume button, and so on.
  • the button 190 may be a mechanical button. It can also be a touch button.
  • the motor 191 can generate vibration prompts.
  • the motor 191 can be used for incoming call vibration notification, and can also be used for touch vibration feedback.
  • the indicator 192 may be an indicator light, which may be used to indicate the charging status, power change, or to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to the SIM card.
  • the SIM card can be inserted into the SIM card interface 195 or pulled out from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or M SIM card interfaces, and M is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the following takes the above-mentioned electronic device 100 being a mobile phone as an example to introduce the method of the embodiment of the present application.
  • the mobile phone includes multiple cameras (such as N cameras).
  • the first camera of the plurality of cameras may be used as a preview camera, and the second camera may be used as an auxiliary camera.
  • the embodiment of the present application describes the principle of improving the image quality in the embodiment of the present application with reference to FIG. 2.
  • when the mobile phone uses the first camera 210 (i.e., the preview camera) to collect images in certain scenes, some disadvantages of the first camera may result in poor image quality of the images collected by the first camera 210. For example, the preset object (such as a human face) cannot be clearly distinguished from the image.
  • compared with the first camera 210, the second camera 220 (i.e., the auxiliary camera) has certain advantages, for example, a large amount of light input, so the preset object can be clearly distinguished from the image collected by the second camera 220 in the same scene.
  • therefore, the second camera 220 can be activated to capture images.
  • the first image 211 collected by the first camera 210 is displayed as a preview image on the preview interface, while the second image 221 collected by the second camera 220 is not displayed on the preview interface.
  • the second image 221 may also be referred to as a background image.
  • the positions of the first camera 210 and the second camera 220 in the mobile phone are similar. Therefore, generally speaking, if a preset object is included in the second image 221, the preset object is also included in the first image 211. Since the second camera 220 has the above advantages compared with the first camera 210; therefore, if the second image 221 includes a preset object; then the preset object can be clearly distinguished from the second image 221. In this way, the mobile phone can execute 222 shown in FIG. 2 (that is, detect whether the second image 221 includes a preset object).
  • the mobile phone can locate the position of the preset object in the second image 221; then, according to the position of the preset object in the second image, and the second camera 220 and The corresponding relationship of the field of view of the first camera 210 determines the position of the preset object in the first image (for example, the area where the image is located). That is, the operation of "locating a preset object" in 212 shown in FIG. 2 is performed.
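  The position mapping via the field-of-view correspondence described above can be sketched geometrically. This is a hedged simplification: it assumes the two cameras share an optical center and that the preview camera's FOV is a centered sub-region of the auxiliary camera's FOV; a real device would also apply a calibrated offset between the two lenses.

```python
# Sketch: map a detected region from the auxiliary camera's image to the
# preview camera's image using the FOV correspondence (simplified model).

def map_box_aux_to_preview(box, aux_size, preview_size, fov_ratio):
    """box = (x, y, w, h) in auxiliary-image pixels.
    fov_ratio = preview FOV / auxiliary FOV, with 0 < fov_ratio <= 1."""
    aw, ah = aux_size
    pw, ph = preview_size
    # The preview camera's FOV covers a centered crop of the auxiliary image.
    crop_w, crop_h = aw * fov_ratio, ah * fov_ratio
    crop_x0, crop_y0 = (aw - crop_w) / 2, (ah - crop_h) / 2
    # Scale factors from crop pixels to preview-image pixels.
    sx, sy = pw / crop_w, ph / crop_h
    x, y, w, h = box
    return ((x - crop_x0) * sx, (y - crop_y0) * sy, w * sx, h * sy)
```

  For example, with both sensors at 4000x3000 and a preview FOV half as wide as the auxiliary FOV, a box at the crop's top-left corner maps to the preview image's origin.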
  • the preset objects described in the embodiments of the present application may include a human face, a human body, an animal body (such as a cat's body) or a whole body (such as a cat's whole body, including the cat's face and body), an animal face ( Any object such as a cat’s face), plants, buildings, or text.
  • the mobile phone can detect and adjust the exposure parameters of the above-mentioned first camera (that is, perform the operation of "detecting the exposure value and adjusting the exposure parameters" in 212 shown in FIG. 2) to increase the above-mentioned exposure value.
  • the image quality of the image captured by the first camera can be improved. That is to say, after the above-mentioned exposure value is updated (such as increasing the exposure value), the first camera can shoot an image with higher image quality (such as the third image).
  • Exposure value: the exposure value is used to represent the combination of shooting parameters (camera settings) used when the camera captures an image.
  • the shooting parameters are also called exposure parameters.
  • the size of the exposure value is expressed by the exposure level.
  • the exposure value can be -3, -2, -1, 0, 1, 2, or 3, etc.
  • the size of the exposure value is determined by multiple exposure parameters.
  • the multiple exposure parameters may include: exposure time, number of photographed frames, ISO sensitivity, aperture, and so on.
  • Exposure time: the time for which the shutter is open in order to project light onto the photosensitive surface of the camera's image sensor while the camera takes a picture.
  • the number of photographed frames is the number of images captured by the camera per second.
  • ISO sensitivity is the sensitivity of the camera (ie the image sensor in the camera) to brightness.
  • ISO is the abbreviation of the International Organization for Standardization. The organization specifies the camera's sensitivity to brightness, which is represented by values such as ISO 100 and ISO 400.
  • Aperture: a device used to control the amount of light that passes through the lens of the camera and reaches the photosensitive surface (ie, the image sensor) of the camera.
  • the aperture of a camera is not easy to adjust automatically.
  • at least one exposure parameter such as the exposure time, the number of photographed frames, or the ISO sensitivity may be adjusted to achieve the above-mentioned purpose of updating the exposure value.
  • the longer the exposure time the greater the exposure value; the greater the number of photo frames, the greater the exposure value; the higher the ISO sensitivity, the greater the exposure value.
  • the method of adjusting the aperture to increase the exposure value is not excluded.
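  The parameter relationships stated above (longer exposure time, more frames, or higher ISO each raise the exposure value) can be sketched as a simple adjustment loop. The step sizes, ordering, and limits are illustrative assumptions, not values from the patent.

```python
# Illustrative sketch of raising the exposure value by adjusting exposure
# parameters. Preference order and caps are assumptions: lengthen exposure
# time first, then raise ISO, and finally capture more frames.

def raise_exposure(params, target_steps=1):
    """params: dict with 'exposure_time_ms', 'frames', 'iso'.
    Returns a new dict nudged toward a higher exposure value."""
    p = dict(params)  # do not mutate the caller's settings
    for _ in range(target_steps):
        if p["exposure_time_ms"] < 100:   # assumed cap on exposure time
            p["exposure_time_ms"] *= 2
        elif p["iso"] < 3200:             # assumed cap on ISO sensitivity
            p["iso"] *= 2
        else:                             # fall back to more frames per capture
            p["frames"] += 1
    return p
```

  Each doubling of exposure time or ISO corresponds to roughly one exposure level, which matches the stepwise exposure values (-3 … 3) mentioned above.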
  • the mobile phone includes a main camera and a telephoto camera.
  • the main camera has the characteristics of a large amount of light, high resolution, and a centered field of view.
  • the telephoto camera has a longer focal length, which is suitable for shooting subjects far away from the mobile phone (that is, distant objects); however, the amount of light entering is small.
  • the main camera can be used as an auxiliary camera (that is, the second camera) to assist the telephoto camera by taking advantage of the large amount of light input by the main camera, so as to improve the image quality of the image captured by the telephoto camera.
  • an embodiment of the present application provides a method for capturing an image, and the method may be applied to a mobile phone including a main camera and a telephoto camera.
  • the method may include S301-S310.
  • the zoom operation is used to trigger the telephoto camera of the mobile phone to collect images.
  • the mobile phone can activate the telephoto camera, and the telephoto camera can collect images.
  • This zoom operation is a preset operation.
  • the lens of the camera in a mobile phone is generally a fixed-focus lens, and the adjustable range of the focal length is very small.
  • zooming is achieved by switching cameras with different focal lengths.
  • the above zoom operation can be used to trigger a high-power camera of a mobile phone (such as a camera with a focal length of 3 times/5 times that of the main camera, such as a telephoto camera) to collect images. That is, in response to the zoom operation, the preview camera of the mobile phone can be switched from a low-power camera (ie a camera with a smaller focal length, such as a main camera) to a high-power camera (ie a camera with a larger focal length, such as a telephoto camera).
  • a low-power camera ie a camera with a smaller focal length, such as a main camera
  • high-power camera ie a camera with a larger focal length, such as a telephoto camera
  • the above-mentioned zoom operation may also be referred to as a variable magnification operation.
  • the above zoom operation can be used to trigger the mobile phone to start the telephoto camera and zoom the focal length of the camera (such as the telephoto camera) to any optical magnification relative to the default camera (such as the main camera), such as 2 times, 3 times, 5 times, 10 times, 15 times, or 20 times.
  • the method of the embodiment of the present application is introduced by taking the optical magnification of 5 times that is triggered by the above-mentioned zooming operation as an example.
  • the optical magnification that triggers the magnification by the above-mentioned variable magnification operation may also be 10 times or other data, and the specific value of the optical magnification is not limited in the embodiment of the present application.
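  Since phone "zoom" is implemented here by switching between fixed-focal-length cameras, the camera selection can be sketched as a threshold on the requested magnification. The 3x switch-over point is an illustrative assumption (the text mentions telephoto cameras with roughly 3x/5x the main camera's focal length).

```python
# Sketch of the preview-camera switch on zoom. The switch-over ratio is an
# assumed value; real devices tune it per camera module.

TELEPHOTO_SWITCH_RATIO = 3.0  # assumption: telephoto takes over at 3x and above

def preview_camera_for_zoom(zoom_ratio):
    """Select the preview camera for a requested optical magnification."""
    if zoom_ratio >= TELEPHOTO_SWITCH_RATIO:
        return "telephoto"  # high-power camera (larger focal length)
    return "main"           # low-power camera (smaller focal length)
```

  With this rule, the 5x zoom used as the running example in the text would switch the preview camera from the main camera to the telephoto camera.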
  • the zooming operation described above may be an operation input on the image preview interface to control the zoom of the camera of the mobile phone when the image preview interface is displayed on the mobile phone.
  • the mobile phone can start the default camera (such as the main camera) of the mobile phone in response to the user's operation of starting the "camera” application (operation 1 shown in (a) of FIG. 4).
  • the operation 1 may be a single-click operation.
  • the mobile phone can display the image preview interface shown in (b) of FIG. 4, which includes the viewfinder 401, the camera conversion button 408, the shooting button 407, the album button 406, the flash option 411, the filter option 412, the "Video" option, the "Photograph" option, the "Panorama" option, etc.
  • the view frame 401 shown in (b) of FIG. 4 is used to display the preview image (such as the preview image 402) collected by the above-mentioned default camera.
  • the preview image 402 is the same as the image 602 shown in FIG. 6.
  • the aforementioned zoom operation may be a two-finger spread operation (such as operation 2) input by the user on the preview image 402.
  • the viewing frame 401 as shown in (b) of FIG. 4 also displays an optical magnification indicator 409 of the mobile phone.
  • the optical magnification indicator 409 is "1×", which indicates an optical magnification of 1 time.
  • the mobile phone may display the image preview interface shown in (c) in FIG. 4.
  • the image preview interface shown in (c) of FIG. 4 includes an optical magnification indicator 410 (for example, "5 ⁇ ").
  • "5 ⁇ ” means that the optical magnification is 5 times. That is to say, in response to the above operation 2 (that is, the zoom operation), the optical magnification of the camera used by the mobile phone has changed.
  • the flash option 411 shown in (b) of FIG. 4 is used to trigger the mobile phone to turn on or turn off the flash when taking a photo.
  • the filter option 412 is used to select the shooting style to be adopted when the phone takes photos.
  • the shooting style can include: standard, fresh, blues, and black and white.
  • the "video” option is used to trigger the mobile phone to display the viewfinder interface of the video (not shown in the drawings).
  • the “photograph” option is used to trigger the mobile phone to display the viewfinder interface for taking pictures (the image preview interface shown in (b) in Figure 4).
  • the “panoramic” option is used to trigger the mobile phone to display the viewfinder interface of the panoramic photo taken by the mobile phone (not shown in the drawings).
  • the camera conversion key 408 is used to trigger the mobile phone to switch between the front camera and the rear camera to collect images.
  • the shooting key 407 is used to control the mobile phone to save the preview image displayed in the viewfinder 401.
  • the album key 406 is used to view the images saved in the mobile phone.
  • the two-finger spread operation input by the user on the preview image 402 can be used to trigger the mobile phone to zoom in on the preview image.
  • the user wants to shoot a subject far away from the mobile phone, so the user triggers the mobile phone to zoom in on the preview image so that the image of the distant subject can be seen more clearly on the image preview interface.
  • the telephoto camera has a longer focal length and is suitable for shooting subjects far away from the mobile phone. Therefore, the aforementioned two-finger spread operation is used to trigger the mobile phone to activate the telephoto camera, so as to photograph an object far away from the mobile phone (that is, a distant object).
  • the first camera (such as a telephoto camera) and the second camera (such as the main camera) are both front cameras; or, the first camera and the second camera are both rear cameras.
  • the above zoom operation may also occur in a focus mode based on tracking of an object (ie, the subject), with the subject moving from near to far.
  • in the focus mode based on object tracking, the mobile phone can receive the user's selection operation of the photographic object 501 shown in (a) in FIG. 5.
  • the mobile phone can detect the position change of the tracking object.
  • S301 may specifically be: the mobile phone detects that the tracking object has moved from near to far, and the moving distance is greater than a preset distance threshold. For example, the mobile phone detects that the tracking object 501 has moved from the position shown in (a) in FIG. 5 to the position shown in (b) in FIG. 5, and that the moving distance is greater than the preset distance threshold.
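  The tracking-based trigger in S301 can be sketched as a distance comparison. The threshold value and the use of meters are illustrative assumptions; the patent only states that the moving distance must exceed a preset threshold.

```python
# Sketch of the automatic zoom trigger in object-tracking focus mode:
# zoom when the tracked subject has moved away by more than a preset distance.

PRESET_DISTANCE_THRESHOLD_M = 2.0  # assumed threshold, in meters

def tracking_triggers_zoom(start_distance_m, current_distance_m):
    """True when the subject moved from near to far past the threshold."""
    moved_away = current_distance_m - start_distance_m
    # Only movement away from the phone counts; moving closer never triggers.
    return moved_away > PRESET_DISTANCE_THRESHOLD_M
```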
  • the zooming operations described in the embodiments of the present application include but are not limited to the above two zooming operations.
  • the zoom operation described in the embodiment of the present application may include all operations that can trigger the mobile phone to start the telephoto camera (that is, trigger the telephoto camera of the mobile phone to collect images).
  • the zooming operation may also be an automatic zooming operation.
  • the mobile phone can automatically trigger the above zooming operation.
  • the embodiment of the present application will not repeat them here.
  • the telephoto camera of the mobile phone collects the image a, and the mobile phone displays the image a collected by the telephoto camera.
  • the mobile phone in response to the aforementioned zoom operation, can activate the telephoto camera.
  • the telephoto camera can capture images (such as image a).
  • the mobile phone can use the image a collected by the telephoto camera as a preview image and display it on the image preview interface.
  • the image a in the embodiment of the present application is the first image.
  • the zoom operation is operation 2 shown in (b) in FIG. 4.
  • in response to operation 2 (ie, the zoom operation), the mobile phone can display the preview image 404 shown in (c) of FIG. 4.
  • the preview image 404 is an image collected by a telephoto camera, such as the aforementioned image a.
  • the preview image 402 shown in (b) in FIG. 4 is an image collected by the main camera
  • the preview image 404 shown in (c) in FIG. 4 is an image collected by a telephoto camera.
  • the viewing range of the preview image 402 is larger than the viewing range of the preview image 404.
  • the area occupied by the image of the subject 405 in the preview image 404 is larger than the area occupied by the image of the subject 405 in the preview image 402. In other words, the area ratio of the image of the shooting object 405 in the preview image 404 is greater than the area ratio of the image of the shooting object 405 in the preview image 402. Since the light input of the telephoto camera is small, the image quality of the preview image 404 is poor, and the user cannot clearly view the image of the photographing object 405 from the preview image 404.
  • the main camera can be used as an auxiliary camera to assist the telephoto camera by taking advantage of the large light input of the main camera.
  • in response to the aforementioned zoom operation, the mobile phone can not only activate the telephoto camera, but also activate the main camera.
  • the method in this embodiment of the present application further includes S303.
  • the main camera of the mobile phone collects the image b, and the mobile phone does not display the image b.
  • the main camera of the mobile phone can collect the image b.
  • the image b captured by the main camera will not be displayed on the preview interface.
  • the preview image 404 displayed by the mobile phone is the image captured by the telephoto camera (that is, image a).
  • the mobile phone will not display the image b collected by the main camera, that is, the image b will not be presented to the user on the mobile phone.
  • the mobile phone can cache the image b collected by the main camera.
  • the mobile phone can also cache the image a collected by the telephoto camera. Exemplarily, it is cached in the internal memory 121 of the mobile phone.
  • the images collected by any camera can be cached by the mobile phone. Specifically, taking the mobile phone buffering the image b collected by the main camera as an example: starting from the moment the main camera collects the image b, the mobile phone can buffer the image b for a second preset time period. When the second preset time period expires, the mobile phone can delete the image b. The image b can also be cached in the internal memory 121 until it is periodically deleted or replaced by other cached images.
  • the mobile phone displays the image a collected by the telephoto camera as a preview image in the viewfinder frame, instead of displaying the image b collected by the main camera; therefore, the image a can be called the preview image and the image b can be called the background image.
  • the image b in the embodiment of the present application is the second image.
  • the mobile phone can start the main camera in response to the user's operation of starting the "camera" application (operation 1 shown in (a) in Figure 4).
  • in one implementation, in response to the aforementioned zooming operation, the mobile phone can activate the telephoto camera, which collects images, and turn off the main camera, which then stops collecting images.
  • in this embodiment, in response to the zoom operation, the mobile phone can start the telephoto camera, which collects images, but the mobile phone does not turn off the main camera; the main camera continues to collect images to assist the telephoto camera in taking pictures.
  • the image quality of the image b can refer to the image quality of the preview image 402 shown in (b) in FIG. 4 .
  • the user can clearly view the image of the photographic subject 403 from the preview image 402, but cannot clearly view the photographic subject 405 (such as a human face) from the preview image 404 (ie image a).
  • the shooting object 403 and the shooting object 405 are the same person.
  • the small amount of light input by the telephoto camera may result in poor image quality of the image a collected by the telephoto camera.
  • the image a includes an image of a preset object (such as a human face), it is difficult for the user to clearly distinguish the preset object from the image a.
  • the main camera has a large amount of light, and the image b collected by the main camera has a higher image quality.
  • the image b includes the image of the preset object, the user can clearly distinguish the preset object from the image b.
  • the positions of the telephoto camera and the main camera in the mobile phone are similar. Therefore, generally speaking, if the preset object is included in the image b, the preset object is also included in the image a. In this way, even if the preset object cannot be clearly distinguished from the image a, the preset object can be clearly distinguished from the image b.
  • the method in the embodiment of the present application further includes S304.
  • the mobile phone detects that the image of the preset object is included in the first area of the image b.
  • the image b includes a first area, which corresponds to the area of the initial field of view of the telephoto camera.
  • the initial field of view of the telephoto camera refers to the field of view of the telephoto camera before zooming.
  • the field of view of the telephoto camera also changes.
  • the longer the focal length of the telephoto camera, the smaller the field of view of the telephoto camera; the shorter the focal length of the telephoto camera, the larger the field of view of the telephoto camera.
  • the center point of the initial field of view of the telephoto camera coincides with the center point of the field of view of the main camera.
  • there are also some telephoto cameras whose center point of the initial field of view does not coincide with the center point of the main camera's field of view.
  • the center point of the initial field of view of the telephoto camera coincides with the center point of the field of view of the main camera as an example to introduce the method of the embodiment of the present application.
  • the field of view of the telephoto camera (such as the initial field of view) is smaller than the field of view of the main camera.
  • the dotted rectangular frame 620 shown in FIG. 6 represents the field of view of the main camera
  • the dotted rectangular frame 610 shown in FIG. 6 represents the field of view of the telephoto camera.
  • the field of view 610 of the telephoto camera is smaller than the field of view 620 of the main camera.
  • image 601 is the first image (ie image a) collected by the telephoto camera
  • image 602 is the second image (ie image b) collected by the main camera.
  • the above-mentioned first area may be an area in the image 602 (ie, image b) that corresponds to the field of view of the telephoto camera (such as the dashed rectangular frame 610).
  • the first area is the area corresponding to the dotted rectangular frame 610 in the image 602 (ie, the image b).
  • the first area includes an image of a preset object 603 (such as a human face).
  • the mobile phone can save the correspondence between the field of view of the telephoto camera and the field of view of the main camera. In this way, the mobile phone can determine the first area included in the image b according to the correspondence between the field of view of the telephoto camera and the field of view of the main camera, and then determine whether the first area includes an image of a preset object.
  • the method for the mobile phone to determine whether the image of the preset object is included in the first region of the image b may refer to the method in the conventional technology for identifying whether an image includes the image of a preset object, which will not be repeated here.
  • the mobile phone may adopt any one of the following implementation manners (1) and implementation manner (2) to determine the first area of the image b.
  • the mobile phone can save the two-dimensional coordinates of two diagonal corners (such as the upper left corner and the lower right corner, or the upper right corner and the lower left corner) of the initial field of view of the telephoto camera in the coordinate system of the main camera's field of view. These two-dimensional coordinates can reflect the corresponding relationship between the field of view of the telephoto camera and the field of view of the main camera.
  • the coordinate origin of the coordinate system of the view range of the main camera is any corner (such as the upper left corner or the lower left corner) in the view range of the main camera, and the x-axis and the y-axis are two adjacent sides.
  • FIG. 7 shows an example of a coordinate system of the field of view 720 of the main camera.
  • the point o is the origin of coordinates
  • the x-axis is the lower side of the field of view 720
  • the y-axis is the left side of the field of view 720.
  • the mobile phone can save the two-dimensional coordinates A1 (x1, y1) and A2 (x2, y2) of the upper left corner A1 and the lower right corner A2 of the initial field of view 710 of the telephoto camera in the xoy coordinate system shown in FIG. 7.
  • the above-mentioned two-dimensional coordinates A1 (x1, y1) and A2 (x2, y2) may reflect the correspondence between the field of view of the telephoto camera and the field of view of the main camera.
  • the mobile phone can determine the first area of the image b according to the saved two-dimensional coordinates A1 (x1, y1) and A2 (x2, y2).
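The corner-coordinate scheme of implementation (1) can be sketched as follows. This is only an illustrative sketch (the patent specifies no code); the function name, the row-list representation of image b, and the sample coordinates are assumptions. The coordinate origin o is taken as the lower-left corner of the main camera's field of view, as in FIG. 7, so the y axis grows upward while row indices grow downward.

```python
# Sketch of implementation (1): the phone stores the two diagonal corners of the
# telephoto camera's initial field of view, expressed in the coordinate system of
# the main camera's field of view, and uses them to crop the first area out of
# image b. All names and numbers are illustrative, not taken from the patent.

def first_area_from_corners(image_b, corner_a1, corner_a2):
    """Return the sub-image of image_b delimited by the saved diagonal corners.

    image_b   -- 2-D list (rows of pixel values) collected by the main camera
    corner_a1 -- (x1, y1), upper-left corner A1 of the telephoto initial FOV
    corner_a2 -- (x2, y2), lower-right corner A2 of the telephoto initial FOV
    The origin o is the lower-left corner of the main camera's FOV (FIG. 7),
    so y grows upward while row indices grow downward.
    """
    height = len(image_b)
    x1, y1 = corner_a1
    x2, y2 = corner_a2
    # Convert FOV coordinates (origin at lower-left) to row indices (origin at top).
    top_row = height - y1
    bottom_row = height - y2
    return [row[x1:x2] for row in image_b[top_row:bottom_row]]
```

With a 6x6 image b and corners A1(1, 5) and A2(4, 2), the sketch returns the 3x3 block of pixels between the two corners.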
  • the mobile phone can divide the initial field of view of the telephoto camera into multiple areas 1 at equal intervals (such as A*B areas 1), and divide the field of view of the main camera into multiple areas 2 at equal intervals (such as C*D areas 2).
  • the size (such as area) of the region 1 and the region 2 may be the same or different.
  • the mobile phone can save the correspondence between the multiple regions 1 and a subset of the multiple regions 2 (for example, the regions 2 that fall within the first area); this correspondence can reflect the corresponding relationship between the field of view of the telephoto camera and the field of view of the main camera.
  • the rectangular frame 810 shown in (a) of FIG. 8 is used to indicate the initial field of view of the telephoto camera (denoted as the field of view 810), and the rectangular frame 820 shown in (b) of FIG. 8 represents the field of view of the main camera (denoted as the field of view 820).
  • the nine areas 1 of the field of view 810 shown in (a) of FIG. 8 may correspond to one area 2 of the field of view 820 shown in (b) of FIG. 8.
  • the mobile phone can store the correspondence between the plurality of areas 1 in the field of view 810 shown in (a) of FIG. 8 and a subset of the plurality of areas 2 in the field of view 820 shown in (b) of FIG. 8. This subset may be the areas 2 that fall within the field of view 810 (that is, the field of view corresponding to the first area) shown in (b) of FIG. 8, such as the area 2 corresponding to the thick-line frame b1 and the area 2 corresponding to the thick-line frame b2.
  • the 9 areas 1 in the thick-line frame a1 in the field of view 810 shown in (a) of FIG. 8 correspond to the area 2 corresponding to the thick-line frame b1 in the field of view 820 shown in (b) in FIG. 8.
  • the nine areas 1 in the thick-line frame a2 in the field of view range 810 shown in (a) of FIG. 8 correspond to the area 2 corresponding to the thick-line frame b2 of the field of view 820 shown in (b) in FIG. 8.
  • the nine areas 1 in the thick-line frame a4 in the field of view range 810 shown in (a) of FIG. 8 correspond to the area 2 corresponding to the thick-line frame b4 of the field of view 820 shown in (b) in FIG. 8.
  • the nine areas 1 in the thick-line frame a5 in the field of view range 810 shown in (a) of FIG. 8 correspond to the area 2 corresponding to the thick-line frame b5 of the field of view 820 shown in (b) in FIG. 8.
  • the correspondence between the multiple regions 1 and the partial regions 2 of the multiple regions 2 may reflect the correspondence between the field of view of the telephoto camera and the field of view of the main camera.
  • the mobile phone can save the correspondence between multiple regions 1 and some of the multiple regions 2 and determine the first region of the image b according to the saved correspondence.
  • each area 1 may correspond to a pixel point in the initial field of view of the telephoto camera
  • each area 2 described above may correspond to a pixel point in the field of view of the main camera.
  • A*B is the resolution of the telephoto camera
  • C*D is the resolution of the main camera.
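The grid scheme of implementation (2) can be sketched as below. This is an illustrative sketch only: the grid sizes, the offset of the first area inside the main camera's field of view, and the 3-to-1 block scale (one area 2 per 3x3 block of areas 1, as in FIG. 8) are assumptions, not patent values.

```python
# Sketch of implementation (2): divide the telephoto initial FOV into A*B areas 1
# and the main camera FOV into C*D areas 2, then save the correspondence between
# each area 1 and the area 2 it falls into. Parameters are illustrative.

def build_area_correspondence(a, b, c, d, offset_col, offset_row, scale):
    """Map each area 1 (telephoto grid cell) to one area 2 (main-camera grid cell).

    scale -- how many areas 1 fit along one side of a single area 2
             (e.g. 3 when a 3x3 block of areas 1 corresponds to one area 2,
             as in the FIG. 8 example).
    offset_col/offset_row -- position of the first area inside the C*D grid.
    """
    correspondence = {}
    for row1 in range(b):
        for col1 in range(a):
            col2 = offset_col + col1 // scale
            row2 = offset_row + row1 // scale
            assert col2 < c and row2 < d, "area 1 must map inside the main FOV"
            correspondence[(col1, row1)] = (col2, row2)
    return correspondence
```

For instance, a 6x6 telephoto grid offset by (3, 3) inside an 8x8 main-camera grid, with scale 3, maps all 36 areas 1 onto a 2x2 block of areas 2; this stored mapping is what the phone consults to locate the first area in image b.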
  • the method for the mobile phone to determine the first region of the image b includes, but is not limited to, the methods described in the foregoing implementation (1) and implementation (2).
  • the corresponding relationship between the field of view of the telephoto camera and the field of view of the main camera includes, but is not limited to, the corresponding relationship described in the foregoing implementation (1) and implementation (2).
  • the mobile phone can use various methods to save the correspondence between the field of view of the telephoto camera and the field of view of the main camera, for example, a table is used to save the correspondence. In the embodiments of the present application, there is no restriction on the specific manner in which the mobile phone saves the above-mentioned corresponding relationship.
  • the mobile phone determines the exposure value of the second area.
  • the second area is the area where the image of the aforementioned preset object in image a is located.
  • the mobile phone can determine the second area in image a where the image of the preset object is located according to the position of the image of the preset object in the first area of image b, and detect the exposure value of the second area.
  • the image of the aforementioned preset object may occupy a part of the position of the first area (that is, a part of the first area of the image b).
  • the image of the preset object 603 (such as a human face) occupies the position corresponding to the dashed frame 901 in the first area 610 (that is, a part of the position of the first area 610 ).
  • the image of the preset object 603 (such as a human face) occupies the position corresponding to the dashed frame 902 in the first area 610 (that is, a part of the position of the first area 610 ).
  • the image of the aforementioned preset object may also occupy all positions of the first area (not shown in the drawings).
  • the position of the image of the preset object in the first area is the entire first area.
  • the second area of the image a is: the area in the image a where the preset object is located.
  • the above-mentioned first area is the area in the image b corresponding to the initial field of view of the telephoto camera.
  • the image (such as image a) collected by the telephoto camera may include the image features in the first region of the image b collected by the main camera.
  • the relative position of the image of the preset object in the image a is consistent with the relative position of the image of the preset object in the first area. Therefore, the mobile phone can determine the second area in the image a where the preset object is located according to the position of the image of the preset object in the first area.
  • the mobile phone may save the correspondence between the field of view of the telephoto camera and the field of view of the main camera.
  • the mobile phone can determine the second area in the image a where the preset object is located in combination with the corresponding relationship between the field of view of the telephoto camera and the field of view of the main camera according to the position of the image of the preset object in the first area.
  • the mobile phone can save the two-dimensional coordinates of the two diagonals in the initial field of view of the telephoto camera in the coordinate system of the field of view of the main camera.
  • the two-dimensional coordinates can reflect the corresponding relationship between the field of view of the telephoto camera and the field of view of the main camera.
  • the mobile phone can save the correspondence between the multiple areas 1 obtained by dividing the initial field of view of the telephoto camera and the partial areas 2 of the multiple areas 2 obtained by dividing the field of view of the main camera.
  • the correspondence between the multiple regions 1 and the partial regions 2 of the multiple regions 2 may reflect the correspondence between the field of view of the telephoto camera and the field of view of the main camera.
  • each area 1 described in the foregoing implementation (2) corresponds to a pixel point in the initial field of view of the telephoto camera
  • each area 2 corresponds to a pixel point in the field of view of the main camera. That is to say, the correspondence between the multiple regions 1 and the partial regions 2 of the multiple regions 2 is the correspondence between the pixels in the initial field of view of the telephoto camera and the pixels in the field of view of the main camera.
  • Case (1) The case where the telephoto camera is not zoomed. That is, when the telephoto camera collects the image a, the field of view of the telephoto camera is the aforementioned initial field of view.
  • the mobile phone can execute the following S00-S03 to determine the second area in the image a where the preset object (such as a human face) is located.
  • S00 The mobile phone determines the position of the image of the preset object from the first area of the image b, such as the area corresponding to the dashed frame 902.
  • S01 The mobile phone determines multiple pixels (denoted as multiple pixels 1) in the area corresponding to the dashed frame 902.
  • S02 The mobile phone determines, among the multiple pixels of image a (denoted as multiple pixels 2), the pixels corresponding to the aforementioned multiple pixels 1 (denoted as multiple pixels 3).
  • S03 The mobile phone determines that the area including the multiple pixel points 3 in the image a is the second area.
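The steps S00-S03 for case (1) can be sketched as follows. This is an illustrative sketch under stated assumptions: since the first area of image b and the whole of image a cover the same field of view (and, per the description above, the relative position of the preset object is consistent between them), pixels 1 can be mapped to pixels 3 by rescaling their relative positions. Function names and sizes are assumptions.

```python
# Sketch of case (1) (telephoto not zoomed): map the face pixels found in the
# first area of image b (pixels 1) to the corresponding pixels of image a
# (pixels 3) by preserving relative position, then take their bounding box as
# the second area (S03). Illustrative only; not patent-specified code.

def map_first_area_to_image_a(pixels_1, first_area_size, image_a_size):
    """Map pixel coordinates from the first area of image b into image a.

    pixels_1        -- iterable of (x, y) inside the first area (the face region)
    first_area_size -- (width, height) of the first area in image b
    image_a_size    -- (width, height) of image a (telephoto resolution)
    Returns the corresponding pixels 3 in image a.
    """
    fw, fh = first_area_size
    aw, ah = image_a_size
    return [(x * aw // fw, y * ah // fh) for (x, y) in pixels_1]

def bounding_second_area(pixels_3):
    """S03: the second area is the axis-aligned box containing all pixels 3."""
    xs = [p[0] for p in pixels_3]
    ys = [p[1] for p in pixels_3]
    return (min(xs), min(ys), max(xs), max(ys))
```

For example, a face region spanning (10, 20) to (30, 40) in a 100x100 first area maps to the box (40, 80) to (120, 160) in a 400x400 image a.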
  • Case (2) The case of telephoto camera zooming. That is, when the telephoto camera collects the image a, the field of view of the telephoto camera is not the aforementioned initial field of view.
  • the mobile phone can execute the following S10-S15 to determine the second area in the image a where the preset object (such as a human face) is located.
  • S10 The mobile phone determines the position of the image of the preset object from the first area of the image b, such as the area corresponding to the dashed frame 902.
  • S11 The mobile phone determines multiple pixels in the area corresponding to the dashed frame 902 (denoted as multiple pixels 1).
  • S12 The mobile phone determines, among the multiple pixels (denoted as multiple pixels 2) of the image collected by the telephoto camera without zooming, the pixels corresponding to the above-mentioned multiple pixels 1 (denoted as multiple pixels 3).
  • S13 The mobile phone obtains the zoom information of the telephoto camera.
  • the zoom information may include the zoom ratio and the position of the center focus.
  • the zoom ratio may be the ratio of the field of view of the telephoto camera after zooming to the initial field of view.
  • the center focus may be the center point of the field of view of the telephoto camera after zooming.
  • S14 The mobile phone determines multiple pixels (denoted as multiple pixels 4) corresponding to the multiple pixels 3 in the image a (that is, the image collected by the telephoto camera after zooming).
  • S15 The mobile phone determines that the area including the multiple pixel points 4 in the image a is the second area.
  • the above zoom information can be used to determine the correspondence (denoted as correspondence 2) between each pixel in the field of view of the telephoto camera after zooming (that is, each pixel in image a) and each pixel in the above initial field of view (such as the above-mentioned pixels 2).
  • the multiple pixels 2 are the pixels in the initial field of view of the telephoto camera, and the multiple pixels 1 are the pixels corresponding to the image of the preset object in the image b.
  • the mobile phone executes S12 to determine the above-mentioned multiple pixels 3 (that is, the pixels corresponding to the multiple pixels 1 among the multiple pixels 2)
  • the mobile phone executes S14-S15 and, according to the above correspondence 2, determines the area in image a corresponding to the multiple pixels 4 (which correspond to the multiple pixels 3) as the second area.
  • the correspondence between each pixel in the zoomed field of view of the telephoto camera (that is, each pixel in image a) and each pixel in the initial field of view (such as the above pixels 2), that is, the above correspondence 2, can be determined according to the optical magnification of the telephoto camera after zooming.
  • the optical magnification of the telephoto camera before zooming is “1 ⁇ ” (that is, 1 ⁇ ).
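Steps S13-S15 for case (2) can be sketched as below, using the zoom information defined above: the zoom ratio (field of view after zooming divided by the initial field of view, so less than 1 when zooming in) and the center focus (center of the zoomed field of view in initial-FOV coordinates). This is an illustrative sketch; the function name and sample values are assumptions.

```python
# Sketch of case (2) (telephoto zoomed): map each pixel 3 in the un-zoomed
# telephoto frame to a pixel 4 in image a, using the zoom ratio and the centre
# focus from the zoom information. Illustrative only; not patent-specified code.

def map_to_zoomed_image(pixels_3, initial_size, zoom_ratio, center_focus):
    """Return pixels 4: the positions of pixels 3 inside the zoomed image a."""
    iw, ih = initial_size
    # The window of the initial FOV that the zoomed frame actually covers.
    win_w, win_h = iw * zoom_ratio, ih * zoom_ratio
    cx, cy = center_focus
    left, top = cx - win_w / 2, cy - win_h / 2
    pixels_4 = []
    for (x, y) in pixels_3:
        # Only pixels inside the zoomed window appear in image a at all.
        if left <= x < left + win_w and top <= y < top + win_h:
            pixels_4.append((int((x - left) / zoom_ratio),
                             int((y - top) / zoom_ratio)))
    return pixels_4
```

For example, with a 400x400 initial field of view, zoom ratio 0.5, and center focus (200, 200), the initial-FOV pixel (100, 100) maps to (0, 0) in image a, while a pixel outside the zoomed window (such as (50, 50)) has no counterpart in image a.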
  • the method for the mobile phone to detect the exposure value of the second area can refer to the method of checking the exposure value of the image by the electronic device in the conventional technology, which will not be repeated here in this embodiment.
  • S306 The mobile phone judges whether the exposure value of the second area is less than the first exposure threshold.
  • the exposure of each area in the image captured by the camera may be different.
  • the mobile phone cannot judge, from the user's visual perspective, whether the user can clearly distinguish the preset object in the image a.
  • the mobile phone can determine whether the image of the preset object in the image a is clearly visible to the user based on the exposure value of the second area where the preset object is located in the above image a.
  • if the exposure value of the second area is equal to or greater than the first exposure threshold, the mobile phone does not need to update the exposure value of the second area; specifically, the mobile phone can execute S310.
  • if the exposure value of the second area is less than the first exposure threshold, the mobile phone can adjust the exposure parameters of the telephoto camera to increase the above exposure value; specifically, the mobile phone can execute S307.
  • the above-mentioned first exposure threshold may be an exposure threshold pre-configured in the mobile phone.
  • the first exposure threshold may be determined according to the brightness value of the ambient light around the mobile phone.
  • the ambient light brightness value can be collected by the ambient light sensor in the mobile phone.
  • the mobile phone can save different ambient light brightness values and the first exposure threshold corresponding to each ambient light brightness value. From the description in the above term introduction, it can be seen that the size of the exposure value is expressed by the exposure level.
  • the exposure value can be -2, -1, 0, 1, 2, or 3, etc.
  • the first exposure threshold may also be an exposure level, such as any exposure level such as 0 or 1.
  • the first exposure threshold may be an exposure level of 0.
  • the exposure level 0 is an appropriate exposure level for light and dark, which helps to ensure the image quality of the image.
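The ambient-light-dependent choice of the first exposure threshold described above can be sketched as follows. The brightness break-points and the exposure levels in the table are purely illustrative assumptions; the patent only states that the phone saves different ambient light brightness values and the first exposure threshold corresponding to each.

```python
# Sketch: the phone keeps a table of ambient-light brightness ranges and the
# first exposure threshold (an exposure level) configured for each range.
# All break-points and levels below are illustrative, not patent values.

AMBIENT_TO_THRESHOLD = [
    (50.0, -1),          # very dark scene: accept a lower exposure level
    (500.0, 0),          # typical indoor light: level 0, "appropriate" exposure
    (float("inf"), 1),   # bright scene: demand a higher exposure level
]

def first_exposure_threshold(ambient_lux):
    """Pick the first exposure threshold for the measured ambient brightness."""
    for upper_bound, level in AMBIENT_TO_THRESHOLD:
        if ambient_lux < upper_bound:
            return level
    return 0  # unreachable with an infinite last bound; kept as a safe default
```

The ambient brightness value feeding this lookup would come from the phone's ambient light sensor, as noted above.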
  • the average gray value of the second area or the average RGB value of the second area may be used instead of the exposure value of the second area.
  • the average gray value of the second area refers to the average value of the gray value of each pixel in the second area.
  • the average RGB value of the second area refers to the average value of the RGB values of each pixel in the second area. It can be understood that after the average gray value of the second area is used instead of the exposure value of the second area, the first exposure threshold and the second exposure threshold described in the embodiment of the present application can be replaced with corresponding gray thresholds. Similarly, after the average RGB value of the second area is used to replace the exposure value of the second area, the first exposure threshold and the second exposure threshold can be replaced with corresponding RGB thresholds.
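The gray-value alternative can be sketched as below. Note this is an illustrative sketch: the Rec. 601 luma weights used for the RGB-to-gray conversion are a common convention, not something the patent specifies.

```python
# Sketch: instead of an exposure value, the phone may use the average gray value
# of the second area and compare it against a gray threshold. The 0.299/0.587/
# 0.114 weights are the Rec. 601 luma coefficients, assumed for illustration.

def average_gray(pixels_rgb):
    """Average gray value over the second area; pixels are (R, G, B) tuples."""
    grays = [0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in pixels_rgb]
    return sum(grays) / len(grays)

def needs_exposure_boost(pixels_rgb, gray_threshold):
    """True when the second area is darker than the configured gray threshold."""
    return average_gray(pixels_rgb) < gray_threshold
```

The same structure applies to the average-RGB variant, with per-channel averages compared against per-channel RGB thresholds.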
  • S307 The mobile phone adjusts the exposure parameter of the telephoto camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • the mobile phone can adjust the exposure time of the telephoto camera (such as increasing the exposure time) to increase the above-mentioned exposure value.
  • the mobile phone can adjust the exposure time of the telephoto camera (such as increasing the exposure time), and adjust the ISO sensitivity (such as increasing the ISO sensitivity) to increase the above exposure value.
  • the mobile phone can adjust the number of photo frames of the telephoto camera (for example, increase the number of photo frames) to increase the aforementioned exposure value.
  • the mobile phone can adjust the number of photo frames of the telephoto camera (such as increasing the number of photo frames), and adjust the ISO sensitivity (such as increasing the ISO sensitivity) to increase the aforementioned exposure value.
  • the purpose of adjusting the exposure parameters of the telephoto camera by the mobile phone is to make the exposure value of the image of the preset object captured by the telephoto camera equal to or greater than the first exposure threshold.
  • the mobile phone can save the correspondence table of the exposure value and the exposure parameter.
  • the mobile phone can adjust the above-mentioned exposure parameters according to the correspondence table, so that the exposure value is greater than the first exposure threshold.
  • Table 1 shows an example of a correspondence relationship between exposure values and exposure parameters provided in the embodiments of the present application.
  • the exposure time shown in Table 1 is T1 ⁇ T2 ⁇ T3 ⁇ T4 ⁇ T5.
  • the number of photographing frames shown in Table 1 is F1 ⁇ F2 ⁇ F3 ⁇ F4 ⁇ F5.
  • the mobile phone can only adjust the exposure time to increase the exposure value. For example, the mobile phone can adjust the exposure time to T4; in this way, the exposure value can be the exposure value 2 corresponding to the sequence number 9 shown in Table 1. Alternatively, the mobile phone can also adjust the exposure parameters according to other options to increase the exposure value.
  • the mobile phone can adjust the exposure time to T3, the number of photo frames to F4, and the ISO sensitivity to ISO 1; in this way, the exposure value can be the exposure value 2 corresponding to the number 8 shown in Table 1.
  • the mobile phone can also take the average value of the above two items, for example, the average value of the data corresponding to the serial number 9 and the data corresponding to the serial number 8 shown in Table 1.
  • the mobile phone can only adjust the number of photo frames to increase the exposure value.
  • the mobile phone can adjust the number of camera frames to F4 and the ISO sensitivity to ISO 3; in this way, the exposure value can be the exposure value 3 corresponding to the serial number 10 shown in Table 1.
  • the mobile phone can adjust the exposure parameters according to other options to increase the exposure value.
  • the mobile phone can adjust the exposure time to T3, the number of shooting frames to F5, and the ISO sensitivity to ISO 3; in this way, the exposure value can be the exposure value 3 corresponding to the number 12 shown in Table 1.
  • the mobile phone can also take the average value of the above two items, for example, the average value of the data corresponding to the serial number 10 and the data corresponding to the serial number 12 shown in Table 1.
  • the mobile phone may also take the average value of three items, for example, the average value of the data corresponding to the serial number 10, the data corresponding to the serial number 11, and the data corresponding to the serial number 12 shown in Table 1.
  • the aperture in Table 1 is NA, which means that the aperture is not adjusted.
  • the method of adjusting the aperture to increase the exposure value is not excluded. It can be understood that if the above exposure parameters are adjusted excessively, the image taken by the camera (such as a telephoto camera) may be overexposed, affecting the image quality. Therefore, if the exposure value of the second area is less than the first exposure threshold, the mobile phone only needs to update the exposure parameters of the telephoto camera according to the exposure parameters corresponding to the first exposure threshold, without excessively increasing the above exposure parameters. In this way, the image quality of the image captured by the telephoto camera can be guaranteed. Therefore, in the above example, the mobile phone adjusts the exposure parameters of the telephoto camera based on the standard that the exposure value is equal to the first exposure threshold, which avoids excessively increasing the exposure parameters and degrading the image quality.
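The Table-1-style lookup in S307 can be sketched as follows. Since the concrete T*/F*/ISO values of Table 1 are not reproduced here, the entries below are placeholders; the point of the sketch is the lookup logic, in particular stopping at the first entry that reaches the threshold so as not to over-increase the parameters.

```python
# Sketch of S307 with a Table-1-style correspondence: each entry maps a set of
# exposure parameters (exposure time, number of photo frames, ISO sensitivity)
# to the exposure value they produce. Entries are illustrative placeholders for
# the elided Table 1 data, ordered from least to most exposure.

EXPOSURE_TABLE = [
    # (exposure_time_ms, photo_frames, iso, exposure_value)
    (10, 1, 100, -1),
    (20, 2, 200, 0),
    (40, 3, 400, 1),
    (80, 4, 800, 2),
]

def adjust_exposure(first_exposure_threshold):
    """Pick the smallest parameter set whose exposure value reaches the threshold.

    Taking the first adequate entry avoids over-increasing the parameters, which
    could over-expose the image (see the Table 1 discussion above).
    """
    for time_ms, frames, iso, value in EXPOSURE_TABLE:
        if value >= first_exposure_threshold:
            return {"exposure_time_ms": time_ms, "photo_frames": frames, "iso": iso}
    return None  # threshold not reachable with the configured parameter sets
```

Where Table 1 offers several parameter sets for the same exposure value, the phone may, as described above, pick any one of them or average the candidates; this sketch simply takes the first match.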
  • the telephoto camera of the mobile phone uses the adjusted exposure parameters to collect a first preview image, and the mobile phone displays the first preview image.
  • the telephoto camera uses the adjusted exposure parameters to collect the first preview image, which may be the preview image 1001 shown in (a) in FIG. 10.
  • the mobile phone can execute S308 to display the preview image 1001 shown in (a) in FIG. 10. Comparing the preview image 1001 shown in FIG. 10(a) with the preview image 404 shown in FIG. 4(c), it can be seen that the image quality of the image captured by the telephoto camera can be improved by the method of the embodiment of the present application.
  • the mobile phone saves the image c.
  • the image c is taken by a telephoto camera with adjusted exposure parameters.
  • the image c is acquired based on one or more frames of preview images collected by the telephoto camera using the adjusted exposure parameters.
  • the photographing operation may be a click operation (such as a single-click operation) of the photographing key 1003 shown in (a) in FIG. 10 by the user.
  • the photographing operation may also be a voice command received when the mobile phone executes S308 to display the preview image, and the voice command is used to trigger the mobile phone to take a photograph.
  • the voice command may be voice information such as "photograph", "please take a photo", or "321".
  • the image c in the embodiment of the present application is the third image.
  • the image c may be a frame of the first preview image collected by the mobile phone when the mobile phone receives the photographing operation.
  • the image c may be generated based on multiple frames of first preview images collected by the mobile phone since the start of the photographing operation.
  • take, as an example, that the photographing operation is the user's click operation on the photographing key 1003 shown in (a) of FIG. 10.
  • the mobile phone can save the image c to the mobile phone's album.
  • the mobile phone may display the image preview interface shown in (b) of FIG. 10.
  • the preview image 1002 in the image preview interface shown in (b) of FIG. 10 may be the aforementioned image c.
  • the photo displayed on the icon corresponding to the album key 1004 is changed from the little girl shown in (a) in FIG. 10 to a zoomed-out photo of the preview image 1001, as shown in (b) in FIG. 10.
  • the quality of the captured image may be affected due to the optical jitter of the camera or the jitter caused by the user's operation.
  • the mobile phone may capture the image 1101 shown in FIG. 11.
  • the mobile phone can perform anti-shake processing on the first preview image collected by the telephoto camera using the adjusted exposure parameters. That is to say, the aforementioned image c is an image obtained by performing anti-shake processing on the first preview image collected by the telephoto camera using the adjusted exposure parameters.
  • the image 1101 shown in FIG. 11 is the image before the anti-shake processing
  • the preview image 1002 is the image after the anti-shake processing.
  • the preview image 1002 has higher definition and better image quality.
  • the aforementioned anti-shake processing may include optical image stabilization (OIS) and electronic image stabilization (EIS).
  • OIS provides anti-shake within the shutter time (i.e., the exposure time) and is used to stabilize the camera; the OIS module is integrated into the camera.
  • EIS is realized by the EIS sensor in the mobile phone, and is used to reduce the possibility of multi-frame blurring when shooting subjects in motion.
  • in response to the user's photographing operation, the mobile phone saves the image d.
  • the image d is taken by a telephoto camera using the exposure parameters before adjustment.
  • the image d is obtained based on the image a collected by the telephoto camera.
  • the image d in the embodiment of the present application is the fourth image.
  • the photographing operation may be a click operation (such as a single click operation) of the photographing key 407 shown in (c) of FIG. 4 by the user.
  • the photographing operation may also be a voice command received when the mobile phone executes S302 to display image a (ie, preview image), and the voice command is used to trigger the mobile phone to take a photo.
  • the voice command may be voice information such as "photograph", "please take a photo", or "321".
  • the image d saved by the mobile phone executing S310 may be the image 601 shown in FIG. 6.
  • the embodiment of the application provides a method for capturing images. Based on the feature that the light input of the main camera is greater than the light input of the telephoto camera, when the telephoto camera of the mobile phone collects images, the main camera can be used as the auxiliary camera. Specifically, the mobile phone can take advantage of the large light input of the main camera to detect the position of the preset object (that is, the second area) from the image a collected by the telephoto camera.
  • the image quality of image a is poor, and the reason the preset object cannot be clearly distinguished in image a is that the area where the preset object is located in image a (such as the second area) has a low exposure value.
  • the mobile phone can detect and adjust the exposure parameters of the telephoto camera to increase the above-mentioned exposure value. After increasing the exposure value, the telephoto camera can shoot images with higher image quality (such as image c).
  • auxiliary cameras such as the main camera
  • the mobile phone can use the advantages of each camera to control multiple cameras to work together to improve the image quality of the captured image.
  • a low exposure value at the position of the preset object (such as the second area) in the first image (such as image a) will degrade the image quality of the first image. Therefore, in the embodiment of the present application, the above-mentioned exposure parameters can be adjusted to increase the exposure value. However, if the exposure value is too high, the image may be overexposed, which also degrades image quality. In other words, an exposure value that is either too low or too high will affect the image quality of the image.
  • the foregoing image capturing method further includes S306'.
  • S306' The mobile phone judges whether the exposure value of the second area is less than the second exposure threshold.
  • the second exposure threshold is greater than the above-mentioned first exposure threshold.
  • if the exposure value of the second area is less than the second exposure threshold, the mobile phone can execute S306 to determine whether the exposure value of the second area is less than the first exposure threshold.
  • if the exposure value of the second area is greater than or equal to the second exposure threshold, the mobile phone can adjust the exposure parameters of the telephoto camera to reduce the exposure value; that is, the mobile phone can execute S307'.
  • S307′ The mobile phone adjusts the exposure parameter of the telephoto camera to reduce the exposure value of the image of the preset object captured by the telephoto camera.
  • the method of the embodiment of the present application further includes S308-S310.
  • for the method by which the mobile phone performs S307' to reduce the exposure value of the image, refer to the related introduction of "the mobile phone adjusts the exposure parameters to increase the exposure value" in S307 in the embodiment of the present application, which will not be repeated here.
  • the mobile phone can adjust the exposure parameters of the camera to reduce the exposure value of the above-mentioned image. In this way, the image quality of the captured image can be improved.
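The two-threshold logic of S306 and S306' can be summarized as a short sketch. The function name and the string return values are illustrative assumptions only; the patent specifies the comparisons, not any particular code:

```python
def decide_exposure_adjustment(exposure_value, first_threshold, second_threshold):
    """Decide how the telephoto camera's exposure parameters should change.

    first_threshold < second_threshold. An exposure value below the first
    threshold is too low (underexposed); one at or above the second
    threshold is too high (overexposed).
    """
    if exposure_value < first_threshold:
        return "increase"   # S307: raise exposure time / frame count / ISO
    if exposure_value >= second_threshold:
        return "decrease"   # S307': lower the exposure value
    return "keep"           # exposure value is already acceptable
```

With a first threshold of 1 and a second threshold of 3 (arbitrary example values), an exposure value of -1 triggers an increase and an exposure value of 3 triggers a decrease.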
  • in one implementation, if the preset object is stationary, the mobile phone executes S305-S310; if the preset object is moving, the mobile phone may not perform S305-S310 and can instead capture images according to the conventional scheme.
  • the mobile phone executes S303, and the main camera can collect image b.
  • the mobile phone can determine whether the preset object is stationary or moving according to the position of the image of the preset object in the multiple frames of image b collected by the main camera. For example, if, in two frames of image b collected at the first preset time interval (such as 10 seconds, 5 seconds, or 3 seconds), the position change of the image of the preset object (such as the distance the position moves) is greater than the preset distance threshold, the mobile phone can determine that the preset object is moving. If the position change of the image of the preset object in the two frames of image b collected at the first preset time interval is less than or equal to the preset distance threshold, the mobile phone can determine that the preset object is stationary.
  • the first preset time interval such as 10 seconds, 5 seconds, or 3 seconds
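A minimal sketch of this stillness/motion judgment, assuming the position of the preset object's image in each frame is available as (x, y) pixel coordinates (the function name and coordinate form are illustrative, not from the patent):

```python
def is_moving(pos_t0, pos_t1, distance_threshold):
    """Return True if the preset object's image moved farther than the
    preset distance threshold between two frames of image b captured at
    the first preset time interval. Positions are (x, y) coordinates."""
    dx = pos_t1[0] - pos_t0[0]
    dy = pos_t1[1] - pos_t0[1]
    # Euclidean distance of the position movement
    return (dx * dx + dy * dy) ** 0.5 > distance_threshold
```

For instance, a move from (0, 0) to (3, 4) is a distance of 5, so with a threshold of 4 the object would be judged to be moving.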
  • the exposure parameters adjusted by the mobile phone in S307 may include: exposure time; or, exposure time and ISO sensitivity.
  • the specific method for adjusting the exposure parameters of the mobile phone can be referred to the related description in the following embodiments, which will not be repeated in this embodiment.
  • the method for the mobile phone to determine whether the preset object in the image is static or moving according to the image collected by the camera includes but is not limited to the above method; other methods can refer to the related methods in the conventional technology, which are not provided here in this embodiment. Go into details.
  • after the mobile phone detects that the image of the preset object is included in the first area of image b, if the preset object is moving, the mobile phone executes S305-S310; if the preset object is stationary, the mobile phone may not perform S305-S310 and can instead take the image according to the conventional scheme.
  • the exposure parameters adjusted by the mobile phone in S307 may include: the number of photographed frames; or, the number of photographed frames and ISO sensitivity.
  • the specific method for adjusting the exposure parameters of the mobile phone can be referred to the related description in the following embodiments, which will not be repeated in this embodiment.
  • after the mobile phone detects that the image of the preset object is included in the first area of image b, the mobile phone can perform S305-S310 regardless of whether the preset object is stationary or moving.
  • the exposure parameter adjusted by the mobile phone when the preset object is stationary is different from the exposure parameter adjusted by the mobile phone when the preset object is moving.
  • the exposure parameters adjusted by the mobile phone in S307 may include the number of photographing frames in addition to the exposure time and ISO.
  • the exposure parameter adjusted by the mobile phone in S307 may include the exposure time.
  • the method of the embodiment of the present application further includes S1201; S307 may include S307a and S307b.
  • the mobile phone judges whether the preset object is stationary or moving.
  • if the preset object is stationary, the mobile phone can execute S307a; if the preset object is moving, the mobile phone can execute S307b.
  • S307a The mobile phone adjusts the exposure time (ie, exposure parameter) of the telephoto camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • At least one exposure parameter such as the exposure time, the number of photographed frames, or the ISO sensitivity can be adjusted to achieve the purpose of updating the exposure value.
  • the longer the exposure time, the greater the exposure value; the greater the number of photographed frames, the greater the exposure value; the higher the ISO sensitivity, the greater the exposure value.
  • any of the operations of "increase the exposure time”, “increase the number of photo frames” and “increase the ISO sensitivity” can achieve the purpose of increasing the above-mentioned exposure value.
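The monotonic relationship between these three parameters and the exposure value can be illustrated with a toy model. The log2 formula below is purely an assumption for illustration (it mirrors photographic EV arithmetic); the patent defines the actual mapping via Table 1, not via any formula:

```python
import math

def exposure_value(exposure_time_ms, frame_count, iso):
    """Illustrative monotonic model only: the exposure value grows with a
    longer exposure time, a larger number of photographed frames, and a
    higher ISO sensitivity. The patent's real mapping is given by Table 1."""
    return math.log2(exposure_time_ms * frame_count * iso)
```

Under this toy model, increasing any one of the three parameters while holding the other two fixed increases the exposure value, which is all the bullet above asserts.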
  • the exposure time of the telephoto camera can be adjusted to achieve the purpose of increasing the exposure value.
  • the mobile phone can adjust the exposure time to T3; in this way, the exposure value can be the exposure value 1 corresponding to the sequence number 7 shown in Table 1.
  • the mobile phone can not only adjust the exposure time of the telephoto camera to increase the aforementioned exposure value; it can also adjust the ISO sensitivity of the telephoto camera to increase the aforementioned exposure value.
  • the exposure parameters described in S307 may include exposure time and ISO sensitivity.
  • the exposure value of the above second area is the exposure value -1 corresponding to the number 1 shown in Table 1.
  • the exposure time of the telephoto camera is T1
  • the number of shooting frames is F2
  • the ISO sensitivity is ISO 1.
  • the above-mentioned first exposure threshold is 2.
  • the mobile phone can adjust the exposure time to T4, and the ISO sensitivity to ISO 2; in this way, the exposure value can be the exposure value 2 corresponding to the serial number 9 shown in Table 1.
  • OIS provides anti-shake within the shutter time (i.e., the exposure time) and is used to stabilize the camera.
  • the mobile phone can perform OIS anti-shake on the preview image collected by the telephoto camera, and there is no need to perform EIS anti-shake on the preview image collected by the telephoto camera.
  • the mobile phone responds to the user's photographing operation, and the anti-shake operation performed on the preview image collected by the telephoto camera includes OIS anti-shake.
  • S307b The mobile phone adjusts the number of photographing frames (ie, exposure parameters) of the telephoto camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • when the camera is shooting a moving object (such as the aforementioned preset object), adjusting the exposure time has little effect on the exposure value of the image, and the effect can even be ignored.
  • in this case, the main factor that affects the above-mentioned exposure value is the number of photographed frames. Therefore, in the embodiment of the present application, when the preset object is moving, the number of shooting frames of the telephoto camera can be adjusted to achieve the purpose of increasing the exposure value.
  • the exposure value of the above second area is the exposure value -1 corresponding to the number 2 shown in Table 1; at this time, the exposure time of the telephoto camera is T2, the number of photographing frames is F1, and the ISO sensitivity is ISO 3.
  • the above-mentioned first exposure threshold is 1.
  • the mobile phone can adjust the number of photographing frames to F3; in this way, the exposure value can be the exposure value 1 corresponding to the sequence number 6 shown in Table 1.
  • the ISO sensitivity of the telephoto camera will also have a certain impact on the exposure value.
  • the mobile phone can not only adjust the number of shooting frames of the telephoto camera to increase the aforementioned exposure value; it can also adjust the ISO sensitivity of the telephoto camera to increase the aforementioned exposure value. That is to say, in the case of preset object motion, the exposure parameters described in S307 may include the number of photographed frames and the ISO sensitivity.
  • the exposure value of the above second area is the exposure value 0 corresponding to the number 5 shown in Table 1; at this time, the exposure time of the telephoto camera is T3, the number of shooting frames is F2, and the ISO sensitivity is ISO 2.
  • the above-mentioned first exposure threshold is 3. Then, the mobile phone can adjust the number of shooting frames to F5, and the ISO sensitivity to ISO 3; in this way, the exposure value can be the exposure value 3 corresponding to the serial number 12 shown in Table 1.
  • the mobile phone responds to the user's photographing operation, and the anti-shake operation performed on the first preview image collected by the telephoto camera may include OIS anti-shake and EIS anti-shake.
  • through this anti-shake processing, the image quality of moving objects captured by the telephoto camera can be improved.
  • the mobile phone can fuse (or synthesize) the first preview images of multiple frames collected by the telephoto camera to obtain the above-mentioned image c.
  • the aforementioned EIS image stabilization can be used to reduce the blurring of multiple frames when the mobile phone merges multiple frames of the first preview image. That is, the mobile phone can perform EIS anti-shake fusion on the first preview image of the multiple frames.
  • the mobile phone may use a neural network fusion algorithm to perform image fusion on the above-mentioned multiple frames of first preview images to obtain a third image.
  • the algorithm used by the mobile phone to perform image fusion on the multi-frame first preview image includes, but is not limited to, a neural network fusion algorithm.
  • the mobile phone may also use a weighted average algorithm of multiple frames of the first preview image to perform image fusion on the multiple frames of the first preview image to obtain the third image.
  • other methods for image fusion of multiple frames of images by the mobile phone are not described in detail in this embodiment.
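As a concrete illustration of the weighted-average alternative mentioned above, the sketch below fuses frames represented as flat lists of pixel values. A real implementation would operate on image buffers; the function name, data layout, and weights are all hypothetical:

```python
def weighted_average_fusion(frames, weights):
    """Fuse multiple frames of the first preview image by a per-frame
    weighted average. `frames` is a list of equally sized pixel lists;
    `weights` holds one weight per frame (assumed to sum to 1)."""
    if len(frames) != len(weights):
        raise ValueError("one weight per frame is required")
    fused = [0.0] * len(frames[0])
    for frame, w in zip(frames, weights):
        for i, px in enumerate(frame):
            fused[i] += w * px
    return fused
```

For example, averaging two frames with equal weights blends each pair of corresponding pixels, which suppresses per-frame noise at the cost of some motion blur if the frames are not aligned.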
  • S302, S303, and S304 may be executed.
  • S1201 may be executed to determine whether the preset object is still or moving.
  • the mobile phone can execute S305 to determine the exposure value of the second area.
  • then, the mobile phone can execute S306 to determine whether the exposure value of the second area is less than the first exposure threshold.
  • after S306, when the exposure value of the second area is less than the first exposure threshold, combined with the judgment result of S1201: if the preset object is stationary, the mobile phone can execute S307a; if the preset object is moving, the mobile phone can execute S307b. After S307a or S307b, the mobile phone can perform S308-S309. After S306, if the exposure value of the second area is greater than or equal to the first exposure threshold, the mobile phone may perform S310.
  • the exposure parameter adjusted by the mobile phone when the preset object is stationary is different from the exposure parameter adjusted by the mobile phone when the preset object is moving.
  • the mobile phone can adjust different exposure parameters to increase the exposure value according to the motion state (such as still or moving) of the shooting object (ie, the preset object). In this way, it is beneficial to improve the image quality of the image captured by the telephoto camera.
  • for example, when the preset object is a human face, the user's head may be stationary while the body below the user's head is moving. In this way, although the preset object is still, other shooting objects in image b (such as the body below the user's head) are moving.
  • the preset object is a human face
  • the user is sitting in a car
  • the user's head is still
  • the scenery outside the car window is changing.
  • other shooting objects such as the background outside the face
  • the mobile phone can determine whether there is a moving subject in the image (such as image b) collected by the main camera. If there is no moving subject in the image b, the mobile phone can perform S307a. If there is a moving subject in the image b, the mobile phone can execute S307b.
  • the mobile phone can determine whether there is a moving subject in the image collected by the main camera through the following implementation (i) and implementation (ii).
  • the mobile phone can compare corresponding pixels in multiple frames of images (such as two frames of images) collected by the main camera, and count the number of corresponding pixels in the two frames of images that have differences. If the number obtained by statistics is greater than or equal to the first preset number threshold, it means that there is a moving subject in the image collected by the main camera. If the number obtained by statistics is less than the first preset number threshold, it means that there is no moving subject in the image collected by the main camera.
  • the mobile phone can compare the corresponding pixels in the above two frames of images and calculate a difference value for each pair of corresponding pixels (for example, the initial value of a difference value is 0, and if the corresponding pixels in the two frames differ, the difference value is increased by 1; after all corresponding pixels in the two frames have been compared, the difference values indicate which pixels differ between the two frames). Then, the mobile phone can count the number of pixels whose difference value is greater than or equal to the preset difference threshold. If the number of pixels whose difference value is greater than the preset difference threshold is greater than the second preset number threshold, it means that there is a moving subject in the image captured by the main camera. If the number obtained by statistics is less than the second preset number threshold, it means that there is no moving subject in the image collected by the main camera.
  • the pixel points of the i-th row and the j-th column of one frame of image correspond to the pixel points of the i-th row and the jth column of the other frame of image.
  • Both i and j are positive integers.
  • the pixel points in the i-th row and j-th column of one frame of image correspond to the pixels in the m-th row and n-th column of the other frame of image, where i, j, m, and n are all positive integers.
  • the method for determining the corresponding pixel point can be implemented by using the method in the prior art, and will not be repeated here.
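A sketch of the pixel-difference check described above, assuming grayscale frames flattened into equal-length lists (the thresholds, data layout, and function name are illustrative assumptions, not from the patent):

```python
def has_moving_subject(frame_a, frame_b, diff_threshold, count_threshold):
    """Count corresponding pixels whose difference meets the preset
    difference threshold; a moving subject is considered present when
    that count reaches the preset number threshold. Frames are flat
    lists of grayscale values of equal length."""
    differing = sum(
        1 for a, b in zip(frame_a, frame_b) if abs(a - b) >= diff_threshold
    )
    return differing >= count_threshold
```

With a per-pixel difference threshold of 50 and a count threshold of 2, three changed pixels out of four indicate a moving subject, while a single changed pixel does not.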
  • the mobile phone can execute S307b.
  • the mobile phone can not only determine that there are moving subjects in image b, but also determine which subjects in image b are moving and which subjects are still.
  • the subject in the image area (called the motion area) corresponding to the pixels whose difference value is greater than the preset difference threshold is moving, and the subject in the image area (called the still area) corresponding to the pixels whose difference value is less than or equal to the preset difference threshold is still.
  • the mobile phone executes S309 to obtain the third image based on the first preview image of multiple frames
  • for the image of the still area, only the image of the still area in any one frame of the multiple frames of first preview images needs to be used; for the image of the moving area, the images of the moving area in the multiple frames of first preview images can be fused using an image fusion algorithm.
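This still-area/motion-area treatment can be sketched as follows, with a boolean mask marking motion-area pixels and a weighted average standing in for the image fusion algorithm (all names and the flat-list data layout are hypothetical):

```python
def fuse_with_motion_mask(frames, motion_mask, weights):
    """Still-area pixels are copied from a single frame (here the first);
    motion-area pixels are fused across all frames by weighted average.
    `frames` is a list of equally sized flat pixel lists; `motion_mask`
    is a per-pixel list of booleans (True = motion area)."""
    fused = list(frames[0])  # still areas come from any one frame
    for i, moving in enumerate(motion_mask):
        if moving:
            fused[i] = sum(w * f[i] for f, w in zip(frames, weights))
    return fused
```

This way only the motion area pays the cost of multi-frame fusion, while the still area keeps the full sharpness of a single frame.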
  • each area in the preview image is a static area or a moving area
  • the following methods can be used to divide the preview image into areas, and then recognize whether each area is a stationary area or a moving area.
  • the image area where the face (i.e., the preset object) is located is divided into a single area; the rest of the preview image, excluding the image area where the face is located, is taken as one area, and this area may include the image of the body below the user's head and the background image outside the user's body, etc.
  • the image area where the face (i.e., the preset object) is located is separately divided into one area; the image area where the body below the user's head is located in the preview image is taken as a separate area, and the image area where the background outside the user's body is located is taken as another separate area.
  • the image area where the user's body is located can also be divided into multiple areas according to the human body structure (such as the head, neck, torso, and limbs, etc.).
  • the image area where the face (i.e., the preset object) is located is divided into a single area; the image area where the user's torso is located in the preview image is taken as a single area, the image area where the user's left hand is located is taken as a single area, the image area where the user's right hand is located is taken as a single area, the image area where the user's left leg is located is taken as a single area, and the image area where the user's right leg is located is taken as a single area.
  • the background image outside the user's body can also be divided into multiple areas, such as the image area where the background on the left side of the user's body is located, the image area where the background on the right side of the user's body is located, the image area where the background above the user's head is located, and the image area where the background below the user's feet is located.
  • the method for dividing the preview image into areas includes but is not limited to the method in the above example, and other methods are not described in this embodiment of the present application.
  • the main camera may not collect images first.
  • the ambient light sensor of the mobile phone detects the brightness of the ambient light.
  • the mobile phone can determine the ambient light brightness value X (that is, the specific value of the aforementioned ambient light brightness). If the ambient light brightness value X is lower than the first brightness threshold, the mobile phone can enter the smart shooting mode. In this smart shooting mode, the main camera of the mobile phone can collect images (such as image b). Wherein, the aforementioned ambient light brightness value X is the first ambient light brightness value or the third ambient light brightness value.
  • in a dark light scene (i.e., a scene where the ambient light brightness value 1 is lower than the first brightness threshold), the mobile phone will enter the smart shooting mode in response to the above-mentioned zooming operation.
  • the main camera of the mobile phone can assist the telephoto camera to capture images to improve the image quality of the images captured by the telephoto camera.
  • otherwise, the mobile phone will not execute the method of the embodiment of the present application; the mobile phone can capture images according to the method in the conventional technology. In this way, the power consumption of the mobile phone can be reduced, and the response time of the mobile phone when taking pictures can be shortened.
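The dark-light gating described above reduces to a single comparison; a minimal sketch (function and parameter names are illustrative, not from the patent):

```python
def should_enter_smart_mode(ambient_light_value, first_brightness_threshold):
    """Enter the smart shooting mode (main camera assists the telephoto
    camera) only in dark-light scenes, i.e. when the ambient light value
    X reported by the ambient light sensor is below the first brightness
    threshold; otherwise the conventional scheme is used to save power."""
    return ambient_light_value < first_brightness_threshold
```

In bright scenes the check fails, the main camera is not started, and the phone captures images with the conventional scheme.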
  • the main camera may not collect images first.
  • the ambient light sensor of the mobile phone detects the brightness of the ambient light.
  • the mobile phone can determine the ambient light brightness value X (ie, the specific value of the aforementioned ambient light brightness). If the ambient light brightness value X is lower than the first brightness threshold, the mobile phone can request the user to confirm whether to enter the smart shooting mode. If the user chooses to enter the smart shooting mode, the main camera of the phone can collect images and assist the telephoto camera to take images.
  • the main camera may not collect images first.
  • the mobile phone may request the user to confirm whether to enter the smart shooting mode. If the user chooses to enter the smart shooting mode, the main camera of the phone can collect images and assist the telephoto camera to take images.
  • S303 shown in FIG. 3 or S303 shown in FIG. 12 can be replaced with S1301-S1303.
  • S303 shown in FIG. 12 can be replaced with S1301-S1303.
  • the mobile phone displays a first user interface.
  • the first user interface is used to request the user to confirm whether to use the main camera to assist the telephoto camera to capture images.
  • the main camera of the mobile phone can assist the telephoto camera to capture images to improve the image quality of the images captured by the telephoto camera.
  • the above-mentioned first user interface can be used to request the user to confirm whether to enter the smart shooting mode.
  • the mobile phone can display the image preview interface shown in (b) in FIG. 4.
  • the mobile phone may display the first user interface 1401 shown in (a) of FIG. 14.
  • the first user interface 1401 includes the instruction message "Please confirm whether to enter the smart shooting mode?" 1402, and the prompt message "In the smart shooting mode, the mobile phone can start the main camera to assist in taking pictures, which can improve the image quality!" 1403.
  • the first user interface 1401 also includes a "yes” button and a "no” button. The “Yes” button is used to instruct the mobile phone to enter the smart shooting mode, and the "No” button is used to instruct the mobile phone not to enter the smart shooting mode.
  • the mobile phone may not start the main camera first; instead, it displays the first user interface. If the user chooses to enter the smart shooting mode on the first user interface, the mobile phone can start the main camera, and the main camera can collect images. Before that, in response to the above zoom operation, the mobile phone can start the telephoto camera, the telephoto camera can collect images (such as image a), and the mobile phone can display the image a (i.e., the preview image) collected by the telephoto camera and display the first user interface on the preview image.
  • the mobile phone detects the user's first operation on the first user interface.
  • the main camera of the mobile phone collects the image b.
  • the above-mentioned first operation is used to trigger the mobile phone to enter the smart shooting mode.
  • the first operation may be a click operation (such as a single-click operation) by the user on the "Yes" button shown in FIG. 14(a) or FIG. 14(b).
  • the first operation may also be a voice command issued by the user, such as voice messages such as "enter smart shooting mode", "yes", or "enter”.
  • the first operation may also be a preset gesture input by the user on the first user interface, such as any gesture such as an S-shaped gesture or an L-shaped gesture.
  • the main camera of the mobile phone can collect the image b, and execute S304-S310.
  • the mobile phone executes S308 to display the image preview interface shown in (a) in FIG. 10.
  • in response to the user's click operation (i.e., the first operation) on the "Yes" button shown in FIG. 14(a) or FIG. 14(b), the mobile phone can display the image preview interface shown in FIG. 10(a).
  • the mobile phone can receive the user's second operation on the first user interface.
  • the second operation may be a click operation (such as a single-click operation) by the user on the "No" button shown in FIG. 14(a) or FIG. 14(b).
  • the second operation may also be a voice command issued by the user, such as voice messages such as "do not enter the smart shooting mode", "no", or "do not enter”.
  • the mobile phone does not need to enter the smart shooting mode, and can capture images according to the conventional technique.
  • in response to the user's click operation (i.e., the second operation) on the "No" button shown in FIG. 14(a) or FIG. 14(b), the mobile phone can display the image preview interface shown in FIG. 4(c).
  • the above-mentioned first user interface may also provide a prompt box with similar content such as the option "Don't remind me next time”.
  • if the user selects the "Don't remind me next time" option, the mobile phone can perform the same operation as the last time the camera interface was opened, and the above prompt box is no longer displayed; if the user does not select the "Don't remind me next time" option, the prompt box can continue to pop up to prompt the user next time. It is also possible that, after the user declines to select the "Don't remind me next time" option more than a certain number of times, the mobile phone automatically performs the same operation as the last time the camera interface was opened.
  • for example, the first user interface provides the prompt information 1402 while also providing the "Don't remind me next time" option; if the user chooses to enter the smart shooting mode every time but does not check the "Don't remind me next time" option, then after more than 5 or 10 such times, the mobile phone will no longer provide the prompt 1402 and will directly enter the smart shooting mode.
  • the mobile phone can request the user to confirm whether to enter the smart shooting mode on the first user interface; if the user chooses to enter the smart shooting mode, the mobile phone will activate the main camera to assist the telephoto camera to capture images. In other words, the mobile phone can activate the main camera to assist the telephoto camera to capture images according to the user's wishes. In this way, the user experience during the interaction between the mobile phone and the user can be improved.
  • the mobile phone can also provide an image effect preview function in the smart shooting mode.
  • the mobile phone can display the effect preview image in the smart shooting mode to the user, so that the user can choose whether to enter the smart shooting mode according to the effect preview image.
  • the method in the embodiment of the present application further includes S1401-S1403.
  • the mobile phone detects a user's third operation on the first user interface.
  • the third operation is used to trigger the mobile phone to display the first preview image collected by the first camera (that is, the effect preview image in the smart shooting mode).
  • the first user interface 1401 further includes a first control, such as a button 1407 of “effect preview of smart shooting mode”.
  • the first user interface 1406 further includes a first control, such as a button 1408 of “effect preview of smart shooting mode”.
  • the third operation may be a click operation (such as a single-click operation, a double-click operation, a triple-click operation, etc.) of the above-mentioned first control (such as a "smart shooting mode effect preview" button) by the user.
  • the above-mentioned third operation may also be a voice command input by the user, such as "intelligent shooting mode preview effect", "preview effect", "image preview", or "effect preview".
  • the above third operation may also be a preset gesture input by the user, such as a tick ("√") gesture, a circle gesture, pinching two fingers together, drawing a "Z" shape with two fingers, or sliding down with three fingers; this application does not limit the gesture, and it will not be repeated here.
  • the mobile phone in response to the third operation, displays a second user interface.
  • the second user interface includes the first preview image collected by the telephoto camera using the adjusted exposure parameters, that is, the preview image (such as image a) collected by the telephoto camera before the mobile phone enters the smart shooting mode. That is, in response to the third operation, the mobile phone may temporarily enter the smart shooting mode to obtain the preview image described in S308.
  • the aforementioned second user interface may also include a preview image (such as the preview image described in S308) collected by the telephoto camera after the mobile phone enters the smart shooting mode. In this way, it is helpful for the user to compare the preview image in the smart shooting mode with the preview image in the non-smart mode, so as to decide whether to control the mobile phone to enter the smart shooting mode according to the image effects of the two preview images.
  • the second user interface 1501 may include: instruction information "please confirm whether to enter the smart shooting mode according to the following image effects?" 1502, a preview image 1503 of the non-smart shooting mode, and a preview image 1504 of the smart shooting mode (that is, the above-mentioned first preview image).
  • the preview image 1503 in the non-smart shooting mode is a preview image (such as the above image a) collected by the telephoto camera before the mobile phone enters the smart shooting mode.
  • the preview image 1504 of the smart shooting mode is a preview image (such as the preview image described in S308) collected by the telephoto camera after the mobile phone enters the smart shooting mode.
  • the second user interface 1501 also includes a "Yes” button and a “No” button. The “Yes” button is used to instruct the mobile phone to enter the smart shooting mode, and the “No” button is used to instruct the mobile phone not to enter the smart shooting mode.
  • the main camera of the mobile phone collects the image b.
  • the fourth operation is used to trigger the mobile phone to enter the smart shooting mode.
  • the fourth operation may be a click operation (such as a single click operation) of the "Yes" button shown in FIG. 15A by the user.
  • the fourth operation may also be a voice command issued by the user, such as voice information such as "enter smart shooting mode", "yes", or "enter”.
  • the main camera of the mobile phone can collect the image b, and execute S304-S310.
  • the mobile phone executes S308 to display the image preview interface shown in (a) in FIG. 10.
  • the mobile phone may display the image preview interface shown in (a) in FIG. 10.
  • the mobile phone can receive the user's fifth operation on the second user interface.
  • the fifth operation may be a click operation (such as a single click operation) of the "No" button shown in FIG. 15A by the user.
  • the fifth operation may also be a voice command issued by the user, such as voice messages such as "do not enter the smart shooting mode", "no", or "do not enter”.
  • the mobile phone does not need to enter the smart shooting mode, and the mobile phone can capture images according to the method in the conventional technology.
  • the mobile phone may display the image preview interface shown in (c) in FIG. 4.
  • the mobile phone in response to the user's third operation on the first user interface, may display the second user interface.
  • the second user interface includes: the preview image collected by the telephoto camera before the mobile phone enters the smart shooting mode (such as image a); and the preview image collected by the telephoto camera after the mobile phone enters the smart shooting mode (such as the preview image described in S308).
  • the mobile phone can provide users with image effect previews in non-smart shooting mode and image effect previews in smart shooting mode. In this way, it is convenient for the user to compare the preview image of the non-smart shooting mode and the preview image of the smart shooting mode, and decide whether to control the mobile phone to enter the smart shooting mode according to the image effect of the preview image.
  • the mobile phone may display on the above-mentioned first user interface: the preview image collected by the telephoto camera before the mobile phone enters the smart shooting mode (such as image a above); and the preview image collected by the telephoto camera after the mobile phone enters the smart shooting mode Image (the preview image as described in S308).
  • the mobile phone executes S1301 to display the first user interface 1505 shown in (a) in FIG. 15B.
  • the first user interface 1505 not only includes the instruction message "Please confirm whether to enter the smart shooting mode?", the prompt message "In smart shooting mode, the mobile phone can start the main camera to assist in taking pictures, which can improve the image quality!", a "Yes" button, and a "No" button, but also includes a preview image 1506 in the non-smart shooting mode and a preview image 1507 in the smart shooting mode.
  • the mobile phone executes S1301 to display the first user interface 1508 shown in (b) of FIG. 15B.
  • the first user interface 1508 not only includes the instruction message "Please confirm whether to enter the smart shooting mode?", the prompt message "In smart shooting mode, the mobile phone can start the main camera to assist in taking pictures, which can improve the image quality!", a "Yes" button, and a "No" button, but also includes a preview image 1509 in the non-smart shooting mode and a preview image 1510 in the smart shooting mode.
  • the mobile phone in response to the zoom operation, can directly display on the first user interface the preview image collected by the telephoto camera before the mobile phone enters the smart shooting mode (such as image a above); and the telephoto camera after the mobile phone enters the smart shooting mode The collected preview image (such as the preview image described in S308).
  • the mobile phone can directly provide the user with the image effect preview in the non-smart shooting mode and the image effect preview function in the smart shooting mode on the first user interface. In this way, it is convenient for the user to directly compare the preview image of the non-smart shooting mode and the preview image of the smart shooting mode on the first user interface, and decide whether to control the mobile phone to enter the smart shooting mode according to the image effect of the preview image.
  • the mobile phone includes a visible light camera and an infrared camera.
  • the above-mentioned visible light camera may also be an RGB camera. RGB cameras can only perceive visible light, not infrared light.
  • the above-mentioned infrared camera can not only perceive visible light, but also infrared light.
  • the above-mentioned infrared light may be infrared light of 890 nanometers (nm) to 990 nm. That is, the infrared camera can perceive infrared light with a wavelength of 890nm-990nm.
  • the wavelengths of infrared light that different infrared cameras can perceive may be different.
  • the above-mentioned visible light camera may also be a camera of the common wavelength band, that is, the wavelength band in which the wavelengths of visible light lie.
  • the visible light camera cannot perceive light or perceive weak light, so it cannot collect a clear image of the preset object.
  • the infrared camera can perceive the infrared light emitted by a warm body, such as a person or an animal (that is, a preset object), in the field of view, so that an image of the preset object can be collected.
  • when the mobile phone uses the visible light camera as the preview camera (i.e., the first camera) to collect images in a dark scene, in order to prevent weak visible light from degrading image quality, the mobile phone can take advantage of the infrared camera's ability to perceive infrared light and use the infrared camera as an auxiliary camera (i.e., the second camera) to assist the visible light camera, so as to improve the image quality of the images captured by the visible light camera.
  • an embodiment of the present application provides a method for capturing an image, and the method may be applied to a mobile phone including a main camera and a telephoto camera. As shown in Figure 16, the method may include S1601-S1611.
  • the mobile phone detects preset operation 1.
  • the preset operation 1 is used to trigger the visible light camera of the mobile phone to collect images.
  • the preset operation 1 is used to trigger the mobile phone to start the visible light camera, make the visible light camera collect images, and then display the image collected by the visible light camera.
  • the visible light camera of the mobile phone collects an image I, and the mobile phone displays the image I collected by the visible light camera.
  • the above-mentioned visible light camera may be any camera such as a telephoto camera, a wide-angle camera, a main camera, or a black and white camera.
  • the preset operation 1 used to trigger the mobile phone to start different visible light cameras is different.
  • the preset operation 1 used to trigger the mobile phone to start the main camera may be the operation 1 shown in (a) of FIG. 4, that is, the operation of the user to start the "camera" application.
  • the preset operation 1 used to trigger the mobile phone to start the telephoto camera may be the zoom operation described in S301.
  • the preset operation 1 for triggering the mobile phone to start the wide-angle camera may be an operation for the user to turn on the panoramic shooting mode in the "camera”.
  • the preset operation 1 used to trigger the mobile phone to start the black and white camera may be an operation of the user to turn on the black and white shooting mode in the "camera".
  • the image I in the embodiment of the present application is the first image.
  • the ambient light sensor of the mobile phone detects the brightness of the ambient light, the mobile phone determines the second ambient light brightness value, and determines whether the second ambient light brightness value is lower than the second brightness threshold.
  • the second brightness threshold may be lower than the aforementioned first brightness threshold.
  • the second brightness threshold may be a specific value of the ambient light brightness outdoors in the middle of the night.
  • the first brightness threshold may be a specific value of the ambient light brightness outdoors in the evening.
  • if the ambient light brightness value (i.e., the second ambient light brightness value) detected by the mobile phone's ambient light sensor is higher than or equal to the second brightness threshold, it means that the ambient light brightness is high, and the mobile phone does not need to enter the smart shooting mode to start the infrared camera to assist the visible light camera in taking pictures. At this time, the phone does not enter the smart shooting mode.
  • the visible light camera of the mobile phone continues to collect the image I, the mobile phone displays the image I collected by the visible light camera, and then executes S1611.
  • the second ambient light brightness value is lower than the second brightness threshold, it means that the ambient light brightness is low, the visible light intensity is low, and the mobile phone is in a dark light scene.
  • the visible light camera cannot perceive the light or the perceived light is weak, and therefore cannot collect a clear image of the preset object.
  • the mobile phone can use the infrared camera as an auxiliary camera to assist the visible light camera to improve the image quality of the image captured by the visible light camera.
  • the mobile phone may perform S1604.
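The decision in S1602-S1603 reduces to a single threshold comparison. A minimal sketch in Python, assuming an illustrative lux value for the second brightness threshold (the application does not specify numeric values):

```python
# Illustrative value only: the application fixes no numbers, just the comparison.
SECOND_BRIGHTNESS_THRESHOLD = 1.0  # lux, e.g. outdoor brightness in the middle of the night

def should_start_infrared_assist(second_ambient_brightness: float) -> bool:
    """Return True when the scene is dark enough that the infrared camera
    should assist the visible light camera (proceed to S1604); otherwise the
    phone keeps using only the visible light camera (proceed to S1611)."""
    return second_ambient_brightness < SECOND_BRIGHTNESS_THRESHOLD
```

If the function returns True, the phone may enter the smart shooting mode and start the infrared camera; otherwise it stays in the conventional flow.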
  • the infrared camera of the mobile phone collects images II.
  • the mobile phone can start the infrared camera, and the infrared camera can collect image II.
  • the image II in the embodiment of the present application is the second image.
  • the mobile phone may not start the infrared camera first, but display the first user interface, and the user can choose whether to enter the smart shooting mode to start the infrared camera to assist the visible light camera Take pictures.
  • the mobile phone may execute S1604.
  • the mobile phone can perform S1611.
  • for the first user interface, the first operation, and the second operation, reference may be made to the relevant introduction in the above-mentioned embodiment, which will not be repeated here.
  • the mobile phone may also display the second user interface.
  • the second user interface includes: a preview image collected by the visible light camera before the mobile phone enters the smart shooting mode (such as image I); and a preview image collected by the visible light camera after the mobile phone enters the smart shooting mode (such as the preview image described in S1609).
  • the mobile phone can execute S1604.
  • the mobile phone can execute S1611.
  • for the fourth operation and the fifth operation, reference may be made to the relevant introduction in the above-mentioned embodiment, which will not be repeated here.
  • the mobile phone detects that the image of the preset object is included in the first area of the image II.
  • the first area is an area in image II that corresponds to the field of view of the visible light camera.
  • for the method of "the mobile phone detects that the first area of the image II includes the image of the preset object" in S1605, reference may be made to the method of "the mobile phone detects that the first area of the image b includes the image of the preset object" in S304 described in the above-mentioned embodiment, which is not repeated here in this embodiment.
  • the mobile phone determines the exposure value of the second area.
  • the second area is the area where the avatar of the preset object in the image I is located.
  • for the method of "the mobile phone determines the second area in image I and detects the exposure value of the second area" in S1606, reference may be made to the method of "the mobile phone determines the second area in image a and detects the exposure value of the second area" in S305 described in the above embodiment, which is not repeated in this embodiment.
  • S1607 The mobile phone judges whether the exposure value of the second area is less than the first exposure threshold.
  • for "the mobile phone determines whether the exposure value of the second area is less than the first exposure threshold" in S1607, reference may be made to the detailed description of S306 in the foregoing embodiment, which is not repeated in this embodiment.
  • the mobile phone does not need to update the exposure value of the second area. Specifically, the mobile phone can execute S1611.
  • the mobile phone can adjust the exposure parameters of the visible light camera to increase the above-mentioned exposure value. Specifically, the mobile phone can execute S1608.
  • S1608 The mobile phone adjusts the exposure parameter of the visible light camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • for the method of "the mobile phone adjusts the exposure parameters of the visible light camera so that the exposure value of the second area is equal to or greater than the first exposure threshold" in S1608, reference may be made to the method of "the mobile phone adjusts the exposure parameters of the telephoto camera so that the exposure value of the second area is equal to or greater than the first exposure threshold" described in the above-mentioned embodiment, which is not repeated in this embodiment.
  • the mobile phone can also adjust different exposure parameters to increase the exposure value according to the motion state (such as still or moving) of the shooting object (ie, the preset object).
  • the exposure parameter adjusted by the mobile phone in S1608 may include the number of photographed frames.
  • the exposure parameter adjusted by the mobile phone executing S1608 may include the exposure time.
  • the mobile phone may execute S1201.
  • S1608a The mobile phone adjusts the exposure time (ie, exposure parameter) of the visible light camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • S1608b The mobile phone adjusts the number of photographing frames (ie, exposure parameters) of the visible light camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
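Steps S1608a/S1608b can be sketched as follows. This is an illustrative sketch, assuming that a still preset object is handled by lengthening the exposure time (S1608a) and a moving one by increasing the number of photographed frames (S1608b), and that the exposure value of the second area scales proportionally with either parameter; the proportional model and the doubling step are assumptions, not details from the application:

```python
def raise_exposure(base_exposure: float, threshold: float,
                   object_is_still: bool) -> dict:
    """Increase one exposure parameter until the (modeled) exposure value of
    the second area is equal to or greater than the first exposure threshold."""
    assert base_exposure > 0, "exposure value must be positive"
    exposure_time_scale, frame_count = 1.0, 1
    total = base_exposure
    while total < threshold:
        if object_is_still:
            exposure_time_scale *= 2   # S1608a: lengthen the exposure time
        else:
            frame_count += 1           # S1608b: capture more frames to fuse
        total = base_exposure * exposure_time_scale * frame_count
    return {"exposure_time_scale": exposure_time_scale, "frame_count": frame_count}
```

A still scene thus keeps a single frame with a longer exposure, while a moving scene keeps the per-frame exposure short (avoiding motion blur) and accumulates light over several frames.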
  • the visible light camera of the mobile phone uses the adjusted exposure parameters to collect a first preview image, and the mobile phone displays the first preview image.
  • in response to the user's photographing operation, the mobile phone saves the image III.
  • the image III was taken by a visible light camera with adjusted exposure parameters.
  • the above-mentioned image III is obtained from one or more frames of the first preview image collected by the visible light camera using the adjusted exposure parameters.
  • the image III in the embodiment of the present application is the third image.
  • for S1610 in this embodiment, reference may be made to the detailed description of S309 in the foregoing embodiment, which is not repeated in this embodiment.
  • when the preset object is still, in response to the user's photographing operation, the anti-shake operation performed by the mobile phone on the preview image collected by the visible light camera includes OIS anti-shake.
  • when the preset object is moving, in response to the user's photographing operation, the anti-shake operation performed by the mobile phone on the preview image collected by the visible light camera may include OIS anti-shake and EIS anti-shake.
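The anti-shake choice in the two bullets above can be expressed as a small selection function (a sketch that simply restates the described mapping; the function name is illustrative):

```python
def antishake_pipeline(object_is_still: bool) -> list:
    """Still preset object -> optical stabilization (OIS) only;
    moving preset object -> optical plus electronic stabilization (OIS + EIS)."""
    return ["OIS"] if object_is_still else ["OIS", "EIS"]
```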
  • the mobile phone saves the image IV.
  • the image IV is obtained based on the image I collected by the visible light camera.
  • the image IV in the embodiment of the present application is the fourth image.
  • for S1611 in this embodiment, reference may be made to the detailed description of S310 in the foregoing embodiment, which is not repeated in this embodiment.
  • the embodiment of the application provides a method for capturing images.
  • the infrared camera has the ability to perceive visible light and infrared light, while the visible light camera has the ability to perceive visible light but does not have the ability to perceive infrared light.
  • the infrared camera can be used as an auxiliary camera.
  • the mobile phone can perceive the advantages of infrared light by means of an infrared camera, and detect the position of the preset object (ie, the second area) from the image I collected by the visible light camera.
  • if the image quality of image I is poor and the preset object cannot be clearly distinguished in image I, the reason may be that the area where the preset object is located in image I (such as the second area) has a low exposure value.
  • the mobile phone can detect and adjust the exposure parameters of the visible light camera to increase the above exposure value.
  • the image quality of the image captured by the visible light camera can be improved.
  • the visible light camera can shoot images with higher image quality (such as image III).
  • the above-mentioned visible light camera is a telephoto camera
  • the mobile phone includes a telephoto camera, a main camera, and an infrared camera.
  • the infrared camera has the ability to perceive visible light and infrared light
  • the telephoto camera has the ability to perceive visible light but does not have the ability to perceive infrared light.
  • the method may include S1601-S1602, S1701-S1703, S1604-S1611, and S304-S310.
  • the preset operation 1 described in S1601-S1602 is a zoom operation.
  • for the detailed introduction of the zoom operation, reference may be made to the relevant description of the foregoing embodiment, which is not repeated in this embodiment.
  • the method in this embodiment of the present application may further include S1701-S1703.
  • the ambient light sensor of the mobile phone detects the brightness of the ambient light, the mobile phone determines the second ambient light brightness value, and determines whether the second ambient light brightness value is lower than the first brightness threshold.
  • the second ambient light brightness value is higher than or equal to the first brightness threshold, it indicates that the ambient light brightness is high; then, even if the light input of the telephoto camera is small, the image quality of the captured image will not be affected. In this case, the phone does not need to enter the smart shooting mode. Therefore, the mobile phone may not enter the smart shooting mode, the visible light camera collects the image I, and the mobile phone displays the image I collected by the visible light camera, and then S1611 is executed.
  • the mobile phone can enter the smart shooting mode, using the main camera or infrared camera to assist the telephoto camera to take pictures. It is understandable that in the case of particularly low ambient light brightness, even if the main camera has a large amount of light, it may not be possible to collect a clear image of the preset object due to weak visible light.
  • the infrared camera can perceive the infrared light emitted by a warm body, such as a person or an animal (that is, a preset object), in the field of view, so that an image of the preset object can be collected.
  • the mobile phone can use the main camera to assist the telephoto camera to take pictures.
  • the mobile phone can use an infrared camera to assist the telephoto camera to take pictures.
  • the second brightness threshold is lower than the first brightness threshold.
  • the second brightness threshold may be a brightness value of ambient light outdoors at night
  • the first brightness threshold may be a brightness value of ambient light outdoors in the evening.
  • the mobile phone may perform S1702.
  • the mobile phone judges whether the second ambient light brightness value is lower than the second brightness threshold value.
  • the mobile phone can enter the smart shooting mode and use the infrared camera to assist the telephoto camera to take pictures.
  • the mobile phone can execute S1604-S1611, enter the smart shooting mode, and use the infrared camera as the auxiliary camera.
  • the mobile phone can enter the smart shooting mode and use the main camera to assist the telephoto camera to take pictures. As shown in Figure 17, after S1702, if the second ambient light brightness value is higher than or equal to the second brightness threshold, the mobile phone can execute S1703 and S304-S310, enter the smart shooting mode, and use the main camera as the auxiliary camera.
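The cascade of S1701-S1702 can be summarized as follows. A sketch under stated assumptions: the lux values are illustrative, since the application only fixes the ordering of the thresholds (the first brightness threshold is higher than the second):

```python
# Illustrative values; only the ordering FIRST > SECOND comes from the description.
FIRST_BRIGHTNESS_THRESHOLD = 50.0   # e.g. outdoor evening brightness
SECOND_BRIGHTNESS_THRESHOLD = 1.0   # e.g. outdoor midnight brightness

def choose_auxiliary_camera(second_ambient_brightness: float):
    """Return the auxiliary camera that assists the telephoto camera, or None
    when the phone does not enter the smart shooting mode."""
    if second_ambient_brightness >= FIRST_BRIGHTNESS_THRESHOLD:
        return None                  # bright scene: stay out of smart shooting mode
    if second_ambient_brightness < SECOND_BRIGHTNESS_THRESHOLD:
        return "infrared_camera"     # very dark: rely on infrared perception (S1604-S1611)
    return "main_camera"             # dim: larger light input of the main camera (S1703, S304-S310)
```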
  • the main camera of the mobile phone collects an image b.
  • the method in this embodiment of the present application may further include S304-S310.
  • the image I described in S1601 and S1602 is the same as the image a described in S305 and S310.
  • the image I and the image a are both preview images collected by the telephoto camera as the preview camera before the mobile phone enters the smart shooting mode.
  • Image II is different from image b.
  • the image b is a preview image collected by the main camera as the auxiliary camera.
  • Image II is a preview image collected by an infrared camera as an auxiliary camera.
  • Image III is different from image c.
  • the image c is the image collected by the telephoto camera when the mobile phone enters the smart shooting mode
  • the telephoto camera is used as the preview camera
  • the main camera is used as the auxiliary camera.
  • Image III is the image collected by the telephoto camera when the mobile phone enters the smart shooting mode
  • the telephoto camera is used as the preview camera
  • the infrared camera is used as the auxiliary camera.
  • Image IV is different from image d.
  • the image d is an image obtained by the mobile phone based on the image a (that is, the preview image) in response to the photographing operation.
  • the image IV is an image obtained by the mobile phone based on the image I (ie, the preview image) in response to the photographing operation.
  • the embodiment of the application provides a method for capturing images.
  • the mobile phone can select the main camera or the infrared camera as the auxiliary camera to assist the telephoto camera in taking pictures according to the brightness of the ambient light, so as to improve the image quality of the images captured by the telephoto camera.
  • when the mobile phone uses another camera as the preview camera, the method of using the main camera or infrared camera as the auxiliary camera to assist that camera in taking pictures is similar to the above method, and this embodiment will not repeat it here.
  • the mobile phone includes a color camera and a black and white camera.
  • the color camera can collect color images.
  • the black-and-white camera has a larger light input.
  • the images collected by the black-and-white camera can only show different levels of gray, and cannot show the true colors of the subject.
  • the above-mentioned main camera, telephoto camera, and wide-angle camera are all color cameras.
  • the black-and-white camera is used as an auxiliary camera (that is, the second camera) to assist the color camera, so as to improve the image quality of the image captured by the color camera.
  • the mobile phone uses a color camera as a preview camera, and uses a black and white camera as an auxiliary camera to assist the color camera in taking pictures.
  • a color camera as a preview camera
  • a black and white camera as an auxiliary camera to assist the color camera in taking pictures.
  • the color camera described in the above embodiment is a telephoto camera
  • the mobile phone includes a telephoto camera, a camera (such as a main camera) with a larger light input than the telephoto camera, an infrared camera, and a black-and-white camera.
  • the main camera, infrared camera or black-and-white camera can be selected as the auxiliary camera to assist the telephoto camera in taking pictures according to the brightness of the ambient light.
  • if the third ambient light brightness value is lower than the first brightness threshold but higher than or equal to the third brightness threshold, the mobile phone can use the main camera as an auxiliary camera to assist the telephoto camera to take pictures. If the third ambient light brightness value is lower than the third brightness threshold but higher than or equal to the second brightness threshold, the mobile phone can use the black and white camera as an auxiliary camera to assist the telephoto camera to take pictures. If the third ambient light brightness value is lower than the second brightness threshold, the mobile phone can use the infrared camera as an auxiliary camera to assist the telephoto camera to take pictures. Wherein, the first brightness threshold is higher than the third brightness threshold, and the third brightness threshold is higher than the second brightness threshold.
  • the mobile phone uses the telephoto camera as the preview camera to collect images
  • the main camera, the infrared camera or the black and white camera is used as the auxiliary camera to assist the telephoto camera to take pictures.
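The three-way selection described above can be sketched as a chain of threshold tests. The numeric threshold values are illustrative assumptions; the application only requires first > third > second:

```python
def select_auxiliary(brightness: float, first_t: float = 50.0,
                     third_t: float = 10.0, second_t: float = 1.0) -> str:
    """Pick the auxiliary camera for the telephoto preview camera from the
    third ambient light brightness value (thresholds: first_t > third_t > second_t)."""
    if brightness >= first_t:
        return "none"                     # bright enough: no smart shooting mode
    if brightness >= third_t:
        return "main_camera"              # dim: larger light input than telephoto
    if brightness >= second_t:
        return "black_and_white_camera"   # darker: even larger light input
    return "infrared_camera"              # darkest: perceives emitted infrared light
```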
  • the embodiment of the application provides a method for capturing images.
  • the mobile phone can select the main camera, infrared camera, or black and white camera as the auxiliary camera to assist the telephoto camera in taking pictures according to the brightness of the ambient light, so as to improve the image quality of the images captured by the telephoto camera.
  • the mobile phone includes a color camera and a depth camera (such as a ToF camera).
  • when a color camera is used as a preview camera to capture images, it may not be possible to clearly capture the outline of the preset object because the color of the shooting object (such as the above-mentioned preset object) is close to the background color.
  • the depth camera can collect the depth information of the preset object, and the depth information can be used to detect the contour of the preset object. Therefore, in this embodiment, when the mobile phone uses a color camera as the preview camera (i.e., the first camera) to collect images, the depth camera can be used as an auxiliary camera (i.e., the second camera) to assist the color camera, so as to improve the image quality of the images captured by the color camera.
  • the color camera described in this embodiment may be any camera such as a main camera, a telephoto camera, and a wide-angle camera.
  • an image capturing method provided in an embodiment of the present application may include S1801-S1811.
  • the mobile phone detects preset operation 2.
  • the preset operation 2 is used to trigger the color camera of the mobile phone to collect images.
  • the preset operation 2 is used to trigger the mobile phone to start the color camera so that the color camera collects images, and then the mobile phone can display the image collected by the color camera.
  • the color camera of the mobile phone collects the image i, and the mobile phone displays the image i collected by the color camera.
  • the preset operation 2 used to trigger the mobile phone to start different color cameras is different.
  • the preset operation 2 used to trigger the mobile phone to start the main camera may be the operation 1 shown in (a) of FIG. 4, that is, the operation of the user to start the "camera" application.
  • the preset operation 2 used to trigger the mobile phone to start the telephoto camera may be the zoom operation described in S301.
  • the preset operation 2 used to trigger the mobile phone to start the wide-angle camera may be an operation for the user to turn on the panoramic shooting mode in the "camera”.
  • the image i in the embodiment of the present application is the first image.
  • the mobile phone determines the RGB value of each pixel in the image i, and determines whether the image i meets the preset condition 1.
  • the above-mentioned preset condition 1 is the first preset condition, and the preset condition 1 means that the image i includes the third area.
  • the difference between the RGB values of the multiple pixels in the third area is less than the preset RGB threshold.
  • the mobile phone may calculate the difference between the RGB values of two pixels that are K pixels apart in the image i. Then, the mobile phone can determine whether such an image area (that is, the third area) is included in the image i.
  • the differences calculated in the image area (that is, the third area) are all less than the preset RGB threshold; or, the number of the differences calculated in the image area (that is, the third area) that are less than the preset RGB threshold is greater than a preset number threshold. The size (such as the area or the number of pixels) of the above-mentioned image area may be preset. It can be understood that if such an image area is included in the image i, it means that the image i satisfies the preset condition 1. If no such image area is included in the image i, it means that the image i does not satisfy the preset condition 1.
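The preset-condition-1 check can be sketched as a search for a near-uniform color region. This is a simplified illustration: the application compares pixels K pixels apart, while this sketch checks the per-channel spread inside a fixed sliding window; the window size and RGB threshold are illustrative assumptions:

```python
def meets_preset_condition_1(pixels, window=3, rgb_threshold=10):
    """pixels: 2-D list of (R, G, B) tuples. Return True if image i contains a
    window x window region (a candidate third area) whose per-channel spread
    is below rgb_threshold, i.e. the region's colors are nearly uniform."""
    h, w = len(pixels), len(pixels[0])
    for y in range(h - window + 1):
        for x in range(w - window + 1):
            region = [pixels[y + dy][x + dx]
                      for dy in range(window) for dx in range(window)]
            channels = list(zip(*region))  # (all Rs, all Gs, all Bs)
            if all(max(c) - min(c) < rgb_threshold for c in channels):
                return True   # near-uniform region found: condition 1 holds
    return False
```

When the function returns True, the subject's color is likely close to the background color, so the phone may start the depth camera (S1804) to recover the contour from depth information.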
  • if the image i meets the preset condition 1, the mobile phone can perform S1804; if the image i does not meet the preset condition 1, the mobile phone does not enter the smart shooting mode.
  • the color camera of the mobile phone continues to collect the image i, the mobile phone displays the image i collected by the color camera, and then executes S1811.
  • the depth camera of the mobile phone collects an image ii.
  • the image ii in the embodiment of the present application is the second image.
  • the mobile phone may not start the depth camera first, but display the first user interface, and the user can choose whether to enter the smart shooting mode to start the depth camera to assist the color camera in capturing images.
  • the mobile phone may perform S1804.
  • the mobile phone can perform S1811.
  • for the first user interface, the first operation, and the second operation, reference may be made to the relevant introduction in the above-mentioned embodiment, which will not be repeated here.
  • the mobile phone may also display the second user interface.
  • the second user interface includes: a preview image (such as image i) collected by the color camera before the mobile phone enters the smart shooting mode; and a preview image (such as the preview image described in S1809) collected by the color camera after the mobile phone enters the smart shooting mode.
  • the mobile phone can perform S1804.
  • the mobile phone can execute S1811.
  • for the fourth operation and the fifth operation, reference may be made to the relevant introduction in the above-mentioned embodiment, which will not be repeated here.
  • the mobile phone detects that the first area of the image ii includes the image of the preset object.
  • the first area is an area in the image ii that corresponds to the field of view of the color camera.
  • for the method of "the mobile phone detects that the first area of the image ii includes the image of the preset object" in S1805, reference may be made to the method of "the mobile phone detects that the first area of the image b includes the image of the preset object" in S304 in the above-mentioned embodiment, which is not repeated here in this embodiment.
  • the mobile phone determines the exposure value of the second area.
  • the second area is the area where the image of the preset object in image i is located.
  • for the method of "the mobile phone determines the second area in image i and detects the exposure value of the second area" in S1806, reference may be made to the method of "the mobile phone determines the second area in image a and detects the exposure value of the second area" in S305 described in the above embodiment, which is not repeated in this embodiment.
  • S1807 The mobile phone judges whether the exposure value of the second area is less than the first exposure threshold.
  • for "the mobile phone determines whether the exposure value of the second area is less than the first exposure threshold" in S1807, reference may be made to the detailed description of S306 in the foregoing embodiment, which is not repeated in this embodiment.
  • if the exposure value of the second area is equal to or greater than the first exposure threshold, the mobile phone does not need to update the exposure value of the second area. Specifically, the mobile phone can execute S1811.
  • if the exposure value of the second area is less than the first exposure threshold, the mobile phone can adjust the exposure parameters of the color camera to increase the above exposure value. Specifically, the mobile phone can execute S1808.
  • S1808 The mobile phone adjusts the exposure parameters of the color camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • for the method of "the mobile phone adjusts the exposure parameters of the color camera so that the exposure value of the second area is equal to or greater than the first exposure threshold" in S1808, reference may be made to the method of "the mobile phone adjusts the exposure parameters of the telephoto camera so that the exposure value of the second area is equal to or greater than the first exposure threshold" in the above-mentioned embodiment, which is not repeated in this embodiment.
  • the mobile phone can also adjust different exposure parameters to increase the exposure value according to the motion state (such as still or moving) of the shooting object (ie, the preset object).
  • the exposure parameter adjusted by the mobile phone in S1808 may include the number of photographed frames.
  • the exposure parameter adjusted by the mobile phone executing S1808 may include the exposure time.
  • the mobile phone may execute S1201.
  • S1201: if the preset object is stationary, the mobile phone can execute S1808a; if the preset object is moving, the mobile phone can execute S1808b.
  • S1808a The mobile phone adjusts the exposure time (ie exposure parameter) of the color camera to make the exposure value of the second area equal to or greater than the first exposure threshold.
  • S1808b The mobile phone adjusts the number of photo frames (ie exposure parameters) of the color camera to make the exposure value of the second area equal to or greater than the first exposure threshold.
  • for the specific implementation of S1808a, refer to the detailed introduction of S307a in the foregoing embodiment; for the specific implementation of S1808b, refer to the detailed introduction of S307b in the foregoing embodiment, which is not repeated in this embodiment.
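The branch among S1201, S1808a, and S1808b can be sketched as follows. This is an illustrative model only: the function name and the linear relation between exposure value, exposure time, and frame count are assumptions, not something the embodiment specifies.

```python
import math

def adjust_exposure(ev, first_threshold, exposure_time_ms, frames, moving):
    """Return updated (exposure_time_ms, frames) so that the estimated
    exposure value reaches first_threshold. A stationary preset object gets
    a longer exposure time (S1808a); a moving one gets more photo frames to
    fuse (S1808b), keeping the exposure time short to limit motion blur."""
    if ev >= first_threshold:
        return exposure_time_ms, frames      # S1807: no adjustment needed
    factor = first_threshold / ev            # assumed linear light model
    if moving:
        frames = math.ceil(frames * factor)  # S1808b: more frames
    else:
        exposure_time_ms *= factor           # S1808a: longer exposure
    return exposure_time_ms, frames

print(adjust_exposure(50, 100, 10.0, 1, moving=False))  # (20.0, 1)
print(adjust_exposure(50, 100, 10.0, 1, moving=True))   # (10.0, 2)
```

A real camera pipeline would adjust ISO sensitivity alongside these parameters and clamp them to hardware limits; those details are omitted here.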
  • the color camera of the mobile phone uses the adjusted exposure parameters to collect a first preview image, and the mobile phone displays the first preview image.
  • in response to the user's photographing operation, the mobile phone saves the image iii.
  • the image iii is taken by a color camera with adjusted exposure parameters.
  • the image iii is acquired based on the first preview image of one or more frames collected by the color camera using the adjusted exposure parameters.
  • the image iii in the embodiment of the present application is the third image.
  • for S1810 in this embodiment, reference may be made to the detailed description of S309 in the foregoing embodiment, which is not repeated here in this embodiment.
  • when the preset object is still, the anti-shake operation performed by the mobile phone, in response to the user's photographing operation, on the preview image collected by the color camera includes OIS anti-shake.
  • when the preset object is moving, the anti-shake operation performed by the mobile phone, in response to the user's photographing operation, on the preview image collected by the color camera may include OIS anti-shake and EIS anti-shake.
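The choice of anti-shake operations described above can be summarized in a small hypothetical helper (the function name and the tuple return format are illustrative, not from the embodiment):

```python
def anti_shake_modes(preset_object_moving):
    """OIS steadies the lens during the shutter time and is always applied;
    EIS is added for a moving preset object to reduce multi-frame blur when
    fusing frames."""
    return ("OIS", "EIS") if preset_object_moving else ("OIS",)

print(anti_shake_modes(False))  # ('OIS',)
print(anti_shake_modes(True))   # ('OIS', 'EIS')
```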
  • in response to the user's photographing operation, the mobile phone saves the image iv.
  • the image iv is obtained based on the image i collected by the color camera.
  • the image iv in the embodiment of the present application is the fourth image.
  • for S1811 in this embodiment, reference may be made to the detailed description of S310 in the foregoing embodiment, which will not be repeated in this embodiment.
  • the embodiment of the present application provides a method for capturing images.
  • the depth camera can be used as an auxiliary camera.
  • the mobile phone can use the advantage that the depth camera can collect the depth information of the preset object, and detect the position of the preset object (that is, the second area) from the image i collected by the color camera.
  • the image quality of the image i is poor, and the reason why the preset object cannot be clearly distinguished in the image i is that the position of the preset object in the image i (such as the second area) has a low exposure value.
  • the mobile phone can detect and adjust the exposure parameters of the color camera to increase the above-mentioned exposure value.
  • the image quality of the image captured by the color camera can be improved.
  • the color camera can capture images with higher image quality (such as image iii).
  • the mobile phone includes a black and white camera and a color camera.
  • the color camera can collect color images.
  • the images collected by the black-and-white camera can only show different levels of gray, and cannot show the true colors of the subject. Therefore, using a black-and-white camera to take pictures may reduce the image quality when the photographed objects (such as the above-mentioned preset objects) include similar colors that are not easily distinguishable in grayscale.
  • when the mobile phone uses a black and white camera as the preview camera (i.e., the first camera) to collect images, it can take advantage of the color camera's ability to capture the true color of the subject: the color camera is used as the auxiliary camera (i.e., the second camera) to assist the work of the black and white camera and improve the image quality of the images captured by the black and white camera.
  • the aforementioned color camera may be any camera such as a main camera, a telephoto camera, or a wide-angle camera. In this embodiment, the color camera being the main camera is taken as an example.
  • an image capturing method provided in an embodiment of the present application may include S1901-S1911.
  • the mobile phone detects preset operation 3.
  • the preset operation 3 is used to trigger the black and white camera of the mobile phone to collect images.
  • the black and white camera of the mobile phone collects image A, and the mobile phone displays the image A collected by the black and white camera.
  • the preset operation 3 may be an operation for the user to turn on the black-and-white shooting mode in the “camera”.
  • the image A in the embodiment of the present application is the first image.
  • the mobile phone determines the gray value of each pixel in the image A, and determines whether the image A meets the preset condition 2.
  • the preset condition 2 is the second preset condition.
  • the preset condition 2 refers to: the image A includes the fourth area. The difference in the gray values of the multiple pixels in the fourth area is less than the preset gray threshold.
  • the mobile phone may calculate the difference between the gray values of two pixels in image A that are K pixels apart. Then, the mobile phone can determine whether the image A includes such an image area (that is, the fourth area): an image area in which the aforementioned calculated differences are less than the preset gray-scale threshold; or, an image area in which the number of the aforementioned calculated differences that are less than the preset gray-scale threshold is greater than the preset number threshold.
  • the size (such as the area or the number of pixels) of the above-mentioned image area may be preset. It can be understood that if the image area is included in the image A, it means that the image A satisfies the preset condition 2. If the image area is not included in the image A, it means that the image A does not meet the preset condition 2.
  • if the image A meets the preset condition 2, the mobile phone can perform S1904; if the image A does not meet the preset condition 2, the mobile phone does not enter the smart shooting mode.
  • the black and white camera of the mobile phone continues to collect image A, and the mobile phone displays the image A collected by the black and white camera, and then executes S1911.
  • the main camera of the mobile phone (that is, the color camera) collects the image B.
  • the image B in the embodiment of the present application is the second image.
  • the mobile phone may not start the main camera (that is, the color camera), but display the first user interface, and the user can choose whether to enter the smart shooting mode to start the main camera to assist the black and white camera Take pictures.
  • the mobile phone may perform S1904.
  • the mobile phone can perform S1911.
  • for the first user interface, the first operation, and the second operation, reference may be made to the relevant introduction in the above-mentioned embodiment, which will not be repeated here.
  • the mobile phone may also display the second user interface.
  • the second user interface includes: a preview image (such as image A) collected by the black and white camera before the mobile phone enters the smart shooting mode; and a preview image (such as the preview image described in S1909) collected by the black and white camera after the mobile phone enters the smart shooting mode.
  • the mobile phone may perform S1904.
  • the mobile phone can perform S1911.
  • for the fourth operation and the fifth operation, reference may be made to the relevant introduction in the above-mentioned embodiment, which will not be repeated here.
  • the mobile phone detects that the first area of the image B includes the image of the preset object.
  • the first area is an area in image B that corresponds to the field of view of the black-and-white camera.
  • for the method of "the mobile phone detects that the first area of image B includes the image of the preset object" in S1905, reference may be made to the method of "the mobile phone detects that the first area of image b includes the image of the preset object" in S304 in the above-mentioned embodiment, which is not repeated here in this embodiment.
  • the mobile phone determines the exposure value of the second area.
  • the second area is the area where the image of the preset object in image A is located.
  • for the method of "the mobile phone determines the second area in image A and detects the exposure value of the second area" in S1906, reference may be made to the method of "the mobile phone determines the second area in image a and detects the exposure value of the second area" in S305 described in the above embodiment, which is not repeated in this embodiment.
  • S1907 The mobile phone judges whether the exposure value of the second area is less than the first exposure threshold.
  • for "the mobile phone determines whether the exposure value of the second area is less than the first exposure threshold" in S1907, reference may be made to the detailed description of S306 in the foregoing embodiment, which will not be repeated in this embodiment.
  • if the exposure value of the second area is equal to or greater than the first exposure threshold, the mobile phone does not need to update the exposure value of the second area. Specifically, the mobile phone can execute S1911.
  • if the exposure value of the second area is less than the first exposure threshold, the mobile phone can adjust the exposure parameters of the black and white camera to increase the above exposure value. Specifically, the mobile phone can execute S1908.
  • the mobile phone adjusts the exposure parameters of the black and white camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • for the method of "the mobile phone adjusts the exposure parameters of the black and white camera so that the exposure value of the second area is equal to or greater than the first exposure threshold" in S1908, reference may be made to the method of "the mobile phone adjusts the exposure parameters of the telephoto camera so that the exposure value of the second area is equal to or greater than the first exposure threshold" in the above-mentioned embodiment, which is not repeated in this embodiment.
  • the mobile phone can also adjust different exposure parameters to increase the exposure value according to the motion state (such as still or moving) of the shooting object (ie, the preset object).
  • the exposure parameter adjusted by the mobile phone executing S1908 may include the number of photographing frames.
  • the exposure parameter adjusted by the mobile phone executing S1908 may include the exposure time.
  • the mobile phone may perform S1201.
  • S1908a The mobile phone adjusts the exposure time (ie, exposure parameter) of the black and white camera, so that the exposure value of the second area is equal to or greater than the first exposure threshold.
  • S1908b The mobile phone adjusts the number of photo frames (ie exposure parameters) of the black and white camera to make the exposure value of the second area equal to or greater than the first exposure threshold.
  • the black and white camera of the mobile phone uses the adjusted exposure parameters to collect a first preview image, and the mobile phone displays the first preview image.
  • in response to the user's photographing operation, the mobile phone saves the image C.
  • the image C was taken by a black-and-white camera with adjusted exposure parameters.
  • the image C is acquired based on one or more frames of the first preview image collected by the black-and-white camera using the adjusted exposure parameters.
  • the image C in the embodiment of the present application is the third image.
  • for S1910 in this embodiment, reference may be made to the detailed description of S309 in the foregoing embodiment, which is not repeated in this embodiment.
  • when the preset object is still, the anti-shake operation performed by the mobile phone, in response to the user's photographing operation, on the preview image collected by the black and white camera includes OIS anti-shake.
  • when the preset object is moving, the anti-shake operation performed by the mobile phone, in response to the user's photographing operation, on the preview image collected by the black and white camera may include OIS anti-shake and EIS anti-shake.
  • in response to the user's photographing operation, the mobile phone saves the image D.
  • the image D is obtained based on the image A collected by the black and white camera.
  • the image D in the embodiment of the present application is the fourth image.
  • for S1911 in this embodiment, reference may be made to the detailed description of S310 in the foregoing embodiment, which is not repeated in this embodiment.
  • the embodiment of the present application provides a method for capturing images that is based on the fact that the color camera can capture color images, while the images captured by the black-and-white camera can only show different levels of gray scale and cannot show the true color of the photographed object.
  • when the black and white camera of the mobile phone collects images, the mobile phone can use the main camera (that is, the color camera) as the auxiliary camera.
  • the mobile phone can take advantage of the color camera to collect color images, and detect the position of the preset object (that is, the second area) from the image A collected by the black and white camera.
  • the image quality of the image A is poor, and the reason why the preset object cannot be clearly distinguished in the image A is that the position of the preset object in the image A (such as the second area) has a low exposure value. Therefore, the mobile phone can detect and adjust the exposure parameters of the black-and-white camera to increase the above-mentioned exposure value. In this way, the image quality of the images captured by the black and white camera can be improved: after increasing the exposure value, the black-and-white camera can shoot images with higher image quality (such as image C).
  • the above-mentioned electronic device (such as a mobile phone) includes a hardware structure and/or software module corresponding to each function.
  • the embodiments of the present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a certain function is executed by hardware or computer software-driven hardware depends on the specific application and design constraint conditions of the technical solution. Professionals and technicians can use different methods for each specific application to implement the described functions, but such implementation should not be considered as going beyond the scope of the embodiments of the present application.
  • the embodiments of the present application can divide the above-mentioned electronic devices (such as mobile phones) into functional modules according to the above-mentioned method examples.
  • each functional module can be divided corresponding to each function, or two or more functions can be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or software functional modules. It should be noted that the division of modules in the embodiments of the present application is illustrative, and is only a logical function division, and there may be other division methods in actual implementation.
  • FIG. 20 shows a schematic diagram of a possible structure of the electronic device 2000 involved in the foregoing embodiment.
  • the electronic device 2000 may include: a processing module 2001, a display module 2002, a first collection module 2003, a second collection module 2004, and a storage module 2005.
  • the processing module 2001 is used to control and manage the actions of the electronic device 2000.
  • the first collection module 2003 and the second collection module 2004 are used to collect images.
  • the display module 2002 is used to display the images generated by the processing module 2001 and the images collected by the first collection module 2003 and the second collection module 2004.
  • the above-mentioned processing module 2001 can be used to support the electronic device 2000 to execute the "judgment of ambient light brightness" and S301, S304, S305, S306, S307, S1201, S307a, S307b, S1302, S1401, S1601, and S1603 in the above method embodiment, and/or other processes used in the techniques described herein.
  • the above-mentioned display module 2002 can be used to support the electronic device 2000 to perform the operation of "display image a" in S302, the operation of "display the first preview image" in S308, the operation of "display image I" in S1301, S1402, and S1602, the operation of "display the first preview image" in S1609, the operation of "display image i" in S1802, the operation of "display the first preview image" in S1809, the operation of "display image A" in S1902, the operation of "display the first preview image" in S1909, and/or other processes used in the techniques described herein.
  • the above-mentioned first acquisition module 2003 may be used to support the electronic device 2000 to perform the operation of "collect image a" in S302, the operation of "collect the first preview image" in S308, the operation of "collect image I" in S1602, the operation of "collect the first preview image" in S1609, the operation of "collect image i" in S1802, the operation of "collect image A" in S1902, the operation of "collect the first preview image" in S1909 in the above method embodiment, and/or other processes used in the techniques described herein.
  • the above-mentioned second acquisition module 2004 can be used to support the electronic device 2000 to perform the operation of "collect image b" in S303, the corresponding collection operations in S1303, S1403, S1604, S1703, S1804, and S1904, and the operation of "collect the first preview image" in S1809 in the foregoing method embodiment, and/or other processes used in the techniques described herein.
  • the above-mentioned storage module 2005 may be used to support the electronic device 2000 to perform the operation of "save image c" in S309, the operation of "save image d" in S310, the operation of "save image III" in S1610, and the operation of "save image IV" in S1611 in the above method embodiment.
  • the storage module can also be used to store the program code and data of the electronic device 2000.
  • the electronic device 2000 may also include other functional modules such as a sensor module and a communication module.
  • the sensor module is used to detect the brightness of the ambient light.
  • the above-mentioned sensor module may be used to support the electronic device 2000 to perform the operation of "detecting ambient light brightness" in S1603 and S1701 in the above-mentioned method embodiment, and/or other processes used in the technology described herein.
  • the communication module is used to support the communication between the electronic device 2000 and other devices.
  • the processing module 2001 may be a processor or a controller, for example, a central processing unit (CPU), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof.
  • the processor may include an application processor and a baseband processor. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of this application.
  • the processor may also be a combination for realizing computing functions, for example, including a combination of one or more microprocessors, a combination of a DSP and a microprocessor, and so on.
  • the processing module 2001 may be one or more processors (such as the processor 110 shown in FIG. 1), and the storage module 2005 may be a memory (such as the internal memory 121 shown in FIG. 1).
  • the display module 2002 may be a display screen (the display screen 194 shown in FIG. 1).
  • the above-mentioned first acquisition module 2003 may be a first camera (such as the preview camera shown in FIG. 1), and the second acquisition module 2004 may be a second camera (such as the auxiliary camera shown in FIG. 1).
  • the aforementioned sensor module may be the sensor module 180 shown in FIG. 1, and the sensor module 180 shown in FIG. 1 includes an ambient light sensor.
  • the electronic device 2000 provided by the embodiment of the present application may be the electronic device 100 shown in FIG. 1.
  • the above-mentioned one or more processors, memory, first camera, second camera, and display screen may be connected together, for example, connected by a bus.
  • the chip system 2100 includes at least one processor 2101 and at least one interface circuit 2102.
  • the processor 2101 and the interface circuit 2102 may be interconnected by wires.
  • the interface circuit 2102 may be used to receive signals from other devices (such as the memory of an electronic device).
  • the interface circuit 2102 may be used to send signals to other devices (such as the processor 2101).
  • the interface circuit 2102 may read an instruction stored in the memory, and send the instruction to the processor 2101.
  • when the instructions are executed by the processor 2101, the electronic device can be caused to execute the steps in the foregoing embodiments.
  • the chip system may also include other discrete devices, which are not specifically limited in the embodiment of the present application.
  • the embodiments of the present application also provide a computer storage medium. The computer storage medium includes computer instructions that, when run on the above-mentioned electronic device, cause the electronic device to perform each function or step performed by the mobile phone in the above-mentioned method embodiment.
  • the embodiments of the present application also provide a computer program product, which when the computer program product runs on a computer, causes the computer to execute each function or step performed by the mobile phone in the above method embodiment.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are merely illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods; for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • the units described as separate parts may or may not be physically separate.
  • the parts displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place, or they may be distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
  • the functional units in the various embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated unit can be implemented in the form of hardware or software functional unit.
  • the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solutions, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to make a device (which may be a single-chip microcomputer, a chip, etc.) or a processor execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.

Abstract

A method for capturing images and an electronic device, relating to the field of terminal technology and the field of image processing technology, which can improve the quality of captured images. The specific solution includes: the electronic device detects a preset operation; in response to the preset operation, a first camera of the electronic device collects a first image and a second camera collects a second image; the electronic device may display the first image but does not display the second image; the electronic device detects that a first area of the second image includes an image of a preset object; the electronic device determines the exposure value of a second area in the first image where the preset object is located; if the exposure value is less than a first exposure threshold, the electronic device adjusts the exposure parameters of the first camera so that the exposure value is equal to or greater than the first exposure threshold; the first camera collects a first preview image using the adjusted exposure parameters, and the electronic device displays the preview image; in response to a user's photographing operation, the electronic device saves a third image captured by the first camera using the adjusted exposure parameters.

Description

Method for Capturing Images and Electronic Device
This application claims priority to Chinese Patent Application No. 202010201964.8, filed with the China National Intellectual Property Administration on March 20, 2020 and entitled "Method for Capturing Images and Electronic Device", which is incorporated herein by reference in its entirety.
Technical Field
The embodiments of this application relate to the field of terminal technology and the field of image processing technology, and in particular to a method for capturing images and an electronic device.
Background
With the development of electronic technology, electronic devices (such as mobile phones, tablet computers, or smart watches) have more and more functions. For example, most electronic devices are equipped with cameras and have the function of capturing images.
Taking a mobile phone as an example, multiple cameras may be installed in the mobile phone, such as at least two of a main camera, a telephoto camera, a wide-angle camera, an infrared camera, a depth camera, or a black-and-white camera. Based on the characteristics of each of these cameras, the mobile phone can use different cameras to capture images in different shooting scenarios, so as to ensure the image quality of the captured images.
For example, based on the long focal length of the telephoto camera, the mobile phone can use the telephoto camera to photograph subjects far away from the mobile phone. For another example, based on the large light intake and high resolution of the main camera, the mobile phone can use the main camera to photograph subjects in dark scenes. For another example, based on the short focal length and large field of view of the wide-angle camera, the mobile phone can use the wide-angle camera to photograph large subjects (such as buildings or landscapes).
Although each of the above cameras has its own advantages in certain shooting scenarios, each camera also has disadvantages in other scenarios, and these disadvantages may affect the image quality of the captured images. For example, although the telephoto camera has a long focal length, its light intake is small; therefore, if the telephoto camera is used in a dark scene to photograph a subject far away from the mobile phone, the image quality may suffer due to insufficient light intake. For another example, although the main camera has a large light intake and high resolution, its focal length is short; therefore, if the main camera is used to photograph a subject far away from the mobile phone, the captured image may lack sharpness, affecting the image quality.
Summary
This application provides a method for capturing images and an electronic device, in which multiple cameras can work together to improve the quality of the captured images.
In a first aspect, this application provides a method for capturing images, which can be applied to an electronic device including multiple cameras. For example, the electronic device may include a first camera and a second camera, where the first camera and the second camera are different cameras.
The electronic device can detect a preset operation. In response to the preset operation, the first camera of the electronic device can collect a first image, and the electronic device can display the first image. The second camera of the electronic device can collect a second image, but the electronic device does not display the second image. That is to say, the electronic device can display the first image collected by the first camera (called the preview camera) as a preview image, without displaying the second image collected by the second camera (called the auxiliary camera). The second image includes a first area, which is an area corresponding to the field of view of the first camera. The electronic device can then analyze the second image and detect that the first area of the second image includes an image of a preset object. For example, the preset object includes at least one of the following: a human face, a human body, a plant, an animal, a building, or text. Subsequently, the electronic device can determine the exposure value of a second area, which is the area in the first image where the image of the preset object is located. If the exposure value of the second area is less than a first exposure threshold, the electronic device can adjust the exposure parameters of the first camera so that the exposure value is equal to or greater than the first exposure threshold. Finally, the first camera of the electronic device can collect a first preview image using the adjusted exposure parameters, and the electronic device can display the first preview image. In response to the user's photographing operation, the electronic device can save a third image, which is captured by the first camera using the adjusted exposure parameters. Specifically, the third image can be acquired based on one or more frames of the first preview image collected by the first camera.
In this application, when the electronic device uses the preview camera (i.e., the first camera) to capture images, it can rely on the advantages that another camera (called the auxiliary camera, such as the second camera) has over the preview camera, controlling the auxiliary camera to work together with the preview camera so as to improve the image quality of the images captured by the preview camera. In other words, in the method of this application, the electronic device can take advantage of the strengths of each camera and control multiple cameras to work together, thereby improving the image quality of the captured images.
In a possible design of the first aspect, the above exposure parameters may include at least one of the exposure time, the number of photo frames, and the ISO sensitivity. That is, the electronic device can adjust at least one of the exposure time, the number of photo frames, and the ISO sensitivity so that the exposure value of the second area is equal to or greater than the first exposure threshold.
It can be understood that, in order to improve the image quality captured by the preview camera, at least one exposure parameter such as the exposure time, the number of photo frames, or the ISO sensitivity can be adjusted so as to update the exposure value. Moreover, the longer the exposure time, the larger the exposure value; the greater the number of photo frames, the larger the exposure value; and the higher the ISO sensitivity, the larger the exposure value. It follows that any one of "increasing the exposure time", "increasing the number of photo frames", and "raising the ISO sensitivity" can achieve the purpose of increasing the above exposure value.
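The monotone relationships stated above can be illustrated with a toy exposure-value model. The multiplicative form is purely an assumption for illustration; it is not the metering formula used by the device.

```python
def exposure_value(exposure_time_ms, frames, iso):
    """Toy model: EV grows with exposure time, number of photo frames, and
    ISO sensitivity (normalized to ISO 100)."""
    return exposure_time_ms * frames * (iso / 100.0)

base = exposure_value(10, 1, 100)
print(exposure_value(20, 1, 100) > base,   # longer exposure time -> larger EV
      exposure_value(10, 2, 100) > base,   # more photo frames    -> larger EV
      exposure_value(10, 1, 200) > base)   # higher ISO           -> larger EV
```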
In another possible design of the first aspect, the exposure parameter adjusted by the electronic device when the preset object is stationary is different from the exposure parameter adjusted when the preset object is moving.
When the camera photographs a stationary object (such as the above preset object), adjusting the number of photo frames has little effect on the exposure value of the image, and the effect can even be ignored. Therefore, when the preset object is stationary, among the exposure time, the number of photo frames, and the ISO sensitivity of the first camera (such as a telephoto camera), the main factor affecting the above exposure value is the exposure time. Therefore, in this application, when the preset object is stationary, the electronic device can adjust the exposure time of the first camera to increase the exposure value.
Specifically, the electronic device adjusting the exposure parameters of the first camera so that the exposure value is equal to or greater than the first exposure threshold may include: if the preset object is stationary, the electronic device adjusts the exposure time of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
Of course, when the preset object is stationary, the ISO sensitivity of the first camera (such as a telephoto camera) also has a certain effect on the exposure value. Optionally, if the preset object is stationary, the electronic device can adjust the exposure time and the ISO sensitivity of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
In another possible design of the first aspect, when the camera photographs a moving object (such as the above preset object), adjusting the exposure time has little effect on the exposure value of the image, and the effect can even be ignored. When the preset object is moving, among the exposure time, the number of photo frames, and the ISO sensitivity of the first camera, the main factor affecting the above exposure value is the number of photo frames. Therefore, in this application, when the preset object is moving, the electronic device can adjust the number of photo frames of the first camera to increase the exposure value.
Specifically, the electronic device adjusting the exposure parameters of the first camera so that the exposure value is equal to or greater than the first exposure threshold may include: if the preset object is moving, the electronic device can adjust the number of photo frames of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
Of course, when the preset object is moving, the ISO sensitivity of the first camera also has a certain effect on the exposure value. Optionally, if the preset object is moving, the electronic device can adjust the number of photo frames and the ISO sensitivity of the first camera so that the exposure value of the second area is equal to or greater than the first exposure threshold.
In another possible design of the first aspect, when the preset object is stationary, the electronic device saving the third image in response to the user's photographing operation may include: the electronic device performs optical image stabilization (OIS) on one frame of the first preview image collected by the first camera, and obtains and saves the third image.
OIS, which stabilizes the image during the shutter time (i.e., the exposure time), is used to steady the camera, while electronic image stabilization (EIS) is used to reduce the possibility of multi-frame blur when photographing a moving subject. Therefore, when the preset object is stationary, the electronic device can perform OIS on the first preview image collected by the first camera.
In another possible design of the first aspect, when the preset object is moving, the electronic device saving the third image in response to the user's photographing operation may include: the electronic device performs OIS and EIS fusion on multiple frames of the first preview image collected by the first camera, and obtains and saves the third image.
In response to the user's photographing operation, the anti-shake operations performed by the electronic device on the preview images collected by the first camera may include OIS and EIS. In this way, the image quality of the first camera when photographing moving objects can be improved.
In another possible design of the first aspect, among the multiple frames of preview images collected by the first camera, the subject in one part of the image area may be moving while the subject in another part is stationary. For this case, the electronic device saving the third image in response to the user's photographing operation may include: in response to the photographing operation, the electronic device performs OIS on the multiple frames of the first preview image collected by the first camera, performs EIS fusion on the images of the motion area of the multiple frames, and obtains and saves the third image. That is to say, when the electronic device acquires the third image based on multiple frames of preview images, for the static area it only needs to use the image of the static area in any one of the multiple frames; for the motion area, it can perform image fusion on the images of the motion area across the multiple frames.
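The per-region save path described above (static area taken from a single frame, motion area fused across frames) can be sketched as follows. Using a plain average as the fusion operator and a boolean motion mask are illustrative assumptions; the actual EIS fusion algorithm is not specified here.

```python
def fuse_frames(frames, motion_mask):
    """Fuse aligned grayscale frames (2D lists of ints): pixels outside the
    motion mask come from the first frame alone; pixels inside the mask are
    averaged across all frames to emulate multi-frame fusion."""
    base = frames[0]
    h, w = len(base), len(base[0])
    out = [row[:] for row in base]              # static area: any one frame
    for y in range(h):
        for x in range(w):
            if motion_mask[y][x]:               # motion area: fuse frames
                out[y][x] = sum(f[y][x] for f in frames) // len(frames)
    return out

f1 = [[10, 10], [10, 10]]
f2 = [[10, 30], [10, 10]]
mask = [[0, 1], [0, 0]]                         # only top-right pixel moves
print(fuse_frames([f1, f2], mask))              # [[10, 20], [10, 10]]
```

A real pipeline would first align the frames (the OIS/EIS step) before fusing; alignment is omitted here for brevity.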
在第一方面的另一种可能的设计方式中，为了避免第一图像的第二区域的曝光值过高而影响图像质量，上述方法还包括：电子设备确定第二区域的曝光值是否大于第二曝光阈值。该第二曝光阈值大于上述第一曝光阈值。若电子设备确定第二区域的曝光值大于第二曝光阈值，电子设备调整第一摄像头的曝光参数，使第二区域的曝光值等于或者小于第二曝光阈值。
本申请中，如果一个摄像头采集的图像中预设对象所在的图像区域（如第二区域）的曝光值较大，则可能会导致图像过度曝光，使得用户无法从该图像中检测到预设对象。针对这种情况，若第二区域的曝光值大于第二曝光阈值，电子设备可调整摄像头的曝光参数，以降低第二区域的曝光值。这样，可以提升拍摄得到的图像的图像质量。
在第一方面的另一种可能的设计方式中,电子设备响应于上述预设操作,可以先不启动第二摄像头。响应于该预设操作,电子设备可以请求用户确认是否进入智能拍摄模式。其中,在该智能拍摄模式下电子设备采用第二摄像头协助第一摄像头拍摄图像。如果用户选择进入智能拍摄模式,电子设备则可以启动第二摄像头协助第一摄像头拍摄图像。
具体的,上述响应于预设操作,电子设备的第二摄像头采集第二图像,可以包括:响应于预设操作,电子设备显示第一用户界面,该第一用户界面用于请求用户确认是否使用第二摄像头协助第一摄像头拍摄图像。响应于用户在第一用户界面的第一操作,电子设备的第二摄像头采集第二图像。
本申请中,电子设备可以在第一用户界面请求用户确认是否使用第二摄像头协助第一摄像头拍摄图像;如果用户选择使用第二摄像头协助第一摄像头拍摄图像,电子设备才会启动主摄像头协助长焦摄像头拍摄图像。也就是说,电子设备可以按照用户的意愿,启动第二摄像头协助第一摄像头拍摄图像。这样,可以提升电子设备与用户交互过程中的用户体验。
在第一方面的另一种可能的设计方式中,响应于用户在第一用户界面的第二操作,电子设备的第二摄像头不采集图像。也就是说,如果用户选择不使用第二摄像头协助第一摄像头拍摄图像,电子设备的主摄像头则不会协助长焦摄像头拍摄图像。
在第一方面的另一种可能的设计方式中,上述第一用户界面还可以包括第一预览图像。该第一预览图像可以是使用第二摄像头协助第一摄像头拍摄得到的效果预览图像。
本申请中,电子设备可以在第一用户界面为用户展示使用第二摄像头协助第一摄像头拍摄得到的效果预览图像,以供用户根据效果预览图像选择是否进入智能拍摄模式。
在第一方面的另一种可能的设计方式中,电子设备还可以以其他方式,为用户提供上述图像效果预览功能。具体的,本申请的方法还包括:响应于用户在第一用户界面的第三操作,电子设备显示第二用户界面,该第三操作用于触发电子设备显示第一摄像头采集的第一预览图像,第二用户界面包括第一预览图像;响应于用户在第二用户界面的第四操作,电子设备的第二摄像头采集第二图像。该第四操作用于触发电子设备使用第二摄像头协助第一摄像头拍摄图像。
本申请中,电子设备可以为用户提供第一预览图像的预览功能。这样,可以便于用户根据第一预览图像的图像效果决定是否控制电子设备使用第二摄像头协助第一摄像头拍摄图像。
在第一方面的另一种可能的设计方式中,上述第一用户界面包括第一控件,该第三操作是用户对第一控件的点击操作。或者,上述第三操作是预设手势。
在第一方面的另一种可能的设计方式中,上述第一摄像头是长焦摄像头,第二摄像头是主摄像头。上述预设操作是变倍操作。其中,主摄像头的进光量大于长焦摄像头的进光量。
本申请中，电子设备的长焦摄像头作为预览摄像头采集图像时，可以将主摄像头作为辅助摄像头。具体的，电子设备可以借助于主摄像头的进光量较大的优势，从长焦摄像头采集的第一图像中检测出预设对象的位置（即第二区域）。其中，第一图像的图像质量较差，无法从该第一图像中清楚的分辨出预设对象的原因在于：该预设对象在第一图像中的位置（如第二区域）的曝光值低。因此，电子设备可以检测并调整长焦摄像头的曝光参数，以提升上述曝光值。如此，提升曝光值之后，长焦摄像头便可以拍摄得到图像质量较高的图像（如图像c），从而提升长焦摄像头拍摄得到的图像的图像质量。
在第一方面的另一种可能的设计方式中,上述响应于预设操作,电子设备的第二摄像头采集第二图像,包括:响应于预设操作,电子设备的环境光传感器检测环境光亮度;电子设备确定第一环境光亮度值;若第一环境光亮度值低于第一亮度阈值,电子设备的第二摄像头采集第二图像。
可以理解,若第一环境光亮度值低于第一亮度阈值,则表示电子设备处于暗光场景下。在暗光场景下,第一摄像头可能会因为进光量不足等原因,而影响拍摄的图像质量。本申请中,在上述暗光场景下,使用第二摄像头协助第一摄像头拍摄图像,可以提升拍摄得到的图像的图像质量。
在第一方面的另一种可能的设计方式中,上述第一摄像头是彩色摄像头,第二摄像头是黑白摄像头。其中,黑白摄像头的进光量大于彩色摄像头的进光量。彩色摄像头至少包括主摄像头、长焦摄像头或广角摄像头中的任一种。
针对彩色摄像头和黑白摄像头的上述特点，电子设备在暗光场景下，采用彩色摄像头作为预览摄像头（即第一摄像头）采集图像时，为了避免由于环境光亮度较弱而影响图像质量，可以借助于黑白摄像头进光量大的优势，将黑白摄像头作为辅助摄像头（即第二摄像头）协助彩色摄像头工作，以提升彩色摄像头拍摄得到的图像的图像质量。
在第一方面的另一种可能的设计方式中,上述第一摄像头是可见光摄像头,第二摄像头是红外摄像头。其中红外摄像头具备感知可见光和红外光的能力,而可见光摄像头具备感知可见光的能力,不具备感知红外光的能力。例如,上述可见光摄像头可以是长焦摄像头、广角摄像头、主摄像头或黑白摄像头等任一摄像头。
针对可见光摄像头和红外摄像头的上述特点,电子设备在暗光场景下,采用可见光摄像头作为预览摄像头(即第一摄像头)采集图像时,为了避免由于可见光较弱而影响图像质量,可以借助于红外摄像头能够感知红外光的优势,将红外摄像头作为辅助摄像头(即第二摄像头)协助可见光摄像头工作,以提升可见光摄像头拍摄得到的图像的图像质量。
在第一方面的另一种可能的设计方式中,上述第一摄像头是长焦摄像头,第二摄像头是红外摄像头或者主摄像头。预设操作是变倍操作,该变倍操作用于触发电子设备启动长焦摄像头。其中,主摄像头的进光量大于长焦摄像头的进光量。红外摄像头具备感知可见光和红外光的能力,而长焦摄像头具备感知可见光的能力,不具备感知红外光的能力。
在第一方面的另一种可能的设计方式中，上述响应于预设操作，电子设备的第二摄像头采集第二图像，包括：响应于预设操作，电子设备的环境光传感器检测环境光亮度；电子设备确定第二环境光亮度值；若第二环境光亮度值低于第一亮度阈值，且低于第二亮度阈值，电子设备的红外摄像头采集第二图像，第二摄像头是红外摄像头，第二亮度阈值小于第一亮度阈值；如果第二环境光亮度值低于第一亮度阈值，但高于或者等于第二亮度阈值，电子设备的主摄像头采集第二图像，第二摄像头是主摄像头。
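上述根据环境光亮度在红外摄像头与主摄像头之间选择辅助摄像头的判断逻辑，可以概括为如下示意代码（其中的亮度阈值数值为本文假设）：

```python
def pick_auxiliary_camera(brightness, th1=100, th2=30):
    """th2 < th1。亮度低于第一、第二亮度阈值时选红外摄像头；
    低于第一阈值但不低于第二阈值时选主摄像头；否则不启动辅助摄像头。"""
    if brightness < th2:
        return "infrared"
    if brightness < th1:
        return "main"
    return None          # 非暗光场景

assert pick_auxiliary_camera(10) == "infrared"
assert pick_auxiliary_camera(50) == "main"
assert pick_auxiliary_camera(200) is None
```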
本申请中,在暗光场景下,电子设备的长焦摄像头作为预览摄像头采集图像时,可以根据环境光亮度,选择主摄像头或者红外摄像头作为辅助摄像头协助长焦摄像头拍照,以提升长焦摄像头拍摄得到的图像的图像质量。
在第一方面的另一种可能的设计方式中,上述第一摄像头是彩色摄像头,第二摄像头是深度摄像头。其中,深度摄像头具备获取对象的深度信息的能力,深度信息用于识别预设对象的轮廓。
可以理解,电子设备将彩色摄像头作为预览摄像头采集图像时,可能会因为拍摄对象(如上述预设对象)的颜色与背景颜色接近而无法清晰拍摄到预设对象的轮廓。而深度摄像头可以采集到预设对象的深度信息,该深度信息可以用于检测到该预设对象的轮廓。因此,该实施例中,电子设备采用彩色摄像头作为预览摄像头(即第一摄像头)采集图像时,可以将深度摄像头作为辅助摄像头(即第二摄像头)协助彩色摄像头工作,以提升彩色摄像头拍摄得到的图像的图像质量。
在第一方面的另一种可能的设计方式中，在第一摄像头是彩色摄像头，第二摄像头是深度摄像头的情况下，上述响应于预设操作，电子设备的第二摄像头采集第二图像，包括：响应于预设操作，电子设备确定第一图像中各个像素点的红绿蓝(red green blue,RGB)值；若电子设备确定第一图像满足第一预设条件，电子设备的深度摄像头采集第二图像。其中，第一预设条件是指：第一图像包括第三区域，该第三区域中多个像素点的RGB值的差异小于预设RGB阈值。
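上述第一预设条件（第三区域中多个像素点的RGB值差异小于预设RGB阈值）可以用如下示意代码判断。其中用各通道最大值与最小值之差衡量“差异”，这是本文为说明而假设的一种度量方式：

```python
def has_low_contrast_region(pixels, threshold):
    """判断候选区域内多个像素点的RGB值差异是否小于阈值。
    pixels: [(r, g, b), ...]；差异取三个通道中最大的极差。"""
    spread = max(
        max(p[c] for p in pixels) - min(p[c] for p in pixels)
        for c in range(3)
    )
    return spread < threshold

region = [(120, 118, 121), (122, 119, 120), (121, 117, 122)]
assert has_low_contrast_region(region, threshold=10)        # 颜色接近, 满足第一预设条件
assert not has_low_contrast_region(region + [(20, 200, 30)], threshold=10)
```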
在第一方面的另一种可能的设计方式中，上述第一摄像头是黑白摄像头，第二摄像头是彩色摄像头。其中，彩色摄像头相比于黑白摄像头的预设优势为：彩色摄像头具备采集彩色图像的能力；彩色摄像头至少包括主摄像头、长焦摄像头或广角摄像头中的任一种。
其中，彩色摄像头可以采集到彩色的图像。但是，黑白摄像头采集到的图像只能呈现出不同等级的灰度，不能呈现出拍摄对象的真实色彩。因此，采用黑白摄像头拍照，可能会因为拍摄对象（如上述预设对象）中包括相近且不易于用灰度区分的颜色，而影响图像质量。本申请实施例中，电子设备采用黑白摄像头作为预览摄像头（即第一摄像头）采集图像时，可以借助于彩色摄像头可以拍摄出拍摄对象的真实色彩的优势，将彩色摄像头作为辅助摄像头（即第二摄像头）协助黑白摄像头工作，以提升黑白摄像头拍摄得到的图像的图像质量。
结合第一方面，在另一种可能的设计方式中，在第一摄像头是黑白摄像头，第二摄像头是彩色摄像头的情况下，上述响应于预设操作，电子设备的第二摄像头采集第二图像，包括：响应于预设操作，电子设备确定第一图像中各个像素点的灰度值；若电子设备确定第一图像满足第二预设条件，电子设备的彩色摄像头采集第二图像。其中，第二预设条件是指：第一图像包括第四区域，该第四区域中多个像素点的灰度值的差异小于预设灰度阈值。
在第一方面的另一种可能的设计方式中,在电子设备确定第二区域的曝光值之前,上述方法还包括:电子设备根据预设对象的图像在第一图像中第一区域的位置,确定第一图像中预设对象的图像所在的第二区域。例如,电子设备可以保存第一摄像头的视野范围与第二摄像头的视野范围的对应关系。电子设备可以根据预设对象的图像在第一区域的位置,结合第一摄像头的视野范围与第二摄像头的视野范围的对应关系,确定出第一图像中预设对象所在的第二区域。
在第一方面的另一种可能的设计方式中,上述第一摄像头是长焦摄像头,第二摄像头是主摄像头,上述预设操作是变倍操作。也就是说,长焦摄像头作为预览摄像头采集图像时,主摄像头可以作为辅助摄像头,协助长焦摄像头拍摄图像。
在上述方案中,响应于预设操作,电子设备的环境光传感器可检测环境光亮度。电子设备可确定第三环境光亮度值。若第三环境光亮度值低于第一亮度阈值,则表示电子设备处于暗光场景中,电子设备的第二摄像头(即主摄像头)可采集第二图像。也就是说,在暗光场景下,电子设备的主摄像头可协助长焦摄像头拍摄图像。其中,主摄像头的进光量大于长焦摄像头的进光量。如此,即使长焦摄像头的进光量小,借助于主摄像头进光量大的优势,电子设备也可以拍摄得到图像质量较高的图像。
并且，在该方案中，电子设备在预设对象静止或运动的情况下，可以调整长焦摄像头不同的曝光参数，以达到提升曝光值的目的。
具体的,在预设对象静止的情况下,第一摄像头(如长焦摄像头)的曝光时间、拍照帧数和ISO感光度中,影响上述曝光值的主要因素为曝光时间。因此,如果预设对象是静止的,电子设备可调整第一摄像头的曝光时间,或者调整曝光时间和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值。
在预设对象运动的情况下,第一摄像头(即长焦摄像头)的曝光时间、拍照帧数和ISO感光度中,影响上述曝光值的主要因素为拍照帧数。因此,如果预设对象是运动的,电子设备可调整第一摄像头的拍照帧数,或者调整第一摄像头的拍照帧数和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值。
本申请中，电子设备可以根据预设对象的运动状态（如静止或运动）适应性调整长焦摄像头不同的曝光参数。这样，可以提升电子设备通过调整曝光参数来提升曝光值的效率。
进一步的,预设对象的运动状态不同,电子设备生成第三图像所采用的防抖方式可以不同。具体的,OIS快门时间(即曝光时间)内的防抖,用于稳定摄像头。而EIS用于拍摄运动中的拍摄对象时,减少多帧模糊现象出现的可能性。因此,在预设对象静止的情况下,电子设备可以对第一摄像头采集的一帧第一预览图像进行OIS防抖;在预设对象运动的情况下,电子设备可以对第一摄像头采集的多帧第一预览图像进行OIS防抖和EIS防抖。这样,可以进一步提升电子设备拍摄得到的图像的图像质量。
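上述按预设对象运动状态选择曝光参数与防抖方式的逻辑，可以概括为如下示意代码（函数名与返回结构均为本文假设）：

```python
def adjust_plan(is_moving):
    """对本文描述的简化归纳：预设对象静止时主要调整曝光时间、仅用OIS防抖；
    运动时主要调整拍照帧数、用OIS+EIS防抖并进行多帧融合。"""
    if is_moving:
        return {"adjust": ("frame_count", "iso"), "stabilize": ("OIS", "EIS")}
    return {"adjust": ("exposure_time", "iso"), "stabilize": ("OIS",)}

assert adjust_plan(False)["adjust"] == ("exposure_time", "iso")
assert adjust_plan(True)["stabilize"] == ("OIS", "EIS")
```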
第二方面,本申请提供一种电子设备,该电子设备包括第一采集模块、第二采集模块和显示模块。该电子设备还包括处理模块和存储模块。其中,上述第一采集模块与第二采集模块不同。
具体的,上述处理模块,用于检测预设操作。上述第一采集模块,用于响应于处 理模块检测到的预设操作,采集第一图像。上述显示模块,用于显示第一图像。上述第二采集模块,用于采集第二图像。其中,上述显示模块不显示第二图像。该第二图像包括第一区域,第一区域是对应于第一采集模块的视野范围的区域。上述处理模块,还用于检测第一区域内包括预设对象的图像;还用于确定第二区域的曝光值。该第二区域是第一图像中预设对象的图像所在的区域。上述处理模块,还用于确定若第二区域的曝光值小于第一曝光阈值,调整第一采集模块的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值。上述第一采集模块,还用于采用调整后的曝光参数采集第一预览图像。上述显示模块,还用于显示第一预览图像。上述第一采集模块,还用于响应于用户的拍照操作,采用调整后的曝光参数拍摄第三图像。上述存储模块,用于保存第三图像。上述预设对象包括以下至少一种:人脸、人体、植物、动物、建筑或文字。
在第二方面的一种可能的设计方式中,上述曝光参数包括曝光时间、拍照帧数和ISO感光度中的至少一项。
在第二方面的另一种可能的设计方式中,上述处理模块,用于调整第一采集模块的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值,包括:处理模块,用于:如果预设对象是静止的,调整第一采集模块的曝光时间,使第二区域的曝光值等于或者大于第一曝光阈值;或者,如果预设对象是静止的,调整第一采集模块的曝光时间和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值。
在第二方面的另一种可能的设计方式中,上述处理模块,还用于响应于拍照操作,对第一采集模块采集的一帧第一预览图像进行OIS防抖,得到第三图像。
在第二方面的另一种可能的设计方式中,上述处理模块,用于调整第一采集模块的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值,包括:处理模块,用于:如果预设对象是运动的,调整第一采集模块的拍照帧数,使第二区域的曝光值等于或者大于第一曝光阈值;或者,如果预设对象是运动的,调整第一采集模块的拍照帧数和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值。
在第二方面的另一种可能的设计方式中,上述处理模块,还用于响应于拍照操作,对第一采集模块采集的多帧第一预览图像进行OIS防抖和EIS防抖融合,得到第三图像。
在第二方面的另一种可能的设计方式中,上述处理模块,还用于响应于拍照操作,对第一采集模块采集的多帧第一预览图像进行OIS防抖,并对多帧第一预览图像的运动区域的图像进行EIS防抖融合,得到第三图像。
在第二方面的另一种可能的设计方式中,上述处理模块,还用于确定第二区域的曝光值是否大于第二曝光阈值;若该处理模块确定该第二区域的曝光值大于第二曝光阈值,该处理模块,还用于调整第一采集模块的曝光参数,使第二区域的曝光值等于或者小于第二曝光阈值。
在第二方面的另一种可能的设计方式中,上述显示模块,还用于响应于预设操作,显示第一用户界面,第一用户界面用于请求用户确认是否使用第二采集模块协助第一采集模块拍摄图像。上述处理模块,还用于检测到用户对第一用户界面的第一操作。上述第二采集模块,还用于响应于第一操作,采集第二图像。
在第二方面的另一种可能的设计方式中,上述处理模块,还用于检测到用户对第一用户界面的第二操作。其中,第二采集模块响应于第二操作,不采集图像。
在第二方面的另一种可能的设计方式中,上述第一用户界面还包括第一预览图像。
在第二方面的另一种可能的设计方式中,上述处理模块,还用于检测到用户对第一用户界面的第三操作。上述显示模块,还用于响应于第三操作,显示第二用户界面。第二用户界面包括第一预览图像。该第一预览图像是第一采集模块采集的。上述处理模块,还用于检测到用户对第二用户界面的第四操作。上述第二采集模块,还用于响应于第四操作,采集第二图像。
在第二方面的另一种可能的设计方式中,上述第一用户界面包括第一控件,上述第三操作是用户对第一控件的点击操作。或者,上述第三操作是预设手势。
在第二方面的另一种可能的设计方式中,上述第一采集模块与第二采集模块可以不同。其中,第一采集模块与第二采集模块的各种可能的实现方式,可以参考以下可能的设计方式中的描述,这里不予赘述。
在第二方面的另一种可能的设计方式中，上述第一采集模块是长焦摄像头，第二采集模块是主摄像头或者红外摄像头。或者，第一采集模块是彩色摄像头，第二采集模块是黑白摄像头。或者，第一采集模块是可见光摄像头，第二采集模块是红外摄像头。或者，第一采集模块是彩色摄像头，第二采集模块是深度摄像头。或者，第一采集模块是黑白摄像头，第二采集模块是彩色摄像头。其中，彩色摄像头至少包括主摄像头、长焦摄像头或广角摄像头中的任一种。
在第二方面的另一种可能的设计方式中,上述电子设备还包括传感器模块。传感器模块,用于响应于预设操作,检测环境光亮度。上述处理模块,还用于确定第一环境光亮度值。该处理模块,还用于确定第一环境光亮度值是否低于第一亮度阈值。若该处理模块确定第一环境光亮度值低于第一亮度阈值,上述第二采集模块还用于采集第二图像。
在第二方面的另一种可能的设计方式中,上述第一采集模块是长焦摄像头,第二采集模块是红外摄像头或者主摄像头。上述预设操作是变倍操作。上述电子设备还包括传感器模块。上述传感器模块,用于响应于预设操作,检测环境光亮度。上述处理模块,还用于确定第二环境光亮度值。该处理模块,还用于确定第二环境光亮度值是否低于第一亮度阈值和第二亮度阈值。若处理模块确定第二环境光亮度值低于第一亮度阈值和第二亮度阈值,上述第二采集模块,还用于采集第二图像;该第二采集模块是红外摄像头。
上述处理模块,还用于确定第二环境光亮度值是否低于第一亮度阈值,且大于或者等于第二亮度阈值。若处理模块确定第二环境光亮度值低于第一亮度阈值,且大于或者等于第二亮度阈值,上述第二采集模块还用于采集第二图像;该第二采集模块是主摄像头。其中,上述第二亮度阈值小于第一亮度阈值。
在第二方面的另一种可能的设计方式中,上述第一采集模块是彩色摄像头,第二采集模块是深度摄像头。上述处理模块,还用于响应于预设操作,确定第一图像中像素点的RGB值。
上述处理模块,还用于确定第一图像是否满足第一预设条件。若该处理模块确定 第一图像满足第一预设条件,上述第二采集模块还用于采集第二图像。其中,第一预设条件是指:第一图像包括第三区域,第三区域中多个像素点的RGB值的差异小于预设RGB阈值。
在第二方面的另一种可能的设计方式中，上述第一采集模块是黑白摄像头，第二采集模块是彩色摄像头。处理模块，还用于响应于预设操作，确定第一图像中像素点的灰度值。上述处理模块，还用于确定第一图像是否满足第二预设条件。若处理模块确定第一图像满足第二预设条件，上述第二采集模块，还用于采集第二图像。其中，该第二预设条件是指：第一图像包括第四区域，第四区域中多个像素点的灰度值的差异小于预设灰度阈值。
在第二方面的另一种可能的设计方式中,上述处理模块,还用于在确定第二区域的曝光值之前,根据预设对象的图像在第一图像中第一区域的位置,确定第一图像中预设对象的图像所在的第二区域。
在第二方面的另一种可能的设计方式中,上述第一采集模块是长焦摄像头,第二采集模块是主摄像头,预设操作是变倍操作。上述电子设备还包括传感器模块。该传感器模块,用于响应于预设操作,检测环境光亮度。上述处理模块,还用于确定第三环境光亮度值。处理模块,还用于确定第三环境光亮度值是否低于第一亮度阈值。若处理模块确定第三环境光亮度值低于第一亮度阈值,上述第二采集模块还用于采集第二图像。
其中,处理模块,用于调整第一采集模块的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值,包括:处理模块,用于如果预设对象是静止的,调整第一采集模块的曝光时间,或者电子设备调整第一采集模块的曝光时间和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值;如果预设对象是运动的,调整第一采集模块的拍照帧数,或者电子设备调整第一采集模块的拍照帧数和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值。
其中,处理模块,还用于响应于拍照操作,如果预设对象是静止的,对第一采集模块采集的一帧第一预览图像进行OIS防抖,得到第三图像;如果预设对象是运动的,对第一采集模块采集的多帧第一预览图像进行OIS防抖,得到第三图像。
在第二方面的另一种可能的设计方式中,上述第一采集模块与第二采集模块可以相同。
第三方面，本申请提供一种电子设备，包括一个或多个触摸屏，一个或多个存储模块，一个或多个处理模块；其中所述一个或多个存储模块存储有一个或多个程序；当所述一个或多个处理模块在执行所述一个或多个程序时，使得所述电子设备实现如第一方面及其任一种可能的设计方式所述的方法。
第四方面,本申请提供一种电子设备,该电子设备包括第一摄像头、第二摄像头和显示屏。该电子设备还包括处理器和存储器。该第二摄像头与第一摄像头不同。上述存储器、显示屏、第一摄像头和第二摄像头与处理器耦合。
具体的,上述处理器,用于检测预设操作。上述第一摄像头,用于响应于预设操作,采集第一图像。上述显示屏,用于显示第一图像。上述第二摄像头,用于采集第二图像。其中,上述显示屏不显示第二图像,第二图像包括第一区域,第一区域是对 应于第一摄像头的视野范围的区域。上述处理器,还用于检测第一区域内包括预设对象的图像。该预设对象包括以下至少一种:人脸、人体、植物、动物、建筑或文字。上述处理器,还用于确定第二区域的曝光值,其中,第二区域是第一图像中预设对象的图像所在的区域。上述处理器,还用于确定若第二区域的曝光值小于第一曝光阈值,调整第一摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值。上述第一摄像头,还用于采用调整后的曝光参数采集第一预览图像。上述显示屏,还用于显示第一预览图像。上述第一摄像头,还用于响应于用户的拍照操作,采用调整后的曝光参数拍摄第三图像。上述存储器,用于保存第三图像。
在第四方面的一种可能的设计方式中,上述曝光参数包括曝光时间、拍照帧数和ISO感光度中的至少一项。
在第四方面的另一种可能的设计方式中,上述处理器,用于调整第一摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值,包括:处理器,用于:如果预设对象是静止的,调整第一摄像头的曝光时间,使第二区域的曝光值等于或者大于第一曝光阈值;或者,如果预设对象是静止的,调整第一摄像头的曝光时间和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值。
在第四方面的另一种可能的设计方式中,上述处理器,还用于响应于拍照操作,对第一摄像头采集的一帧第一预览图像进行OIS防抖,得到第三图像。
在第四方面的另一种可能的设计方式中,上述处理器,用于调整第一摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值,包括:处理器,用于:如果预设对象是运动的,调整第一摄像头的拍照帧数,使第二区域的曝光值等于或者大于第一曝光阈值;或者,如果预设对象是运动的,调整第一摄像头的拍照帧数和ISO感光度,使第二区域的曝光值等于或者大于第一曝光阈值。
在第四方面的另一种可能的设计方式中,上述处理器,还用于响应于拍照操作,对第一摄像头采集的多帧第一预览图像进行OIS防抖和EIS防抖融合,得到第三图像。
在第四方面的另一种可能的设计方式中,上述处理器,还用于响应于拍照操作,对第一摄像头采集的多帧第一预览图像进行OIS防抖,并对多帧第一预览图像的运动区域的图像进行EIS防抖融合,得到第三图像。
在第四方面的另一种可能的设计方式中,上述处理器,还用于确定第二区域的曝光值是否大于第二曝光阈值。若处理器确定第二区域的曝光值大于第二曝光阈值,处理器还用于调整第一摄像头的曝光参数,使第二区域的曝光值等于或者小于第二曝光阈值。
在第四方面的另一种可能的设计方式中,上述显示屏,还用于响应于预设操作,显示第一用户界面,第一用户界面用于请求用户确认是否使用第二摄像头协助第一摄像头拍摄图像。上述处理器,还用于检测到用户对第一用户界面的第一操作。上述第二摄像头,还用于响应于第一操作,采集第二图像。
在第四方面的另一种可能的设计方式中，上述处理器，还用于检测到用户对第一用户界面的第二操作。其中，响应于第二操作，第二摄像头不采集图像。
在第四方面的另一种可能的设计方式中,上述第一用户界面还包括第一预览图像。
在第四方面的另一种可能的设计方式中,上述处理器,还用于检测到用户对第一 用户界面的第三操作。上述显示屏,还用于响应于第三操作,显示第二用户界面。其中,该第二用户界面包括第一预览图像。该第一预览图像是第一摄像头采集的。上述处理器,还用于检测到用户对第二用户界面的第四操作。上述第二摄像头,还用于响应于第四操作,采集第二图像。
在第四方面的另一种可能的设计方式中,上述第一用户界面包括第一控件,第三操作是用户对第一控件的点击操作。或者,第三操作是预设手势。
在第四方面的另一种可能的设计方式中，上述第一摄像头是长焦摄像头，第二摄像头是主摄像头或红外摄像头。或者，第一摄像头是彩色摄像头，第二摄像头是黑白摄像头。或者，第一摄像头是可见光摄像头，第二摄像头是红外摄像头。或者，第一摄像头是彩色摄像头，第二摄像头是深度摄像头。或者，第一摄像头是黑白摄像头，第二摄像头是彩色摄像头。其中，彩色摄像头至少包括主摄像头、长焦摄像头或广角摄像头中的任一种。
在第四方面的另一种可能的设计方式中,上述电子设备还包括环境光传感器。环境光传感器,用于响应于预设操作,检测环境光亮度。处理器,还用于确定第一环境光亮度值。上述处理器,还用于确定第一环境光亮度值是否低于第一亮度阈值。若处理器确定第一环境光亮度值低于第一亮度阈值,第二摄像头还用于采集第二图像。
在第四方面的另一种可能的设计方式中，上述第一摄像头是长焦摄像头，第二摄像头是红外摄像头或者主摄像头。预设操作是变倍操作。电子设备还包括环境光传感器。环境光传感器，用于响应于预设操作，检测环境光亮度。处理器，还用于确定第二环境光亮度值。该处理器，还用于确定第二环境光亮度值是否低于第一亮度阈值和第二亮度阈值。若处理器确定第二环境光亮度值低于第一亮度阈值和第二亮度阈值，第二摄像头还用于采集第二图像。该第二摄像头是红外摄像头。
上述处理器,还用于确定第二环境光亮度值是否低于第一亮度阈值,且大于或者等于第二亮度阈值。若处理器确定第二环境光亮度值低于第一亮度阈值,且大于或者等于第二亮度阈值,第二摄像头还用于采集第二图像。该第二摄像头是主摄像头。其中,第二亮度阈值小于第一亮度阈值。
在第四方面的另一种可能的设计方式中,上述第一摄像头是彩色摄像头,第二摄像头是深度摄像头。上述处理器,还用于响应于预设操作,确定第一图像中像素点的RGB值。该处理器,还用于确定第一图像是否满足第一预设条件。若处理器确定第一图像满足第一预设条件,上述第二摄像头还用于采集第二图像。其中,第一预设条件是指:第一图像包括第三区域,第三区域中多个像素点的RGB值的差异小于预设RGB阈值。
在第四方面的另一种可能的设计方式中，上述第一摄像头是黑白摄像头，第二摄像头是彩色摄像头。上述处理器，还用于响应于预设操作，确定第一图像中像素点的灰度值。上述处理器，还用于确定第一图像是否满足第二预设条件。若处理器确定第一图像满足第二预设条件，上述第二摄像头还用于采集第二图像。其中，第二预设条件是指：第一图像包括第四区域，第四区域中多个像素点的灰度值的差异小于预设灰度阈值。
在第四方面的另一种可能的设计方式中,上述处理器,还用于在确定第二区域的 曝光值之前,根据预设对象的图像在第一图像中第一区域的位置,确定第一图像中预设对象的图像所在的第二区域。
在第四方面的另一种可能的设计方式中，所述第一摄像头是长焦摄像头，所述第二摄像头是主摄像头，所述预设操作是变倍操作。上述电子设备还包括环境光传感器。所述环境光传感器，用于响应于所述预设操作，检测环境光亮度。上述处理器，还用于确定第三环境光亮度值。该处理器，还用于确定第三环境光亮度值是否低于第一亮度阈值。若处理器确定第三环境光亮度值低于第一亮度阈值，所述第二摄像头还用于采集所述第二图像。
其中,上述处理器,用于调整所述第一摄像头的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值,包括:所述处理器,用于:如果所述预设对象是静止的,调整所述第一摄像头的曝光时间,或者所述电子设备调整所述第一摄像头的曝光时间和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;如果所述预设对象是运动的,调整所述第一摄像头的拍照帧数,或者所述电子设备调整所述第一摄像头的拍照帧数和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值。
其中,上述处理器,还用于响应于所述拍照操作,如果所述预设对象是静止的,对所述第一摄像头采集的一帧所述第一预览图像进行OIS防抖,得到所述第三图像,如果所述预设对象是运动的,对所述第一摄像头采集的多帧所述第一预览图像进行OIS防抖,得到所述第三图像。
第五方面，本申请提供一种电子设备，包括一个或多个触摸屏，一个或多个存储器，一个或多个处理器；其中所述一个或多个存储器存储有一个或多个程序；当所述一个或多个处理器在执行所述一个或多个程序时，使得所述电子设备实现如第一方面及其任一种可能的设计方式所述的方法。该存储器还用于保存第一摄像头拍摄的图像。存储器还可以用于缓存第二摄像头采集的图像。
第六方面,本申请实施例提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如第一方面及其任一种可能的设计方式所述的方法。
第七方面,本申请实施例提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行如第一方面及其任一种可能的设计方式所述的方法。
可以理解地,上述提供的第二方面至第五方面,及其任一种可能的设计方式所述的电子设备,第六方面所述的计算机存储介质,第七方面所述的计算机程序产品所能达到的有益效果,可参考如第一方面及其任一种可能的设计方式中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种电子设备的硬件结构示意图;
图2为本申请实施例提供的一种拍摄图像的方法的原理框图;
图3为本申请实施例提供的一种拍摄图像的方法的流程图;
图4为本申请实施例提供的一种手机的显示界面的实例示意图;
图5为本申请实施例提供的另一种手机的显示界面的实例示意图;
图6为本申请实施例提供的一种第一图像和第二图像的实例示意图;
图7为本申请实施例提供的一种摄像头的视野范围的实例示意图;
图8为本申请实施例提供的另一种摄像头的视野范围的实例示意图;
图9为本申请实施例提供的一种第二图像中预设对象的图像的实例示意图;
图10为本申请实施例提供的另一种手机的显示界面的实例示意图;
图11为本申请实施例提供的一种第一图像的实例示意图;
图12为本申请实施例提供的另一种拍摄图像的方法的流程图;
图13为本申请实施例提供的另一种拍摄图像的方法的流程图;
图14为本申请实施例提供的另一种手机的显示界面的实例示意图;
图15A为本申请实施例提供的另一种手机的显示界面的实例示意图;
图15B为本申请实施例提供的另一种手机的显示界面的实例示意图;
图16为本申请实施例提供的另一种拍摄图像的方法的流程图;
图17为本申请实施例提供的另一种拍摄图像的方法的流程图;
图18为本申请实施例提供的另一种拍摄图像的方法的流程图;
图19为本申请实施例提供的另一种拍摄图像的方法的流程图;
图20为本申请实施例提供的一种电子设备的结构示意图;
图21为本申请实施例提供的一种芯片系统的结构示意图。
具体实施方式
以下,术语“第一”、“第二”仅用于描述目的,而不能理解为指示或暗示相对重要性或者隐含指明所指示的技术特征的数量。例如,第一摄像头和第二摄像头是指不同的摄像头。由此,限定有“第一”、“第二”的特征可以明示或者隐含地包括一个或者更多个该特征。在本实施例的描述中,除非另有说明,“多个”的含义是两个或两个以上。
本申请实施例提供一种拍摄图像的方法,该方法可以应用于包括多个摄像头的电子设备。例如,上述多个摄像头可以包括主摄像头、长焦摄像头、广角摄像头、红外摄像头、深度摄像头或者黑白摄像头等至少两种摄像头。
其中,每种摄像头在不同的场景下各有其优势和劣势。以下介绍本申请实施例中涉及的摄像头的特点(即优势和劣势)以及适用场景。
(1)主摄像头。主摄像头具有进光量大、分辨率高,以及视野范围居中的特点。主摄像头一般作为电子设备(如手机)的默认摄像头。也就是说,电子设备(如手机)响应于用户启动“照相机”应用的操作,可以默认启动主摄像头,在预览界面显示主摄像头采集的图像。
其中，摄像头的视野范围由摄像头的视场角(field of view,FOV)决定。摄像头的FOV越大，摄像头的视野范围则越大。
(2)长焦摄像头。长焦摄像头的焦距较长,可适用于拍摄距离手机较远的拍摄对象(即远处的物体)。但是,长焦摄像头的进光量较小。在暗光场景下使用长焦摄像头拍摄图像,可能会因为进光量不足而影响图像质量。并且,长焦摄像头的视野范围较小,不适用于拍摄较大场景的图像,即不适用于拍摄较大的拍摄对象(如建筑或风 景等)。
(3)广角摄像头。广角摄像头的视野范围较大，可适用于拍摄较大的拍摄对象（如建筑或风景等）。但是，广角摄像头的分辨率较低。并且，采用广角摄像头拍摄得到的图像所呈现的拍摄对象容易发生畸变，即拍摄对象的图像容易变形。
(4)红外摄像头。红外摄像头具有光谱范围大的特点。例如,红外摄像头不仅可以感知可见光,还可以感知红外光。在暗光场景(即可见光较弱)下,利用红外摄像头可感知红外光的特点,使用红外摄像头拍摄图像,可提升图像质量。但是,红外摄像头的分辨率较低。
(5)深度摄像头。例如,飞行时间(time of flight,ToF)摄像头或者结构光摄像头等均为深度摄像头。本申请实施例中,以深度摄像头是ToF摄像头为例。ToF摄像头具有准确获取拍摄对象的深度信息的特点。ToF摄像头可适用于人脸识别等场景中。但是,ToF摄像头的分辨率较低。
(6)黑白摄像头。由于黑白摄像头没有滤光片;因此,相比于彩色摄像头而言,黑白摄像头的进光量较大。但是,黑白摄像头采集到的图像只能呈现出不同等级的灰度,不能呈现出拍摄对象的真实色彩。需要说明的是,上述主摄像头、长焦摄像头和广角摄像头等均为彩色摄像头。
本申请实施例提供的方法中,电子设备采用预览摄像头拍摄图像时,可以借助于其他摄像头(称为辅助摄像头)相比于预览摄像头的优势,控制辅助摄像头与预览摄像头协同工作,以提升预览摄像头在拍摄时,所拍摄得到的图像的图像质量。也就是说,本申请的方法中,电子设备可以利用各个摄像头的优势,控制多个摄像头协同工作,以提升拍摄得到的图像的图像质量。
需要说明的是,上述预览摄像头是用于采集(或拍摄)电子设备所显示的预览图像的摄像头。也就是说,电子设备在拍摄图像(或者照片)的过程中所显示的预览图像是上述预览摄像头采集的。例如,上述主摄像头、长焦摄像头、广角摄像头或者黑白摄像头等任一摄像头都可以作为电子设备的预览摄像头。上述红外摄像头、深度摄像头、主摄像头、长焦摄像头、广角摄像头或者黑白摄像头等任一摄像头均可以作为电子设备的辅助摄像头。
例如,主摄像头的进光量大于长焦摄像头的进光量。电子设备可能会在暗光场景下,采用长焦摄像头采集图像(即长焦摄像头作为预览摄像头)。在这种场景下,为了避免由于长焦摄像头的进光量不足而影响图像质量,可以借助于主摄像头进光量大的优势,将主摄像头作为辅助摄像头协助长焦摄像头工作,以提升长焦摄像头拍摄得到的图像的图像质量。
又例如,黑白摄像头的进光量大于彩色摄像头的进光量。电子设备可能会在暗光场景下,采用彩色摄像头采集图像(即彩色摄像头作为预览摄像头)。在这种场景下,为了避免由于彩色摄像头的进光量不足而影响图像质量,可以借助于黑白摄像头进光量大的优势,将黑白摄像头作为辅助摄像头协助彩色摄像头工作,以提升彩色摄像头拍摄得到的图像的图像质量。
又例如，红外摄像头具备感知可见光和红外光的能力；可见光摄像头具备感知可见光的能力，不具备感知红外光的能力。在暗光场景（如傍晚、深夜或者暗室内）下，可见光的强度较低。可见光摄像头无法感知到光线或者感知到的光线较弱，因此无法采集到预设对象的清晰图像。而红外摄像头可以感知视野范围内有温度的人或动物（即预设对象）发出的红外光，因此可以采集到预设对象的图像。针对可见光摄像头和红外摄像头的上述特点，电子设备在暗光场景下采用可见光摄像头作为预览摄像头（即第一摄像头）采集图像时，为了避免由于可见光较弱而影响图像质量，可以借助于红外摄像头能够感知红外光的优势，将红外摄像头作为辅助摄像头（即第二摄像头）协助可见光摄像头工作，以提升可见光摄像头拍摄得到的图像的图像质量。
又例如,深度摄像头具备获取所述预设对象的深度信息的能力,所述深度信息用于识别所述预设对象的轮廓。彩色摄像头作为预览摄像头采集图像时,可能会因为拍摄对象(如上述预设对象)的颜色与背景颜色接近而无法清晰拍摄到预设对象的轮廓。而深度摄像头可以采集到预设对象的深度信息,该深度信息可以用于检测到该预设对象的轮廓。电子设备采用彩色摄像头作为预览摄像头采集图像时,可以将深度摄像头作为辅助摄像头协助彩色摄像头工作,以提升彩色摄像头拍摄得到的图像的图像质量。
又例如，彩色摄像头可采集到彩色的图像。但是，黑白摄像头采集到的图像只能呈现出不同等级的灰度，不能呈现出拍摄对象的真实色彩。因此，采用黑白摄像头拍照，可能会因为拍摄对象（如上述预设对象）中包括相近且不易于用灰度区分的颜色，而影响图像质量。电子设备采用黑白摄像头作为预览摄像头采集图像时，可以借助于彩色摄像头可以拍摄出拍摄对象的真实色彩的优势，将彩色摄像头作为辅助摄像头协助黑白摄像头工作，以提升黑白摄像头拍摄得到的图像的图像质量。
示例性的，本申请实施例中的电子设备可以是手机、平板电脑、可穿戴设备（如智能手表）、智能电视机、照相机、个人计算机(personal computer,PC)、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本，以及蜂窝电话、个人数字助理(personal digital assistant,PDA)、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备等包括上述多种摄像头的设备，本申请实施例对该电子设备的具体形态不作特殊限制。
例如,本申请实施例中以电子设备是手机为例,对本申请实施例提供的电子设备的结构进行举例说明。如图1所示,电子设备100(如手机)可以包括:处理器110,外部存储器接口120,内部存储器121,通用串行总线(universal serial bus,USB)接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,移动通信模块150,无线通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及用户标识模块(subscriber identification module,SIM)卡接口195等。
其中，上述传感器模块180可以包括压力传感器，陀螺仪传感器，气压传感器，磁传感器，加速度传感器，距离传感器，接近光传感器，指纹传感器，温度传感器，触摸传感器，环境光传感器和骨传导传感器等传感器。本申请实施例中的环境光传感器，可以用于检测环境光亮度。该环境光传感器采集的环境光亮度，可以用于电子设备100判断电子设备100是否处于暗光场景。换言之，该环境光传感器采集的环境光亮度，可以用于电子设备100判断电子设备100是否需要启动辅助摄像头协助预览摄像头拍照。
可以理解的是,本实施例示意的结构并不构成对电子设备100的具体限定。在另一些实施例中,电子设备100可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是电子设备100的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器110中的存储器为高速缓冲存储器。该存储器可以保存处理器110刚用过或循环使用的指令或数据。如果处理器110需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器110的等待时间,因而提高了系统的效率。在一些实施例中,处理器110可以包括一个或多个接口。
充电管理模块140用于从充电器接收充电输入。电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块141接收电池142和/或充电管理模块140的输入,为处理器110,内部存储器121,外部存储器,显示屏194,摄像头193,和无线通信模块160等供电。在一些实施例中,电源管理模块141和充电管理模块140也可以设置于同一个器件中。
电子设备100的无线通信功能可以通过天线1,天线2,移动通信模块150,无线通信模块160,调制解调处理器以及基带处理器等实现。在一些实施例中,电子设备100的天线1和移动通信模块150耦合,天线2和无线通信模块160耦合,使得电子设备100可以通过无线通信技术与网络以及其他设备通信。例如,本申请实施例中,电子设备100可以通过无线通信技术向其他设备发送上述第一账号和登录密码。
电子设备100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏194和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。该显示屏194包括显示面板。例如,本申请实施例中,显示屏194可以用于显示预览摄像头采集的图像(即预览图像)。显示屏还可以用于显示电子设备100与用户的各种交互界面,如用于请求用户确认是否进入智能拍摄模式的界面。其中,本申请实施例中所述的智能拍摄模式是指:电子设备100在采用预览摄像头采集图像时,启动辅助摄像头协助预览摄像头拍照的模式。
电子设备100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏194以及应用处理器等实现拍摄功能。ISP用于处理摄像头193反馈的数据。摄像头193用于 捕获静态图像、动态图像或视频。在一些实施例中,电子设备100可以包括N个摄像头193,N为大于2的正整数。
在本申请实施例中,N个摄像头193可以包括:主摄像头、长焦摄像头、广角摄像头、红外摄像头、深度摄像头或者黑白摄像头等至少两种摄像头。上述N个摄像头193中,主摄像头、长焦摄像头、广角摄像头或者黑白摄像头等任一摄像头都可以作为电子设备100的预览摄像头(即第一摄像头)。上述红外摄像头、深度摄像头、主摄像头、长焦摄像头、广角摄像头或者黑白摄像头等任一摄像头均可以作为电子设备100的辅助摄像头(即第二摄像头)。但是,预览摄像头与辅助摄像头不同。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备100的存储能力。外部存储卡通过外部存储器接口120与处理器110通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码，所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令，从而执行电子设备100的各种功能应用以及数据处理。例如，在本申请实施例中，处理器110可以通过执行存储在内部存储器121中的指令，执行本申请实施例提供的拍摄图像的方法。内部存储器121可以包括存储程序区和存储数据区。
其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
按键190包括开机键,音量键等。按键190可以是机械按键。也可以是触摸式按键。马达191可以产生振动提示。马达191可以用于来电振动提示,也可以用于触摸振动反馈。指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口195用于连接SIM卡。SIM卡可以通过插入SIM卡接口195,或从SIM卡接口195拔出,实现和电子设备100的接触和分离。电子设备100可以支持1个或M个SIM卡接口,M为大于1的正整数。SIM卡接口195可以支持Nano SIM卡,Micro SIM卡,SIM卡等。
示例性的,本申请实施例中以上述电子设备100是手机为例,介绍本申请实施例的方法。其中,该手机中包括多个摄像头(如N个摄像头)。其中,该多个摄像头中的第一摄像头可以作为预览摄像头,第二摄像头可以作为辅助摄像头。
为了便于理解,本申请实施例这里结合图2介绍本申请实施例中提升图像质量的原理:
在一些场景中,手机采用第一摄像头210(即预览摄像头)采集图像时,可能会因为第一摄像头的一些劣势(如进光量小),导致第一摄像头210采集的图像的图像质量较差,无法从该图像中清楚的分辨出预设对象(如人脸)。而第二摄像头220(即辅助摄像头)相比于第一摄像头210有相应的优势(如进光量大)。从第二摄像头220在该场景采集的图像中,可以清楚的分辨出预设对象。
基于此,如图2所示,手机采用第一摄像头210采集图像时,可以启动第二摄像头220采集图像。其中,第一摄像头210采集的第一图像211作为预览图像显示在预览界面,而第二摄像头220采集的第二图像221则不显示在预览界面。第二图像221也可称为后台图像。
可以理解的是,第一摄像头210和第二摄像头220在手机中的位置相近。因此,一般而言,如果第二图像221中包括预设对象,那么第一图像211中也包括预设对象。由于第二摄像头220相比于第一摄像头210存在上述优势;因此,如果第二图像221中包括预设对象;那么,从第二图像221中就可以清楚的分辨出预设对象。如此,手机可以执行图2所示的222(即检测第二图像221中是否包括预设对象)。如果检测到第二图像221中包括预设对象,手机便可以定位出预设对象在第二图像221中的位置;然后,根据预设对象在第二图像中的位置,以及第二摄像头220与第一摄像头210的视野范围的对应关系,确定出该预设对象在第一图像中的位置(如图像所在区域)。即执行图2所示的212中“定位预设对象”的操作。
示例性的,本申请实施例中所述的预设对象可以包括人脸、人体、动物的身体(如猫咪的身体)或者全身(如猫咪的全身,包括猫咪的脸和身体)、动物脸(如猫咪的脸)、植物、建筑或者文字等任一对象。
可以理解,第一图像的图像质量较差,无法从该第一图像中清楚的分辨出预设对象的原因在于:该预设对象在第一图像中的位置(如图像区域)的曝光值低。因此,手机可以检测并调整上述第一摄像头的曝光参数(即执行图2所示的212中“检测曝光值并调整曝光参数”的操作),以提升上述曝光值。这样,便可以提升第一摄像头拍摄得到的图像的图像质量。也就是说,更新上述曝光值(如提升曝光值)之后,第一摄像头便可以拍摄得到图像质量较高的图像(如第三图像)。
为了便于理解,本申请实施例这里介绍本申请实施例中涉及的术语:
(a)曝光值。曝光值用于表示摄像头拍摄图像时的拍摄参数(camera settings)的组合。该拍摄参数也称为曝光参数。曝光值的大小用曝光等级表示。例如,曝光值可以为-3、-2、-1、0、1、2或者3等。曝光值的大小由多个曝光参数决定。该多个曝光参数可以包括:曝光时间、拍照帧数、ISO感光度和光圈等。
(b)曝光时间。曝光时间是在摄像头拍照的过程中，为了将光投射到摄像头的图像传感器的感光材料的感光面上，快门所要打开的时间。
(c)拍照帧数。拍照帧数是摄像头每秒钟所采集的图像的个数。
(d)ISO感光度。ISO感光度是摄像头（即摄像头中的图像传感器）对亮度的敏感程度。其中，ISO是国际标准化组织（International Organization for Standardization）的缩写。该组织规定摄像头对亮度的敏感程度，用ISO 100、ISO 400这样的数值来表示。
(e)光圈。光圈是一个用来控制光线透过摄像头的镜头,进入摄像头(即摄像头的图像传感器)的感光面的光量的装置。
一般而言，摄像头的光圈不易自动调整。本申请实施例中，为了提升预览摄像头的拍摄的图像质量，可以调整上述曝光时间、拍照帧数或ISO感光度等至少一个曝光参数，以实现上述更新曝光值的目的。其中，曝光时间越长，曝光值越大；拍照帧数越大，曝光值越大；ISO感光度越高，曝光值越大。当然，本申请实施例中也不排除通过调整光圈以提升曝光值的方式。
以下通过各个实施例,结合不同场景介绍本申请实施例提供的拍摄图像的方法。
在一些实施例中,手机中包括主摄像头和长焦摄像头。其中,主摄像头具有进光量大、分辨率高,以及视野范围居中的特点。长焦摄像头的焦距较长,可适用于拍摄距离手机较远的拍摄对象(即远处的物体);但进光量小。
针对长焦摄像头和主摄像头的上述特点,手机在暗光场景下,采用长焦摄像头作为预览摄像头(即第一摄像头)采集图像时,为了避免由于长焦摄像头的进光量不足而影响图像质量,可以借助于主摄像头进光量大的优势,将主摄像头作为辅助摄像头(即第二摄像头)协助长焦摄像头工作,以提升长焦摄像头拍摄得到的图像的图像质量。
具体的,本申请实施例提供一种拍摄图像的方法,该方法可以应用于包括主摄像头和长焦摄像头的手机。以预设对象是人脸为例,如图3所示,该方法可以包括S301-S310。
S301、检测到变倍操作。
其中,该变倍操作用于触发手机的长焦摄像头采集图像。响应于该变倍操作,手机可以启动长焦摄像头,该长焦摄像头便可以采集图像。该变倍操作是预设操作。
可以理解,手机中摄像头的镜头一般都是定焦镜头,焦距可调的范围很小。手机拍摄图像时,变焦是通过切换不同焦距的摄像头来实现的。上述变倍操作可用于触发手机的高倍摄像头(如焦距为主摄像头的3倍/5倍等倍数的摄像头,如长焦摄像头)采集图像。也就是说,响应于该变倍操作,手机的预览摄像头可由低倍摄像头(即焦距较小的摄像头,如主摄像头)切换为高倍摄像头(即焦距较大的摄像头,如长焦摄像头)。因此,上述变倍操作还可以称为变焦操作。其中,上述变倍操作可以用于触发手机启动长焦摄像头,并将摄像头(如长焦摄像头)的焦距变倍到默认摄像头(如主摄像头)的2倍、3倍、5倍、10倍、15倍或者20倍等任一光学倍率。本申请实施例中,以上述变倍操作触发变倍的光学倍率为5倍为例,介绍本申请实施例的方法。当然,上述变倍操作触发变倍的光学倍率也可以为10倍或者其他数据,本申请实施例对光学倍率的具体数值不作限制。
在一种应用场景中,上述变倍操作可以是手机显示图像预览界面时,在该图像预览界面输入的用于控制手机的摄像头变焦的操作。示例性的,手机响应于用户启动“照相机”应用的操作(如图4中的(a)所示的操作1),可以启动手机的默认摄像头(如主摄像头)。例如,该操作1可以是单击操作。然后,手机可以显示图4中的(b)所示的图像预览界面,该图像预览界面包括取景框401、摄像头转化键408、拍摄键407、相册键406、闪光灯选项411、滤镜选项412、“视频”选项、“拍照”选项、“全景”选项等。
其中,图4中的(b)所示的取景框401用于显示上述默认摄像头采集的预览图像(如预览图像402)。该预览图像402与图6所示的图像602相同。例如,上述变倍操作可以是用户在该预览图像402上输入的双指外扩的操作(如操作2)。例如,如图4中的(b)所示的取景框401中还显示有手机的光学倍率标识409。该光学倍率 标识409为“1×”,表示光学倍率为1倍。响应于用户在图4中的(b)所示的预览图像402上输入的操作2,手机可显示图4中的(c)所示的图像预览界面。图4中的(c)所示的图像预览界面包括光学倍率标识410(如“5×”)。其中,“5×”表示光学倍率为5倍。也就是说,响应于上述操作2(即变倍操作),手机所使用的摄像头的光学倍率发生了变化。
其中,图4中的(b)所示的闪光灯选项411用于触发手机在拍摄照片时打开或者关闭闪光灯。滤镜选项412用于选择手机拍摄照片时所要采用的拍摄风格。该拍摄风格可以包括:标准、小清新、蓝调和黑白等。其中,“视频”选项用于触发手机显示录像的取景界面(附图未示出)。“拍照”选项用于触发手机显示拍照的取景界面(如图4中的(b)所示的图像预览界面)。“全景”选项用于触发手机显示手机拍摄全景照片的取景界面(附图未示出)。摄像头转化键408用于触发手机转化使用前置摄像头和后置摄像头来采集图像。拍摄键407用于控制手机保存取景框401中显示的预览图像。相册键406用于查看手机中保存的图像。
在一种应用场景中,用户在该预览图像402上输入的双指外扩的操作,可用于触发手机放大该预览图像。这种情况下,可能是由于用户想要拍摄的拍摄对象距离手机较远,所以用户想要触发手机放大预览图像,以便于用户可以在图像预览界面更加清晰地观看到远处的拍摄对象的图像。长焦摄像头的焦距较长,适用于拍摄距离手机较远的拍摄对象。因此,上述双指外扩的操作用于触发手机启动长焦摄像头,以拍摄距离手机较远的拍摄对象(即远处的物体)。
需要说明的是,本申请实施例中的第一摄像头(如长焦摄像头)和第二摄像头(如主摄像头)均为前置摄像头;或者,第一摄像头和第二摄像头均为后置摄像头。
在另一种应用场景中，上述变倍操作还可以是在基于物体（即拍摄对象）跟踪的对焦模式中，拍摄对象由近向远的移动。例如，在基于物体跟踪的对焦模式下，手机可以接收用户对图5中的(a)所示的拍摄对象501的选择操作，将拍摄对象501确定为跟踪对象（即跟踪对象501）。手机可以检测该跟踪对象的位置变化。S301具体可以为：手机检测到该跟踪对象发生由近向远的移动，且移动距离大于预设距离阈值。例如，手机检测到上述跟踪对象501由图5中的(a)所示的位置移动至图5中的(b)所示的位置，则表示该手机接收到变倍操作，该手机可以启动长焦摄像头。
需要说明的是,本申请实施例中所述的变倍操作包括但不限于上述两种变倍操作。本申请实施例中所述的变倍操作可以包括可触发手机启动长焦摄像头(即触发手机的长焦摄像头采集图像)的所有操作。例如,该变倍操作还可以是自动变倍操作。示例性的,当拍摄对象与手机之间的距离大于第一距离阈值,手机则可以自动触发上述变倍操作。例如,当用户站在地面拍摄巴黎铁塔的塔尖时,该塔尖作为拍摄对象,塔尖与手机之间的距离大于第一距离阈值,手机可以自动触发变倍操作。本申请实施例所述变倍操作的其他形式,本申请实施例这里不予赘述。
S302、响应于上述变倍操作,手机的长焦摄像头采集图像a,手机显示长焦摄像头采集的图像a。
其中，响应于上述变倍操作，手机可以启动长焦摄像头。这样，长焦摄像头便可以采集图像（如图像a）。并且，手机可以将长焦摄像头采集的图像a作为预览图像，显示在图像预览界面。其中，本申请实施例中的图像a是第一图像。
示例性的,以下实施例中,以变倍操作是图4中的(b)所示的操作2为例。响应于用户在图4中的(b)所示的预览图像402上输入的操作2(即变倍操作),手机可显示图4中的(c)所示的预览图像404。该预览图像404是长焦摄像头采集的图像,如上述图像a。
需要说明的是,图4中的(b)所示的预览图像402是主摄像头采集的图像,而图4中的(c)所示的预览图像404是长焦摄像头采集的图像。其中,由于主摄像头的视野范围大于长焦摄像头的视野范围;因此,预览图像402的取景范围大于预览图像404的取景范围。以预设对象为人的脸部进行举例,由于长焦摄像头的焦距大于主摄像头的焦距;因此,拍摄对象405的图像在预览图像404中所占的面积大于拍摄对象405的图像在预览图像402中所占的面积,或者说,拍摄对象405的图像在预览图像404中的面积占比大于拍摄对象405的图像在预览图像402中的面积占比。由于长焦摄像头的进光量较小;因此,预览图像404的图像质量较差,用户无法从预览图像404清晰地观看到拍摄对象405的图像。
为了提升图像质量,本申请实施例中可以借助于主摄像头进光量大的优势,将主摄像头作为辅助摄像头协助长焦摄像头工作。如此,响应于上述变倍操作,手机不仅可以启动长焦摄像头,还可以启动主摄像头。具体的,如图3所示,在S301之后,本申请实施例的方法还包括S303。
S303、手机的主摄像头采集图像b,手机不显示图像b。
其中,手机的主摄像头可采集图像b。但是,主摄像头采集的图像b不会显示在预览界面。例如,响应于图4中的(b)所示的操作2(即变倍操作),如图4中的(c)所示,手机显示的预览图像404是长焦摄像头采集的图像(即图像a)。手机不会显示主摄像头采集的图像b,即图像b不会在手机上呈现给用户。
需要说明的是,虽然手机不显示图像b,但是手机可以缓存主摄像头采集的图像b。当然,手机也可以缓存长焦摄像头采集的图像a。示例性的,缓存在手机的内部存储器121中。其中,手机启动任一个摄像头后,该任一个摄像头采集的图像都可以被手机缓存。具体的,以手机缓存主摄像头采集的图像b为例,从主摄像头采集到图像b开始,手机可以在第二预设时长(如10秒、15秒或者30秒等任一时长)内,缓存该图像b。截止第二预设时长,手机则可以删除该图像b。也可以一直缓存在内部存储器121中,直到被定期删除或者被其他缓存图像所替代。
本申请实施例中,手机将长焦摄像头采集的图像a作为预览图像显示在取景框中,而不显示主摄像头采集的图像b;因此,可以将图像a称为预览图像,将图像b称为后台图像。其中,本申请实施例中的图像b是第二图像。
需要说明的是，在主摄像头是手机的默认摄像头的情况下，手机响应于用户启动“照相机”应用的操作（如图4中的(a)所示的操作1），可以启动主摄像头。一般而言，响应于上述变倍操作，手机可启动长焦摄像头，该长焦摄像头可采集图像；并且手机可关闭主摄像头，该主摄像头停止采集图像。而本申请实施例中，响应于该变倍操作，手机可启动长焦摄像头，该长焦摄像头可采集图像，但是，手机不会关闭主摄像头，该主摄像头继续采集图像，以协助长焦摄像头拍摄图像。
可以理解,由于图4中的(b)所示的预览图像402也是主摄像头采集的图像;因此,该图像b的图像质量可以参考图4中的(b)所示的预览图像402的图像质量。对比预览图像402和预览图像404可知:用户可以从预览图像402清晰地观看到拍摄对象403的图像,而无法从预览图像404(即图像a)清晰地观看到拍摄对象405(如人脸,即预设对象)的图像。其中,拍摄对象403和拍摄对象405是同一个人。
也就是说,长焦摄像头的进光量小,可能会导致长焦摄像头采集的图像a的图像质量较差。如果图像a中包括预设对象(如人脸)的图像,用户难以从图像a中清楚的分辨出预设对象。但是,主摄像头的进光量大,主摄像头采集的图像b的图像质量较高。如果图像b中包括预设对象的图像,用户则可以从图像b中清楚的分辨出预设对象。需要说明的是,长焦摄像头和主摄像头在手机中的位置相近。因此,一般而言,如果图像b中包括预设对象,那么图像a中也包括预设对象。如此,即使从图像a中无法清楚的分辨出预设对象,还可以从图像b中清楚的分辨出预设对象。具体的,本申请实施例的方法还包括S304。
S304、手机检测到图像b的第一区域内包括预设对象的图像。图像b包括第一区域,该第一区域对应于长焦摄像头的初始视野范围的区域。
其中,长焦摄像头的初始视野范围是指长焦摄像头未变焦之前的视野范围。随着长焦摄像头的焦距的变化,长焦摄像头的视野范围也会发生变化。例如,长焦摄像头的焦距越长,长焦摄像头的视野范围越小;长焦摄像头的焦距越短,长焦摄像头的视野范围越大。一般而言,长焦摄像头的初始视野范围的中心点与主摄像头的视野范围的中心点重合。当然,也有一些长焦摄像头的初始视野范围的中心点与主摄像头的视野范围的中心点不重合。本申请实施例中,以长焦摄像头的初始视野范围的中心点与主摄像头的视野范围的中心点重合为例,介绍本申请实施例的方法。
长焦摄像头的视野范围(如初始视野范围)小于主摄像头的视野范围。例如,图6所示的虚线矩形框620表示主摄像头的视野范围,图6所示的虚线矩形框610表示长焦摄像头的视野范围。长焦摄像头的视野范围610小于主摄像头的视野范围620。如图6所示,图像601是长焦摄像头采集的第一图像(即图像a),图像602是主摄像头采集的第二图像(即图像b)。
如图6所示,上述第一区域可以是图像602(即图像b)中、与长焦摄像头的视野范围(如虚线矩形框610)对应的区域。也就是说,第一区域是图像602(即图像b)中、虚线矩形框610对应的区域。如图6所示,第一区域(即虚线矩形框610对应的区域)中包括预设对象603(如人脸)的图像。
本申请实施例中,手机可以保存长焦摄像头的视野范围与主摄像头的视野范围的对应关系。如此,手机便可以根据长焦摄像头的视野范围与主摄像头的视野范围的对应关系,确定出图像b中所包含的第一区域,然后判断该第一区域中是否包括预设对象的图像。
需要说明的是,手机判断图像b的第一区域中是否包括预设对象的图像的方法,可以参考常规技术中识别一幅图像中是否包括预设对象的图像的方法,本申请实施例这里不予赘述。
本申请实施例中,手机可以采用以下实现方式(1)和实现方式(2)中的任一种实现方式,确定出图像b的第一区域。
实现方式(1):
在实现方式(1)中,手机可以保存长焦摄像头的初始视野范围中的两个对角(如左上角和右下角,或者右上角和左下角)在主摄像头的视野范围的坐标系中的二维坐标。该二维坐标可以体现出长焦摄像头的视野范围与主摄像头的视野范围的对应关系。其中,主摄像头的视野范围的坐标系的坐标原点是主摄像头的视野范围中任意一个角(如左上角或左下角),x轴和y轴为相邻的两条边。
请参考图7，其示出主摄像头的视野范围720的一种坐标系实例。如图7所示，点o为坐标原点，x轴为视野范围720的下侧边，y轴为视野范围720的左侧边。手机可以保存长焦摄像头的初始视野范围710的左上角A1和右下角A2在图7所示的xoy坐标系中的二维坐标A1(x1,y1)和A2(x2,y2)。
可以理解,上述二维坐标A1(x1,y1)和A2(x2,y2)可以体现出长焦摄像头的视野范围与主摄像头的视野范围的对应关系。手机可以根据保存的二维坐标A1(x1,y1)和A2(x2,y2),确定出图像b的第一区域。
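以实现方式（1）为例，利用两个对角的二维坐标A1(x1,y1)和A2(x2,y2)，判断主摄像头图像中某像素点是否落入第一区域、并换算其在第一区域内的相对位置，可以用如下示意代码说明（坐标系与图7一致，即y轴向上；具体数值均为本文假设）：

```python
def locate_in_first_region(px, py, x1, y1, x2, y2):
    """A1(x1,y1)为第一区域左上角、A2(x2,y2)为右下角 (y 轴向上的坐标系)。
    返回像素在第一区域内的归一化相对位置; 不在区域内则返回 None。"""
    if not (x1 <= px <= x2 and y2 <= py <= y1):
        return None
    return ((px - x1) / (x2 - x1), (y1 - py) / (y1 - y2))

# 假设第一区域左上角 A1(40, 160)、右下角 A2(120, 80)
assert locate_in_first_region(80, 120, 40, 160, 120, 80) == (0.5, 0.5)  # 区域中心
assert locate_in_first_region(10, 10, 40, 160, 120, 80) is None         # 区域之外
```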
实现方式(2):
在实现方式（2）中，手机可以将长焦摄像头的初始视野范围划分为等间隔的多个区域1（如A*B个区域1），将主摄像头的视野范围划分为等间隔的多个区域2（如C*D个区域2）。其中，该区域1与区域2的大小（如面积）可以相同，也可以不同。手机可以保存上述多个区域1与多个区域2中部分区域2（如多个区域2中上述第一区域中的区域2）的对应关系，该多个区域1与多个区域2中部分区域2的对应关系可以体现出长焦摄像头的视野范围与主摄像头的视野范围的对应关系。
示例性的,本申请实施例中,采用图8中的(a)所示的矩形框810表示长焦摄像头的初始视野范围(记为视野范围810),采用图8中的(b)所示的矩形框820表示主摄像头的视野范围(记为视野范围820)。如图8中的(a)所示,本申请实施例中,可以将视野范围810划分为等间隔的21*27个区域1,即A=21,B=27。如图8中的(b)所示,本申请实施例中,可以将视野范围820划分为等间隔的19*24个区域2,即C=19,D=24。图8中的(a)所示的视野范围810的9个区域1,可以对应图8中的(b)所示的视野范围820的1个区域2。
例如,手机可以保存图8中的(a)所示的视野范围810中的多个区域1与图8中的(b)所示的视野范围820中的多个区域2中的部分区域2。其中,该部分区域2可以是图8中的(b)所示的视野范围810(即第一区域对应的视野范围)中的区域2,如粗线框b1对应的区域2和粗线框b2对应的区域2等。
在上述对应关系中,图8中的(a)所示的视野范围810中粗线框a1内的9个区域1,对应图8中的(b)所示的视野范围820中粗线框b1对应的区域2。图8中的(a)所示的视野范围810中粗线框a2内的9个区域1,对应图8中的(b)所示的视野范围820的粗线框b2对应的区域2。图8中的(a)所示的视野范围810中粗线框a3内的9个区域1,对应图8中的(b)所示的视野范围820的粗线框b3对应的区域2。图8中的(a)所示的视野范围810中粗线框a4内的9个区域1,对应图8中的(b)所 示的视野范围820的粗线框b4对应的区域2。图8中的(a)所示的视野范围810中粗线框a5内的9个区域1,对应图8中的(b)所示的视野范围820的粗线框b5对应的区域2。
可以理解,上述多个区域1与多个区域2中部分区域2的对应关系可以体现出长焦摄像头的视野范围与主摄像头的视野范围的对应关系。手机可以保存多个区域1与多个区域2中部分区域2的对应关系,并根据保存的对应关系,确定出图像b的第一区域。
在一些实施例中,每个区域1可以对应长焦摄像头的初始视野范围内的一个像素点,上述每个区域2可以对应主摄像头的视野范围内的一个像素点。也就是说,上述A*B是长焦摄像头的分辨率,C*D是主摄像头的分辨率。
需要说明的是,本申请实施例中,手机确定出图像b的第一区域的方法,包括但不限于上述实现方式(1)和实现方式(2)所述的方法。长焦摄像头的视野范围与主摄像头的视野范围的对应关系,包括但不限于上述实现方式(1)和实现方式(2)中所述的对应关系。并且,手机可以采用各种方式,保存上述长焦摄像头的视野范围与主摄像头的视野范围的对应关系,如采用表格保存该对应关系。本申请实施例中,对手机保存上述对应关系的具体方式不作限制。
S305、手机确定第二区域的曝光值。该第二区域是图像a中上述预设对象的图像所在的区域。
其中,手机可以根据预设对象的图像在图像b的第一区域的位置,确定出图像a中、预设对象的图像所在的第二区域,并检测第二区域的曝光值。
示例性的,上述预设对象(如人脸)的图像可能占据第一区域(即图像a的第一区域)的部分位置。例如,以图6所示的图像602为例。如图9中的(a)所示,预设对象603(如人脸)的图像占据第一区域610中虚线框901对应的位置(即第一区域610的部分位置)。如图9中的(b)所示,预设对象603(如人脸)的图像占据第一区域610中虚线框902对应的位置(即第一区域610的部分位置)。
当然,上述预设对象的图像也可能占据第一区域的所有位置(附图未示出)。在这种情况下,预设对象的图像在第一区域的位置是整个第一区域。
其中,图像a的第二区域是:图像a中、预设对象所在的区域。可以理解,上述第一区域是长焦摄像头的初始视野范围对应的在图像b中的区域。换言之,长焦摄像头所采集的图像(如图像a)可以包括主摄像头所采集的图像b的第一区域中的图像特征。并且,上述预设对象的图像在图像a中的相对位置,与预设对象的图像在第一区域中的相对位置是一致的。因此,手机可以根据预设对象的图像在第一区域的位置,确定出图像a中、预设对象所在的第二区域。
示例性的,手机可以保存长焦摄像头的视野范围与主摄像头的视野范围的对应关系。手机可以根据预设对象的图像在第一区域的位置,结合长焦摄像头的视野范围与主摄像头的视野范围的对应关系,确定出图像a中预设对象所在的第二区域。
在上述实现方式(1)中,手机可以保存长焦摄像头的初始视野范围中的两个对角在主摄像头的视野范围的坐标系中的二维坐标。该二维坐标可以体现出长焦摄像头的视野范围与主摄像头的视野范围的对应关系。
在上述实现方式(2)中,手机可以保存划分长焦摄像头的初始视野范围得到的多个区域1与划分主摄像头的视野范围得到的多个区域2中部分区域2的对应关系。该多个区域1与多个区域2中部分区域2的对应关系可以体现出长焦摄像头的视野范围与主摄像头的视野范围的对应关系。
例如,本申请实施例这里以上述实现方式(2)为例,结合以下两种情况,说明S305中“手机确定图像a中、预设对象所在的第二区域”的具体方法。其中,假设上述实现方式(2)所述的每个区域1对应长焦摄像头的初始视野范围内的一个像素点,每个区域2对应主摄像头的视野范围内的一个像素点。也就是说,上述多个区域1与多个区域2中部分区域2的对应关系,是长焦摄像头的初始视野范围内的像素点与主摄像头的视野范围内的像素点之间的对应关系。
情况(1):长焦摄像头未变焦的情况。即长焦摄像头采集图像a时,该长焦摄像头的视野范围是上述初始视野范围。
在情况(1)下,以图9中的(b)所示的虚线框902对应的区域为预设对象的图像在第一区域的位置为例。手机可以执行以下S00-S03,以确定出图像a中、预设对象(如人脸)所在的第二区域。S00:手机从图像b的第一区域中确定出预设对象的图像的位置,如虚线框902对应的区域。S01:手机确定出该虚线框902对应区域内的多个像素点(记为多个像素点1)。S02:手机根据上述对应关系(如长焦摄像头的初始视野范围内的像素点与主摄像头的视野范围内的像素点之间的对应关系),确定出图像a的多个像素点(记为多个像素点2)中与上述多个像素点1对应的多个像素点(记为多个像素点3)。S03:手机确定图像a中包括上述多个像素点3的区域为第二区域。
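上述S00-S03的像素点映射过程可以用如下示意代码概括。其中像素点对应关系用字典表示，对应关系的具体数值均为本文假设的示例数据：

```python
def map_region(pixels1, pixel_map):
    """S02-S03: 按保存的像素对应关系, 把第一区域内预设对象的像素点集合
    (多个像素点1) 映射为图像a中的像素点集合 (多个像素点3), 即第二区域。"""
    return {pixel_map[p] for p in pixels1 if p in pixel_map}

# 假设的对应关系: 主摄像头视野内像素 -> 长焦摄像头初始视野内像素
pixel_map = {(10, 10): (0, 0), (10, 11): (0, 1), (11, 10): (1, 0)}
object_pixels = {(10, 10), (10, 11)}          # 虚线框902内的多个像素点1
assert map_region(object_pixels, pixel_map) == {(0, 0), (0, 1)}
```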
情况(2):长焦摄像头变焦的情况。即长焦摄像头采集图像a时,该长焦摄像头的视野范围不是上述初始视野范围。
在情况(2)下,以图9中的(b)所示的虚线框902对应的区域为预设对象的图像在第一区域的位置为例。手机可以执行以下S10-S15,以确定出图像a中、预设对象(如人脸)所在的第二区域。S10:手机从图像b的第一区域中确定出预设对象的图像的位置,如虚线框902对应的区域。S11:手机确定出该虚线框902对应区域内的多个像素点(记为多个像素点①)。S12:手机根据长焦摄像头的初始视野范围内的像素点与主摄像头的视野范围内的像素点之间的对应关系(记为对应关系1),确定出长焦摄像头未变焦的情况下采集的图像的多个像素点(记为多个像素点②)中与上述多个像素点①对应的像素点(记为多个像素点③)。S13:手机获取长焦摄像头的变焦信息。其中,变焦信息可以包括变焦比例和中心焦点的位置。该变焦比例可以是长焦摄像头变焦后的视野范围与初始视野范围的比值。该中心焦点可以是长焦摄像头变焦后的视野范围的中心点。S14:手机根据长焦摄像头的变焦信息,确定上述图像a(即长焦摄像头变焦后采集的图像)中、与上述多个像素点③对应的多个像素点(记为像素点④)。S15:手机确定图像a中包括上述多个像素点④的区域为上述第二区域。
需要说明的是，由上述变焦比例和中心焦点的定义可知：上述变焦信息可以用于确定长焦摄像头变焦后的视野范围中各个像素点（即图像a中各个像素点），与上述初始视野范围中各个像素点（如上述像素点②）的对应关系（记为对应关系2）。而多个像素点②是长焦摄像头的初始视野范围内的像素点，多个像素点①是图像b中预设对象的图像对应的像素点。因此，当手机执行S12确定出上述多个像素点③（即多个像素点②中与多个像素点①对应的像素点）之后，手机执行S14-S15，根据上述对应关系2，将图像a中与多个像素点③对应的多个像素点④所在的区域确定为第二区域。
其中,长焦摄像头变焦后的视野范围中各个像素点(即图像a中各个像素点),与初始视野范围中各个像素点(如上述像素点②)的对应关系,即上述对应关系2,可以是根据长焦摄像头变焦后的光学倍率确定的。其中,长焦摄像头的变焦前的光学倍率为“1×”(即1倍)。手机检测第二区域的曝光值的方法,可以参考常规技术中电子设备检查图像的曝光值的方法,本实施例这里不予赘述。
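S13-S14中根据变焦比例与中心焦点，把初始视野范围中的像素点换算到变焦后的图像a中，可以用如下示意代码说明。其中的线性换算模型是本文为说明对应关系2而假设的简化实现：

```python
def zoom_map(px, py, cx, cy, zoom_ratio, out_w, out_h):
    """把初始视野中的像素 (px,py) 换算为变焦后图像中的像素。
    zoom_ratio 为变焦后视野与初始视野的比值 (如5倍变焦取 1/5),
    (cx,cy) 为中心焦点, out_w/out_h 为输出图像尺寸。"""
    half_w, half_h = out_w * zoom_ratio / 2, out_h * zoom_ratio / 2
    # 以变焦后视野的左下角为基准, 按比例放大到输出图像坐标
    nx = (px - (cx - half_w)) / zoom_ratio
    ny = (py - (cy - half_h)) / zoom_ratio
    return nx, ny

# 假设输出图像 1000x1000, 中心焦点 (500,500), 5倍变焦 => zoom_ratio = 1/5
assert zoom_map(500, 500, 500, 500, 0.2, 1000, 1000) == (500.0, 500.0)
assert zoom_map(450, 500, 500, 500, 0.2, 1000, 1000) == (250.0, 500.0)
```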
S306、手机判断第二区域的曝光值是否小于第一曝光阈值。
需要说明的是,摄像头(如长焦摄像头或者主摄像头)拍摄的图像中各个区域的曝光值可以不同。
可以理解,手机无法从用户的视觉角度,判断用户是否可以从图像a中清楚的检测到预设对象。但是,手机可以通过上述图像a中、预设对象所在的第二区域的曝光值的大小,判断该图像a中预设对象的图像对用户而言是否清晰可见。
具体的,如果第二区域的曝光值大于或者等于第一曝光阈值,则表示该图像a中预设对象的图像对用户而言清晰可见,用户可以从图像a中清楚的检测到预设对象。在这种情况下,手机不需要更新第二区域的曝光值。具体的,手机可以执行S310。
如果第二区域的曝光值小于第一曝光阈值,则表示该图像a中预设对象的图像对用户而言较为模糊,用户无法从图像a中检测到预设对象。在这种情况下,手机可以调整长焦摄像头的曝光参数,以提升上述曝光值。具体的,手机可以执行S307。
其中，上述第一曝光阈值可以是预配置在手机中的曝光阈值。或者，该第一曝光阈值可以是根据手机周围的环境光亮度值确定的。该环境光亮度值可以由手机中的环境光传感器采集。手机中可以保存不同的环境光亮度值，以及每个环境光亮度值对应的第一曝光阈值。由上述术语介绍中的描述可知：曝光值的大小用曝光等级表示。例如，曝光值可以为-2、-1、0、1、2或者3等。该第一曝光阈值也可以是一个曝光等级，如0或1等任一曝光等级。
例如,如果上述第一曝光阈值是预配置在手机中的曝光阈值,那么该第一曝光阈值可以为曝光等级0。摄像头采集图像时,曝光等级0是一个明暗适当的曝光等级,有利于保证图像的图像质量。
可选的，在另一些实施例中，可以采用第二区域的平均灰度值或者第二区域的平均RGB值代替上述第二区域的曝光值。其中，第二区域的平均灰度值是指：第二区域中各个像素点的灰度值的平均值。第二区域的平均RGB值是指：第二区域中各个像素点的RGB值的平均值。可以理解，采用上述第二区域的平均灰度值代替上述第二区域的曝光值后，本申请实施例中所述的第一曝光阈值和第二曝光阈值则可以替换为相应的灰度阈值。采用上述第二区域的平均RGB值代替上述第二区域的曝光值后，本申请实施例中所述的第一曝光阈值和第二曝光阈值则可以替换为相应的RGB阈值。
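上述第二区域的平均灰度值的计算可以示意如下（平均RGB值的计算方式类似，对各通道分别求平均即可；区域数据为假设的示例）：

```python
def mean_gray(region):
    """第二区域的平均灰度值: 区域内各像素点灰度值的平均值。"""
    pixels = [p for row in region for p in row]
    return sum(pixels) / len(pixels)

region = [[100, 120], [140, 160]]      # 假设的第二区域灰度值
assert mean_gray(region) == 130.0
```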
S307、手机调整长焦摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值。
其中，手机可以调整长焦摄像头的曝光时间（如调大曝光时间），以提升上述曝光值。或者，手机可以调整长焦摄像头的曝光时间（如调大曝光时间），并调整ISO感光度（如调高ISO感光度），以提升上述曝光值。或者，手机可以调整长焦摄像头的拍照帧数（如调大拍照帧数），以提升上述曝光值。或者，手机可以调整长焦摄像头的拍照帧数（如调大拍照帧数），并调整ISO感光度（如调高ISO感光度），以提升上述曝光值。
需要注意的是,手机调整长焦摄像头的曝光参数的目的在于:使长焦摄像头拍摄得到的预设对象的图像的曝光值等于或者大于第一曝光阈值。本申请实施例中,手机可以保存曝光值与曝光参数的对应关系表。手机可以按照该对应关系表调整上述曝光参数,以使曝光值大于第一曝光阈值。例如,请参考表1,其示出本申请实施例提供的一种曝光值与曝光参数的对应关系表示例。
表1
（表1在原文中以图片形式给出，内容为曝光值与曝光参数的对应关系：各行按序号给出曝光值，以及对应的曝光时间、拍照帧数、ISO感光度和光圈。）
其中,表1所示的曝光时间T1<T2<T3<T4<T5。表1所示的拍照帧数F1<F2<F3<F4<F5。表1所示的ISO 1<ISO 2<ISO 3<ISO 4。
例如,假设上述第二区域的曝光值为表1所示的序号3对应的曝光值0;此时,长焦摄像头的曝光时间为T2,拍照帧数为F2,ISO感光度为ISO 2。上述第一曝光阈值为2。那么,手机可以仅调整曝光时间,以提升曝光值。如手机可以将曝光时间调整为T4;如此,曝光值可以为表1所示的序号9对应的曝光值2。或者,手机也可以按照其他选项调整曝光参数,以提升曝光值。如手机可以将曝光时间调整为T3,将拍照帧数调整为F4,将ISO感光度调整为ISO 1;如此,曝光值可以为表1所示的序号8对应的曝光值2。或者,手机也可以取上述两项的平均值,例如表1所示的序号9对应的数据和序号8对应的数据的平均值。
又例如,假设上述第二区域的曝光值为表1所示的序号3对应的曝光值0;此时,长焦摄像头的曝光时间为T2,拍照帧数为F2,ISO感光度为ISO 2。上述第一曝光阈值为3。那么,手机可以调整拍照帧数和ISO感光度,以提升曝光值。如手机可以将拍照帧数调整为F4,将ISO感光度调整为ISO 3;如此,曝光值可以为表1所示的序号10对应的曝光值3。或者,手机可以按照其他选项调整曝光参数,以提升曝光值。如手机可以将曝光时间调整为T3,拍照帧数调整为F5,ISO感光度调整为ISO 3;如此,曝光值可以为表1所示的序号12对应的曝光值3。或者,手机也可以取上述两项的平均值,例如表1所示的序号10对应的数据和序号12对应的数据的平均值。或者,手机也可以取三项的平均值,例如表1所示的序号10对应的数据、序号11对应的数据和序号12对应的数据的平均值。
需要说明的是,表1中光圈为NA表示光圈不调整。当然,本申请实施例中也不排除通过调整光圈以提升曝光值的方式。可以理解,如果过度调高上述曝光参数,可能会导致摄像头(如长焦摄像头)拍摄的图像曝光过度而影响图像质量。因此,如果第二区域的曝光值小于第一曝光阈值,手机按照该第一曝光阈值对应的曝光参数,更新长焦摄像头的曝光参数即可,不需要过度调高上述曝光参数。这样,可以保证长焦摄像头拍摄的图像的图像质量。因此,上述实例中,手机调整长焦摄像头的曝光参数,都是以曝光值等于第一曝光阈值为标准。这样,可以避免过度调高曝光参数而影响图像质量。
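按照对应关系表选取曝光参数、且以“曝光值恰好等于第一曝光阈值”为标准的查表逻辑,可以用如下Python代码示意。表中的数值与字段名均为本文假设的示例,并非表1的真实数据;存在多组候选参数时,示意性地实现正文所述的“取平均值”方案:

```python
# 假设性的对应关系表片段:每行把一组曝光参数映射到一个曝光值(曝光等级)
EXPOSURE_TABLE = [
    {"ev": 0, "time": 2, "frames": 2, "iso": 200},  # 类比序号3:T2/F2/ISO 2
    {"ev": 2, "time": 4, "frames": 2, "iso": 200},  # 类比序号9:仅调大曝光时间
    {"ev": 2, "time": 3, "frames": 4, "iso": 100},  # 类比序号8:另一组可选参数
]


def pick_params(table, target_ev):
    """选出使曝光值恰好等于第一曝光阈值的曝光参数,避免过度调高导致过曝。

    存在多组候选时,对各参数取平均值;无候选时返回 None。
    """
    rows = [r for r in table if r["ev"] == target_ev]
    if not rows:
        return None
    n = len(rows)
    return {k: sum(r[k] for r in rows) / n for k in ("time", "frames", "iso")}
```

以“曝光值等于第一曝光阈值”为目标查表,正体现了正文中“不需要过度调高曝光参数”的设计取舍。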
S308、手机的长焦摄像头采用调整后的曝光参数采集第一预览图像,手机显示该第一预览图像。
例如,手机执行S307调整长焦摄像头的曝光参数后,长焦摄像头采用调整后的曝光参数采集第一预览图像可以为图10中的(a)所示的预览图像1001。也就是说,手机可以执行S308,显示图10中的(a)所示的预览图像1001。对比图10中的(a)所示的预览图像1001与图4中的(c)所示的预览图像404可知:通过本申请实施例的方法可以提升长焦摄像头拍摄得到的图像的图像质量。
S309、响应于用户的拍照操作,手机保存图像c。该图像c是长焦摄像头采用调整后的曝光参数所拍摄的。
具体的,图像c是基于长焦摄像头采用调整后的曝光参数采集的一帧或多帧预览图像获取的。
示例性的,该拍照操作可以是用户对图10中的(a)所示的拍摄键1003的点击操作(如单击操作)。或者,该拍照操作还可以是手机执行S308显示预览图像时,接收到的语音命令,该语音命令用于触发手机拍照。例如,该语音命令可以为“拍照”、“请拍照”或者“321”等语音信息。
其中,本申请实施例中的图像c是第三图像。该图像c可以是手机接收到拍照操作时,手机所采集的一帧第一预览图像。或者,该图像c可以是根据从接收到拍照操作开始手机所采集的多帧第一预览图像生成的。
示例性的,以拍照操作可以是用户对图10中的(a)所示的拍摄键1003的点击操作为例。响应于用户对拍摄键1003的点击操作,手机可以保存图像c至手机的相册。例如,响应于用户对拍摄键1003的点击操作,手机可以显示图10中的(b)所示的图像预览界面。图10中的(b)所示的图像预览界面中的预览图像1002可以为上述图像c。响应于用户对拍摄键1003的点击操作,相册键1004对应的图标上所显示的照片由图10中的(a)所示的小女孩变成了图10中的(b)所示的预览图像1002的缩小的照片。
可以理解,用户使用手机拍照的过程中,可能会因为摄像头的光学抖动或者用户操作所产生的抖动,而影响拍摄的图像质量。例如,手机可能会拍摄得到图11所示的图像1101。为了提升拍摄的图像质量,手机可以对长焦摄像头采用调整后的曝光参数采集的第一预览图像,进行防抖处理。也就是说,上述图像c是对长焦摄像头采用调整后的曝光参数采集的第一预览图像,进行防抖处理获得的图像。例如,图11所示的图像1101是进行防抖处理前的图像,而预览图像1002是进行防抖处理后的图像。相比于图11所示的图像1101,预览图像1002的清晰度更高,图像质量更好。
例如,上述防抖处理可以包括光学防抖(optical image stabilization,OIS)和电子防抖(electronic image stabilization,EIS)。OIS是快门时间(即曝光时间)内的防抖,用于稳定摄像头,OIS模块集成在摄像头内。EIS是通过手机中的EIS传感器实现的,用于拍摄运动中的拍摄对象时,减少多帧模糊现象出现的可能性。
S310、响应于用户的拍照操作,手机保存图像d。该图像d是长焦摄像头采用调整前的曝光参数所拍摄的。
具体的,该图像d是基于长焦摄像头采集的图像a获取的。其中,本申请实施例中的图像d是第四图像。示例性的,该拍照操作可以是用户对图4中的(c)所示的拍摄键407的点击操作(如单击操作)。或者,该拍照操作还可以是手机执行S302显示图像a(即预览图像)时,接收到的语音命令,该语音命令用于触发手机拍照。例如,该语音命令可以为“拍照”、“请拍照”或者“321”等语音信息。其中,手机执行S310所保存的图像d可以为图6所示的图像601。
本申请实施例提供一种拍摄图像的方法,基于主摄像头的进光量大于长焦摄像头的进光量的特点,手机的长焦摄像头采集图像时,可以将主摄像头作为辅助摄像头。具体的,手机可以借助于主摄像头的进光量较大的优势,从长焦摄像头采集的图像a中检测到预设对象的位置(即第二区域)。其中,图像a的图像质量较差,无法从该图像a中清楚的分辨出预设对象的原因在于:该预设对象在图像a中的位置(如第二区域)的曝光值低。因此,手机可以检测并调整长焦摄像头的曝光参数,以提升上述曝光值。这样,便可以提升长焦摄像头拍摄得到的图像的图像质量。如此,提升曝光值之后,长焦摄像头便可以拍摄得到图像质量较高的图像(如图像c)。
综上所述,手机采用长焦摄像头作为预览摄像头拍摄图像时,可以借助于其他摄像头(称为辅助摄像头,如主摄像头)相比于预览摄像头进光量较大的优势,控制辅助摄像头与预览摄像头协同工作,以提升预览摄像头拍摄得到的图像的图像质量。也就是说,本申请的方法中,手机可以利用各个摄像头的优势,控制多个摄像头协同工作,以提升拍摄得到的图像的图像质量。
由上述实施例可知:第一图像(如图像a)中预设对象所在位置(如第二区域)的曝光值低,会影响第一图像的图像质量。所以,本申请实施例中,可以调高上述曝光参数,以提升曝光值。但是,如果图像的曝光值过高,则可能会因为图像曝光过度而影响图像质量。也就是说,图像的曝光值过低或者过高,都会影响图像的图像质量。
基于此,可选的,在另一些实施例中,为了避免第一图像(如图像a)的第二区域的曝光值过高而影响图像质量。在上述S305之后,上述S306之前,上述拍摄图像的方法还包括S306′。S306′:手机判断第二区域的曝光值是否小于第二曝光阈值。该第二曝光阈值大于上述第一曝光阈值。
在S306′之后,如果第二区域的曝光值小于第二曝光阈值,则表示第一图像(如图像a)中预设对象的图像没有过度曝光。在这种情况下,手机可以执行S306,判断第二区域的曝光值是否小于第一曝光阈值。
在S306′之后,如果第二区域的曝光值大于或等于第二曝光阈值,则表示第一图像(如图像a)中预设对象的图像过度曝光,该图像a中预设对象的图像对用户而言较为模糊,用户无法从图像a中检测到预设对象。在这种情况下,手机可以调整长焦摄像头的曝光参数,以降低上述曝光值。具体的,手机可以执行S307′。S307′:手机调整长焦摄像头的曝光参数,以降低长焦摄像头拍摄得到的预设对象的图像的曝光值。在S307′之后,本申请实施例的方法还包括S308-S310。其中,手机执行S307′以降低图像的曝光值的方法,可以参考本申请实施例中S307中“手机调整曝光参数以提升曝光值”的相关介绍,这里不予赘述。
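S306′、S306与S307/S307′/S310之间的判断关系,可以用如下Python代码示意(函数名与返回值标识均为本文假设):

```python
def exposure_action(ev, first_threshold, second_threshold):
    """根据第二区域的曝光值决定后续步骤。第二曝光阈值大于第一曝光阈值。

    返回 "decrease" 表示过曝、需降低曝光值(对应S307′);
    返回 "increase" 表示欠曝、需提升曝光值(对应S307);
    返回 "keep" 表示曝光适当、不调整(对应S310)。
    """
    assert second_threshold > first_threshold
    if ev >= second_threshold:
        return "decrease"   # 预设对象的图像过度曝光
    if ev < first_threshold:
        return "increase"   # 预设对象的图像欠曝、较为模糊
    return "keep"           # 曝光值位于两个阈值之间
```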
本申请实施例中,如果一个摄像头采集的图像中预设对象所在的图像区域(如第二区域)的曝光值较大,则可能会导致图像过度曝光,使得用户无法从该图像中检测到预设对象。针对这种情况,本申请实施例中,手机可以调整摄像头的曝光参数,以降低上述图像的曝光值。这样,可以提升拍摄得到的图像的图像质量。
在一些实施例中,手机检测到图像b的第一区域内包括预设对象的图像之后,如果该预设对象是静止的,手机才会执行S305-S310。如果该预设对象是运动的,手机可以不执行S305-S310。如果该预设对象是运动的,手机可以按照常规方案拍摄图像。
示例性的,手机执行S303,主摄像头可采集图像b。手机可以根据主摄像头采集的多个图像b中、预设对象的图像的位置,判断预设对象是静止或者运动的。例如,如果手机间隔第一预设时长(如10秒、5秒或者3秒)所采集的两帧图像b中、预设对象的图像的位置变化(如位置移动的距离)大于预设距离阈值,手机则可以确定预设对象是运动的。如果手机间隔第一预设时长所采集的两帧图像b中、预设对象的图像的位置变化小于或等于预设距离阈值,手机则可以确定预设对象是静止的。
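根据间隔第一预设时长采集的两帧图像b中预设对象位置的变化判断其静止或运动,可以用如下Python代码示意。这里假设以预设对象图像区域的中心坐标表示其位置,函数名为本文假设:

```python
import math


def is_moving(pos1, pos2, dist_threshold):
    """根据两帧图像b中预设对象位置的位移判断运动状态。

    pos1/pos2 为两帧中预设对象图像中心的像素坐标;
    位移大于预设距离阈值判为运动,否则判为静止。
    """
    dx, dy = pos2[0] - pos1[0], pos2[1] - pos1[1]
    return math.hypot(dx, dy) > dist_threshold
```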
本申请实施例中,预设对象静止的情况下,手机执行S307所调整的曝光参数可以包括:曝光时间;或者,曝光时间和ISO感光度。其中,在预设对象静止的情况下,手机调整曝光参数的具体方法,可以参考以下实施例中的相关描述,本实施例这里不予赘述。
需要说明的是,手机根据摄像头采集的图像,判断该图像中的预设对象静止或者运动的方法,包括但不限于上述方法;其他方法可以参考常规技术中的相关方法,本实施例这里不予赘述。
在一些实施例中,手机检测到图像b的第一区域内包括预设对象的图像之后,如果该预设对象是运动的,手机才会执行S305-S310。如果该预设对象是静止的,手机可以不执行S305-S310。如果该预设对象是静止的,手机可以按照常规方案拍摄图像。
本申请实施例中,预设对象运动的情况下,手机执行S307所调整的曝光参数可以包括:拍照帧数;或者,拍照帧数和ISO感光度。其中,在预设对象运动的情况下,手机调整曝光参数的具体方法,可以参考以下实施例中的相关描述,本实施例这里不予赘述。
需要说明的是,手机判断上述预设对象静止或运动的具体方法,可以参考上述实施例中的详细描述,本实施例这里不予赘述。
在另一些实施例中,手机检测到图像b的第一区域内包括预设对象的图像之后,无论预设对象是静止或者运动的,手机都可以执行S305-S310。但是,预设对象静止的情况下手机调整的曝光参数,与预设对象运动的情况下手机调整的曝光参数不同。例如,预设对象运动的情况下,手机执行S307所调整的曝光参数除了曝光时间、ISO外,还可以包括拍照帧数。预设对象静止的情况下,手机执行S307所调整的曝光参数可以包括曝光时间。具体的,如图12所示,在S306之后,如果第二区域的曝光值小于第一曝光阈值,本申请实施例的方法还包括S1201;S307可以包括S307a和S307b。
S1201、手机判断预设对象静止或运动。
其中,手机判断上述预设对象静止或运动的具体方法,可以参考上述实施例中的详细描述,本实施例这里不予赘述。
具体的,S1201之后,如果预设对象静止,手机可以执行S307a;如果预设对象运动,手机可以执行S307b。
S307a、手机调整长焦摄像头的曝光时间(即曝光参数),使第二区域的曝光值等于或大于第一曝光阈值。
由上述术语介绍中的描述可知:本申请实施例中,为了提升预览摄像头的拍摄的图像质量,可以调整曝光时间、拍照帧数或ISO感光度等至少一个曝光参数,以实现更新曝光值的目的。并且,曝光时间越长,曝光值越大;拍照帧数越大,曝光值越大;ISO感光度越高,曝光值越大。由此可见,“调大曝光时间”、“调大拍照帧数”和“调高ISO感光度”中的任一个操作,都可以达到提升上述曝光值的目的。
但是,摄像头在拍摄静止的物体(如上述预设对象)时,调整拍照帧数对图像的曝光值的影响不会很大,甚至可以忽略。在预设对象静止的情况下,长焦摄像头的曝光时间、拍照帧数和ISO感光度中,影响上述曝光值的主要因素为曝光时间。因此,本申请实施例中,在预设对象静止的情况下,可以调整长焦摄像头的曝光时间,以达到提升曝光值的目的。
例如,假设上述第二区域的曝光值为表1所示的序号3对应的曝光值0;此时,长焦摄像头的曝光时间为T2,拍照帧数为F2,ISO感光度为ISO 2。上述第一曝光阈值为1。那么,手机可以将曝光时间调整为T3;如此,曝光值可以为表1所示的序号7对应的曝光值1。
当然,在预设对象静止的情况下,长焦摄像头的ISO感光度也会对曝光值产生一定的影响。可选的,在预设对象静止的情况下,手机不仅可以调整长焦摄像头的曝光时间以提升上述曝光值;还可以调整长焦摄像头的ISO感光度以提升上述曝光值。也就是说,预设对象静止的情况下,S307中所述的曝光参数可以包括曝光时间和ISO感光度。
例如,假设上述第二区域的曝光值为表1所示的序号1对应的曝光值-1;此时, 长焦摄像头的曝光时间为T1,拍照帧数为F2,ISO感光度为ISO 1。上述第一曝光阈值为2。那么,手机可以将曝光时间调整为T4,ISO感光度为ISO 2;如此,曝光值可以为表1所示的序号9对应的曝光值2。
由上述实施例可知:OIS是快门时间(即曝光时间)内的防抖,用于稳定摄像头。而EIS用于拍摄运动中的拍摄对象时,减少多帧模糊现象出现的可能性。因此,在预设对象静止的情况下,手机可以对长焦摄像头采集的预览图像进行OIS防抖,不需要对长焦摄像头采集的预览图像进行EIS防抖。换言之,本申请实施例中,在预设对象静止的情况下,手机响应于用户的拍照操作,对长焦摄像头采集的预览图像进行的防抖操作包括OIS防抖。
S307b、手机调整长焦摄像头的拍照帧数(即曝光参数),使第二区域的曝光值等于或大于第一曝光阈值。
其中,摄像头在拍摄运动的物体(如上述预设对象)时,调整曝光时间对图像的曝光值的影响不会很大,甚至可以忽略。在预设对象运动的情况下,长焦摄像头的曝光时间、拍照帧数和ISO感光度中,影响上述曝光值的主要因素为拍照帧数。因此,本申请实施例中,在预设对象运动的情况下,可以调整长焦摄像头的拍照帧数,以达到提升曝光值的目的。
例如,假设上述第二区域的曝光值为表1所示的序号2对应的曝光值-1;此时,长焦摄像头的曝光时间为T2,拍照帧数为F1,ISO感光度为ISO 3。上述第一曝光阈值为1。那么,手机可以将拍照帧数调整为F3;如此,曝光值可以为表1所示的序号6对应的曝光值1。
当然,在预设对象运动的情况下,长焦摄像头的ISO感光度也会对曝光值产生一定的影响。可选的,在预设对象运动的情况下,手机不仅可以调整长焦摄像头的拍照帧数以提升上述曝光值;还可以调整长焦摄像头的ISO感光度以提升上述曝光值。也就是说,预设对象运动的情况下,S307中所述的曝光参数可以包括拍照帧数和ISO感光度。
例如,假设上述第二区域的曝光值为表1所示的序号5对应的曝光值0;此时,长焦摄像头的曝光时间为T3,拍照帧数为F2,ISO感光度为ISO 2。上述第一曝光阈值为3。那么,手机可以将拍照帧数调整为F5,ISO感光度为ISO 3;如此,曝光值可以为表1所示的序号12对应的曝光值3。
在预设对象运动的情况下,手机响应于用户的拍照操作,对长焦摄像头采集的第一预览图像进行的防抖操作可以包括OIS防抖和EIS防抖。这样,可以提升长焦摄像头拍摄运动物体的图像质量。可以理解,预设对象运动的情况下,手机可以融合(或者称为合成)长焦摄像头采集的多帧第一预览图像,得到上述图像c。上述EIS防抖可以用于手机融合多帧第一预览图像时,减少多帧模糊现象。即手机可以对上述多帧第一预览图像进行EIS防抖融合。
示例性的,本申请实施例中,手机可以采用神经网络融合算法,对上述多帧第一预览图像进行图像融合得到第三图像。当然,本申请实施例中,手机对多帧第一预览图像进行图像融合所采用的算法包括但不限于神经网络融合算法。例如,手机还可以采用多帧第一预览图像的加权平均算法,对上述多帧第一预览图像进行图像融合得到 第三图像。本申请实施例中,手机进行多帧图像的图像融合的其他方式,本实施例这里不予赘述。
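正文提到的多帧第一预览图像加权平均融合,可以用如下Python代码做一个极简示意。每帧以一维像素值列表表示,权重缺省为等权;这只是加权平均方案的草图,并非神经网络融合算法的实现:

```python
def fuse_frames_weighted(frames, weights=None):
    """对多帧第一预览图像做逐像素加权平均融合,得到第三图像的像素值。

    frames 为若干帧等长的像素值列表;weights 缺省时各帧等权。
    """
    n = len(frames)
    if weights is None:
        weights = [1.0 / n] * n
    total = sum(weights)
    # 对每个像素位置,按权重对各帧取平均
    return [sum(w * f[i] for f, w in zip(frames, weights)) / total
            for i in range(len(frames[0]))]
```

实际实现中,融合前还需先完成正文所述的EIS防抖对齐,否则多帧平均会引入模糊。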
在另一些实施例中,手机检测到变倍操作(即S301)之后,可以执行S302、S303和S304。手机执行S304检测到图像b的第一区域内包括预设对象的图像之后,可以执行S1201,判断预设对象静止或运动。S1201之后,手机可以执行S305,确定第二区域的曝光值。S305之后,手机可以执行S306,判断第二区域的曝光值是否小于第一曝光阈值。S306之后,在第二区域的曝光值小于第一曝光阈值的情况下,结合S1201的判断结果,如果预设对象是静止的,手机可执行S307a,如果预设对象是运动的,手机可执行S307b。S307a或S307b之后,手机可执行S308-S309。S306之后,如果第二区域的曝光值大于或等于第一曝光阈值,手机可执行S310。
本申请实施例中,预设对象静止的情况下手机调整的曝光参数,与预设对象运动的情况下手机调整的曝光参数不同。也就是说,手机可以根据拍摄对象(即预设对象)的运动状态(如静止或者运动),针对性的调整不同的曝光参数以提升曝光值。这样,有利于提升长焦摄像头拍摄的图像的图像质量。
在另一些实施例中,可能会存在预设对象是静止的,但是图像b(即第二图像)中的其他拍摄对象是运动的情况。
例如,假设预设对象是人脸,用户的头部是静止的,而用户头部以下的身体是运动的。这样,虽然预设对象是静止的,但是图像b中的其他拍摄对象(如用户头部以下的身体)是运动的。
又例如,假设预设对象是人脸,用户坐在车上,用户的头部是静止的,而车窗外的风景是变化的。这样,虽然预设对象是静止的,但是图像b中其他拍摄对象(如人脸之外背景)是运动(即变化)的。
针对上述图像b中部分拍摄对象(如预设对象)是静止的,而另一部分拍摄对象(如预设对象之外的背景)是运动(即变化)的情况,本申请另一实施例中,手机可以判断主摄像头采集的图像(如图像b)中是否存在运动的拍摄对象。如果图像b中不存在运动的拍摄对象,手机则可以执行S307a。如果图像b中存在运动的拍摄对象,手机则可以执行S307b。
示例性的,手机可以通过以下实现方式(i)和实现方式(ii),判断主摄像头采集的图像中是否存在运动的拍摄对象。
实现方式(i):
在实现方式(i)的一种情况下,手机可以对比主摄像头采集的多帧图像(如两帧图像)中的对应像素点,统计两帧图像中、存在差异的对应像素点的数量。如果统计得到的数量大于或等于第一预设数量阈值,则表示主摄像头采集的图像中存在运动的拍摄对象。如果统计得到的数量小于第一预设数量阈值,则表示主摄像头采集的图像中不存在运动的拍摄对象。
在实现方式(i)的另一种情况下,手机可以对比上述两帧图像中的对应像素点,计算两帧图像中的对应像素点的差异值(例如,差异值初始值为0,若两帧图像中的对应像素点不同,则差异值加1,比较完上述两帧图像中的对应像素点,最终所得的差异值可以认为是两帧图像中存在差异的像素点的数量);然后,手机可以统计差异值大于或等于预设差异阈值的像素点的数量。如果差异值大于或等于预设差异阈值的像素点的数量大于第二预设数量阈值,则表示主摄像头采集的图像中存在运动的拍摄对象。如果统计得到的数量小于第二预设数量阈值,则表示主摄像头采集的图像中不存在运动的拍摄对象。
可选的,由于两帧图像拍摄间隔非常短,两帧图像中,一帧图像的第i行第j列的像素点与另一帧图像的第i行第j列的像素点对应。i和j均为正整数。
可选的,若预设对象处于快速运动状态,两帧图像中,一帧图像的第i行第j列的像素点与另一帧图像的第m行第n列的像素点对应。i、j、m和n均为正整数。确定对应像素点的方法采用现有技术中的方法皆可实现,在此不再进行赘述。
实现方式(ii):手机通过运动检测算法或者运动矢量算法,判断主摄像头采集的图像中的拍摄对象是静止或者运动的。
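实现方式(i)中“统计存在差异的对应像素点数量”的判断,可以用如下Python代码示意。两帧图像按行展开为等长的像素值列表,差异阈值与数量阈值均为本文假设的参数:

```python
def has_moving_object(frame1, frame2, diff_threshold, count_threshold):
    """对比两帧图像的对应像素点,判断图像中是否存在运动的拍摄对象。

    frame1/frame2 为等长的像素值列表,第 k 个元素对应同一行列位置的像素;
    差异达到 diff_threshold 的像素点数量达到 count_threshold 时判为存在运动对象。
    """
    diff_count = sum(1 for a, b in zip(frame1, frame2)
                     if abs(a - b) >= diff_threshold)
    return diff_count >= count_threshold
```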
由上述实施例可知:如果图像b中存在运动的拍摄对象,手机则可以执行S307b。在本实施例中,图像b中存在运动的拍摄对象,可能会存在以下两种情况。情况(1):图像b中的所有拍摄对象都是运动的。情况(2)图像b中的部分拍摄对象是运动的,而另一部分拍摄对象是静止的。
采用上述实现方式(i)和实现方式(ii),手机不仅可以判断出图像b中存在运动的拍摄对象,还可以判断出图像b中哪些拍摄对象是运动的,哪些拍摄对象是静止的。例如,以上述实现方式(i)为例,差异值大于预设差异阈值的像素点对应图像区域(称为运动区域)的拍摄对象是运动的;而差异值小于或等于预设差异阈值的像素点对应图像区域(称为静止区域)的拍摄对象是静止的。
本申请实施例中,手机执行S309,基于多帧第一预览图像获取第三图像时,针对静止区域的图像,只需要使用多帧第一预览图像中的任一帧图像中静止区域的图像即可;而对于运动区域的图像而言,则可以对多帧第一预览图像运动区域的图像采用图像融合的算法进行融合。
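上述“静止区域取任一帧图像、运动区域做多帧融合”的处理,可以用如下Python代码示意。这里以逐像素的运动掩码区分运动区域与静止区域,融合简化为多帧平均,均为本文假设的简化模型:

```python
def fuse_by_region(frames, motion_mask):
    """按静止区域/运动区域分别处理多帧第一预览图像,得到第三图像的像素值。

    motion_mask[k] 为 True 表示第 k 个像素属于运动区域。
    """
    n = len(frames)
    out = []
    for k, moving in enumerate(motion_mask):
        if moving:
            out.append(sum(f[k] for f in frames) / n)  # 运动区域:多帧融合
        else:
            out.append(frames[0][k])                   # 静止区域:直接取任一帧
    return out
```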
需要说明的是,以预设对象是人脸为例,本申请实施例中,手机识别预览图像中的各个区域是静止区域或运动区域时,可以采用如下方式划分各个区域,然后识别各个区域是静止区域或运动区域。例如,人脸(即预设对象)所在的图像区域单独划分为一个区域;预览图像中除了人脸所在图像区域之外的其他区域作为一个区域,该区域可以包括用户头部以下的身体的图像,以及用户身体之外的背景图像等。又例如,人脸(即预设对象)所在的图像区域单独划分为一个区域;预览图像中用户头部以下的身体所在的图像区域单独作为一个区域,用户身体之外的背景所在的图像区域单独作为一个区域。
又例如,本申请实施例中,还可以按照人体结构(如头部、颈部、躯干和四肢等),将用户身体所在的图像区域划分为多个区域。如人脸(即预设对象)所在的图像区域单独划分为一个区域;预览图像中用户的躯干所在的图像区域单独作为一个区域,用户的左手所在的图像区域单独作为一个区域,用户的右手所在的图像区域单独作为一个区域,用户的左腿所在的图像区域单独作为一个区域,用户的右腿所在的图像区域单独作为一个区域。同样地,对用户身体之外的背景图像的划分,也可以进行多区域划分,如用户全部身体左侧的背景所在的图像区域、用户全部身体右侧的背景所在的 图像区域、用户头部上面的背景所在的图像区域和用户脚部以下的背景所在的图像区域。
需要说明的是,手机识别预览图像中的各个区域是静止区域或运动区域时,将预览图像划分区域的方式包括但不限于上述示例中的方式,其他方式本申请实施例这里不予赘述。
在一些实施例中,手机接收到变倍操作后,主摄像头可以先不采集图像。响应于该变倍操作,手机的环境光传感器检测环境光亮度。手机可以确定环境光亮度值X(即上述环境光亮度的具体数值),如果环境光亮度值X低于第一亮度阈值,手机则可以进入智能拍摄模式。在该智能拍摄模式下,手机的主摄像头可采集图像(如图像b)。其中,上述环境光亮度值X是第一环境光亮度值或者第三环境光亮度值。
可以理解,如果环境光亮度值X较高(如环境光亮度值X高于或等于第一亮度阈值);那么,即使长焦摄像头的进光量小,也不会影响拍摄得到的图像的图像质量。本申请实施例中,在暗光场景(即环境光亮度值X低于第一亮度阈值的场景)下,响应于上述变倍操作,手机才会进入智能拍摄模式。其中,在智能拍摄模式下,手机的主摄像头可协助长焦摄像头拍摄图像,以提升长焦摄像头拍摄得到的图像的图像质量。如果环境光亮度较高,手机则不会执行本申请实施例的方法;手机可以按照常规技术中的方法拍摄图像。这样,可以减少手机的功耗,并且可以提升手机拍照的响应速度。
在另一些实施例中,手机接收到变倍操作后,主摄像头可以先不采集图像。响应于该变倍操作,手机的环境光传感器检测环境光亮度。手机可以确定环境光亮度值X(即上述环境光亮度的具体数值),如果环境光亮度值X低于第一亮度阈值,手机则可以请求用户确认是否进入智能拍摄模式。如果用户选择进入智能拍摄模式,手机的主摄像头可采集图像,协助长焦摄像头拍摄图像。
在另一些实施例中,手机接收到变倍操作后,主摄像头可以先不采集图像。响应于该变倍操作,手机可以请求用户确认是否进入智能拍摄模式。如果用户选择进入智能拍摄模式,手机的主摄像头可采集图像,协助长焦摄像头拍摄图像。
具体的,图3所示的S303或图12所示的S303可以替换为S1301-S1303。例如,如图13所示,图12所示的S303可以替换为S1301-S1303。
S1301、响应于上述变倍操作,手机显示第一用户界面。该第一用户界面用于请求用户确认是否采用主摄像头协助长焦摄像头拍摄图像。
其中,在智能拍摄模式下,手机的主摄像头可协助长焦摄像头拍摄图像,以提升长焦摄像头拍摄得到的图像的图像质量。也就是说,上述第一用户界面可用于请求用户确认是否进入智能拍摄模式。
例如,手机可显示图4中的(b)所示的图像预览界面。响应于用户在图4中的(b)所示的图像预览界面输入的变倍操作,手机可显示图14中的(a)所示的第一用户界面1401。该第一用户界面1401包括指示信息“请确认是否进入智能拍摄模式?”1402,以及提示信息“在智能拍摄模式下,手机可启动主摄像头协助拍照,可提升图像质量!”1403。该第一用户界面1401还包括“是”按钮和“否”按钮。“是”按钮用于指示手机进入智能拍摄模式,“否”按钮用于指示手机不用进入智能拍摄模式。
需要说明的是,响应于上述变倍操作,手机可以先不启动主摄像头,而是显示第一用户界面。如果用户在第一用户界面选择进入智能拍摄模式,手机则可以启动主摄像头,主摄像头便可以采集图像。此外,响应于上述变倍操作,手机可以启动长焦摄像头,长焦摄像头可以采集图像(如图像a),手机显示长焦摄像头采集的图像a(即预览图像),并在该预览图像上显示上述第一用户界面。例如,响应于用户在图4中的(b)所示的图像预览界面输入的变倍操作,手机可显示图14中的(b)所示的界面1404。该界面1404中,长焦摄像头采集的图像1405显示在底层,第一用户界面1406显示在图像1405的上层。
S1302、手机检测到用户对第一用户界面的第一操作。
S1303、响应于上述第一操作,手机的主摄像头采集图像b。
其中,上述第一操作用于触发手机进入智能拍摄模式。例如,该第一操作可以是用户对图14中的(a)或图14中的(b)所示的“是”按钮的点击操作(如单击操作)。或者,该第一操作还可以是用户发出的语音命令,如“进入智能拍摄模式”、“是”或者“进入”等语音信息。或者,该第一操作还可以是用户在第一用户界面输入的预设手势,如S形手势或L形手势等任一手势。
响应于用户在第一用户界面的第一操作,手机的主摄像头可采集图像b,并执行S304-S310。其中,手机执行S308,可显示图10中的(a)所示的图像预览界面。例如,响应于用户对图14中的(a)或图14中的(b)所示的“是”按钮的点击操作(即第一操作),手机可显示图10中的(a)所示的图像预览界面。
当然,用户也可能在第一用户界面选择不进入智能拍摄模式。即手机可接收用户在第一用户界面的第二操作。例如,该第二操作可以是用户对图14中的(a)或图14中的(b)所示的“否”按钮的点击操作(如单击操作)。或者,该第二操作还可以是用户发出的语音命令,如“不进入智能拍摄模式”、“否”或者“不进入”等语音信息。响应于该第二操作,手机不需要进入智能拍摄模式,手机可以按照常规技术中的方法拍摄图像。例如,响应于用户对图14中的(a)或图14中的(b)所示的“否”按钮的点击操作(即第二操作),手机可显示图4中的(c)所示的图像预览界面。
可选的,上述第一用户界面还可以提供“下次不再提示我”等类似内容的选项。在这种情况下,如果用户选择了“下次不再提示我”的选项,手机可以根据上一次打开拍照界面的操作进行相同的操作,而不再显示上述提示框;如果用户不选择“下次不再提示我”的选项,下次可以继续弹出该提示框提示用户。也可以在用户不选择“下次不再提示我”的选项超过一定次数后,手机自动按照上一次打开拍照界面的操作进行相同的操作。例如,手机的第一用户界面提供提示信息1402的同时也提供“下次不再提示我”的选项,用户每次都选择进入智能拍摄模式,但都不勾选“下次不再提示我”的选项;在超过5次或者10次后,手机不再提供提示信息1402,而直接进入智能拍摄模式。
本申请实施例中,手机可以在第一用户界面请求用户确认是否进入智能拍摄模式;如果用户选择进入智能拍摄模式,手机才会启动主摄像头协助长焦摄像头拍摄图像。也就是说,手机可以按照用户的意愿,启动主摄像头协助长焦摄像头拍摄图像。这样,可以提升手机与用户交互过程中的用户体验。
可选的,手机还可以提供智能拍摄模式下的图像效果预览功能。也就是说,手机可以为用户展示智能拍摄模式下的效果预览图像,以供用户根据效果预览图像选择是否进入智能拍摄模式。具体的,本申请实施例的方法还包括S1401-S1403。
S1401、手机检测到用户对第一用户界面的第三操作。
其中,该第三操作用于触发手机显示第一摄像头采集的第一预览图像(即智能拍摄模式下的效果预览图像)。例如,如图14中的(a)所示,第一用户界面1401还包括第一控件,如“智能拍摄模式的效果预览”按钮1407。如图14中的(b)所示,第一用户界面1406还包括第一控件,如“智能拍摄模式的效果预览”按钮1408。该第三操作可以是用户对上述第一控件(如“智能拍摄模式的效果预览”按钮)的点击操作(如单击操作、双击操作、三击操作等)。或者,上述第三操作可以是用户输入的语音命令,如“智能拍摄模式预览效果”、“预览效果”、“图像预览”或者“效果预览”等语音信息。或者,上述第三操作还可以是用户输入的预设手势,如打勾“√”手势、画圆圈手势、双指并拢、双指画“Z”形、三指下滑等手势,对此手势本申请不进行限定,在此不再进行赘述。
S1402、响应于该第三操作,手机显示第二用户界面。
其中,上述第二用户界面包括手机进入智能拍摄模式前长焦摄像头采集的预览图像(如上述图像a)。可选的,上述第二用户界面还可以包括上述长焦摄像头采用调整后的曝光参数采集的第一预览图像,即手机进入智能拍摄模式后长焦摄像头采集的预览图像(如S308中所述的预览图像)。也就是说,响应于该第三操作,手机可以暂时进入智能拍摄模式以得到S308中所述的预览图像。这样,有助于用户对比智能拍摄模式下的预览图像和非智能模式下的预览图像,以根据这两个预览图像的图像效果决定是否控制手机进入智能拍摄模式。
例如,响应于用户对图14中的(a)所示的“智能拍摄模式的效果预览”按钮1407(即第一控件)的点击操作(如单击操作),手机可显示图15A所示的第二用户界面1501。该第二用户界面1501可以包括:指示信息“请根据以下图像效果,确认是否进入智能拍摄模式?”1502、非智能拍摄模式的预览图像1503,智能拍摄模式的预览图像1504(即上述第一预览图像)。其中,非智能拍摄模式的预览图像1503是手机进入智能拍摄模式前长焦摄像头采集的预览图像(如上述图像a)。智能拍摄模式的预览图像1504是手机进入智能拍摄模式后长焦摄像头采集的预览图像(如S308中所述的预览图像)。第二用户界面1501还包括“是”按钮和“否”按钮。“是”按钮用于指示手机进入智能拍摄模式,“否”按钮用于指示手机不用进入智能拍摄模式。
S1403、响应于用户在第二用户界面的第四操作,手机的主摄像头采集图像b。
其中,上述第四操作用于触发手机进入智能拍摄模式。例如,该第四操作可以是用户对图15A所示的“是”按钮的点击操作(如单击操作)。或者,该第四操作还可以是用户发出的语音命令,如“进入智能拍摄模式”、“是”或者“进入”等语音信息。
响应于用户在第二用户界面的第四操作,手机的主摄像头可采集图像b,并执行S304-S310。其中,手机执行S308,可显示图10中的(a)所示的图像预览界面。例如,响应于用户对图15A所示的“是”按钮的点击操作(即第四操作),手机可显示图10中的(a)所示的图像预览界面。
当然,用户也可能在第二用户界面选择不进入智能拍摄模式。即手机可接收用户在第二用户界面的第五操作。例如,该第五操作可以是用户对图15A所示的“否”按钮的点击操作(如单击操作)。或者,该第五操作还可以是用户发出的语音命令,如“不进入智能拍摄模式”、“否”或者“不进入”等语音信息。响应于该第五操作,手机不需要进入智能拍摄模式,手机可以按照常规技术中的方法拍摄图像。例如,响应于用户对图15A所示的“否”按钮的点击操作(即第五操作),手机可显示图4中的(c)所示的图像预览界面。
本申请实施例中,响应于用户在第一用户界面的第三操作,手机可显示第二用户界面。该第二用户界面包括:手机进入智能拍摄模式前长焦摄像头采集的预览图像(如上述图像a);以及手机进入智能拍摄模式后长焦摄像头采集的预览图像(如S308中所述的预览图像)。也就是说,手机可以为用户提供非智能拍摄模式下的图像效果预览和智能拍摄模式下的图像效果预览功能。这样,可以便于用户对比非智能拍摄模式的预览图像和智能拍摄模式的预览图像,根据预览图像的图像效果决定是否控制手机进入智能拍摄模式。
在另一些实施例中,手机可以在上述第一用户界面显示:手机进入智能拍摄模式前长焦摄像头采集的预览图像(如上述图像a);以及手机进入智能拍摄模式后长焦摄像头采集的预览图像(如S308中所述的预览图像)。
例如,手机执行S1301,可以显示图15B中的(a)所示的第一用户界面1505。该第一用户界面1505不仅包括指示信息“请确认是否进入智能拍摄模式?”、提示信息“在智能拍摄模式下,手机可启动主摄像头协助拍照,可提升图像质量!”、“是”按钮和“否”按钮,还包括非智能拍摄模式的预览图像1506和智能拍摄模式的预览图像1507。
又例如,手机执行1301,可以显示图15B中的(b)所示的第一用户界面1508。该第一用户界面1508不仅包括指示信息“请确认是否进入智能拍摄模式?”、提示信息“在智能拍摄模式下,手机可启动主摄像头协助拍照,可提升图像质量!”、“是”按钮和“否”按钮,还包括非智能拍摄模式的预览图像1509和智能拍摄模式的预览图像1510。
本实施例中,手机响应于变倍操作,可以直接在第一用户界面显示手机进入智能拍摄模式前长焦摄像头采集的预览图像(如上述图像a),以及手机进入智能拍摄模式后长焦摄像头采集的预览图像(如S308中所述的预览图像)。也就是说,手机可以直接在第一用户界面为用户提供非智能拍摄模式下的图像效果预览和智能拍摄模式下的图像效果预览功能。这样,可以便于用户直接在第一用户界面对比非智能拍摄模式的预览图像和智能拍摄模式的预览图像,根据预览图像的图像效果决定是否控制手机进入智能拍摄模式。
在一些实施例中,手机中包括可见光摄像头和红外摄像头。其中,上述可见光摄像头也可以称为RGB摄像头。RGB摄像头只可以感知可见光,不能感知红外光。上述红外摄像头不仅可以感知可见光,还可以感知红外光。例如,上述红外光可以为890纳米(nm)-990nm的红外光。即红外摄像头可以感知波长为890nm-990nm的红外光。当然,不同的红外摄像头能够感知的红外光(即红外光的波长)可以不同。其中,上述可见光摄像头也可以称为普通波段的摄像头,该普通波段是可见光的波长所在的波段。
在暗光场景(如傍晚、深夜或者暗室内)下,可见光的强度较低。可见光摄像头无法感知到光线或者感知到的光线较弱,因此无法采集到预设对象的清晰图像。而红外摄像头可以感知视野范围内有温度的人或动物(即预设对象)发出的红外光,因此可以采集到预设对象的图像。
针对可见光摄像头和红外摄像头的上述特点,手机在暗光场景下,采用可见光摄像头作为预览摄像头(即第一摄像头)采集图像时,为了避免由于可见光较弱而影响图像质量,可以借助于红外摄像头能够感知红外光的优势,将红外摄像头作为辅助摄像头(即第二摄像头)协助可见光摄像头工作,以提升可见光摄像头拍摄得到的图像的图像质量。
具体的,本申请实施例提供一种拍摄图像的方法,该方法可以应用于包括可见光摄像头和红外摄像头的手机。如图16所示,该方法可以包括S1601-S1611。
S1601、手机检测到预设操作1。该预设操作1用于触发手机的可见光摄像头采集图像。
具体的,该预设操作1用于触发手机启动可见光摄像头,使可见光摄像头采集图像,然后显示可见光摄像头采集的图像。
S1602、响应于上述预设操作1,手机的可见光摄像头采集图像I,手机显示可见光摄像头采集的图像I。
示例性的,上述可见光摄像头可以是长焦摄像头、广角摄像头、主摄像头或黑白摄像头等任一摄像头。其中,用于触发手机启动不同可见光摄像头的预设操作1不同。例如,用于触发手机启动主摄像头的预设操作1可以是图4中的(a)所示的操作1,即用户启动“照相机”应用的操作。又例如,用于触发手机启动长焦摄像头的预设操作1可以是S301所述的变倍操作。又例如,用于触发手机启动广角摄像头的预设操作1可以是用户在“照相机”中开启全景拍摄模式的操作。又例如,用于触发手机启动黑白摄像头的预设操作1可以是用户在“照相机”中开启黑白拍摄模式的操作。其中,本申请实施例中的图像I是第一图像。
S1603、响应于预设操作1,手机的环境光传感器检测环境光亮度,手机确定第二环境光亮度值,并判断第二环境光亮度值是否低于第二亮度阈值。
例如,该第二亮度阈值可以低于上述第一亮度阈值。如该第二亮度阈值可以为深夜室外的环境光亮度,第一亮度阈值可以为傍晚室外的环境光亮度的具体数值。
可以理解,如果手机的环境光传感器采集的环境光亮度值(即第二环境光亮度值)高于或者等于第二亮度阈值,则表示环境光亮度较高,手机不需要进入智能拍摄模式启动红外摄像头协助可见光摄像头拍照。此时,手机则不进入智能拍摄模式。手机的可见光摄像头继续采集图像I,手机显示可见光摄像头采集的图像I,然后执行S1611。
如果第二环境光亮度值低于第二亮度阈值,则表示环境光亮度较低,可见光的强度较低,手机处于暗光场景中。这种情况下,可见光摄像头无法感知到光线或者感知到的光线较弱,因此无法采集到预设对象的清晰图像。此时,手机可以将红外摄像头作为辅助摄像头协助可见光摄像头工作,以提升可见光摄像头拍摄得到的图像的图像质量。具体的,如果环境光亮度低于第二亮度阈值,手机可以执行S1604。
S1604、手机的红外摄像头采集图像II。
其中,如果第二环境光亮度值低于第二亮度阈值,手机可以启动红外摄像头,红外摄像头可采集图像II。本申请实施例中的图像II是第二图像。
可选的,如果第二环境光亮度值低于第二亮度阈值,手机可以先不启动红外摄像头,而是显示第一用户界面,由用户选择是否进入智能拍摄模式,以启动红外摄像头协助可见光摄像头拍摄图像。响应于用户在第一用户界面的第一操作,手机可执行S1604。响应于用户在第一用户界面的第二操作,手机可执行S1611。其中,第一用户界面、第一操作和第二操作的详细描述,可以参考上述实施例中的相关介绍,这里不予赘述。
可选的,响应于用户在第一用户界面的第三操作,手机还可以显示第二用户界面。该第二用户界面包括:手机进入智能拍摄模式前可见光摄像头采集的预览图像(如上述图像I);以及手机进入智能拍摄模式后可见光摄像头采集的预览图像(如S1609中所述的预览图像)。响应于用户在该第二用户界面的第四操作,手机可执行S1604。响应于用户在第二用户界面的第五操作,手机可执行S1611。其中,第二用户界面、第四操作和第五操作的详细描述,可以参考上述实施例中的相关介绍,这里不予赘述。
S1605、手机检测到图像II的第一区域内包括预设对象的图像。该第一区域是图像II中、与可见光摄像头的视野范围对应的区域。
其中,S1605中“手机检测到图像II的第一区域内包括预设对象的图像”的方法,可以参考上述实施例所述的S304中“手机检测到图像b的第一区域内包括预设对象的图像”的方法,本实施例这里不予赘述。
S1606、手机确定第二区域的曝光值。第二区域是图像I中预设对象的图像所在的区域。
其中,S1606中“手机确定出图像I中的第二区域,并检测第二区域的曝光值”的方法,可以参考上述实施例所述的S305中“手机确定出图像a中的第二区域,并检测第二区域的曝光值”的方法,本实施例这里不予赘述。
S1607、手机判断第二区域的曝光值是否小于第一曝光阈值。
其中,S1607中“手机判断第二区域的曝光值是否小于第一曝光阈值”的方法,可以参考上述实施例对S306的详细描述,本实施例这里不予赘述。
具体的,如果第二区域的曝光值大于或者等于第一曝光阈值,则表示该图像I中预设对象的图像对用户而言清晰可见,用户可以从图像I中清楚的检测到预设对象的图像。在这种情况下,手机不需要更新第二区域的曝光值。具体的,手机可以执行S1611。
如果第二区域的曝光值小于第一曝光阈值,则表示该图像I中预设对象的图像对用户而言较为模糊,用户无法从图像I中检测到预设对象的图像。在这种情况下,手机可以调整可见光摄像头的曝光参数,以提升上述曝光值。具体的,手机可以执行S1608。
S1608、手机调整可见光摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值。
其中,S1608中“手机调整可见光摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值”的方法,可以参考上述实施例所述的S307中“手机调整长焦摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值”的方法,本实施例这里不予赘述。
在该实施例中,手机也可以根据拍摄对象(即预设对象)的运动状态(如静止或者运动),针对性的调整不同的曝光参数以提升曝光值。例如,预设对象运动的情况下,手机执行S1608所调整的曝光参数可以包括拍照帧数。预设对象静止的情况下,手机执行S1608所调整的曝光参数可以包括曝光时间。
具体的,在S1607之后,如果第二区域的曝光值小于第一曝光阈值,手机可以执行S1201。S1201之后,如果预设对象静止,手机可以执行S1608a;如果预设对象运动,手机可以执行S1608b。S1608a:手机调整可见光摄像头的曝光时间(即曝光参数),使第二区域的曝光值等于或者大于第一曝光阈值。S1608b:手机调整可见光摄像头的拍照帧数(即曝光参数),使第二区域的曝光值等于或者大于第一曝光阈值。其中,S1608a的具体实现方式,可以参考上述实施例对S307a的详细介绍;S1608b的具体实现方式,可以参考上述实施例对S307b的详细介绍,本实施例这里不予赘述。
S1609、手机的可见光摄像头采用调整后的曝光参数采集第一预览图像,手机显示该第一预览图像。
其中,S1609的具体实现方式,可以参考上述实施例对S308的详细介绍,本实施例这里不予赘述。
S1610、响应于用户的拍照操作,手机保存图像III。该图像III是可见光摄像头采用调整后的曝光参数所拍摄的。
具体的,上述图像III是基于可见光摄像头采用调整后的曝光参数采集的一帧或多帧第一预览图像获取的。
其中,本申请实施例中的图像III是第三图像。本实施例中S1610的具体实现方式,可以参考上述实施例对S309的详细介绍,本实施例这里不予赘述。
在该实施例中,在预设对象静止的情况下,手机响应于用户的拍照操作,对可见光摄像头采集的预览图像进行的防抖操作包括OIS防抖。在预设对象运动的情况下,手机响应于用户的拍照操作,对可见光摄像头采集的预览图像进行的防抖操作可以包括OIS防抖和EIS防抖。
S1611、响应于用户的拍照操作,手机保存图像IV。该图像IV是基于可见光摄像头采集的图像I获取的。
其中,本申请实施例中的图像IV是第四图像。本实施例中S1611的具体实现方式,可以参考上述实施例对S310的详细介绍,本实施例这里不予赘述。
本申请实施例提供一种拍摄图像的方法,基于红外摄像头具备感知可见光和红外光的能力,而可见光摄像头只具备感知可见光的能力、不具备感知红外光的能力的特点,手机在暗光场景下,可见光摄像头采集图像时,可以将红外摄像头作为辅助摄像头。具体的,手机可以借助于红外摄像头可以感知红外光的优势,从可见光摄像头采集的图像I中检测到预设对象的位置(即第二区域)。其中,图像I的图像质量较差,无法从该图像I中清楚的分辨出预设对象的原因在于:该预设对象在图像I中的位置(如第二区域)的曝光值低。因此,手机可以检测并调整可见光摄像头的曝光参数,以提升上述曝光值。这样,便可以提升可见光摄像头拍摄得到的图像的图像质量。如此,提升曝光值之后,可见光摄像头便可以拍摄得到图像质量较高的图像(如图像III)。
在一些实施例中,上述可见光摄像头是长焦摄像头,手机包括长焦摄像头、主摄像头和红外摄像头。在该实施例中,基于主摄像头的进光量大于长焦摄像头的进光量,红外摄像头具备感知可见光和红外光的能力,长焦摄像头具备感知可见光的能力、不具备感知红外光的能力的特点,手机采用长焦摄像头作为预览摄像头采集图像时,可根据环境光亮度的高低,选择主摄像头或者红外摄像头作为辅助摄像头协助长焦摄像头拍照。具体的,该方法可以包括S1601-S1602、S1701-S1703、S1604-S1611和S304-S310。
其中,在该实施例中,S1601-S1602中所述的预设操作1是变倍操作。该变倍操作的详细介绍,可以参考上述实施例的相关描述,本实施例这里不予赘述。
如图17所示,在S1601之后,本申请实施例的方法还可以包括S1701-S1703。
S1701、响应于变倍操作(即预设操作1),手机的环境光传感器检测环境光亮度,手机确定第二环境光亮度值,并判断第二环境光亮度值是否低于第一亮度阈值。
具体的,如果第二环境光亮度值高于或者等于第一亮度阈值,则表示环境光亮度较高;那么,即使长焦摄像头的进光量小,也不会影响拍摄得到的图像的图像质量。在这种情况下,手机不需要进入智能拍摄模式。因此,手机可以不进入智能拍摄模式,可见光摄像头采集图像I,手机显示可见光摄像头采集的图像I,然后执行S1611。
如果第二环境光亮度值低于第一亮度阈值,则表示环境光亮度较低。在这种情况下,手机可以进入智能拍摄模式,采用主摄像头或者红外摄像头协助长焦摄像头拍照。可以理解的是,在环境光亮度特别低的情况下,即使主摄像头的进光量大,也可能因为可见光弱而无法采集到预设对象的清晰图像。而红外光摄像头可以感知视野范围内有温度的人或动物(即预设对象)发出红外光,因此可以采集到预设对象的图像。因此,在第二环境光亮度值低于第一亮度阈值,但大于或者等于第二亮度阈值的情况下,手机可以采用主摄像头协助长焦摄像头拍照。在第二环境光亮度值低于第二亮度阈值的情况下,手机可以采用红外摄像头协助长焦摄像头拍照。其中,第二亮度阈值低于第一亮度阈值。例如,第二亮度阈值可以为深夜室外的环境光亮度值,第一亮度阈值可以为傍晚室外的环境光亮度值。如图17所示,在S1701之后,如果第二环境光亮度值低于第一亮度阈值,手机可执行S1702。
S1702、手机判断第二环境光亮度值是否低于第二亮度阈值。
具体的,如果第二环境光亮度值低于第二亮度阈值,手机可进入智能拍摄模式,采用红外摄像头协助长焦摄像头拍照。如图17所示,S1702之后,如果第二环境光亮度值低于第二亮度阈值,手机可执行S1604-S1611,进入智能拍摄模式,将红外摄像头作为辅助摄像头。
如果第二环境光亮度值高于或者等于第二亮度阈值,手机可进入智能拍摄模式,采用主摄像头协助长焦摄像头拍照。如图17所示,S1702之后,如果第二环境光亮度值高于或者等于第二亮度阈值,手机可执行S1703和S304-S310,进入智能拍摄模式,将主摄像头作为辅助摄像头。
S1703、手机的主摄像头采集图像b。
如图17所示,在S1703之后,本申请实施例的方法还可以包括S304-S310。
需要说明的是,在该实施例中,S1601和S1602所述的图像I,与S305和S310中所述的图像a相同。其中,图像I和图像a均为手机进入智能拍摄模式前,长焦摄像头作为预览摄像头采集的预览图像。
图像II与图像b不同。其中,图像b是主摄像头作为辅助摄像头采集的预览图像。图像II是红外摄像头作为辅助摄像头采集的预览图像。
图像III与图像c不同。其中,图像c是手机进入智能拍摄模式后,长焦摄像头作为预览摄像头,主摄像头作为辅助摄像头的情况下,长焦摄像头采集的图像。图像III是手机进入智能拍摄模式后,长焦摄像头作为预览摄像头,红外摄像头作为辅助摄像头的情况下,长焦摄像头采集的图像。
图像IV与图像d不同。其中,图像d是手机响应于拍照操作,基于图像a(即预览图像)获得的图像。图像IV是手机响应于拍照操作,基于图像I(即预览图像)获得的图像。
本申请实施例提供一种拍摄图像的方法,在暗光场景下,手机的长焦摄像头采集图像时,手机可以根据环境光亮度,选择主摄像头或者红外摄像头作为辅助摄像头协助长焦摄像头拍照,以提升长焦摄像头拍摄得到的图像的图像质量。
需要说明的是,当可见光摄像头是除长焦摄像头之外的其他摄像头(如广角摄像头)时,手机采用该其他摄像头作为预览摄像头,将主摄像头或者红外摄像头作为辅助摄像头协助其他摄像头拍照的方法,与上述方法类似,本实施例这里不予赘述。
在另一实施例中,手机中包括彩色摄像头和黑白摄像头。其中,彩色摄像头可以采集到彩色的图像。相比于彩色摄像头而言,黑白摄像头的进光量较大。但是,黑白摄像头采集到的图像只能呈现出不同等级的灰度,不能呈现出拍摄对象的真实色彩。例如,上述主摄像头、长焦摄像头和广角摄像头等均为彩色摄像头。
针对彩色摄像头和黑白摄像头的上述特点,手机在暗光场景下,采用彩色摄像头作为预览摄像头(即第一摄像头)采集图像时,为了避免由于环境光亮度较弱而影响图像质量,可以借助于黑白摄像头进光量大的优势,将黑白摄像头作为辅助摄像头(即第二摄像头)协助彩色摄像头工作,以提升彩色摄像头拍摄得到的图像的图像质量。
需要说明的是,手机采用彩色摄像头作为预览摄像头,并将黑白摄像头作为辅助摄像头协助彩色摄像头拍照的方法,可以参考上述实施例中“可见光摄像头作为预览摄像头,红外摄像头作为辅助摄像头协助可见光摄像头拍照”的方法(即S1601-S1611),本实施例这里不予赘述。
在另一实施例中,上述实施例中所述的彩色摄像头是长焦摄像头,手机包括长焦摄像头、进光量大于长焦摄像头的摄像头(如主摄像头)、红外摄像头和黑白摄像头。在该实施例中,手机采用长焦摄像头作为预览摄像头采集图像时,可根据环境光亮度的高低,选择主摄像头、红外摄像头或者黑白摄像头作为辅助摄像头协助长焦摄像头拍照。具体的,如果环境光亮度值(如第三环境光亮度值)低于第一亮度阈值、但高于或等于第三亮度阈值,手机可以将主摄像头作为辅助摄像头协助长焦摄像头拍照。如果第三环境光亮度值低于第三亮度阈值、但高于或等于第二亮度阈值,手机可以将黑白摄像头作为辅助摄像头协助长焦摄像头拍照。如果第三环境光亮度值低于第二亮度阈值,手机可以将红外摄像头作为辅助摄像头协助长焦摄像头拍照。其中,上述第一亮度阈值高于第三亮度阈值,该第三亮度阈值高于第二亮度阈值。
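按环境光亮度在主摄像头、黑白摄像头和红外摄像头之间选择辅助摄像头的阈值级联逻辑,可以用如下Python代码示意(摄像头标识字符串与函数名均为本文假设):

```python
def pick_auxiliary_camera(lux, first_th, third_th, second_th):
    """根据环境光亮度值选择协助长焦摄像头拍照的辅助摄像头。

    阈值满足 第一亮度阈值 > 第三亮度阈值 > 第二亮度阈值;
    返回 None 表示环境光充足,不进入智能拍摄模式。
    """
    assert first_th > third_th > second_th
    if lux >= first_th:
        return None          # 环境光较亮:按常规方案拍摄
    if lux >= third_th:
        return "main"        # 主摄像头协助(进光量大)
    if lux >= second_th:
        return "mono"        # 黑白摄像头协助(进光量更大)
    return "infrared"        # 红外摄像头协助(可感知红外光)
```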
需要说明的是,手机采用长焦摄像头作为预览摄像头采集图像时,将主摄像头、红外摄像头或者黑白摄像头作为辅助摄像头协助长焦摄像头拍照的具体方法,可以参考上述实施例中的相关描述,本实施例这里不予赘述。
本申请实施例提供一种拍摄图像的方法,在暗光场景下,手机的长焦摄像头采集图像时,手机可根据环境光亮度,选择主摄像头、红外摄像头或者黑白摄像头作为辅助摄像头协助长焦摄像头拍照,以提升长焦摄像头拍摄得到的图像的图像质量。
在另一实施例中,手机中包括彩色摄像头和深度摄像头(如ToF摄像头)。手机将彩色摄像头作为预览摄像头采集图像时,可能会因为拍摄对象(如上述预设对象)的颜色与背景颜色接近而无法清晰拍摄到预设对象的轮廓。而深度摄像头可以采集到预设对象的深度信息,该深度信息可以用于检测到该预设对象的轮廓。因此,该实施例中,手机采用彩色摄像头作为预览摄像头(即第一摄像头)采集图像时,可以将深度摄像头作为辅助摄像头(即第二摄像头)协助彩色摄像头工作,以提升彩色摄像头拍摄得到的图像的图像质量。
其中,本实施例中所述的彩色摄像头可以是主摄像头、长焦摄像头和广角摄像头等任一摄像头。如图18所示,本申请实施例提供的一种拍摄图像的方法可以包括S1801-S1811。
S1801、手机检测到预设操作2。该预设操作2用于触发手机的彩色摄像头采集图像。
具体的,该预设操作2用于触发手机启动彩色摄像头,使彩色摄像头采集图像,然后手机可显示彩色摄像头采集的图像。
S1802、响应于上述预设操作2,手机的彩色摄像头采集图像i,手机显示彩色摄像头采集的图像i。
其中,用于触发手机启动不同彩色摄像头的预设操作2不同。例如,用于触发手机启动主摄像头的预设操作2可以是图4中的(a)所示的操作1,即用户启动“照相机”应用的操作。又例如,用于触发手机启动长焦摄像头的预设操作2可以是S301所述的变倍操作。又例如,用于触发手机启动广角摄像头的预设操作2可以是用户在“照相机”中开启全景拍摄模式的操作。本申请实施例中的图像i是第一图像。
S1803、手机确定图像i中各个像素点的RGB值,并确定该图像i是否满足预设条件1。
其中,上述预设条件1是第一预设条件,该预设条件1是指:图像i包括第三区域。该第三区域中多个像素点的RGB值的差异小于预设RGB阈值。
示例性的,手机可以计算图像i中相距K个像素点的两个像素点的RGB值的差值。然后,手机可以判断图像i中是否包括这样一个图像区域(即第三区域)。该图像区域(即第三区域)中计算得到的上述差值均小于预设RGB阈值;或者,该图像区域(即第三区域)中计算得到的上述差值小于预设RGB阈值的数量大于预设数量阈值。其中,上述图像区域的大小(如面积或者像素点的个数)可以是预先设定的。可以理解,如果图像i中包括该图像区域,则表示该图像i满足预设条件1。如果图像i中不包括该图像区域,则表示该图像i不满足预设条件1。
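预设条件1(存在RGB差值小于预设RGB阈值的第三区域)的检查,可以用如下Python代码做一维简化示意。这里仅对一行像素、单通道值演示;K、阈值与区域长度均为本文假设的参数:

```python
def satisfies_condition1(row, k, rgb_threshold, region_len):
    """检查一行像素中是否存在一段长度为 region_len 的区域(第三区域),

    使区域内相距 k 个像素点的两个像素点的差值均小于预设阈值。
    row 为一行像素的单通道值列表;存在这样的区域则满足预设条件1。
    """
    for start in range(0, len(row) - region_len + 1):
        region = row[start:start + region_len]
        # 计算区域内相距 k 个像素点的像素值差值
        diffs = [abs(region[i + k] - region[i]) for i in range(len(region) - k)]
        if all(d < rgb_threshold for d in diffs):
            return True
    return False
```

对RGB三通道图像,可对每个通道分别做同样的检查;二维区域的情况是该一维逻辑的直接推广。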
具体的,如果图像i满足预设条件1,手机可以执行S1804;如果图像i不满足预设条件1,手机则不进入智能拍摄模式。手机的彩色摄像头继续采集图像i,手机显示彩色摄像头采集的图像i,然后执行S1811。
S1804、手机的深度摄像头采集图像ii。
其中,本申请实施例中的图像ii是第二图像。
可选的,如果图像i满足预设条件1,手机可以先不启动深度摄像头,而是显示第一用户界面,由用户选择是否进入智能拍摄模式,以启动深度摄像头协助彩色摄像头拍摄图像。响应于用户在第一用户界面的第一操作,手机可执行S1804。响应于用户在第一用户界面的第二操作,手机可执行S1811。其中,第一用户界面、第一操作和第二操作的详细描述,可以参考上述实施例中的相关介绍,这里不予赘述。
可选的,响应于用户在第一用户界面的第三操作,手机还可以显示第二用户界面。该第二用户界面包括:手机进入智能拍摄模式前彩色摄像头采集的预览图像(如上述图像i);以及手机进入智能拍摄模式后彩色摄像头采集的预览图像(如S1809中所述的预览图像)。响应于用户在该第二用户界面的第四操作,手机可执行S1804。响应于用户在第二用户界面的第五操作,手机可执行S1811。其中,第二用户界面、第四操作和第五操作的详细描述,可以参考上述实施例中的相关介绍,这里不予赘述。
S1805、手机检测到图像ii的第一区域内包括预设对象的图像。该第一区域是图像ii中、与彩色摄像头的视野范围对应的区域。
其中,S1805中“手机检测到图像ii的第一区域内包括预设对象的图像”的方法,可以参考上述实施例所述的S304中“手机检测到图像b的第一区域内包括预设对象的图像”的方法,本实施例这里不予赘述。
S1806、手机确定第二区域的曝光值。该第二区域是图像i中预设对象的图像所在的区域。
其中,S1806中“手机确定出图像i中的第二区域,并检测第二区域的曝光值”的方法,可以参考上述实施例所述的S305中“手机确定出图像a中的第二区域,并检测第二区域的曝光值”的方法,本实施例这里不予赘述。
S1807、手机判断第二区域的曝光值是否小于第一曝光阈值。
其中,S1807中“手机判断第二区域的曝光值是否小于第一曝光阈值”的方法,可以参考上述实施例对S306的详细描述,本实施例这里不予赘述。
具体的,如果第二区域的曝光值大于或者等于第一曝光阈值,则表示该图像i中预设对象的图像对用户而言清晰可见,用户可以从图像i中清楚的检测到预设对象的图像。在这种情况下,手机不需要更新第二区域的曝光值。具体的,手机可以执行S1811。
如果第二区域的曝光值小于第一曝光阈值,则表示该图像i中预设对象的图像对用户而言较为模糊,用户无法从图像i中检测到预设对象的图像。在这种情况下,手机可以调整彩色摄像头的曝光参数,以提升上述曝光值。具体的,手机可以执行S1808。
S1808、手机调整彩色摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值。
其中,S1808中“手机调整彩色摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值”的方法,可以参考上述实施例所述的S307中“手机调整长焦摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值”的方法,本实施例这里不予赘述。
在该实施例中,手机也可以根据拍摄对象(即预设对象)的运动状态(如静止或者运动),针对性的调整不同的曝光参数以提升曝光值。例如,预设对象运动的情况下,手机执行S1808所调整的曝光参数可以包括拍照帧数。预设对象静止的情况下,手机执行S1808所调整的曝光参数可以包括曝光时间。
具体的,在S1807之后,如果第二区域的曝光值小于第一曝光阈值,手机可以执行S1201。S1201之后,如果预设对象静止,手机可以执行S1808a;如果预设对象运动,手机可以执行S1808b。S1808a:手机调整彩色摄像头的曝光时间(即曝光参数),使第二区域的曝光值等于或者大于第一曝光阈值。S1808b:手机调整彩色摄像头的拍照帧数(即曝光参数),使第二区域的曝光值等于或者大于第一曝光阈值。其中,S1808a的具体实现方式,可以参考上述实施例对S307a的详细介绍;S1808b的具体实现方式,可以参考上述实施例对S307b的详细介绍,本实施例这里不予赘述。
S1809、手机的彩色摄像头采用调整后的曝光参数采集第一预览图像,手机显示该第一预览图像。
其中,S1809的具体实现方式,可以参考上述实施例对S308的详细介绍,本实施例这里不予赘述。
S1810、响应于用户的拍照操作,手机保存图像iii。该图像iii是彩色摄像头采用调整后的曝光参数所拍摄的。
具体的,该图像iii是基于彩色摄像头采用调整后的曝光参数采集的一帧或多帧第一预览图像获取的。
其中,本申请实施例中的图像iii是第三图像。本实施例中S1810的具体实现方式,可以参考上述实施例对S309的详细介绍,本实施例这里不予赘述。
在该实施例中,在预设对象静止的情况下,手机响应于用户的拍照操作,对彩色摄像头采集的预览图像进行的防抖操作包括OIS防抖。在预设对象运动的情况下,手机响应于用户的拍照操作,对彩色摄像头采集的预览图像进行的防抖操作可以包括OIS防抖和EIS防抖。
S1811、响应于用户的拍照操作,手机保存图像iv。该图像iv是基于彩色摄像头采集的图像i获取的。
其中,本申请实施例中的图像iv是第四图像。本实施例中S1811的具体实现方式,可以参考上述实施例对S310的详细介绍,本实施例这里不予赘述。
本申请实施例提供一种拍摄图像的方法,基于深度摄像头具备获取所述预设对象的深度信息的能力的特点,手机的彩色摄像头采集图像时,可以将深度摄像头作为辅助摄像头。具体的,手机可以借助于深度摄像头可以采集到预设对象的深度信息的优势,从彩色摄像头采集的图像i中检测到预设对象的位置(即第二区域)。其中,图像i的图像质量较差,无法从该图像i中清楚的分辨出预设对象的原因在于:该预设对象在图像i中的位置(如第二区域)的曝光值低。因此,手机可以检测并调整彩色摄像头的曝光参数,以提升上述曝光值。这样,便可以提升彩色摄像头拍摄得到的图像的图像质量。如此,提升曝光值之后,彩色摄像头便可以拍摄得到图像质量较高的图像(如图像iii)。
在另一实施例中,手机中包括黑白摄像头和彩色摄像头。其中,彩色摄像头可以采集到彩色的图像。但是,黑白摄像头采集到的图像只能呈现出不同等级的灰度,不能呈现出拍摄对象的真实色彩。因此,采用黑白摄像头拍照,可能会因为拍摄对象(如上述预设对象)中包括相近且不易于用灰度区分的颜色,而影响图像质量。本申请实施例中,手机采用黑白摄像头作为预览摄像头(即第一摄像头)采集图像时,可以借助于彩色摄像头可以拍摄出拍摄对象的真实色彩的优势,将彩色摄像头作为辅助摄像头(即第二摄像头)协助黑白摄像头工作,以提升黑白摄像头拍摄得到的图像的图像质量。
例如,上述彩色摄像头可以是主摄像头、长焦摄像头和广角摄像头等任一摄像头。本实施例中,以彩色摄像头是主摄像头为例。如图19所示,本申请实施例提供的一种拍摄图像的方法可以包括S1901-S1911。
S1901、手机检测到预设操作3。该预设操作3用于触发手机的黑白摄像头采集图像。
S1902、响应于上述预设操作3,手机的黑白摄像头采集图像A,手机显示黑白摄像头采集的图像A。
例如,预设操作3可以是用户在“照相机”中开启黑白拍摄模式的操作。本申请实施例中的图像A是第一图像。
S1903、手机确定图像A中各个像素点的灰度值,并确定该图像A是否满足预设条件2。
其中,该预设条件2是第二预设条件。该预设条件2是指:图像A包括第四区域。该第四区域中多个像素点的灰度值的差异小于预设灰度阈值。
示例性的,手机可以计算图像A中相距K个像素点的两个像素点的灰度值的差值。然后,手机可以判断图像A中是否包括这样一个图像区域(即第四区域)。该图像区域(即第四区域)中计算得到的上述差值均小于预设灰度阈值;或者,该图像区域(即第四区域)中计算得到的上述差值小于预设灰度阈值的数量大于预设数量阈值。其中,上述图像区域的大小(如面积或者像素点的个数)可以是预先设定的。可以理解,如果图像A中包括该图像区域,则表示该图像A满足预设条件2。如果图像A中不包括该图像区域,则表示该图像A不满足预设条件2。
具体的,如果图像A满足预设条件2,手机可以执行S1904;如果图像A不满足预设条件2,手机则不进入智能拍摄模式。手机的黑白摄像头继续采集图像A,手机显示黑白摄像头采集的图像A,然后执行S1911。
S1904、手机的主摄像头(即彩色摄像头)采集图像B。
其中,本申请实施例中的图像B是第二图像。可选的,如果图像A满足预设条件2,手机可以先不启动主摄像头(即彩色摄像头),而是显示第一用户界面,由用户选择是否进入智能拍摄模式,以启动主摄像头协助黑白摄像头拍摄图像。响应于用户在第一用户界面的第一操作,手机可执行S1904。响应于用户在第一用户界面的第二操作,手机可执行S1911。其中,第一用户界面、第一操作和第二操作的详细描述,可以参考上述实施例中的相关介绍,这里不予赘述。
可选的,响应于用户在第一用户界面的第三操作,手机还可以显示第二用户界面。该第二用户界面包括:手机进入智能拍摄模式前黑白摄像头采集的预览图像(如上述图像A);以及手机进入智能拍摄模式后黑白摄像头采集的预览图像(如S1909中所述的预览图像)。响应于用户在该第二用户界面的第四操作,手机可执行S1904。响应于用户在第二用户界面的第五操作,手机可执行S1911。其中,第二用户界面、第四操作和第五操作的详细描述,可以参考上述实施例中的相关介绍,这里不予赘述。
S1905、手机检测到图像B的第一区域内包括预设对象的图像。该第一区域是图像B中、与黑白摄像头的视野范围对应的区域。
其中,S1905中“手机检测到图像B的第一区域内包括预设对象的图像”的方法,可以参考上述实施例所述的S304中“手机检测到图像b的第一区域内包括预设对象的图像”的方法,本实施例这里不予赘述。
S1906、手机确定第二区域的曝光值。该第二区域是图像A中预设对象的图像所在的区域。
其中,S1906中“手机确定出图像A中的第二区域,并检测第二区域的曝光值”的方法,可以参考上述实施例所述的S305中“手机确定出图像a中的第二区域,并检测第二区域的曝光值”的方法,本实施例这里不予赘述。
S1907、手机判断第二区域的曝光值是否小于第一曝光阈值。
其中,S1907中“手机判断第二区域的曝光值是否小于第一曝光阈值”的方法,可以参考上述实施例对S306的详细描述,本实施例这里不予赘述。
具体的,如果第二区域的曝光值大于或者等于第一曝光阈值,则表示该图像A中预设对象的图像对用户而言清晰可见,用户可以从图像A中清楚的检测到预设对象。在这种情况下,手机不需要更新第二区域的曝光值。具体的,手机可以执行S1911。
如果第二区域的曝光值小于第一曝光阈值,则表示该图像A中预设对象的图像对用户而言较为模糊,用户无法从图像A中检测到预设对象。在这种情况下,手机可以调整黑白摄像头的曝光参数,以提升上述曝光值。具体的,手机可以执行S1908。
S1908、手机调整黑白摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值。
其中,S1908中“手机调整黑白摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值”的方法,可以参考上述实施例所述的S307中“手机调整长焦摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值”的方法,本实施例这里不予赘述。
在该实施例中,手机也可以根据拍摄对象(即预设对象)的运动状态(如静止或者运动),针对性的调整不同的曝光参数以提升曝光值。例如,预设对象运动的情况下,手机执行S1908所调整的曝光参数可以包括拍照帧数。预设对象静止的情况下,手机执行S1908所调整的曝光参数可以包括曝光时间。
具体的,在S1907之后,如果第二区域的曝光值小于第一曝光阈值,手机可以执行S1201。S1201之后,如果预设对象静止,手机可以执行S1908a;如果预设对象运动,手机可以执行S1908b。S1908a:手机调整黑白摄像头的曝光时间(即曝光参数),使第二区域的曝光值等于或者大于第一曝光阈值。S1908b:手机调整黑白摄像头的拍照帧数(即曝光参数),使第二区域的曝光值等于或者大于第一曝光阈值。其中,S1908a的具体实现方式,可以参考上述实施例对S307a的详细介绍;S1908b的具体实现方式,可以参考上述实施例对S307b的详细介绍,本实施例这里不予赘述。
S1909、手机的黑白摄像头采用调整后的曝光参数采集第一预览图像,手机显示该第一预览图像。
其中,S1909的具体实现方式,可以参考上述实施例对S308的详细介绍,本实施例这里不予赘述。
S1910、响应于用户的拍照操作,手机保存图像C。该图像C是黑白摄像头采用调整后的曝光参数所拍摄的。
具体的,该图像C是基于黑白摄像头采用调整后的曝光参数采集的一帧或多帧第一预览图像获取的。其中,本申请实施例中的图像C是第三图像。本实施例中S1910的具体实现方式,可以参考上述实施例对S309的详细介绍,本实施例这里不予赘述。
在该实施例中,在预设对象静止的情况下,手机响应于用户的拍照操作,对黑白摄像头采集的预览图像进行的防抖操作包括OIS防抖。在预设对象运动的情况下,手机响应于用户的拍照操作,对黑白摄像头采集的预览图像进行的防抖操作可以包括OIS防抖和EIS防抖。
S1911、响应于用户的拍照操作,手机保存图像D。该图像D是基于黑白摄像头采集的图像A获取的。
其中,本申请实施例中的图像D是第四图像。本实施例中S1911的具体实现方式,可以参考上述实施例对S310的详细介绍,本实施例这里不予赘述。
本申请实施例提供一种拍摄图像的方法,基于彩色摄像头可采集到彩色图像,而黑白摄像头采集到的图像只能呈现出不同等级的灰度、不能呈现出拍摄对象的真实色彩的特点,手机的黑白摄像头采集图像时,手机可以将主摄像头(即彩色摄像头)作为辅助摄像头。具体的,手机可以借助于彩色摄像头可以采集到彩色图像的优势,从黑白摄像头采集的图像A中检测到预设对象的位置(即第二区域)。其中,图像A的图像质量较差,无法从该图像A中清楚的分辨出预设对象的原因在于:该预设对象在图像A中的位置(如第二区域)的曝光值低。因此,手机可以检测并调整黑白摄像头的曝光参数,以提升上述曝光值。这样,便可以提升黑白摄像头拍摄得到的图像的图像质量。如此,提升曝光值之后,黑白摄像头便可以拍摄得到图像质量较高的图像(如图像C)。
可以理解的是,上述电子设备(如手机)为了实现上述功能,其包含了执行各个功能相应的硬件结构和/或软件模块。本领域技术人员应该很容易意识到,结合本文中所公开的实施例描述的各示例的单元及算法步骤,本申请实施例能够以硬件或硬件和计算机软件的结合形式来实现。某个功能究竟以硬件还是计算机软件驱动硬件的方式来执行,取决于技术方案的特定应用和设计约束条件。专业技术人员可以对每个特定的应用来使用不同方法来实现所描述的功能,但是这种实现不应认为超出本申请实施例的范围。
本申请实施例可以根据上述方法示例对上述电子设备(如手机)进行功能模块的划分,例如,可以对应各个功能划分各个功能模块,也可以将两个或两个以上的功能集成在一个处理模块中。上述集成的模块既可以采用硬件的形式实现,也可以采用软件功能模块的形式实现。需要说明的是,本申请实施例中对模块的划分是示意性的,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式。
在采用集成的单元的情况下,图20示出了上述实施例中所涉及的电子设备2000的一种可能的结构示意图。该电子设备2000可以包括:处理模块2001、显示模块2002、第一采集模块2003、第二采集模块2004和存储模块2005。
其中,处理模块2001用于对电子设备2000的动作进行控制管理。第一采集模块2003和第二采集模块2004用于采集图像。显示模块2002用于显示处理模块2001生成的图像,以及第一采集模块2003和第二采集模块2004采集的图像。
具体的,上述处理模块2001可以用于支持电子设备2000执行上述方法实施例中的S301,S304,S305,S306,S307,S1201,S307a,S307b,S1302,S1401,S1601,S1603中“判断环境光亮度是否低于第二亮度阈值”的操作,S1605,S1606,S1607,S1608,S1701中“判断环境光亮度是否低于第一亮度阈值”的操作,S1702中“判断环境光亮度是否低于第二亮度阈值”的操作,S1801,S1803,S1805,S1806,S1807,S1808,S1901,S1903,S1905,S1906,S1907,S1908,和/或用于本文所描述的技术的其它过程。
上述显示模块2002可以用于支持电子设备2000执行上述方法实施例中的S302中“显示图像a”的操作,S308中“显示第一预览图像”的操作,S1301,S1402,S1602中“显示图像I”的操作,S1609中“显示第一预览图像”的操作,S1802中“显示图像i”的操作,S1809中“显示第一预览图像”的操作,S1902中“显示图像A”的操作,S1909中“显示第一预览图像”的操作,和/或用于本文所描述的技术的其它过程。
上述第一采集模块2003可以用于支持电子设备2000执行上述方法实施例中的S302中“采集图像a”的操作,S308中“采集第一预览图像”的操作,S1602中“采集图像I”的操作,S1609中“采集第一预览图像”的操作,S1802中“采集图像i”的操作,S1902中“采集图像A”的操作,S1909中“采集第一预览图像”的操作,和/或用于本文所描述的技术的其它过程。
上述第二采集模块2004可以用于支持电子设备2000执行上述方法实施例中的S303中“采集图像b”的操作,S1303,S1403,S1604,S1703,S1804,S1809中“采集第一预览图像”的操作,S1904,和/或用于本文所描述的技术的其它过程。
上述存储模块2005可以用于支持电子设备2000执行上述方法实施例中的S309中“保存图像c”的操作,S310中“保存图像d”的操作,S1610中“保存图像III”的操作,S1611中“保存图像IV”的操作,S1810中“保存图像iii”的操作,S1811中“保存图像iv”的操作,S1910中“保存图像C”的操作,S1911中“保存图像D”的操作,和/或用于本文所描述的技术的其它过程。存储模块还可以用于保存电子设备2000的程序代码和数据。
可选的,该电子设备2000还可以包括传感器模块、通信模块等其他功能模块。例如,传感器模块用于检测环境光亮度。具体的,上述传感器模块可以用于支持电子设备2000执行上述方法实施例中的S1603和S1701中“检测环境光亮度”的操作,和/或用于本文所描述的技术的其它过程。通信模块用于支持电子设备2000与其他设备的通信。
其中,处理模块2001可以是处理器或控制器,例如可以是中央处理器(Central Processing Unit,CPU),数字信号处理器(Digital Signal Processor,DSP),专用集成电路(Application-Specific Integrated Circuit,ASIC),现场可编程门阵列(Field Programmable Gate Array,FPGA)或者其他可编程逻辑器件、晶体管逻辑器件、硬件部件或者其任意组合。处理器可以包括应用处理器和基带处理器。其可以实现或执行结合本申请公开内容所描述的各种示例性的逻辑方框,模块和电路。所述处理器也可以是实现计算功能的组合,例如包含一个或多个微处理器组合,DSP和微处理器的组合等等。
例如,处理模块2001为一个或多个处理器(如图1所示的处理器110),存储模块2005可以为存储器(如图1所示的内部存储器121)。显示模块2002可以为显示屏(如图1所示的显示屏194)。上述第一采集模块2003可以是第一摄像头(如图1所示的预览摄像头),第二采集模块2004可以是第二摄像头(如图1所示的辅助摄像头)。上述传感器模块可以是图1所示的传感器模块180,图1所示的传感器模块180包括环境光传感器。本申请实施例所提供的电子设备2000可以为图1所示的电子设备100。其中,上述一个或多个处理器、存储器、第一摄像头、第二摄像头和显示屏等可以连接在一起,例如通过总线连接。
本申请实施例还提供一种芯片系统,如图21所示,该芯片系统2100包括至少一个处理器2101和至少一个接口电路2102。处理器2101和接口电路2102可通过线路互联。例如,接口电路2102可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路2102可用于向其它装置(例如处理器2101)发送信号。示例性的,接口电路2102可读取存储器中存储的指令,并将该指令发送给处理器2101。当所述指令被处理器2101执行时,可使得电子设备执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中手机执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中手机执行的各个功能或者步骤。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (42)

  1. 一种拍摄图像的方法,其特征在于,应用于电子设备,所述电子设备包括第一摄像头和第二摄像头,所述第二摄像头与所述第一摄像头不同;所述方法包括:
    所述电子设备检测到预设操作;
    响应于所述预设操作,所述电子设备的所述第一摄像头采集第一图像,所述电子设备显示所述第一图像;
    所述电子设备的所述第二摄像头采集第二图像,所述电子设备不显示所述第二图像,其中,所述第二图像包括第一区域,所述第一区域是对应于所述第一摄像头的视野范围的区域;
    所述电子设备检测到所述第一区域内包括预设对象的图像,所述预设对象包括以下至少一种:人脸、人体、植物、动物、建筑或文字;
    所述电子设备确定第二区域的曝光值,其中,所述第二区域是所述第一图像中所述预设对象的图像所在的区域;
    若所述电子设备确定所述第二区域的曝光值小于第一曝光阈值,所述电子设备调整所述第一摄像头的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;
    所述第一摄像头采用调整后的曝光参数采集第一预览图像,所述电子设备显示所述第一预览图像;
    响应于用户的拍照操作,所述电子设备保存第三图像,所述第三图像是所述第一摄像头采用调整后的曝光参数所拍摄的。
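权利要求1中“调整第一摄像头的曝光参数,使第二区域的曝光值等于或者大于第一曝光阈值”的思路,可以用下面的Python示意代码理解。其中以EV(曝光值的对数表示)为单位、曝光时间翻倍或ISO翻倍各增加1 EV,以及max_exposure_time等参数,均为本文为便于说明而作的假设,并非权利要求限定的实现方式:

```python
def adjust_exposure_params(exposure_value, threshold_ev, exposure_time, iso,
                           max_exposure_time=1 / 15):
    """若区域曝光值低于阈值,则上调曝光参数(示意实现)。

    exposure_value 与 threshold_ev 采用对数 EV 表示:
    曝光时间加倍或 ISO 加倍均使曝光值增加 1 EV(本文假设)。
    """
    if exposure_value >= threshold_ev:
        return exposure_time, iso  # 已满足阈值,无需调整

    deficit_ev = threshold_ev - exposure_value
    # 优先延长曝光时间(对静止对象更有利,噪声更小)
    while deficit_ev > 0 and exposure_time * 2 <= max_exposure_time:
        exposure_time *= 2
        deficit_ev -= 1
    # 曝光时间达到上限后,再通过提高 ISO 补足剩余的曝光缺口
    while deficit_ev > 0:
        iso *= 2
        deficit_ev -= 1
    return exposure_time, iso
```

例如,当区域曝光值为3 EV、阈值为5 EV时,先把曝光时间从1/60 s延长到1/15 s即可补足2 EV,无需提高ISO。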
  2. 根据权利要求1所述的方法,其特征在于,所述曝光参数包括曝光时间、拍照帧数和ISO感光度中的至少一项。
  3. 根据权利要求1或2所述的方法,其特征在于,所述电子设备调整所述第一摄像头的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值,包括:
    如果所述预设对象是静止的,所述电子设备调整所述第一摄像头的曝光时间,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;或者,
    如果所述预设对象是静止的,所述电子设备调整所述第一摄像头的曝光时间和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值。
  4. 根据权利要求3所述的方法,其特征在于,所述响应于用户的拍照操作,所述电子设备保存第三图像,包括:
    响应于所述拍照操作,所述电子设备对所述第一摄像头采集的一帧所述第一预览图像进行光学防抖OIS防抖,得到并保存所述第三图像。
  5. 根据权利要求1-4中任一项所述的方法,其特征在于,所述电子设备调整所述第一摄像头的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值,包括:
    如果所述预设对象是运动的,所述电子设备调整所述第一摄像头的拍照帧数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;或者,
    如果所述预设对象是运动的,所述电子设备调整所述第一摄像头的拍照帧数和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值。
  6. 根据权利要求5所述的方法,其特征在于,所述响应于用户的拍照操作,所述电子设备保存第三图像,包括:
    响应于所述拍照操作,所述电子设备对所述第一摄像头采集的多帧所述第一预览图像进行OIS防抖和电子EIS防抖融合,得到并保存所述第三图像。
  7. 根据权利要求5或6所述的方法,其特征在于,所述响应于用户的拍照操作,所述电子设备保存第三图像,包括:
    响应于所述拍照操作,所述电子设备对所述第一摄像头采集的多帧所述第一预览图像进行OIS防抖,并对多帧所述第一预览图像的运动区域的图像进行EIS防抖融合,得到并保存所述第三图像。
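权利要求5至7所述“多帧采集后融合”的核心思想,是用多帧叠加代替延长单帧曝光时间,并对运动区域单独处理以减少拖影。真实实现中OIS为光学器件防抖、EIS涉及帧间对齐与配准;下面仅以逐帧取均值和运动掩码替换演示这一思路,属于本文的假设性简化:

```python
import numpy as np

def fuse_frames(frames, motion_mask=None):
    """多帧融合示意:对多帧图像取均值以在不延长单帧曝光时间的
    前提下提升信噪比;motion_mask 标记运动区域,运动区域仅保留
    最后一帧以减少拖影(仅为说明思路的简化实现)。"""
    stack = np.stack([f.astype(np.float64) for f in frames])
    fused = stack.mean(axis=0)
    if motion_mask is not None:
        # 运动区域不做均值叠加,直接取最新一帧
        fused[motion_mask] = stack[-1][motion_mask]
    return fused.astype(frames[0].dtype)
```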
  8. 根据权利要求1-7中任一项所述的方法,其特征在于,所述方法还包括:
    若所述电子设备确定所述第二区域的曝光值大于第二曝光阈值,所述电子设备调整所述第一摄像头的曝光参数,使所述第二区域的曝光值等于或者小于所述第二曝光阈值。
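结合权利要求1与权利要求8,电子设备实际上是把第二区域的曝光值约束在[第一曝光阈值, 第二曝光阈值]区间内。下面的示意函数给出这一钳位逻辑(阈值数值仅为示例):

```python
def target_exposure(ev, low_threshold, high_threshold):
    """把区域曝光值钳位到 [low_threshold, high_threshold] 区间内。
    返回调整后应达到的目标曝光值(示意)。"""
    if ev < low_threshold:
        return low_threshold   # 欠曝:上调曝光参数至第一曝光阈值
    if ev > high_threshold:
        return high_threshold  # 过曝:下调曝光参数至第二曝光阈值
    return ev                  # 已在区间内,无需调整
```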
  9. 根据权利要求1-8中任一项所述的方法,其特征在于,所述电子设备的所述第二摄像头采集第二图像,包括:
    响应于所述预设操作,所述电子设备显示第一用户界面,所述第一用户界面用于请求用户确认是否使用所述第二摄像头协助所述第一摄像头拍摄图像;
    所述电子设备检测到所述用户对所述第一用户界面的第一操作;
    响应于所述第一操作,所述电子设备的所述第二摄像头采集所述第二图像。
  10. 根据权利要求9所述的方法,其特征在于,所述方法还包括:
    所述电子设备检测到所述用户对所述第一用户界面的第二操作;
    响应于所述第二操作,所述电子设备的所述第二摄像头不采集图像。
  11. 根据权利要求9或10所述的方法,其特征在于,所述第一用户界面还包括所述第一预览图像。
  12. 根据权利要求9或10所述的方法,其特征在于,所述方法还包括:
    所述电子设备检测到所述用户对所述第一用户界面的第三操作;
    响应于所述第三操作,所述电子设备显示第二用户界面,其中,所述第三操作用于触发所述电子设备显示所述第一摄像头采集的所述第一预览图像,所述第二用户界面包括所述第一预览图像;
    所述电子设备检测到所述用户对所述第二用户界面的第四操作;
    响应于所述第四操作,所述电子设备的所述第二摄像头采集所述第二图像。
  13. 根据权利要求12所述的方法,其特征在于,所述第一用户界面包括第一控件,所述第三操作是所述用户对所述第一控件的点击操作;
    或者,所述第三操作是预设手势。
  14. 根据权利要求1-13中任一项所述的方法,其特征在于,所述第一摄像头是长焦摄像头,所述第二摄像头是主摄像头或者红外摄像头;或者,
    所述第一摄像头是彩色摄像头,所述第二摄像头是黑白摄像头;或者,
    所述第一摄像头是可见光摄像头,所述第二摄像头是红外摄像头;或者,
    所述第一摄像头是彩色摄像头,所述第二摄像头是深度摄像头;或者,
    所述第一摄像头是黑白摄像头,所述第二摄像头是彩色摄像头;
    其中,所述彩色摄像头至少包括主摄像头、长焦摄像头或广角摄像头中的任一种。
  15. 根据权利要求14所述的方法,其特征在于,所述电子设备的所述第二摄像头采集第二图像,包括:
    响应于所述预设操作,所述电子设备的环境光传感器检测环境光亮度;
    所述电子设备确定第一环境光亮度值;
    若所述第一环境光亮度值低于第一亮度阈值,所述电子设备的所述第二摄像头采集所述第二图像。
  16. 根据权利要求14所述的方法,其特征在于,所述第一摄像头是长焦摄像头,所述第二摄像头是红外摄像头或者主摄像头;所述预设操作是变倍操作;
    其中,所述电子设备的所述第二摄像头采集第二图像,包括:
    响应于所述预设操作,所述电子设备的环境光传感器检测环境光亮度;
    所述电子设备确定第二环境光亮度值;
    若所述第二环境光亮度值低于第一亮度阈值,且低于第二亮度阈值,所述电子设备的所述红外摄像头采集所述第二图像,所述第二摄像头是所述红外摄像头;
    若所述第二环境光亮度值低于所述第一亮度阈值,且大于或者等于所述第二亮度阈值,所述电子设备的所述主摄像头采集所述第二图像,所述第二摄像头是所述主摄像头;
    其中,所述第二亮度阈值小于所述第一亮度阈值。
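权利要求16按环境光亮度在红外摄像头与主摄像头之间选择辅助摄像头,且第二亮度阈值小于第一亮度阈值。下面的示意函数直接对应这一判断流程(函数名与返回的摄像头标识均为本文假设):

```python
def pick_assist_camera(ambient_brightness, first_threshold, second_threshold):
    """按权利要求16的两级亮度阈值选择辅助摄像头(示意)。

    返回 None 表示环境足够亮,无需第二摄像头协助。
    """
    assert second_threshold < first_threshold  # 权利要求16限定的阈值关系
    if ambient_brightness >= first_threshold:
        return None                # 亮度足够,不启用辅助摄像头
    if ambient_brightness < second_threshold:
        return "infrared_camera"   # 极暗场景:红外摄像头作为第二摄像头
    return "main_camera"           # 较暗场景:主摄像头作为第二摄像头
```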
  17. 根据权利要求14所述的方法,其特征在于,所述第一摄像头是彩色摄像头,所述第二摄像头是深度摄像头;
    其中,所述电子设备的所述第二摄像头采集第二图像,包括:
    响应于所述预设操作,所述电子设备确定所述第一图像中像素点的红绿蓝RGB值;
    若所述电子设备确定所述第一图像满足第一预设条件,所述电子设备的所述深度摄像头采集所述第二图像;
    其中,所述第一预设条件是指:所述第一图像包括第三区域,所述第三区域中多个像素点的RGB值的差异小于预设RGB阈值。
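权利要求17的第一预设条件要求第一图像中存在一块多个像素点RGB值差异小于预设RGB阈值的第三区域,即纹理缺失的平坦区域。下面用滑动窗口内各通道的极差(最大值减最小值)给出一种示意性判断,patch大小与rgb_threshold均为本文假设的示例值:

```python
import numpy as np

def has_flat_rgb_region(img, patch=8, rgb_threshold=12):
    """判断图像中是否存在 RGB 差异小于阈值的平坦区域(示意)。

    img 为 H×W×3 的 RGB 图像;在 patch×patch 的滑窗内,若三个
    通道的极差均小于 rgb_threshold,即认为找到了“第三区域”。
    """
    h, w, _ = img.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            block = img[y:y + patch, x:x + patch].astype(np.int32)
            spread = block.max(axis=(0, 1)) - block.min(axis=(0, 1))
            if (spread < rgb_threshold).all():
                return True
    return False
```

权利要求18的第二预设条件与此同理,只是把RGB极差换成灰度值的极差。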
  18. 根据权利要求14所述的方法,其特征在于,所述第一摄像头是黑白摄像头,所述第二摄像头是彩色摄像头;
    其中,所述电子设备的所述第二摄像头采集第二图像,包括:
    响应于所述预设操作,所述电子设备确定所述第一图像中像素点的灰度值;
    若所述电子设备确定所述第一图像满足第二预设条件,所述电子设备的所述彩色摄像头采集所述第二图像;
    其中,所述第二预设条件是指:所述第一图像包括第四区域,所述第四区域中多个像素点的灰度值的差异小于预设灰度阈值。
  19. 根据权利要求1-18中任一项所述的方法,其特征在于,在所述电子设备确定第二区域的曝光值之前,所述方法还包括:
    所述电子设备根据所述预设对象的图像在所述第一图像中所述第一区域的位置,确定所述第一图像中所述预设对象的图像所在的所述第二区域。
  20. 根据权利要求1-19中任一项所述的方法,其特征在于,所述第一摄像头是长焦摄像头,所述第二摄像头是主摄像头,所述预设操作是变倍操作;
    其中,所述电子设备的所述第二摄像头采集第二图像,包括:
    响应于所述预设操作,所述电子设备的环境光传感器检测环境光亮度;所述电子设备确定第三环境光亮度值;若所述第三环境光亮度值低于第一亮度阈值,所述电子设备的所述第二摄像头采集所述第二图像;
    其中,所述电子设备调整所述第一摄像头的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值,包括:
    如果所述预设对象是静止的,所述电子设备调整所述第一摄像头的曝光时间,或者所述电子设备调整所述第一摄像头的曝光时间和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;
    如果所述预设对象是运动的,所述电子设备调整所述第一摄像头的拍照帧数,或者所述电子设备调整所述第一摄像头的拍照帧数和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;
    其中,所述响应于用户的拍照操作,所述电子设备保存第三图像,包括:
    响应于用户的所述拍照操作,
    如果所述预设对象是静止的,所述电子设备对所述第一摄像头采集的一帧所述第一预览图像进行OIS防抖,得到并保存所述第三图像;
    如果所述预设对象是运动的,所述电子设备对所述第一摄像头采集的多帧所述第一预览图像进行OIS防抖,得到并保存所述第三图像。
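权利要求20把前述各分支汇总为:静止对象优先调曝光时间并对单帧做OIS防抖,运动对象优先调拍照帧数并对多帧做OIS防抖。可用如下示意的调度函数概括(字典字段名为本文假设,仅用于说明分支关系):

```python
def plan_capture(is_moving):
    """按预设对象是否运动,给出曝光调整项与防抖策略(示意)。"""
    if is_moving:
        return {"adjust": ("frame_count", "iso"),      # 调拍照帧数(必要时加 ISO)
                "stabilize": "OIS_multi_frame"}        # 多帧 OIS 防抖后融合
    return {"adjust": ("exposure_time", "iso"),        # 调曝光时间(必要时加 ISO)
            "stabilize": "OIS_single_frame"}           # 单帧 OIS 防抖
```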
  21. 一种电子设备,其特征在于,所述电子设备包括第一采集模块、第二采集模块和显示模块,所述电子设备还包括处理模块和存储模块;
    所述处理模块,用于检测预设操作;
    所述第一采集模块,用于响应于所述处理模块检测到的所述预设操作,采集第一图像;
    所述显示模块,用于显示所述第一图像;
    所述第二采集模块,用于采集第二图像,其中,所述显示模块不显示所述第二图像,所述第二图像包括第一区域,所述第一区域是对应于所述第一采集模块的视野范围的区域;
    所述处理模块,还用于检测所述第一区域内包括预设对象的图像,所述预设对象包括以下至少一种:人脸、人体、植物、动物、建筑或文字;还用于确定第二区域的曝光值,其中,所述第二区域是所述第一图像中所述预设对象的图像所在的区域;
    所述处理模块,还用于确定若所述第二区域的曝光值小于第一曝光阈值,调整所述第一采集模块的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;
    所述第一采集模块,还用于采用调整后的曝光参数采集第一预览图像;
    所述显示模块,还用于显示所述第一预览图像;
    所述第一采集模块,还用于响应于用户的拍照操作,采用调整后的曝光参数拍摄第三图像;
    所述存储模块,用于保存所述第三图像。
  22. 根据权利要求21所述的电子设备,其特征在于,所述曝光参数包括曝光时间、拍照帧数和ISO感光度中的至少一项。
  23. 根据权利要求21或22所述的电子设备,其特征在于,所述处理模块,用于调整所述第一采集模块的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值,包括:
    所述处理模块,用于:
    如果所述预设对象是静止的,调整所述第一采集模块的曝光时间,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;或者,
    如果所述预设对象是静止的,调整所述第一采集模块的曝光时间和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值。
  24. 根据权利要求23所述的电子设备,其特征在于,所述处理模块,还用于响应于所述拍照操作,对所述第一采集模块采集的一帧所述第一预览图像进行光学防抖OIS防抖,得到所述第三图像。
  25. 根据权利要求21-24中任一项所述的电子设备,其特征在于,所述处理模块,用于调整所述第一采集模块的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值,包括:
    所述处理模块,用于:
    如果所述预设对象是运动的,调整所述第一采集模块的拍照帧数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;或者,
    如果所述预设对象是运动的,调整所述第一采集模块的拍照帧数和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值。
  26. 根据权利要求25所述的电子设备,其特征在于,所述处理模块,还用于响应于所述拍照操作,对所述第一采集模块采集的多帧所述第一预览图像进行OIS防抖和电子EIS防抖融合,得到所述第三图像。
  27. 根据权利要求25或26所述的电子设备,其特征在于,所述处理模块,还用于响应于所述拍照操作,对所述第一采集模块采集的多帧所述第一预览图像进行OIS防抖,并对多帧所述第一预览图像的运动区域的图像进行EIS防抖融合,得到所述第三图像。
  28. 根据权利要求21-27中任一项所述的电子设备,其特征在于,所述处理模块,还用于确定所述第二区域的曝光值是否大于第二曝光阈值;
    若所述处理模块确定所述第二区域的曝光值大于所述第二曝光阈值,
    所述处理模块,还用于调整所述第一采集模块的曝光参数,使所述第二区域的曝光值等于或者小于所述第二曝光阈值。
  29. 根据权利要求21-28中任一项所述的电子设备,其特征在于,所述显示模块,还用于响应于所述预设操作,显示第一用户界面,所述第一用户界面用于请求用户确认是否使用所述第二采集模块协助所述第一采集模块拍摄图像;
    所述处理模块,还用于检测到所述用户对所述第一用户界面的第一操作;
    所述第二采集模块,还用于响应于所述第一操作,采集所述第二图像。
  30. 根据权利要求29所述的电子设备,其特征在于,所述处理模块,还用于检测到所述用户对所述第一用户界面的第二操作;
    其中,所述第二采集模块响应于所述第二操作,不采集图像。
  31. 根据权利要求29或30所述的电子设备,其特征在于,所述第一用户界面还包括所述第一预览图像。
  32. 根据权利要求29或30所述的电子设备,其特征在于,所述处理模块,还用于检测到所述用户对所述第一用户界面的第三操作;
    所述显示模块,还用于响应于所述第三操作,显示第二用户界面,其中,所述第二用户界面包括所述第一预览图像,所述第一预览图像是所述第一采集模块采集的;
    所述处理模块,还用于检测到所述用户对所述第二用户界面的第四操作;
    所述第二采集模块,还用于响应于所述第四操作,采集所述第二图像。
  33. 根据权利要求32所述的电子设备,其特征在于,所述第一用户界面包括第一控件,所述第三操作是所述用户对所述第一控件的点击操作;
    或者,所述第三操作是预设手势。
  34. 根据权利要求21-33中任一项所述的电子设备,其特征在于,所述第一采集模块是长焦摄像头,所述第二采集模块是主摄像头或者红外摄像头;或者,
    所述第一采集模块是彩色摄像头,所述第二采集模块是黑白摄像头;或者,
    所述第一采集模块是可见光摄像头,所述第二采集模块是红外摄像头;或者,
    所述第一采集模块是彩色摄像头,所述第二采集模块是深度摄像头;或者,
    所述第一采集模块是黑白摄像头,所述第二采集模块是彩色摄像头;
    其中,所述彩色摄像头至少包括主摄像头、长焦摄像头或广角摄像头中的任一种。
  35. 根据权利要求34所述的电子设备,其特征在于,所述电子设备还包括传感器模块;
    所述传感器模块,用于响应于所述预设操作,检测环境光亮度;
    所述处理模块,还用于确定第一环境光亮度值;
    所述处理模块,还用于确定所述第一环境光亮度值是否低于第一亮度阈值;
    若所述处理模块确定所述第一环境光亮度值低于所述第一亮度阈值,
    所述第二采集模块,还用于采集所述第二图像。
  36. 根据权利要求34所述的电子设备,其特征在于,所述第一采集模块是长焦摄像头,所述第二采集模块是红外摄像头或者主摄像头;所述预设操作是变倍操作;所述电子设备还包括传感器模块;
    所述传感器模块,用于响应于所述预设操作,检测环境光亮度;
    所述处理模块,还用于确定第二环境光亮度值;
    所述处理模块,还用于确定所述第二环境光亮度值是否低于第一亮度阈值和第二亮度阈值;
    若所述处理模块确定所述第二环境光亮度值低于所述第一亮度阈值和所述第二亮度阈值,
    所述第二采集模块,还用于采集所述第二图像,所述第二采集模块是所述红外摄像头;
    所述处理模块,还用于确定所述第二环境光亮度值是否低于所述第一亮度阈值,且大于或者等于所述第二亮度阈值;
    若所述处理模块确定所述第二环境光亮度值低于所述第一亮度阈值,且大于或者等于所述第二亮度阈值,
    所述第二采集模块,还用于采集所述第二图像,所述第二采集模块是所述主摄像头;
    其中,所述第二亮度阈值小于所述第一亮度阈值。
  37. 根据权利要求34所述的电子设备,其特征在于,所述第一采集模块是彩色摄像头,所述第二采集模块是深度摄像头;
    所述处理模块,还用于响应于所述预设操作,确定所述第一图像中像素点的红绿蓝RGB值;
    所述处理模块,还用于确定所述第一图像是否满足第一预设条件;
    若所述处理模块确定所述第一图像满足所述第一预设条件,
    所述第二采集模块,还用于采集所述第二图像;
    其中,所述第一预设条件是指:所述第一图像包括第三区域,所述第三区域中多个像素点的RGB值的差异小于预设RGB阈值。
  38. 根据权利要求34所述的电子设备,其特征在于,所述第一采集模块是黑白摄像头,所述第二采集模块是彩色摄像头;
    所述处理模块,还用于响应于所述预设操作,确定所述第一图像中像素点的灰度值;
    所述处理模块,还用于确定所述第一图像是否满足第二预设条件;
    若所述处理模块确定所述第一图像满足所述第二预设条件,
    所述第二采集模块,还用于采集所述第二图像;
    其中,所述第二预设条件是指:所述第一图像包括第四区域,所述第四区域中多个像素点的灰度值的差异小于预设灰度阈值。
  39. 根据权利要求21-38中任一项所述的电子设备,其特征在于,所述处理模块,还用于在确定第二区域的曝光值之前,根据所述预设对象的图像在所述第一图像中所述第一区域的位置,确定所述第一图像中所述预设对象的图像所在的所述第二区域。
  40. 根据权利要求21-39中任一项所述的电子设备,其特征在于,所述第一采集模块是长焦摄像头,所述第二采集模块是主摄像头,所述预设操作是变倍操作;所述电子设备还包括传感器模块;
    所述传感器模块,用于响应于所述预设操作,检测环境光亮度;
    所述处理模块,还用于确定第三环境光亮度值;
    所述处理模块,还用于确定所述第三环境光亮度值是否低于第一亮度阈值;
    若所述处理模块确定所述第三环境光亮度值低于所述第一亮度阈值,
    所述第二采集模块,还用于采集所述第二图像;
    其中,所述处理模块,用于调整所述第一采集模块的曝光参数,使所述第二区域的曝光值等于或者大于所述第一曝光阈值,包括:
    所述处理模块,用于如果所述预设对象是静止的,调整所述第一采集模块的曝光时间,或者调整所述第一采集模块的曝光时间和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;如果所述预设对象是运动的,调整所述第一采集模块的拍照帧数,或者调整所述第一采集模块的拍照帧数和ISO感光度,使所述第二区域的曝光值等于或者大于所述第一曝光阈值;
    其中,所述处理模块,还用于响应于用户的所述拍照操作,如果所述预设对象是静止的,对所述第一采集模块采集的一帧所述第一预览图像进行OIS防抖,得到所述第三图像;如果所述预设对象是运动的,对所述第一采集模块采集的多帧所述第一预览图像进行OIS防抖,得到所述第三图像。
  41. 一种电子设备,包括一个或多个触摸屏,一个或多个存储器,一个或多个处理器;其中所述一个或多个存储器存储有一个或多个程序;其特征在于,当所述一个或多个处理器执行所述一个或多个程序时,使得所述电子设备实现如权利要求1至20任一项所述的方法。
  42. 一种计算机可读存储介质,其特征在于,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如权利要求1-20中任一项所述的方法。
PCT/CN2021/082090 2020-03-20 2021-03-22 一种拍摄图像的方法及电子设备 WO2021185374A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202010201964.8A CN113497880A (zh) 2020-03-20 2020-03-20 一种拍摄图像的方法及电子设备
CN202010201964.8 2020-03-20

Publications (1)

Publication Number Publication Date
WO2021185374A1 (zh) 2021-09-23

Family

ID=77770569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/082090 WO2021185374A1 (zh) 2020-03-20 2021-03-22 一种拍摄图像的方法及电子设备

Country Status (2)

Country Link
CN (1) CN113497880A (zh)
WO (1) WO2021185374A1 (zh)

Cited By (2)

Publication number Priority date Publication date Assignee Title
CN114422682A (zh) * 2022-01-28 2022-04-29 安谋科技(中国)有限公司 拍摄方法、电子设备和可读存储介质
CN114863510A (zh) * 2022-03-25 2022-08-05 荣耀终端有限公司 一种人脸识别方法和装置

Citations (7)

Publication number Priority date Publication date Assignee Title
CN104796579A (zh) * 2015-04-30 2015-07-22 联想(北京)有限公司 信息处理方法及电子设备
CN105472245A (zh) * 2015-12-21 2016-04-06 联想(北京)有限公司 一种拍照方法、电子设备
US20170070666A1 (en) * 2015-09-09 2017-03-09 Samsung Electronics Co., Ltd. Electronic device and method for adjusting camera exposure
CN107613218A (zh) * 2017-09-15 2018-01-19 维沃移动通信有限公司 一种高动态范围图像的拍摄方法及移动终端
CN108307114A (zh) * 2018-01-31 2018-07-20 广东欧珀移动通信有限公司 图像的处理方法、装置、存储介质及电子设备
CN108337445A (zh) * 2018-03-26 2018-07-27 华为技术有限公司 拍照方法、相关设备及计算机存储介质
CN108377341A (zh) * 2018-05-14 2018-08-07 Oppo广东移动通信有限公司 拍照方法、装置、终端及存储介质


Also Published As

Publication number Publication date
CN113497880A (zh) 2021-10-12


Legal Events

Code 121 — Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 21772022; Country of ref document: EP; Kind code of ref document: A1)
Code NENP — Non-entry into the national phase (Ref country code: DE)
Code 122 — Ep: pct application non-entry in european phase (Ref document number: 21772022; Country of ref document: EP; Kind code of ref document: A1)