WO2022262344A1 - Photographing method and electronic device - Google Patents

Photographing method and electronic device (一种拍摄方法及电子设备)

Info

Publication number
WO2022262344A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
electronic device
image
target object
distance value
Prior art date
Application number
PCT/CN2022/081755
Other languages
English (en)
French (fr)
Inventor
崔瀚涛 (Cui Hantao)
杨阳 (Yang Yang)
冯帅 (Feng Shuai)
林婕 (Lin Jie)
常玲丽 (Chang Lingli)
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to EP22823835.8A, published as EP4195650A1
Priority to US18/043,373, published as US20230247286A1
Publication of WO2022262344A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/50 Constructional details
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B7/00 Mountings, adjusting means, or light-tight connections, for optical elements
    • G02B7/28 Systems for automatic generation of focusing signals
    • G02B7/282 Autofocusing of zoom lenses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/57 Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/58 Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/633 Control of cameras or camera modules by using electronic viewfinders for displaying additional information relating to control or operation of the camera
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95 Computational photography systems, e.g. light-field imaging systems
    • H04N23/951 Computational photography systems, e.g. light-field imaging systems by using two or more images to influence resolution, frame rate or aspect ratio
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/2224 Studio circuitry; Studio devices; Studio equipment related to virtual studio applications
    • H04N5/2226 Determination of depth image, e.g. for foreground/background separation

Definitions

  • the embodiments of the present application relate to the field of electronic technologies, and in particular, to a photographing method and electronic equipment.
  • an electronic device integrated with multiple cameras can produce effects such as large-aperture background blurring and zooming when shooting.
  • a large aperture background blur mode can be used at this time.
  • the background can be blurred to highlight the subject (the focused object) in the image, so that the subject is displayed with high definition while the background is blurred. For example, when taking a portrait, the portrait is displayed clearly and the background is blurred, thereby producing a large aperture background blur effect.
  • the present application provides a photographing method and an electronic device, which can provide a photographed image with large aperture background blurring, and can improve the definition of an object at a focal point and the blurring effect of a part of the background.
  • the present application provides a photographing method, which can be applied to an electronic device.
  • the electronic device may at least include a first camera, a second camera and a third camera.
  • the method may include: the electronic device receives a first operation, and the first operation is used to trigger the electronic device to enter a large aperture mode.
  • the large aperture mode is a shooting mode, and the first operation can trigger the electronic device to shoot images or videos in the large aperture mode.
  • when the electronic device enters the large aperture mode for shooting, it first acquires the distance value from the target object to be shot.
  • the electronic device activates the first camera and the second camera to collect images of the target object, so that the electronic device can display a preview image including the target object. If the distance value exceeds the first distance value, the electronic device can activate the first camera and the third camera to collect images of the target object, so that the electronic device can display a preview image including the target object. It can be understood that the preview image is the corresponding preview image in the large aperture mode.
  • the electronic device activates different cameras according to the distance value from the target object to be photographed. It can be understood that when the electronic device activates different cameras, the display effect of the preview image displayed by the electronic device is different.
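The distance-based camera selection described above can be sketched as follows. The camera labels and the 500 mm value for the first distance value are illustrative assumptions, not values taken from the application.

```python
def select_cameras(distance_mm: float, first_distance_mm: float = 500.0) -> tuple:
    """Pick the main/auxiliary camera pair for large aperture mode.

    Threshold and camera names are illustrative placeholders.
    """
    if distance_mm <= first_distance_mm:
        # Close range: main camera paired with the second (e.g. wide-angle) camera.
        return ("first_camera", "second_camera")
    # Far range: main camera paired with the third (e.g. telephoto/depth) camera.
    return ("first_camera", "third_camera")
```

Because only the auxiliary camera changes, the main camera keeps the focused subject while the pair's baseline adapts to the subject distance.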
  • the image in the large aperture mode includes a clear display part and a blurred display part.
  • when generating the image in the large aperture mode, the electronic device generates a preview image including the target object based on the images collected by the two cameras.
  • the parallax of the two cameras differs. In some shooting scenarios, this may make it difficult to compute the depth of field for the captured image, which affects the display effect of the image in the large aperture mode.
  • when the distance between the electronic device and the target object does not exceed the preset distance, the first camera and the second camera are activated to shoot the scene at close range; the electronic device can then calculate the depth of field, so that the depth of field in the large aperture image displayed by the electronic device is obvious, and the display effect of the clear display part and the blurred display part is better.
  • the electronic device may further perform the following operation: the electronic device activates the first camera to collect an image of the target object.
  • acquiring the distance value between the above-mentioned electronic device and the target object to be photographed specifically includes: the electronic device acquires an autofocus code (AF code) of the first camera, and the AF code indicates the distance value between the first camera and the target object. Based on the autofocus of the first camera, the electronic device can use the autofocus code to obtain the distance value between the electronic device and the target object.
  • the electronic device can obtain the distance value between the electronic device and the target object according to the auto-focus code of the first camera; that is, the electronic device can convert the auto-focus code into the distance value between the electronic device and the target object. Specifically, after the electronic device obtains the auto-focus code of the first camera, it may use a preset focusing algorithm to obtain the distance value between the electronic device and the target object.
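As a rough illustration of such a preset focusing algorithm, a linear mapping from AF code to distance might look like the sketch below. The 10-bit code range and the near/far distance endpoints are assumptions; a real device would use a per-module calibration table rather than a straight line.

```python
def af_code_to_distance_mm(af_code: int, near_code: int = 1023, far_code: int = 0,
                           near_mm: float = 100.0, far_mm: float = 5000.0) -> float:
    """Map a voice-coil autofocus code to an object distance.

    Linear interpolation between assumed calibration endpoints:
    code `near_code` focuses at `near_mm`, code `far_code` at `far_mm`.
    """
    t = (af_code - far_code) / (near_code - far_code)
    return far_mm + t * (near_mm - far_mm)
```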
  • the electronic device may further include a distance sensor.
  • the acquisition of the distance value between the electronic device and the photographed target object specifically includes: the electronic device activates a distance sensor to determine the distance value between the electronic device and the target object.
  • the distance sensor is activated, and the electronic device can directly obtain the distance value from the target object according to the data fed back by the distance sensor.
  • when the electronic device obtains the distance value by using the auto-focus code of the first camera, the electronic device can turn on the first camera to obtain the distance value.
  • when the electronic device uses a distance sensor to obtain the distance value, the electronic device needs to control the distance sensor to determine the target object, so that the distance sensor can accurately feed back the distance value between the electronic device and the target object.
  • the zoom ratio of the electronic device is the first magnification. The first magnification may be a preset magnification, such as 1×.
  • if the distance value does not exceed the first distance value, the electronic device activates the first camera and the second camera to collect the image of the target object, and then displays the image including the target object. Specifically, the electronic device outputs the original pixel image in the first pixel signal combination manner to generate the image of the target object, and displays the image including the target object.
  • the zoom magnification of the electronic device is the first magnification
  • the electronic device may output the original pixel image in a first pixel signal combination manner, so that the electronic device generates an image including the target object according to the original pixel image.
  • the first magnification is a preset magnification
  • the first pixel signal combination method is that the electronic device performs an analog combination operation on pixel information in the pixel image to output an image including the target object.
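The "analog combination operation on pixel information" is essentially pixel binning. A minimal single-channel sketch of 2×2 binning follows; treating the raw data as one grayscale array (ignoring the Bayer colour pattern and the analog domain) is a simplification.

```python
import numpy as np

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of pixels into one output value (binning).

    Assumes even dimensions; real sensors bin same-colour pixels
    within the Bayer pattern before analog-to-digital conversion.
    """
    h, w = raw.shape
    # Group pixels into 2x2 blocks and average each block.
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```

Binning trades resolution for per-pixel signal, which is why it suits the default (low-magnification) preview path.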
  • the method may further include: the electronic device receives a second operation, and the second operation instructs the electronic device to adjust the zoom ratio to the second magnification, where the second magnification is greater than the first magnification.
  • the electronic device outputs the original pixel image in a second pixel signal combination manner to generate an image of the target object, and displays the image including the target object.
  • the second magnification is greater than the first magnification. That is to say, when the electronic device displays the preview image in the large aperture mode, the electronic device receives a zoom operation input by the user, and the electronic device displays the image in the large aperture mode of the second magnification.
  • the electronic device can rearrange the pixels in the pixel image so as to improve the definition of the zoomed image. Therefore, in the high-magnification (that is, enlarged-image) large aperture mode, the electronic device adopts the second pixel signal combination method to output the original pixel image, which can effectively improve the clarity of the image displayed by the electronic device and make the display effect of the preview image better.
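The second pixel signal combination ("rearranging pixels", i.e. a remosaic-style full-resolution readout) followed by a centre crop for digital zoom can be sketched as below. Again treating the raw data as a single-channel array is a simplification of a real remosaic step.

```python
import numpy as np

def remosaic_and_crop(raw: np.ndarray, zoom: float) -> np.ndarray:
    """Keep the full-resolution pixel array and crop the centre for zoom.

    A toy stand-in for the full-resolution output path: at zoom factor z,
    the centre 1/z portion of the frame is kept, preserving native detail.
    """
    h, w = raw.shape
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    return raw[top:top + ch, left:left + cw]
```

Because the crop comes from an unbinned frame, the zoomed preview retains more detail than cropping a binned frame would.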
  • the first camera, the second camera, and the third camera are arranged on the first surface of the electronic device, for example, these three cameras are arranged on the back of the mobile phone.
  • the distance value from the first camera to the second camera is smaller than the distance value from the first camera to the third camera.
  • the first camera, the second camera, and the third camera are arranged as rear cameras of the electronic device.
  • the first camera can be the main camera on the back
  • the second camera can be a wide-angle camera
  • the third camera can be a telephoto or depth camera.
  • the first camera, the second camera, and the third camera are arranged on the first surface of the electronic device.
  • the camera arranged on the first surface is turned on.
  • the electronic device may further include a fourth camera and a fifth camera, and the fourth camera and the fifth camera may be set on the second side of the electronic device, for example, the front of the electronic device.
  • the electronic device may also perform the following operation: the electronic device receives a third operation, and the third operation is used to trigger the electronic device to turn on the camera on the second surface.
  • the electronic device activates the fourth camera and the fifth camera to collect images of the target object.
  • the fourth camera is used as the main camera
  • the fifth camera is used as the auxiliary camera
  • the main camera is used to focus on the target object
  • the auxiliary camera is used to calculate the depth of field.
  • the electronic device displays an image including the target object, the image is a preview image corresponding to the large aperture mode, and the zoom ratio corresponding to the preview image is the first ratio.
  • the electronic device receives a second operation, and the second operation instructs the electronic device to adjust the zoom ratio to a second ratio, and the second ratio is greater than the first ratio.
  • the electronic device adjusts the fifth camera as the main camera, and uses the sixth camera as the auxiliary camera. Based on these operations, the electronic device displays an image including the target object, the image is a preview image corresponding to the large aperture mode, and the zoom magnification corresponding to the preview image is the second magnification.
  • the fourth camera and the fifth camera are arranged on the other side (namely the second side) of the electronic device, and the electronic device starts the fourth camera and the fifth camera when receiving an operation of switching cameras.
  • the fourth camera is used as the main camera
  • the fifth camera is used as the auxiliary camera
  • the electronic device outputs images by combining the first pixel signals.
  • the electronic device switches from the first magnification to the second magnification
  • the electronic device switches the main camera and the auxiliary camera, and adjusts to output images in the second pixel signal combination mode.
  • this is in order to improve the display effect of the preview image displayed by the electronic device in the large aperture mode, and to improve the clarity of the zoomed display.
  • the electronic device activates the fourth camera and the fifth camera to collect images of the target object.
  • the above method may further include: the electronic device outputs the original pixel image in a first pixel signal combination manner to generate an image of the target object, and displays the image including the target object.
  • the above electronic device display includes a target object, the image is a preview image corresponding to the large aperture mode, and the zoom magnification corresponding to the preview image is the second magnification, and the method may further include: the electronic device outputs the original pixel image in a second pixel signal combination method, to generate an image of the target object, and display the image of the target object.
  • the electronic device activates the first camera and the second camera to collect images of the target object.
  • the method further includes: the electronic device obtains a current distance value from the target object; if the current distance value exceeds the second distance value, the electronic device activates the first camera and the third camera to collect an image of the target object; wherein the second distance value is greater than the first distance value.
  • the electronic device activates the first camera and the third camera to collect images of the target object. After the electronic device displays the image including the target object, the electronic device can also perform the following operations: the electronic device obtains the current distance value from the target object again; if the current distance value does not exceed the third distance value, the electronic device starts the first camera and the second camera to collect images of the target object; wherein the third distance value is smaller than the first distance value.
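The pair of thresholds (the second distance value above the first, the third below it) forms a hysteresis band that prevents the camera pair from toggling rapidly when the subject hovers near the first distance value. A sketch with assumed threshold values (400/600 mm straddling an assumed 500 mm first distance value):

```python
def next_camera_pair(current_pair, distance_mm,
                     second_mm=600.0, third_mm=400.0):
    """Hysteresis switching between camera pairs.

    All threshold values are illustrative, not from the application.
    """
    close_pair = ("first_camera", "second_camera")
    far_pair = ("first_camera", "third_camera")
    if current_pair == close_pair and distance_mm > second_mm:
        # Leave the close-range pair only past the larger threshold.
        return far_pair
    if current_pair == far_pair and distance_mm <= third_mm:
        # Return to the close-range pair only below the smaller threshold.
        return close_pair
    return current_pair
```

Inside the band, whichever pair is active stays active, so small distance jitter produces no switching.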
  • the preview image corresponding to the large aperture mode includes a blurred display part and a clear display part. If the distance value does not exceed the first distance value, the electronic device activates the first camera and the second camera to collect images of the target object; wherein, the first camera is used as a main camera, and the second camera is used as an auxiliary camera.
  • when the electronic device displays an image including the target object, the electronic device specifically executes: the main camera captures the first image, and the auxiliary camera captures the second image; the electronic device determines the target object according to the first image, and determines the target object as the clearly displayed part; the electronic device calculates the depth of field according to the second image, and determines the blurred display part; the electronic device generates and displays the preview image corresponding to the large aperture mode according to the first image and the second image.
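The clear/blurred split driven by the depth of field can be illustrated with a toy composite: the main image stays sharp where the depth map is close to the focused depth, and takes a blurred value elsewhere. The 3×3 box blur and the hard in-focus test are simplifications of a real graduated-bokeh pipeline.

```python
import numpy as np

def large_aperture_preview(main_img, depth_map, focus_depth, tol=0.1):
    """Blend a sharp main image with a blurred copy using a depth map.

    Pixels whose depth is within `tol` of the focused depth stay sharp;
    the rest are replaced by a 3x3 box-blurred value.
    """
    # 3x3 box blur via shifted sums over an edge-padded frame.
    padded = np.pad(main_img, 1, mode="edge")
    blurred = sum(padded[i:i + main_img.shape[0], j:j + main_img.shape[1]]
                  for i in range(3) for j in range(3)) / 9.0
    in_focus = np.abs(depth_map - focus_depth) <= tol
    return np.where(in_focus, main_img, blurred)
```

A production pipeline would scale blur strength continuously with depth difference instead of using a binary mask.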
  • the preview image corresponding to the large aperture mode includes a blurred display part and a clear display part. If the distance value exceeds the first distance value, the electronic device activates the first camera and the third camera to collect images of the target object; wherein, the first camera is used as a main camera, and the third camera is used as an auxiliary camera.
  • the electronic device displays the image including the target object, and the electronic device specifically executes: the main camera acquires the first image, and the auxiliary camera acquires the second image; the main camera uses the first pixel signal combination method to output the original pixel image, and obtains the first image based on the original pixel image; the preview image corresponding to the large aperture mode is displayed.
  • the preview image corresponding to the large aperture mode includes a blurred display part and a clear display part.
  • the electronic device activates the fourth camera and the fifth camera to collect images of the target object, specifically including: the first image is collected by the main camera, and the second image is collected by the auxiliary camera; the main camera outputs the original pixel image in the first pixel signal combination manner, and the first image is obtained according to the original pixel image; the electronic device determines the target object according to the original pixel image, so as to determine the target object as the clear display part; the electronic device calculates the depth of field according to the second image, and determines the blurred display part.
  • the electronic device generates an image of the target object according to the first image and the second image; in response to the second operation, the electronic device adjusts the fifth camera as the main camera, and uses the sixth camera as the auxiliary camera, including: the first image is collected by the main camera, and the second image is collected by the auxiliary camera; the main camera outputs the original pixel image in the second pixel signal combination manner, and crops the original pixel image to obtain the first image; the electronic device determines the target object according to the original pixel image, so as to determine the target object as the clear display part; the electronic device calculates the depth of field according to the second image, and determines the blurred display part; the electronic device generates the image of the target object according to the first image and the second image.
  • the first pixel signal combination method includes: the electronic device acquires a pixel image of the target object, and performs an analog combination operation on the pixel information in the pixel image to output an image including the target object.
  • the second method of combining pixel signals includes: the electronic device acquires a pixel image of the target object, and rearranges pixels in the pixel image to output an image including the target object.
  • the electronic device uses the second pixel signal combination method to generate an image of the target object and displays the image including the target object, including: the electronic device uses the second pixel signal combination method to output the original image; the electronic device crops the original image to generate the image of the target object; the electronic device displays the image including the target object.
  • an embodiment of the present application provides an electronic device, including: a first camera, a second camera, and a third camera for capturing images; a display screen for displaying an interface; one or more processors; and a memory; and one or more computer programs, wherein the one or more computer programs are stored in the memory.
  • the electronic device may perform the following steps: the electronic device receives a first operation, and the first operation is used to trigger the electronic device to enter a large aperture mode. It can be understood that the large aperture mode is a shooting mode, and the first operation can trigger the electronic device to shoot images or videos in the large aperture mode.
  • when the electronic device enters the large aperture mode for shooting, it first acquires the distance value from the target object to be shot. If the distance value does not exceed the first distance value, the electronic device activates the first camera and the second camera to collect images of the target object, so that the electronic device can display a preview image including the target object. If the distance value exceeds the first distance value, the electronic device can activate the first camera and the third camera to collect images of the target object, so that the electronic device can display a preview image including the target object. It can be understood that the preview image is the corresponding preview image in the large aperture mode.
  • the electronic device may further perform the following operations: the electronic device activates the first camera to collect an image of the target object.
  • acquiring the distance value between the above-mentioned electronic device and the target object to be photographed specifically includes: the electronic device acquires an autofocus code (AF code) of the first camera, and the AF code indicates the distance value between the first camera and the target object. Based on the autofocus of the first camera, the electronic device can use the autofocus code to obtain the distance value between the electronic device and the target object.
  • the electronic device may further include a distance sensor.
  • the acquisition of the distance value between the electronic device and the photographed target object specifically includes: the electronic device activates a distance sensor to determine the distance value between the electronic device and the target object.
  • when the electronic device displays an image including the target object, it may specifically include: the electronic device uses the first pixel signal combination method to output the original pixel image to generate an image of the target object, and displays the image including the target object.
  • the electronic device activates the first camera and the second camera, and after capturing the image of the target object, the electronic device displays an image including the target object, which may specifically include: the electronic device outputs the original pixel image in the first pixel signal combination manner to generate an image of the target object, and displays the image including the target object.
  • the electronic device may further perform: the electronic device receives a second operation, and the second operation instructs the electronic device to adjust the zoom ratio to the second magnification, where the second magnification is greater than the first magnification.
  • the electronic device outputs the original pixel image in a second pixel signal combination manner to generate an image of the target object, and displays the image including the target object.
  • the first camera, the second camera and the third camera are arranged on the first surface of the electronic device, for example, these three cameras are arranged on the back of the mobile phone.
  • the distance value from the first camera to the second camera is smaller than the distance value from the first camera to the third camera.
  • the first camera, the second camera, and the third camera are arranged as rear cameras of the electronic device.
  • the first camera can be the main camera on the back
  • the second camera can be a wide-angle camera
  • the third camera can be a telephoto or depth camera.
  • the first camera, the second camera, and the third camera are arranged on the first surface of the electronic device.
  • the camera arranged on the first surface is turned on.
  • the electronic device may further include a fourth camera and a fifth camera, and the fourth camera and the fifth camera may be set on the second side of the electronic device, for example, the front of the electronic device.
  • the electronic device may also perform the following operation: the electronic device receives a third operation, and the third operation is used to trigger the electronic device to turn on the camera on the second surface.
  • the electronic device activates the fourth camera and the fifth camera to collect images of the target object.
  • the fourth camera is used as the main camera
  • the fifth camera is used as the auxiliary camera
  • the main camera is used to focus on the target object
  • the auxiliary camera is used to calculate the depth of field.
  • the electronic device displays an image including the target object, the image is a preview image corresponding to the large aperture mode, and the zoom ratio corresponding to the preview image is the first ratio.
  • the electronic device receives a second operation, and the second operation instructs the electronic device to adjust the zoom ratio to a second ratio, and the second ratio is greater than the first ratio.
  • the electronic device adjusts the fifth camera as the main camera, and uses the sixth camera as the auxiliary camera. Based on these operations, the electronic device displays an image including the target object, the image is a preview image corresponding to the large aperture mode, and the zoom magnification corresponding to the preview image is the second magnification.
  • the electronic device activates the first camera and the second camera to collect images of the target object.
  • the method further includes: the electronic device obtains a current distance value from the target object; if the current distance value exceeds the second distance value, the electronic device activates the first camera and the third camera to collect the distance of the target object. An image; wherein the second distance value is greater than the first distance value.
  • the electronic device activates the first camera and the third camera to collect images of the target object. After the electronic device displays the image including the target object, the electronic device can also perform the following operations: the electronic device obtains the current distance value from the target object again; if the current distance value does not exceed the third distance value, the electronic device starts the first camera and the second camera to collect images of the target object; wherein the third distance value is smaller than the first distance value.
  • the preview image corresponding to the large aperture mode includes a blurred display part and a clear display part. If the distance value does not exceed the first distance value, the electronic device activates the first camera and the second camera to collect images of the target object; wherein, the first camera is used as the main camera, and the second camera is used as the auxiliary camera.
  • when the electronic device displays an image including the target object, the electronic device specifically executes: the main camera captures the first image, and the auxiliary camera captures the second image; the electronic device determines the target object according to the first image, and determines the target object as the clearly displayed part; the electronic device calculates the depth of field according to the second image, and determines the blurred display part; the electronic device generates and displays a preview image corresponding to the large aperture mode according to the first image and the second image.
  • the preview image corresponding to the large aperture mode includes a blurred display part and a clear display part. If the distance value exceeds the first distance value, the electronic device activates the first camera and the third camera to collect images of the target object; wherein, the first camera is used as a main camera, and the third camera is used as an auxiliary camera.
  • the electronic device displays the image including the target object, and the electronic device specifically executes: the main camera acquires the first image, and the auxiliary camera acquires the second image; the main camera uses the first pixel signal combination method to output the original pixel image, and obtains the first image based on the original pixel image; the preview image corresponding to the large aperture mode is displayed.
  • the preview image corresponding to the large aperture mode includes a blurred display part and a clear display part.
  • the electronic device activates the fourth camera and the fifth camera to collect images of the target object, specifically including: the main camera collects the first image, and the auxiliary camera collects the second image; the main camera uses the first pixel signal combination method to output the original pixel image, and obtains the first image according to the original pixel image; the electronic device determines the target object according to the original pixel image, so as to determine the target object as the clear display part; the electronic device calculates the depth of field according to the second image, and determines the blurred display part.
  • the electronic device generates an image of the target object according to the first image and the second image; in response to the second operation, the electronic device sets the fifth camera as the main camera and uses the sixth camera as the auxiliary camera, including: the main camera collects the first image, and the auxiliary camera collects the second image; the main camera uses the second pixel signal combination method to output the original pixel image, and crops the original pixel image to obtain the first image; the electronic device determines the target object according to the original pixel image, so as to determine the target object as the clear display part; the electronic device calculates the depth of field according to the second image, and determines the blurred display part; the electronic device generates an image of the target object according to the first image and the second image.
  • the first pixel signal combination method includes: the electronic device obtains the pixel image of the target object, and performs an analog binning operation on the pixel information in the pixel image to output an image including the target object.
  • the second method of combining pixel signals includes: the electronic device acquires a pixel image of the target object, and rearranges pixels in the pixel image to output an image including the target object.
  • the electronic device uses the second pixel signal combination method to generate an image of the target object, and displays the image including the target object, including: the electronic device uses the second pixel signal combination method to output the original image; the electronic device crops the original image to generate an image of the target object; the electronic device displays the image including the target object.
  • the present application also provides an electronic device, including: a camera for capturing images; a display screen for displaying an interface; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory, and the one or more computer programs include instructions.
  • the electronic device executes the photographing method in the above first aspect and any possible design manner thereof.
  • the present application also provides a computer-readable storage medium, which includes computer instructions; when the computer instructions are run on a computer, the computer is made to execute the shooting method in the above first aspect and any possible design manner thereof.
  • an embodiment of the present application provides a computer program product, which, when running on a computer, causes the computer to execute the method performed by the electronic device in the above first aspect and any possible design thereof.
  • the embodiment of the present application provides a chip system, and the chip system is applied to an electronic device.
  • the chip system includes one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected by lines; the interface circuits are used to receive signals from the memory of the electronic device and send the signals to the processor, and the signals include the computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device is made to execute the method in the above first aspect and any possible design thereof.
  • it can be understood that, for the beneficial effects achieved by the electronic device of the second aspect, the electronic device of the third aspect, the computer-readable storage medium of the fourth aspect, the computer program product of the fifth aspect, and the chip system of the sixth aspect provided by the present application, reference may be made to the beneficial effects in the first aspect and any possible design manner thereof, which will not be repeated here.
  • FIG. 1A is a schematic diagram of a pixel signal combining method provided by an embodiment of the present application.
  • FIG. 1B is a schematic diagram of another pixel signal combination method provided by the embodiment of the present application.
  • FIG. 2 is a schematic diagram of lens imaging provided by an embodiment of the present application.
  • FIG. 3A is a schematic diagram of an electronic device shooting a target object according to an embodiment of the present application.
  • FIG. 3B is a schematic diagram of a photographing interface of an electronic device provided in an embodiment of the present application.
  • FIG. 4 is a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • FIG. 5 is a schematic diagram of another electronic device provided by the embodiment of the present application.
  • FIG. 6A is a schematic diagram of a shooting scene provided by an embodiment of the present application.
  • FIG. 6B is a schematic diagram of another shooting interface provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of another shooting scene provided by the embodiment of the present application.
  • FIG. 8A is a flow chart of a shooting method provided by an embodiment of the present application.
  • FIG. 8B is a flow chart of another shooting method provided by the embodiment of the present application.
  • FIG. 9 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
  • the terms "first" and "second" are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of this embodiment, unless otherwise specified, "plurality" means two or more.
  • the first pixel signal combining method (Binning): during the process of capturing an image by the electronic device, the light reflected by the target object is collected by the camera, so that the reflected light is transmitted to the image sensor.
  • the image sensor includes a plurality of photosensitive elements, and the charge collected by each photosensitive element is a pixel, and an analog binning (Binning) operation is performed on the pixel information.
  • Binning can combine n×n pixels into one pixel.
  • Binning can synthesize adjacent 3×3 pixels into one pixel, that is, the colors of the adjacent 3×3 pixels are presented in the form of one pixel.
  • the first pixel signal combination method may also be referred to as the "first pixel arrangement method", "first pixel combination method", "first image readout mode" and so on.
  • FIG. 1A is a schematic diagram of a process of reading out an image in a Binning manner after the electronic device acquires an image.
  • FIG. 1A(a) is a schematic diagram of 6×6 pixels, in which adjacent 3×3 pixels are synthesized into one pixel.
  • FIG. 1A(b) is a schematic diagram of the pixels read out by the Binning method.
  • the 3×3 pixels in the 01 area in (a) of Figure 1A form the pixel G in (b) of Figure 1A; the 3×3 pixels in the 02 area in (a) of Figure 1A form the pixel B in (b) of Figure 1A; the 3×3 pixels in the 03 area in (a) of Figure 1A form the pixel R in (b) of Figure 1A; and the 3×3 pixels in the 04 area in (a) of Figure 1A form the pixel G in (b) of Figure 1A.
  • the Bayer-format image refers to an image that only includes red, green and blue (i.e., the three primary colors). For example, pixel A formed by the 3×3 pixels in area 01 is red, pixel B formed by the 3×3 pixels in area 02 is green, pixel C formed by the 3×3 pixels in area 03 is green, and pixel D formed by the 3×3 pixels in area 04 is blue.
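The Binning readout described above can be sketched as a block-wise combination of same-colour pixels. Whether the sensor sums or averages the collected charges is implementation-specific; the sketch below assumes averaging:

```python
import numpy as np

def bin_pixels(raw: np.ndarray, n: int = 3) -> np.ndarray:
    """Combine each n x n block of same-colour pixels into one output
    pixel (by averaging; some sensors sum the charges instead)."""
    h, w = raw.shape
    assert h % n == 0 and w % n == 0
    return raw.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# The 6x6 readout of Fig. 1A becomes a 2x2 binned image.
raw = np.arange(36, dtype=float).reshape(6, 6)
binned = bin_pixels(raw, n=3)
```

Each output pixel carries the combined signal of one 3×3 region, which is why Binning trades resolution for per-pixel sensitivity.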
  • the second pixel signal combination method (Remosaic): when the image is read out in the Remosaic manner, the pixels are rearranged into a Bayer-format image. For example, assuming that one pixel in an image is composed of n×n pixels, Remosaic can rearrange that pixel into n×n pixels.
  • the second pixel signal combination method may also be referred to as "second pixel arrangement method", “second pixel combination method”, “second image readout mode” and so on.
  • FIG. 1B(a) is a schematic diagram of pixels, where each pixel is composed of adjacent 3×3 pixels.
  • FIG. 1B(b) is a schematic diagram of an image in Bayer format read out by using the Remosaic method. Specifically, pixel A in (a) of FIG. 1B is red, pixel B and pixel C are green, and pixel D is blue. Each pixel in (a) of FIG. 1B is divided into 3×3 pixels and rearranged. That is, when the Remosaic method is used for readout, the read-out image is the Bayer-format image shown in (b) of FIG. 1B.
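As a toy illustration of the Remosaic rearrangement (using a 2×2 quad cell instead of the 3×3 cell of FIG. 1B, and ignoring the interpolation a real sensor pipeline performs), the sketch below permutes the samples of a quad-cell colour filter array into a standard per-pixel Bayer layout:

```python
import numpy as np

def remosaic_quad(raw: np.ndarray, cfa: np.ndarray) -> np.ndarray:
    """Permute the samples of a quad-cell CFA into a per-pixel Bayer
    (RGGB) layout. Real remosaic also interpolates; this sketch only
    shows the rearrangement step described above."""
    h, w = raw.shape
    bayer = np.tile(np.array([["R", "G"], ["G", "B"]]), (h // 2, w // 2))
    out = np.empty_like(raw)
    for colour in "RGB":
        # move this colour's samples (in scan order) to the Bayer positions
        out[bayer == colour] = raw[cfa == colour]
    return out

# Quad-cell CFA: every 2x2 cell shares one colour, cells arranged as RGGB.
cfa = np.array([["R", "G"], ["G", "B"]]).repeat(2, axis=0).repeat(2, axis=1)
raw = np.arange(16, dtype=float).reshape(4, 4)
out = remosaic_quad(raw, cfa)
```

The output keeps every sample but distributes them so that each 2×2 block contains one R, two G and one B value, i.e. a standard Bayer mosaic.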
  • Out-of-focus imaging: an image contains clear parts and blurred parts; the imaging of the blurred parts is called out-of-focus imaging. Specifically, the blurred parts in the image include foreground blur and background blur.
  • a camera lens in an electronic device is generally composed of at least one lens element, and the lens elements include convex lenses and concave lenses.
  • Circle of confusion: taking a convex lens as an example, if the image plane (or projection plane) happens to include the focal point of the convex lens, the image of the light beam reflected by the target object on the image plane is a clear point. If the image plane does not include the focal point, then regardless of whether the image plane is located between the focal point and the convex lens or behind the focal point, the image of the light beam reflected by the target object on the image plane is not a point but a circular area; this circular area is the circle of confusion.
  • the circle of confusion may also be called the blur circle, blur spot, circle of indistinctness, disc of confusion, and so on.
  • FIG. 2 is a schematic diagram of lens imaging.
  • O1 is the optical axis of the lens L1
  • the distance between the lens L1 and the focal point F is the focal length.
  • the electronic device can display a preview image; if the circle of confusion is small enough that the human eye still perceives the image within it as clear, the circle of confusion at this time is called the permissible (allowable) circle of confusion.
  • Depth of field During the imaging process of electronic equipment, the light reflected by the target object propagates to the imaging surface, so that the imaging surface can collect the light reflected by the target object.
  • the imaging surface includes a focal point
  • the electronic device can obtain a clear image. It can be understood that the electronic device can still acquire a clear image of the target object when the target object is located within a certain range in front of and behind the focus point; this range is called the depth of field. The range of clear imaging between the focus point and the nearest clear point (on the lens side) is called the foreground depth of field, and the range between the focus point and the farthest clear point is called the back depth of field.
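The foreground and back depth of field can be estimated with the standard thin-lens formulas (these are textbook optics, not formulas taken from this application): the hyperfocal distance is H = f²/(N·c) + f, and the near and far limits of acceptable sharpness follow from it. A sketch:

```python
def depth_of_field(f_mm: float, f_number: float, focus_mm: float,
                   coc_mm: float = 0.03) -> tuple:
    """Near and far limits of acceptable sharpness (thin-lens model).

    coc_mm is the permissible circle of confusion; 0.03 mm is a value
    commonly quoted for full-frame sensors, used here only as a default.
    """
    h = f_mm * f_mm / (f_number * coc_mm) + f_mm   # hyperfocal distance
    near = focus_mm * (h - f_mm) / (h + focus_mm - 2 * f_mm)
    if focus_mm < h:
        far = focus_mm * (h - f_mm) / (h - focus_mm)
    else:
        far = float("inf")                         # beyond hyperfocal: all far objects sharp
    return near, far

# Illustrative values: 50 mm lens at f/2 focused at 2 m.
near, far = depth_of_field(f_mm=50, f_number=2.0, focus_mm=2000)
```

As expected from the description above, the back depth of field is deeper than the foreground depth, and stopping down (a larger f-number) widens the total depth of field.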
  • FIG. 3A is a schematic diagram of an electronic device shooting a target object.
  • O1 is the optical axis of the lens L1
  • f represents the focusing point of the lens L1
  • the distance shown by S2 is the depth of field. That is to say, when the target object is located within the range between M1 and M2, the light reflected by the target object can be reflected to the imaging surface through the lens L1, so that the electronic device can obtain a clear image of the target object.
  • Aperture: a device used to control the light beam passing through the lens, usually located inside the lens.
  • the aperture may be composed of several leaf-shaped metal blades. These blades can form a hole with an adjustable size.
  • when the electronic device adjusts the aperture, the blades rotate to change the size of the hole, thereby adjusting the size of the shooting aperture.
  • the shooting method provided in the embodiment of the present application can be applied to an electronic device including multiple cameras, and multiple shooting modes can be set in the electronic device, such as a portrait mode, a large aperture mode, a professional mode, and a night shooting mode.
  • a user uses an electronic device to capture an image
  • the user may select a large aperture mode in the electronic device.
  • the electronic device uses a large aperture mode to generate an image
  • the depth of field in the captured image can be made shallower, so that the subject on which the lens focuses (or called the focused object) is clear, while other objects outside the focus range are displayed blurred on the electronic device, so as to highlight the focused subject. That is to say, when an electronic device adopts the large aperture mode to shoot an image, the subject in the obtained image is clear and the background is blurred.
  • FIG. 3B is a schematic diagram of a mobile phone displaying a shooting interface in the large aperture mode.
  • 301 represents the size of the aperture, that is, the part clearly displayed in the generated image with a large aperture.
  • 302 represents an aperture size adjustment area, and FIG. 3B shows that the current aperture size is f4. If the user wants to modify the aperture size, the user can slide and adjust the numbers or dots in the 302 area.
  • the aperture size shown in 301 in the shooting interface displayed by the mobile phone can be changed.
  • the adjustment range of the aperture in the mobile phone is between f0.95 and f16, and the smaller the aperture value, the larger the aperture.
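The statement that a smaller aperture value means a larger aperture follows from the definition of the f-number, N = focal length / entrance-pupil diameter. A small sketch (the 27 mm focal length is an illustrative assumption, not a value from the embodiment):

```python
def aperture_diameter_mm(focal_length_mm: float, f_number: float) -> float:
    """f-number N = focal length / entrance-pupil diameter, so the
    physical opening diameter is focal length / N."""
    return focal_length_mm / f_number

# With an illustrative 27 mm focal length, f/0.95 opens far wider than f/16.
d_widest = aperture_diameter_mm(27, 0.95)
d_narrowest = aperture_diameter_mm(27, 16)
```

This is why sliding toward f0.95 in the interface of FIG. 3B produces a physically larger opening and a shallower depth of field.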
  • the electronic device may adopt a dual camera (that is, two cameras work simultaneously) with a large aperture to generate a captured image.
  • one of the two cameras is the primary camera (hereinafter referred to as the "main camera"), and the other is the secondary camera (hereinafter referred to as the "auxiliary camera").
  • the main camera and the auxiliary camera are turned on and are in a working state.
  • the electronic device can obtain the images captured by the main camera and the auxiliary camera. Since the main camera and the auxiliary camera are different cameras, their focal points are different, and the fields of view (Field of View, FOV) of the images they capture are also different.
  • the electronic device may use an image cropping algorithm to determine overlapping portions of the first image and the second image according to the first image captured by the main camera and the second image captured by the auxiliary camera. Further, the electronic device determines the target object according to the focus of the main camera, and then determines the clearly displayed part of the displayed image; the electronic device calculates the depth of field according to the focus of the auxiliary camera, so as to determine the blurred part of the displayed image.
  • the electronic device may perform blurring processing on a part that requires a blurred display based on a blurring algorithm, so that a part of the displayed image has a blurred display effect. In this way, the electronic device forms a display image with a blurred background of a large aperture. In the displayed image, the focused target object is clear, and the background is blurred.
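The pipeline above (focused subject kept sharp, out-of-focus regions blurred according to the computed depth of field) can be sketched as a depth-thresholded blend. The box blur and the depth-tolerance parameter below are simplifications for illustration, not the blurring algorithm of the embodiment:

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Simple box blur used as a stand-in for a production bokeh kernel."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def large_aperture_blend(main_img, depth_map, focus_depth, tolerance):
    """Keep pixels whose depth is close to the focused subject sharp and
    replace the rest with a blurred copy, as in the pipeline above."""
    in_focus = np.abs(depth_map - focus_depth) <= tolerance
    return np.where(in_focus, main_img, box_blur(main_img))

# Toy scene: a sharp edge; left half is at the focus depth, right half is not.
main = np.zeros((8, 8)); main[:, 4:] = 100.0
depth = np.full((8, 8), 5.0); depth[:, :4] = 1.0
out = large_aperture_blend(main, depth, focus_depth=1.0, tolerance=0.5)
```

In practice the depth map would come from the auxiliary camera's disparity, and the blur strength would grow with distance from the focus plane rather than being a single fixed kernel.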
  • FIG. 4 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • the electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, and an antenna 2.
  • the sensor module 190 may include a pressure sensor, a gyroscope sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a temperature sensor, a touch sensor, an ambient light sensor and the like.
  • the camera module 180 may include 2-N cameras, for example, the camera module 180 includes a first camera 181 and a second camera 182 .
  • the first camera 181 is the main camera
  • the second camera 182 is the auxiliary camera.
  • the electronic device can call the first camera 181 and the second camera 182; the electronic device can calculate the depth of field according to the image collected by the second camera 182, and generate a preview image (or captured image) according to the image collected by the first camera 181.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the electronic device 100 .
  • the electronic device 100 may include more or fewer components than shown in the figure, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processing unit (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), controller, video codec, digital signal processor (digital signal processor, DSP), baseband processor, and/or neural network processor (neural-network processing unit, NPU), etc. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • the memory in processor 110 is a cache memory.
  • the memory may hold instructions or data that the processor 110 has just used or recycled. If the processor 110 needs to use the instruction or data again, it can be called directly from the memory. Repeated access is avoided, and the waiting time of the processor 110 is reduced, thereby improving the efficiency of the system.
  • processor 110 may include one or more interfaces.
  • the interface may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous transmitter (universal asynchronous receiver/transmitter, UART) interface, mobile industry processor interface (mobile industry processor interface, MIPI), general-purpose input and output (general-purpose input/output, GPIO) interface, subscriber identity module (subscriber identity module, SIM) interface, and /or universal serial bus (universal serial bus, USB) interface, etc.
  • the interface connection relationship between the modules shown in the embodiment of the present invention is only a schematic illustration, and does not constitute a structural limitation of the electronic device 100 .
  • the electronic device 100 may also adopt different interface connection manners in the foregoing embodiments, or a combination of multiple interface connection manners.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module 140 can receive charging input from the wired charger through the USB interface 130 .
  • the charging management module 140 may receive a wireless charging input through a wireless charging coil of the electronic device 100 . While the charging management module 140 is charging the battery 142 , it can also provide power for electronic devices through the power management module 141 .
  • the power management module 141 is used for connecting the battery 142 , the charging management module 140 and the processor 110 .
  • the power management module 141 receives the input from the battery 142 and/or the charging management module 140 to provide power for the processor 110 , the internal memory 121 , the external memory, the display screen 173 , the camera 193 , and the wireless communication module 160 .
  • the power management module 141 can also be used to monitor parameters such as battery capacity, battery cycle times, and battery health status (leakage, impedance).
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1 , the antenna 2 , the mobile communication module 150 , the wireless communication module 160 , a modem processor, a baseband processor, and the like.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 100 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 100 .
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signals modulated by the modem processor, and convert them into electromagnetic waves through the antenna 1 for radiation.
  • a modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low frequency baseband signal. Then the demodulator sends the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is passed to the application processor after being processed by the baseband processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 173 .
  • the modem processor may be a stand-alone device.
  • the wireless communication module 160 can provide applications on the electronic device 100 including wireless local area networks (wireless local area networks, WLAN) (such as Wi-Fi network), Bluetooth (blue tooth, BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), NFC, infrared technology (infrared, IR) and other wireless communication solutions.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150, and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with the network and other devices through wireless communication technology.
  • the electronic device 100 implements a display function through a GPU, a display screen 173 , and an application processor.
  • the GPU is a microprocessor for image processing, connected to the display screen 173 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 173 is used to display images, videos and the like.
  • the display screen 173 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active matrix organic light emitting diode or an active matrix organic light emitting diode (active-matrix organic light emitting diode, AMOLED), flexible light-emitting diode (flex light-emitting diode, FLED), Miniled, MicroLed, Micro-oLed, quantum dot light emitting diodes (quantum dot light emitting diodes, QLED), etc.
  • the electronic device 100 may include 1 or N display screens 173 , where N is a positive integer greater than 1.
  • the electronic device 100 can realize the shooting function through the ISP, the camera module 180 , the video codec, the GPU, the display screen 173 and the application processor.
  • the ISP is mainly used for processing the data fed back by the camera 193 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • the camera module 180 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193 , where N is a positive integer greater than 1.
  • the first camera 181 and the second camera 182 collect original images, and the camera module 180 performs image processing on the collected original images to obtain the first image and the second image.
  • the first image is generated according to the original image collected by the first camera 181
  • the second image is generated according to the original image collected by the second camera 182 .
  • the second camera 182 serves as an auxiliary camera, and the ISP can perform image processing on the second image and the data fed back by the second camera 182 to calculate the depth of field in the current shooting scene.
  • the first camera 181 serves as the main camera, and the ISP can determine the blurred part of the first image according to the calculated depth of field.
  • the ISP can also determine the target object that the main camera focuses on according to the overlapping portion in the first image and the second image. In this way, the ISP can process the first image based on a preset algorithm, so that the target object in the first image is imaged more clearly and the background part is blurred. Based on this, the ISP may generate a display image by processing the first image and the second image, and transmit the display image to the display screen, so that the display screen displays the display image.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs.
  • the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4 and so on.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of the electronic device 100 can be realized through the NPU, such as image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, so as to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, such as saving music, video and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the processor 110 executes various functional applications and data processing of the electronic device 100 by executing instructions stored in the internal memory 121 .
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 100 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the keys 170 include a power key, a volume key and the like.
  • the keys 170 may be mechanical keys or touch keys.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100 .
  • the motor 171 can generate a vibrating reminder.
  • the motor 171 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • the indicator 172 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the electronic device in the embodiment of the present application can be a mobile phone with a camera function, an action camera (such as a GoPro), a digital camera, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, and the like.
  • the touch screen of the mobile phone may include a display panel and a touch panel.
  • the display panel can display the interface, and the touch panel can detect the user's touch operation and report it to the mobile phone processor for corresponding processing.
  • the mobile phone provided in the embodiment of the present application is provided with cameras with different focal lengths.
  • the back of the mobile phone includes 4 cameras
  • the front of the mobile phone includes 2 cameras.
  • the four cameras on the back of the phone include the rear main camera, macro camera, wide-angle camera and depth camera.
  • the positional relationship of the four cameras on the back of the mobile phone is shown in Figure 5(a).
  • the first rear camera 501 shown is the rear main camera
  • the second rear camera 502 shown is the macro camera
  • the third rear camera 503 shown is the wide-angle camera
  • the fourth rear camera 504 shown is the depth camera.
  • the two cameras on the front of the mobile phone include a front main camera and a secondary camera.
  • the positional relationship between the two cameras on the front of the mobile phone is shown in (b) in Figure 5.
  • the first front camera 505 shown is the front main camera
  • the second front camera 506 shown is the secondary camera.
  • the mobile phone uses the camera on the back to generate images, and the mobile phone can call any two of the four cameras on the back to be in working condition.
  • the main camera on the back is used as the main camera, and the wide-angle camera is used as the auxiliary camera; or, the main camera on the back is used as the main camera, and the depth camera is used as the auxiliary camera.
  • the mobile phone can also use the two cameras on the front to generate images with a large-aperture background blur effect. The front main camera may serve as the main camera with the secondary camera as the auxiliary camera; or the secondary camera may serve as the main camera with the front main camera as the auxiliary camera.
  • a camera application (or other applications having a function of activating the camera of the mobile phone) is installed in the mobile phone, and an image is generated when the mobile phone runs the camera application.
  • the mobile phone can be instructed to start the camera application by means of touch operation, button operation, gesture operation or voice operation.
  • the mobile phone runs a camera application, which can display a preview image in real time.
  • the mobile phone can generate photos in various shooting modes. For example, portrait mode, large aperture mode, slow motion, panorama mode, etc.
  • the image generated by the camera application in the large aperture mode has a clear display of the target object and a blurred background effect.
  • the two cameras of the mobile phone are in working condition. Since the focal length of each camera is different, the field of view of each lens is also different. The greater the distance between the two cameras, the more obvious the difference between the fields of view of the images they collect. It should be noted that, when generating an image in the large aperture mode, the mobile phone needs to generate an image with a background blur effect based on the two images collected by the main camera and the auxiliary camera respectively. In this case, the greater the difference between the fields of view of the images captured by the two cameras, the more accurately the mobile phone can calculate the depth of field of a middle-distance scene.
  • however, when calculating the depth of field of a close-range scene, a depth-of-field blind spot may prevent the mobile phone from determining the depth of field, causing an error in the calculated depth of field of the close-range scene.
  • the mobile phone can first determine the distance between the current target object and the mobile phone. In this way, the mobile phone can determine whether the current shooting scene is a close-range scene, a middle-distance scene or a long-distance scene.
  • the mobile phone determines the main camera and auxiliary camera to be used according to the current shooting scene, so that the mobile phone can generate an image with a background blur effect based on the images collected by the main camera and auxiliary camera.
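The selection logic described above can be sketched as follows. This is a minimal illustration only; the 50 cm threshold and the camera names are assumptions taken from the examples later in the text, not the literal device firmware.

```python
# Hypothetical sketch of distance-based camera-pair selection for the
# large aperture (background blur) mode described in the text.
CLOSE_RANGE_CM = 50  # preset distance value (first distance value), per the examples

def select_cameras(distance_cm: float) -> tuple:
    """Return (main_camera, auxiliary_camera) for the current shooting scene."""
    if distance_cm <= CLOSE_RANGE_CM:
        # Close-range scene: the rear main camera and the wide-angle camera
        # are physically close, avoiding the depth-of-field blind spot.
        return ("rear_main", "wide_angle")
    # Medium/long-distance scene: the longer baseline to the depth camera
    # gives a more accurate depth-of-field calculation.
    return ("rear_main", "depth")

print(select_cameras(45))  # -> ('rear_main', 'wide_angle')
print(select_cameras(90))  # -> ('rear_main', 'depth')
```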
  • the image sensor included in the mobile phone may be a Quadra sensor (a four-in-one pixel sensor, that is, an image sensor of a larger size).
  • the Quadra Sensor can be connected with the camera for image processing on the image collected by the camera, and the Quadra Sensor can output the image in the first or second pixel binning mode.
  • the Quadra Sensor can output a higher-definition image while ensuring that the background blurred image is generated.
  • the specifications of the main camera on the back of the mobile phone are 108M (108 million pixels), and the camera uses a 3x3 Quadra sensor; the specification of the wide-angle camera (or super wide-angle camera) is 8M (8 million pixels); the specification of the depth camera is 2M (that is, the highest imaging resolution is 2 million pixels); and the specification of the macro camera is 2M (2 million pixels).
  • the configuration of the two front cameras of the mobile phone can be: the front main camera has a specification of 32M and uses a 2x2 Quadra sensor; the secondary camera is a wide-angle camera with a specification of 12M.
  • the distance between the main camera on the back and the wide-angle camera is medium.
  • the depth camera and the macro camera are equidistantly distributed on both sides of the wide-angle camera. Both the depth camera and the macro camera are farther away from the back main camera, and the distance from the back main camera to the depth camera is equal to the distance from the back main camera to the macro camera. Therefore, the distance between the main camera on the back and the depth camera (or macro camera) can be called a long distance.
  • the preset distance may also be called the first distance value.
  • the distance between the mobile phone and the target object to be photographed is within 50cm (which can be understood as a close-range shooting scene)
  • the combination of the main camera on the back and the wide-angle camera can be used.
  • the distance between the mobile phone 601 and the photographed target object 602 is 45 cm
  • the mobile phone runs the camera application, and the large aperture mode is selected, and it is determined that an image with a large aperture is about to be generated.
  • when the mobile phone displays the shooting interface, mark 61 is an aperture mark, and (b) in FIG. 6A shows that the current aperture size is f4 (the default aperture).
  • the current aperture size of the mobile phone is f4
  • the zoom ratio is 1×. It can be understood here that when the mobile phone is in the large aperture shooting mode, the default aperture of the mobile phone is f4 and the zoom ratio is 1×. In other implementations, the mobile phone can also set the default aperture to f4 and the zoom ratio to 2×.
  • the mobile phone obtains the distance value from the target object.
  • the mobile phone determines that the distance value between the mobile phone and the target object (the object to be photographed) is 45cm, so the scene is determined to be a close-range scene.
  • the mobile phone can select the main camera on the back as the main camera, and the wide-angle camera serves as the auxiliary camera.
  • the rear main camera acquires the first image
  • the wide-angle camera acquires the second image.
  • the mobile phone calculates the depth of field according to the second image
  • the mobile phone determines the overlapping area of the two images according to the first image and the second image, and the overlapping area of the two images includes the target object 602 to be photographed.
  • the main camera focuses on the target object 602, and according to the overlapping area the mobile phone displays the captured target object clearly while blurring the background part.
  • the user can check the large-aperture image effect through the display interface of the mobile phone and adjust the display effect if desired. For example, to adjust the size of the aperture, the user can click the aperture mark 61 in (b) of FIG. 6A. In response to the user's click operation on the aperture mark 61, the mobile phone can display a photo preview interface as shown in FIG. 6B. As shown in FIG. 6B, the shooting interface of the mobile phone includes an aperture adjustment axis, and the user can adjust the aperture size of the large-aperture image displayed on the mobile phone by adjusting the aperture adjustment axis.
  • the combination of the main camera on the back and the depth camera, or the combination of the main camera on the back and the macro camera can be used.
  • the distance between the mobile phone 601 and the target object 602 to be shot is 90 cm.
  • the mobile phone runs the camera application, and the large aperture mode is selected, and it is determined to use the large aperture to generate an image.
  • the mobile phone first acquires the distance value to the target object, and when the mobile phone determines that the distance to the target object is 90cm (greater than the preset distance value), it is determined to be a long-distance scene.
  • the main camera on the back is used as the main camera, and the depth camera is used as the auxiliary camera.
  • the main camera on the back collects the first image, and the depth camera collects the second image.
  • the mobile phone can generate an image with a blurred background display effect according to the first image and the second image.
  • the short distance means within 50cm.
  • the distance between the main camera on the back and the wide-angle camera is relatively short, so the mobile phone can use the combination of the main camera on the back and the wide-angle camera to avoid the depth-of-field blind spot that tends to occur at close range.
  • long distance means beyond 50cm. In this shooting scene, there may be obvious parallax between the two cameras when the mobile phone collects images.
  • the main camera on the back is the same distance from the depth camera and the macro camera respectively.
  • the distances from the main camera on the back to the depth camera and to the macro camera are equal, and by using the combination of the main camera on the back and the depth camera (or macro camera), the mobile phone can mitigate the problem of obvious parallax between the two cameras.
  • the depth of field calculation is accurate.
  • the mobile phone selects the main camera and auxiliary camera in the mobile phone according to different shooting scenarios (that is, different distances from the target object), and the image output method of the main camera sensor. For details, refer to the corresponding relationship in Table 1 below.
  • Table 1 Correspondence between shooting scenes and camera selection when mobile phones generate large aperture images
  • the method for the mobile phone to determine the shooting scene may be: the camera on the mobile phone automatically focuses to generate an image, and the mobile phone determines the distance value between the mobile phone and the target object according to the focus code of the camera.
  • the mobile phone can also calculate the distance value between the mobile phone and the target object according to a sensor (eg, a laser sensor). For example, when the mobile phone is in a shooting state and the mobile phone is in a large aperture mode, the sensor is activated to calculate the distance value between the mobile phone and the target object.
  • the electronic device increases the zoom factor in the process of capturing an image
  • the clarity of the image will be affected.
  • the binning method combines multiple pixels into one pixel, which improves the photosensitive performance of the image sensor and increases the signal-to-noise ratio.
  • the Remosaic method rearranges the pixels into a Bayer-format image, which improves the clarity of the image.
  • the image sensor collects the pixel image and transmits it to the ISP.
  • the image undergoes post-algorithm processing.
  • the Remosaic method rearranges the pixels into a Bayer-format image, and the Bayer-format image is also a pixel image.
  • the binning method synthesizes multiple pixels into one pixel to obtain a pixel-binned image, and this format of image is also a pixel image.
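As a simplified illustration of the binning operation just described: real Quadra sensors bin within same-color groups of the quad-Bayer pattern, but averaging each 2x2 block of a single-channel tile shows the principle (four pixels become one, at one quarter the resolution).

```python
# Minimal sketch of 2x2 pixel binning on a single-channel tile.
# (Simplification: a real sensor bins same-color quad-Bayer groups.)

def bin_2x2(img):
    """Combine each 2x2 block of pixels into one pixel by averaging."""
    h, w = len(img), len(img[0])
    return [
        [(img[r][c] + img[r][c + 1] + img[r + 1][c] + img[r + 1][c + 1]) // 4
         for c in range(0, w, 2)]
        for r in range(0, h, 2)
    ]

tile = [
    [10, 12, 20, 22],
    [14, 16, 24, 26],
    [30, 32, 40, 42],
    [34, 36, 44, 46],
]
print(bin_2x2(tile))  # -> [[13, 23], [33, 43]]
```

Remosaic is the opposite trade-off: the quad-Bayer pixels are rearranged into a full-resolution Bayer mosaic, keeping all pixels at the cost of lower per-pixel sensitivity.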
  • the main camera on the back of the mobile phone uses the first pixel signal combination method to process the pixel image to generate an original pixel image, and the mobile phone can further process the original pixel image to generate a large aperture mode image including the target object.
  • the mobile phone determines that the shooting scene is within 50cm, the mobile phone uses the main camera on the back as the main camera, and the wide-angle camera as the auxiliary camera.
  • the main camera of the mobile phone outputs an image, it can adopt the first pixel binning method or the second pixel binning method.
  • if the brightness of the scene currently captured by the mobile phone is less than a preset brightness threshold, the main camera of the mobile phone outputs an image using the first pixel binning method. If the mobile phone determines that the brightness of the current shooting scene is greater than the preset brightness threshold, the main camera of the mobile phone may output an image in the second pixel binning manner.
  • the main camera of the mobile phone uses the Remosaic method to output images, the pixels of the image are increased, and the clarity of the image is improved.
  • the mobile phone also needs to crop the output image so that the image displayed by the mobile phone meets the image display requirements.
  • the second image output method can still ensure that the output image has a good display effect. Therefore, in other implementations, if the target object is more than 50cm away and no zoom operation is received, the main camera on the back of the mobile phone can serve as the main camera, the depth camera can serve as the auxiliary camera, and the mobile phone can output images by binning. If the target object is beyond 50cm and the mobile phone receives a zoom operation (for example, the zoom is increased by 3 times, "3x zoom"), the mobile phone can use the main camera on the back as the main camera and the macro camera as the auxiliary camera, output the image using Remosaic, and crop the output image.
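Combining the distance condition and the zoom condition just described gives a small decision table; the following sketch encodes it under the same assumed 50 cm threshold and illustrative camera/mode names (not the actual device logic).

```python
# Hypothetical sketch of choosing (main camera, auxiliary camera, sensor
# output mode) from the distance value and the zoom ratio, per the text.

def choose_pipeline(distance_cm: float, zoom: float) -> tuple:
    if distance_cm <= 50:
        # Close-range scene: rear main + wide-angle, binning output.
        return ("rear_main", "wide_angle", "binning")
    if zoom > 1.0:
        # Beyond 50 cm with a zoom operation: Remosaic output, then crop
        # the full-resolution image to the target field of view.
        return ("rear_main", "macro", "remosaic+crop")
    # Beyond 50 cm, no zoom: binning output is sufficient.
    return ("rear_main", "depth", "binning")

print(choose_pipeline(90, 1.0))  # -> ('rear_main', 'depth', 'binning')
print(choose_pipeline(90, 3.0))  # -> ('rear_main', 'macro', 'remosaic+crop')
```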
  • when generating an image in the large aperture mode, the user may also need to adjust the FOV by zooming.
  • when the mobile phone is in the large aperture mode and receives the user's zoom operation, the mobile phone can use the Remosaic mode of the main camera to output images, which increases the effective pixels of the main camera and effectively improves the definition of the basic large-aperture image.
  • in this way, the contrast between the non-bokeh area (that is, the clearly displayed area) and the blurred background area in the image is more obvious.
  • when the mobile phone generates an image under a large aperture, the mobile phone displays a preview image in real time. Before generating a large-aperture image, the mobile phone can detect the current scene in real time, that is, the distance value between the mobile phone and the target object. It should be noted that, if the distance value between the mobile phone and the target object is close to 50cm, the distance between the mobile phone and the target object may change while the user holds the mobile phone and the preview image is displayed.
  • the scenes detected by the mobile phone may switch between close-range scenes (within 50cm) and medium/long-distance scenes (beyond 50cm), which would cause the main camera and auxiliary camera of the mobile phone to ping-pong switch.
  • the mobile phone can set a threshold protection range, for example, the threshold protection range is set to be the third distance value to the second distance value.
  • the third distance value is smaller than the first distance value (ie, the preset distance value), and the first distance value is smaller than the second distance value.
  • the threshold protection range may be between 45cm and 60cm. Specifically, when the current scene of the mobile phone is a close-range scene, that is, the distance between the mobile phone and the target object is within 50cm, and the mobile phone then detects that the distance value between the mobile phone and the target object is 60cm, the mobile phone determines that the current scene has changed and switches the working camera.
  • the main camera on the back of the mobile phone is used as the main camera, and the wide-angle camera is used as the auxiliary camera.
  • the mobile phone switches the working camera.
  • the main camera on the back of the mobile phone is set as the main camera, and the depth camera is the auxiliary camera. That is to say, in this case, the current shooting scene of the mobile phone is a medium/long-distance scene. If the distance between the mobile phone and the target object then changes from 60cm to 45cm, the mobile phone switches the currently working cameras again: the main camera on the back of the mobile phone serves as the main camera, and the wide-angle camera serves as the auxiliary camera.
  • the distance value between the mobile phone and the target object is higher than the preset distance threshold (ie, 60cm is higher than 50cm).
  • the distance value between the mobile phone and the target object is lower than the preset distance value (ie, 45cm is lower than 50cm).
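The threshold-protection behaviour above is a hysteresis band: the camera pair switches only when the distance crosses the outer bound for the current scene, and is held anywhere inside the band. A sketch under the stated 45 cm / 60 cm values (the scene labels are illustrative):

```python
# Sketch of the threshold-protection (hysteresis) logic: inside the
# 45-60 cm protection band the current camera pair is kept, preventing
# ping-pong switching near the 50 cm preset distance.
SWITCH_FAR_CM = 60   # second distance value
SWITCH_NEAR_CM = 45  # third distance value

def update_scene(current_scene: str, distance_cm: float) -> str:
    if current_scene == "close" and distance_cm >= SWITCH_FAR_CM:
        return "far"   # switch to rear main + depth camera
    if current_scene == "far" and distance_cm <= SWITCH_NEAR_CM:
        return "close"  # switch to rear main + wide-angle camera
    return current_scene  # inside the protection band: no switch

scene = "close"
for d in (48, 55, 60, 52, 45):
    scene = update_scene(scene, d)
    print(d, scene)
```

Note that a distance of 55 cm keeps whichever pair is currently active, even though it is above the 50 cm preset distance.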
  • the lens in a mobile phone is composed of multiple lens elements, so an effective focal length (Effective Focal Length, EFL) is usually specified.
  • the lens automatically focuses (Automatic Focus, AF) to 60cm
  • the depth of field corresponding to the image generated by the mobile phone is 53cm to 70cm.
  • the AF of the lens is 50cm
  • the depth of field corresponding to the image generated by the mobile phone is 45cm to 56cm.
  • the AF of the lens is 45cm
  • the depth of field corresponding to the image generated by the mobile phone is 41cm to 50cm. That is to say, among the data around the preset distance of 50cm, when the AF of the lens in the mobile phone is 45cm and 60cm, the depth-of-field ranges of the images generated by the mobile phone do not overlap. Based on this, the mobile phone can set the range of 45cm-60cm as the threshold protection range.
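The non-overlap claim above can be checked directly from the quoted depth-of-field figures:

```python
# Verify, from the depth-of-field figures quoted in the text, that the
# ranges at AF = 45 cm and AF = 60 cm do not overlap. This is what makes
# 45-60 cm a safe threshold protection range.
dof = {60: (53, 70), 50: (45, 56), 45: (41, 50)}  # AF distance -> (near, far), cm

def overlaps(a, b):
    """True if closed intervals a and b share any point."""
    return a[0] <= b[1] and b[0] <= a[1]

print(overlaps(dof[45], dof[60]))  # -> False: the band endpoints are disjoint
print(overlaps(dof[45], dof[50]))  # -> True: adjacent AF distances do overlap
```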
  • the embodiment of the present application also provides a photographing method.
  • the application of the method to a mobile phone is taken as an example.
  • the implementation flow of the method is shown in FIG. 8A , and the method may include step 801 - step 805 .
  • Step 801 The mobile phone starts the camera application.
  • Step 802 In response to the mode selection operation, the mobile phone uses the large aperture mode to display the preview image.
  • in the large aperture mode, the mobile phone can generate images with a bokeh effect.
  • the preview image displayed on the mobile phone and the image generated in response to the shooting key are images with a bokeh effect.
  • the "portrait” part of the image generated by the “portrait mode” in the mobile phone is clear, and the background part of the portrait is blurred. That is to say, the "portrait mode” in the camera application is also a kind mentioned in the embodiment of this application. Large aperture mode.
  • Step 803 the mobile phone determines the distance value between the target object and the mobile phone, and judges whether the distance value is greater than a preset distance value. If the distance value is greater than the preset distance value, execute step 804; if the distance value is less than or equal to the preset distance value, execute step 805.
  • because the FOVs of the cameras are different, the distance between the cameras affects the parallax of the collected images. Adjusting the cameras used according to the distance value between the mobile phone and the target object can improve the quality of the image generated by the mobile phone.
  • Step 804 The mobile phone uses the first camera as the main camera, and the second camera as the auxiliary camera to collect images and display the preview image.
  • Step 805 the mobile phone uses the first camera as the main camera, and the third camera as the auxiliary camera to collect images, and display the preview image.
  • the distance value between the first camera and the second camera is greater than the distance value between the first camera and the third camera.
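Steps 801-805 can be summarised as the following sketch, using the first/second/third camera naming from the text; the preset distance value is an assumed constant for illustration.

```python
# Sketch of steps 803-805: compare the measured distance with the preset
# distance value and pick the camera pair. The key constraint from the
# text: baseline(first, second) > baseline(first, third).
PRESET_DISTANCE_CM = 50  # assumed value for illustration

def preview_cameras(distance_cm: float) -> tuple:
    """Return (main_camera, auxiliary_camera) used to display the preview."""
    if distance_cm > PRESET_DISTANCE_CM:
        return ("first_camera", "second_camera")  # step 804: long baseline
    return ("first_camera", "third_camera")       # step 805: short baseline

print(preview_cameras(90))  # -> ('first_camera', 'second_camera')
print(preview_cameras(45))  # -> ('first_camera', 'third_camera')
```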
  • the embodiment of the present application also provides a photographing method, taking the implementation of the photographing method by a mobile phone as an example. As shown in FIG. 8B, the method includes step 8-1 to step 8-6.
  • Step 8-1 The mobile phone receives the first operation, and the first operation is used to trigger the mobile phone to enter the large aperture mode.
  • the mobile phone runs a camera application, and the mobile phone displays a photo preview interface.
  • the photo preview interface of the mobile phone includes a variety of shooting modes, and the shooting modes include a large aperture mode.
  • the first operation is the user's click operation on the "large aperture mode" in the shooting mode, so that the mobile phone performs the large aperture mode.
  • the mobile phone is running the first application (the first application is a non-camera application), and the first application has the permission to start the camera.
  • the first application calls the camera, and the mobile phone displays a shooting interface corresponding to the camera.
  • the shooting interface includes multiple shooting modes provided by the mobile phone, and the shooting modes include a large aperture mode.
  • the first operation is the user's click operation on the "large aperture mode" in the shooting mode, so that the mobile phone performs the large aperture mode.
  • Step 8-2 In response to the first operation, acquire the distance value between the mobile phone and the photographed target object.
  • a plurality of cameras are generally installed in a mobile phone.
  • when the mobile phone adopts the large aperture mode to capture images, the cameras activated by the mobile phone differ according to the distance value between the mobile phone and the target object to be photographed. Therefore, before the mobile phone displays the large-aperture preview image, the mobile phone acquires the distance value to the photographed target object in response to the first operation.
  • the mobile phone includes a laser sensor, and the mobile phone can obtain a distance value between the mobile phone and the target object through the laser sensor.
  • the main camera in the mobile phone is activated, the mobile phone obtains the images collected by the main camera, and, according to the focus in those images, the mobile phone can calculate the distance value to the target object.
  • Step 8-3 Determine whether the distance value exceeds the first distance value. If the distance value exceeds the first distance value, perform step 8-4; if the distance value does not exceed the first distance value, perform step 8-5.
  • the shooting distance is related to the camera activated by the mobile phone, and the specific corresponding relationship and implementation details have already been described, and will not be repeated here.
  • “exceeds” may indicate that the distance value is greater than the first distance value, or may indicate that the distance value is greater than or equal to the first distance value. For example, if the distance value is greater than the first distance value, the mobile phone performs step 8-4; if the distance value is less than or equal to the first distance value, the mobile phone performs step 8-5. For another example, if the distance value is greater than or equal to the first distance value, the mobile phone performs step 8-4; if the distance value is smaller than the first distance value, the mobile phone performs step 8-5.
  • Step 8-4 The mobile phone starts the main camera and the depth camera on the back to collect images of the target object.
  • Step 8-5 Start the main camera and wide-angle camera on the back of the mobile phone to collect images of the target object.
  • Step 8-6 The mobile phone generates a preview image including the target object according to the image of the target object, and displays the preview image.
  • the preview image is a preview image corresponding to the large aperture mode.
  • the image of the target object in the preview image is a clearly displayed part of the image
  • the non-target object is a blurred display part of the image.
  • the clearly displayed target object is in the aperture 301
  • the area outside the aperture 301 is a blurred display area.
  • when the mobile phone determines that the target object has changed, the mobile phone detects the current distance value to the target object, so that the activated cameras can be adjusted according to the current distance value. During the shooting process, frequent switching of the cameras may cause the displayed image of the mobile phone to flicker.
  • a threshold protection range can be set in the mobile phone. The setting of the threshold protection range in the mobile phone has been described in detail in the above scene switching, and will not be repeated here.
  • the method provided in the embodiment of the present application is described above by taking the electronic device as a mobile phone as an example. When the electronic device is other devices, the above method can also be used. I won't go into details here.
  • the electronic device provided in the embodiment of the present application includes corresponding hardware structures and/or software modules for performing various functions.
  • the embodiments of the present application can be implemented in the form of hardware, or a combination of hardware and computer software, in combination with the example units and algorithm steps described in the embodiments disclosed herein. Whether a certain function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Professionals and technicians may use different methods to implement the described functions for each specific application, but such implementations should not be regarded as exceeding the scope of the embodiments of the present application.
  • the embodiments of the present application may divide the above-mentioned electronic device into functional modules according to the above-mentioned method examples.
  • each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above-mentioned integrated modules can be implemented in the form of hardware or in the form of software function modules. It should be noted that the division of modules in the embodiment of the present application is schematic, and is only a logical function division, and there may be other division methods in actual implementation.
  • the chip system includes at least one processor 901 and at least one interface circuit 902 .
  • the processor 901 and the interface circuit 902 may be interconnected through wires.
  • interface circuit 902 may be used to receive signals from other devices, such as memory of an electronic device.
  • the interface circuit 902 may be used to send signals to other devices (such as the processor 901).
  • the interface circuit 902 can read instructions stored in the memory, and send the instructions to the processor 901 .
  • the electronic device may be made to execute various steps in the foregoing embodiments.
  • the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • the embodiment of the present application also provides a computer storage medium. The computer storage medium includes computer instructions, and when the computer instructions are run on the above-mentioned electronic device, the electronic device is caused to perform the various functions or steps performed by the mobile phone in the above-mentioned method embodiments.
  • the embodiment of the present application also provides a computer program product which, when run on a computer, causes the computer to execute each function or step performed by the mobile phone in the above method embodiments.
  • the disclosed devices and methods may be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division. In actual implementation, there may be other division methods.
  • multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be implemented through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical, or other forms.
  • a unit described as a separate component may or may not be physically separate, and a component displayed as a unit may be one physical unit or multiple physical units; that is, it may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • the integrated unit is realized in the form of a software function unit and sold or used as an independent product, it can be stored in a readable storage medium.
  • the technical solutions of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Computing Systems (AREA)
  • Studio Devices (AREA)
  • Indication In Cameras, And Counting Of Exposures (AREA)

Abstract

Embodiments of this application provide a shooting method and an electronic device, relating to the field of electronic technologies. The method can provide a captured image with large-aperture background blur, improve the sharpness of the object at the focal point, and enhance the blurring effect of the background. The method is applied to an electronic device that includes a first camera, a second camera, and a third camera. The method includes: the electronic device receives a first operation, where the first operation triggers the electronic device to enter a large-aperture mode; in response to the first operation, the electronic device obtains a distance value to a target object being photographed; if the distance value does not exceed a first distance value, the electronic device starts the first camera and the second camera to capture images of the target object; if the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object; and the electronic device displays a preview image including the target object, where the preview image is the preview image corresponding to the large-aperture mode.

Description

A shooting method and an electronic device
This application claims priority to Chinese patent application No. 202110662928.6, entitled "A shooting method and an electronic device", filed with the China National Intellectual Property Administration on June 15, 2021, and to Chinese patent application No. 202111081692.3, entitled "A shooting method and an electronic device", filed with the China National Intellectual Property Administration on September 15, 2021, both of which are incorporated herein by reference in their entirety.
Technical Field
Embodiments of this application relate to the field of electronic technologies, and in particular, to a shooting method and an electronic device.
Background
With the development of electronic technologies, more and more cameras are integrated into electronic devices, and the multiple cameras serve users' shooting needs in a variety of scenarios. For example, when shooting, an electronic device integrating multiple cameras can provide a large-aperture background-blur effect, a zoom effect, and the like.
When an electronic device captures an image, if the background in the displayed image is too cluttered and the user wants to highlight the subject in the image, the large-aperture background-blur mode can be used. In this way, the electronic device can blur the background while shooting and highlight the subject (the in-focus object) in the image, so that the subject is rendered sharply while the background is blurred. For example, in portrait shooting, the portrait is displayed clearly and the background is blurred, producing a large-aperture background-blur effect.
Summary
This application provides a shooting method and an electronic device, which can provide a captured image with large-aperture background blur, improve the sharpness of the object at the focal point, and enhance the blurring of the background.
To achieve the above technical objective, this application adopts the following technical solutions:
In a first aspect, this application provides a shooting method, which may be applied to an electronic device. The electronic device may include at least a first camera, a second camera, and a third camera. When the electronic device implements the method, the method may include: the electronic device receives a first operation, where the first operation triggers the electronic device to enter a large-aperture mode. It can be understood that the large-aperture mode is a shooting mode, and the first operation may trigger the electronic device to shoot images or video in the large-aperture mode. When the electronic device enters the large-aperture mode, the electronic device first obtains a distance value to the target object being photographed. If the distance value does not exceed a first distance value, the electronic device starts the first camera and the second camera to capture images of the target object, so that the electronic device can display a preview image including the target object. If the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object, so that the electronic device can display a preview image including the target object. It can be understood that this preview image is the preview image corresponding to the large-aperture mode.
The electronic device starts different cameras according to the distance value to the target object being photographed. It can be understood that when different cameras are started, the display effect of the displayed preview image differs. An image in the large-aperture mode includes a sharply displayed portion and a blurred portion; when generating the large-aperture-mode image, the electronic device generates the preview image including the target object from the images captured by two cameras. The parallax of the two cameras differs, and in some shooting scenarios this may make it difficult to derive depth of field from the captured images, affecting the display effect of images in the large-aperture mode. For example, when the distance between the electronic device and the target object does not exceed the preset distance, which is a close-range shooting scenario, the first camera and the second camera are started; the electronic device can then compute the depth of field, so that the depth of field in the displayed large-aperture image is distinct and the sharp portion and the blurred portion are rendered better.
With reference to the first aspect, in a possible implementation, before receiving the first operation, the electronic device may further perform the following: the electronic device starts the first camera to capture images of the target object.
Obtaining the distance value to the photographed target object specifically includes: the electronic device obtains the autofocus code (AF code) of the first camera, where the autofocus code indicates the distance value between the first camera and the target object. Based on the autofocus of the first camera, the electronic device may use the autofocus code as the distance value between the electronic device and the target object.
It can be understood that the electronic device can derive the distance value to the target object from the autofocus code of the first camera. Therefore, the electronic device may use the autofocus code as the distance value between the electronic device and the target object. Specifically, after obtaining the autofocus code of the first camera, the electronic device may use a preset focusing algorithm to obtain the distance value between the electronic device and the target object.
With reference to the first aspect, in another possible implementation, the electronic device may further include a distance sensor. Obtaining the distance value to the photographed target object specifically includes: the electronic device starts the distance sensor to determine the distance value between the electronic device and the target object.
With the distance sensor started, the electronic device can obtain the distance value to the target object directly from the data fed back by the distance sensor.
Specifically, when the electronic device obtains the distance value via the autofocus code of the first camera, the electronic device only needs to turn on the first camera to obtain the distance value. When the electronic device obtains the distance value via the distance sensor, the electronic device needs to direct the distance sensor at the target object, so that the distance sensor can accurately feed back the distance value between the electronic device and the target object.
With reference to the first aspect, in another possible implementation, when the electronic device displays the preview image including the target object, the zoom ratio of the electronic device is a first ratio. The first ratio may be a preset ratio, such as 1×. Displaying the image including the target object may specifically include: the electronic device outputs a raw pixel image using a first pixel-signal merging mode to generate the image of the target object, and displays the image including the target object.
With reference to the first aspect, in another possible implementation, when the electronic device displays the preview image including the target object, the zoom ratio of the electronic device is the first ratio, which may be a preset ratio such as 1×. If the distance value does not exceed the first distance value, after the electronic device starts the first camera and the second camera to capture images of the target object, displaying the image including the target object may specifically include: the electronic device outputs a raw pixel image using the first pixel-signal merging mode to generate the image of the target object, and displays the image including the target object.
When the zoom ratio of the electronic device is the first ratio, the electronic device may output a raw pixel image using the first pixel-signal merging mode, so that the electronic device generates the image including the target object from the raw pixel image. The first ratio is a preset ratio, and in the first pixel-signal merging mode the electronic device performs an analog binning operation on the pixel information in the pixel image to output the image including the target object.
With reference to the first aspect, in another possible implementation, after the electronic device displays the image including the target object, the method may further include: the electronic device receives a second operation, where the second operation instructs the electronic device to adjust the zoom ratio to a second ratio, and the second ratio is greater than the first ratio. In response to the second operation, the electronic device outputs a raw pixel image using a second pixel-signal merging mode to generate the image of the target object, and displays the image including the target object.
It can be understood that the second ratio is greater than the first ratio. That is, while displaying the preview image in the large-aperture mode, the electronic device receives a zoom operation entered by the user and displays the large-aperture-mode image at the second ratio. In the second pixel-signal merging mode, the electronic device can rearrange the pixels in the pixel image to improve the sharpness of the zoomed image. Therefore, in the high-ratio (that is, magnified) large-aperture mode, outputting the raw pixel image in the second pixel-signal merging mode can effectively improve the sharpness of the image displayed by the electronic device and make the preview image look better.
With reference to the first aspect, in another possible implementation, the first camera, the second camera, and the third camera are disposed on a first face of the electronic device; for example, the three cameras are disposed on the back of a mobile phone. The distance from the first camera to the second camera is smaller than the distance from the first camera to the third camera.
With reference to the first aspect, in another possible implementation, the first camera, the second camera, and the third camera are rear cameras of the electronic device. The first camera may be the rear main camera, the second camera may be a wide-angle camera, and the third camera may be a telephoto or depth camera.
With reference to the first aspect, in another possible implementation, the first camera, the second camera, and the third camera are disposed on the first face of the electronic device; when the electronic device captures images, the cameras disposed on the first face are turned on. The electronic device may further include a fourth camera and a fifth camera, which may be disposed on a second face of the electronic device, for example, the front of the electronic device.
After receiving the first operation, the electronic device may further perform the following: the electronic device receives a third operation, where the third operation triggers the electronic device to turn on the cameras on the second face. In response to the third operation, the electronic device starts the fourth camera and the fifth camera to capture images of the target object, where the fourth camera serves as the main camera and the fifth camera serves as the auxiliary camera; the main camera is used to focus on the target object, and the auxiliary camera is used to compute the depth of field. The electronic device displays an image including the target object, where the image is the preview image corresponding to the large-aperture mode and the zoom ratio corresponding to the preview image is the first ratio. The electronic device receives a second operation instructing it to adjust the zoom ratio to the second ratio, which is greater than the first ratio. In response to the second operation, the electronic device switches the fifth camera to be the main camera and uses the sixth camera as the auxiliary camera. Based on these operations, the electronic device displays an image including the target object, where the image is the preview image corresponding to the large-aperture mode and the zoom ratio corresponding to the preview image is the second ratio.
The fourth camera and the fifth camera are disposed on the other face (that is, the second face) of the electronic device; upon receiving the camera-switching operation, the electronic device starts the fourth camera and the fifth camera. After the switch, with the fourth camera as the main camera and the fifth camera as the auxiliary camera, the electronic device outputs images using the first pixel-signal merging mode. In this way, when the electronic device receives a zoom operation and switches from the first ratio to the second ratio, it switches the main and auxiliary cameras and changes to outputting images in the second pixel-signal merging mode, improving the display effect of the preview image in the large-aperture mode and the sharpness of the zoomed display.
With reference to the first aspect, in another possible implementation, in response to the second operation, the electronic device starts the fourth camera and the fifth camera to capture images of the target object. The method may further include: the electronic device outputs a raw pixel image using the first pixel-signal merging mode to generate the image of the target object, and displays the image including the target object.
When the electronic device displays the image including the target object, where the image is the preview image corresponding to the large-aperture mode and its zoom ratio is the second ratio, the method may further include: the electronic device outputs a raw pixel image using the second pixel-signal merging mode to generate the image of the target object, and displays the image of the target object.
With reference to the first aspect, in another possible implementation, if the distance value does not exceed the first distance value, the electronic device starts the first camera and the second camera to capture images of the target object. After the electronic device displays the image including the target object, the method further includes: the electronic device obtains the current distance value to the target object; if the current distance value exceeds a second distance value, the electronic device starts the first camera and the third camera to capture images of the target object, where the second distance value is greater than the first distance value.
With reference to the first aspect, in another possible implementation, if the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object. After displaying the image including the target object, the electronic device may further perform the following: the electronic device obtains the current distance value to the target object again; if the current distance value does not exceed a third distance value, the electronic device starts the first camera and the second camera to capture images of the target object, where the third distance value is smaller than the first distance value.
With reference to the first aspect, in another possible implementation, the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion. If the distance value does not exceed the first distance value, the electronic device starts the first camera and the second camera to capture images of the target object, where the first camera serves as the main camera and the second camera serves as the auxiliary camera.
When the electronic device displays the image including the target object, the electronic device specifically performs: the main camera captures a first image and the auxiliary camera captures a second image; the electronic device determines the target object from the first image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion; the electronic device generates the preview image corresponding to the large-aperture mode from the first image and the second image, and displays it.
With reference to the first aspect, in another possible implementation, the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion. If the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object, where the first camera serves as the main camera and the third camera serves as the auxiliary camera.
When the electronic device displays the image including the target object, the electronic device specifically performs: the main camera captures a first image and the auxiliary camera captures a second image; the main camera outputs a raw pixel image using the first pixel-signal merging mode, and the first image is obtained from the raw pixel image; the electronic device determines the target object from the raw pixel image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion; the electronic device generates the preview image corresponding to the large-aperture mode from the first image and the second image, and displays it.
With reference to the first aspect, in another possible implementation, the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion. In response to the third operation, the electronic device starts the fourth camera and the fifth camera to capture images of the target object, which specifically includes: the main camera captures a first image and the auxiliary camera captures a second image; the main camera outputs a raw pixel image using the first pixel-signal merging mode, and the first image is obtained from the raw pixel image; the electronic device determines the target object from the raw pixel image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion.
The electronic device generates the image of the target object from the first image and the second image. In response to the second operation, the electronic device switches the fifth camera to be the main camera and uses the sixth camera as the auxiliary camera, including: the main camera captures a first image and the auxiliary camera captures a second image; the main camera outputs a raw pixel image using the second pixel-signal merging mode, and the raw pixel image is cropped to obtain the first image; the electronic device determines the target object from the raw pixel image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion; the electronic device generates the image of the target object from the first image and the second image.
With reference to the first aspect, in another possible implementation, the first pixel-signal merging mode includes: the electronic device obtains a pixel image of the target object and performs an analog binning operation on the pixel information in the pixel image to output the image including the target object.
The second pixel-signal merging mode includes: the electronic device obtains a pixel image of the target object and performs a rearrangement operation on the pixels in the pixel image to output the image including the target object.
With reference to the first aspect, in another possible implementation, generating the image of the target object using the second pixel-signal merging mode and displaying the image including the target object includes: the electronic device outputs a raw image using the second pixel-signal merging mode; the electronic device crops the raw image to generate the image of the target object; and the electronic device displays the image including the target object.
In a second aspect, an embodiment of this application provides an electronic device, including: a first camera, a second camera, and a third camera, configured to capture images; a display screen, configured to display an interface; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory. When the computer programs are executed, the electronic device may perform the following steps: the electronic device receives a first operation, where the first operation triggers the electronic device to enter the large-aperture mode. It can be understood that the large-aperture mode is a shooting mode, and the first operation may trigger the electronic device to shoot images or video in the large-aperture mode. When entering the large-aperture mode, the electronic device first obtains a distance value to the target object being photographed. If the distance value does not exceed the first distance value, the electronic device starts the first camera and the second camera to capture images of the target object, so that the electronic device can display a preview image including the target object. If the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object, so that the electronic device can display a preview image including the target object. It can be understood that this preview image is the preview image corresponding to the large-aperture mode.
With reference to the second aspect, in a possible implementation, the electronic device may further perform the following: the electronic device starts the first camera to capture images of the target object.
Obtaining the distance value to the photographed target object specifically includes: the electronic device obtains the autofocus code (AF code) of the first camera, where the autofocus code indicates the distance value between the first camera and the target object. Based on the autofocus of the first camera, the electronic device may use the autofocus code as the distance value between the electronic device and the target object.
With reference to the second aspect, in another possible implementation, the electronic device may further include a distance sensor. Obtaining the distance value to the photographed target object specifically includes: the electronic device starts the distance sensor to determine the distance value between the electronic device and the target object.
With reference to the second aspect, in another possible implementation, when the electronic device displays the image including the target object, it may specifically include: the electronic device outputs a raw pixel image using the first pixel-signal merging mode to generate the image of the target object, and displays the image including the target object.
With reference to the second aspect, in another possible implementation, if the distance value does not exceed the first distance value, after the electronic device starts the first camera and the second camera to capture images of the target object, displaying the image including the target object may specifically include: the electronic device outputs a raw pixel image using the first pixel-signal merging mode to generate the image of the target object, and displays the image including the target object.
With reference to the second aspect, in another possible implementation, after the electronic device displays the image including the target object, the electronic device may further perform: the electronic device receives a second operation instructing it to adjust the zoom ratio to a second ratio greater than the first ratio; in response to the second operation, the electronic device outputs a raw pixel image using the second pixel-signal merging mode to generate the image of the target object, and displays the image including the target object.
With reference to the second aspect, in another possible implementation, the first camera, the second camera, and the third camera are disposed on the first face of the electronic device; for example, the three cameras are disposed on the back of a mobile phone. The distance from the first camera to the second camera is smaller than the distance from the first camera to the third camera.
With reference to the second aspect, in another possible implementation, the first camera, the second camera, and the third camera are rear cameras of the electronic device. The first camera may be the rear main camera, the second camera may be a wide-angle camera, and the third camera may be a telephoto or depth camera.
With reference to the second aspect, in another possible implementation, the first camera, the second camera, and the third camera are disposed on the first face of the electronic device; when the electronic device captures images, the cameras disposed on the first face are turned on. The electronic device may further include a fourth camera and a fifth camera, which may be disposed on the second face of the electronic device, for example, the front of the electronic device.
After receiving the first operation, the electronic device may further perform the following: the electronic device receives a third operation, where the third operation triggers the electronic device to turn on the cameras on the second face. In response to the third operation, the electronic device starts the fourth camera and the fifth camera to capture images of the target object, where the fourth camera serves as the main camera and the fifth camera serves as the auxiliary camera; the main camera is used to focus on the target object, and the auxiliary camera is used to compute the depth of field. The electronic device displays an image including the target object, where the image is the preview image corresponding to the large-aperture mode and the zoom ratio corresponding to the preview image is the first ratio. The electronic device receives a second operation instructing it to adjust the zoom ratio to the second ratio, which is greater than the first ratio. In response to the second operation, the electronic device switches the fifth camera to be the main camera and uses the sixth camera as the auxiliary camera. Based on these operations, the electronic device displays an image including the target object, where the image is the preview image corresponding to the large-aperture mode and the zoom ratio corresponding to the preview image is the second ratio.
With reference to the second aspect, in another possible implementation, if the distance value does not exceed the first distance value, the electronic device starts the first camera and the second camera to capture images of the target object. After the electronic device displays the image including the target object, the method further includes: the electronic device obtains the current distance value to the target object; if the current distance value exceeds a second distance value, the electronic device starts the first camera and the third camera to capture images of the target object, where the second distance value is greater than the first distance value.
With reference to the second aspect, in another possible implementation, if the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object. After displaying the image including the target object, the electronic device may further perform the following: the electronic device obtains the current distance value to the target object again; if the current distance value does not exceed a third distance value, the electronic device starts the first camera and the second camera to capture images of the target object, where the third distance value is smaller than the first distance value.
With reference to the second aspect, in another possible implementation, the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion. If the distance value does not exceed the first distance value, the electronic device starts the first camera and the second camera to capture images of the target object, where the first camera serves as the main camera and the second camera serves as the auxiliary camera.
When the electronic device displays the image including the target object, the electronic device specifically performs: the main camera captures a first image and the auxiliary camera captures a second image; the electronic device determines the target object from the first image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion; the electronic device generates the preview image corresponding to the large-aperture mode from the first image and the second image, and displays it.
With reference to the second aspect, in another possible implementation, the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion. If the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object, where the first camera serves as the main camera and the third camera serves as the auxiliary camera.
When the electronic device displays the image including the target object, the electronic device specifically performs: the main camera captures a first image and the auxiliary camera captures a second image; the main camera outputs a raw pixel image using the first pixel-signal merging mode, and the first image is obtained from the raw pixel image; the electronic device determines the target object from the raw pixel image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion; the electronic device generates the preview image corresponding to the large-aperture mode from the first image and the second image, and displays it.
With reference to the second aspect, in another possible implementation, the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion. In response to the third operation, the electronic device starts the fourth camera and the fifth camera to capture images of the target object, which specifically includes: the main camera captures a first image and the auxiliary camera captures a second image; the main camera outputs a raw pixel image using the first pixel-signal merging mode, and the first image is obtained from the raw pixel image; the electronic device determines the target object from the raw pixel image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion.
The electronic device generates the image of the target object from the first image and the second image. In response to the second operation, the electronic device switches the fifth camera to be the main camera and uses the sixth camera as the auxiliary camera, including: the main camera captures a first image and the auxiliary camera captures a second image; the main camera outputs a raw pixel image using the second pixel-signal merging mode, and the raw pixel image is cropped to obtain the first image; the electronic device determines the target object from the raw pixel image and designates the target object as the sharp display portion; the electronic device computes the depth of field from the second image and determines the blurred display portion; the electronic device generates the image of the target object from the first image and the second image.
With reference to the second aspect, in another possible implementation, the first pixel-signal merging mode includes: the electronic device obtains a pixel image of the target object and performs an analog binning operation on the pixel information in the pixel image to output the image including the target object.
The second pixel-signal merging mode includes: the electronic device obtains a pixel image of the target object and performs a rearrangement operation on the pixels in the pixel image to output the image including the target object.
With reference to the second aspect, in another possible implementation, generating the image of the target object using the second pixel-signal merging mode and displaying the image including the target object includes: the electronic device outputs a raw image using the second pixel-signal merging mode; the electronic device crops the raw image to generate the image of the target object; and the electronic device displays the image including the target object.
In a third aspect, this application further provides an electronic device, including: a camera, configured to capture images; a display screen, configured to display an interface; one or more processors; a memory; and one or more computer programs, where the one or more computer programs are stored in the memory and include instructions that, when executed by the electronic device, cause the electronic device to perform the shooting method of the first aspect and any of its possible designs.
In a fourth aspect, this application further provides a computer-readable storage medium, including computer instructions that, when run on a computer, cause the computer to perform the shooting method of the first aspect and any of its possible designs.
In a fifth aspect, an embodiment of this application provides a computer program product that, when run on a computer, causes the computer to perform the method performed by the electronic device in the first aspect and any of its possible designs.
In a sixth aspect, an embodiment of this application provides a chip system, which is applied to an electronic device. The chip system includes one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected through wires; the interface circuits are configured to receive signals from a memory of the electronic device and send signals to the processors, where the signals include computer instructions stored in the memory; when the processors execute the computer instructions, the electronic device is caused to perform the method of the first aspect and any of its possible designs.
It can be understood that, for the beneficial effects achievable by the electronic device of the second aspect, the electronic device of the third aspect, the computer-readable storage medium of the fourth aspect, the computer program product of the fifth aspect, and the chip system of the sixth aspect provided above, reference may be made to the beneficial effects of the first aspect and any of its possible designs, which are not repeated here.
Brief Description of the Drawings
FIG. 1A is a schematic diagram of a pixel-signal merging mode according to an embodiment of this application;
FIG. 1B is a schematic diagram of another pixel-signal merging mode according to an embodiment of this application;
FIG. 2 is a schematic diagram of lens imaging according to an embodiment of this application;
FIG. 3A is a schematic diagram of an electronic device photographing a target object according to an embodiment of this application;
FIG. 3B is a schematic diagram of a shooting interface of an electronic device according to an embodiment of this application;
FIG. 4 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of this application;
FIG. 5 is a schematic diagram of another electronic device according to an embodiment of this application;
FIG. 6A is a schematic diagram of a shooting scenario according to an embodiment of this application;
FIG. 6B is a schematic diagram of another shooting interface according to an embodiment of this application;
FIG. 7 is a schematic diagram of another shooting scenario according to an embodiment of this application;
FIG. 8A is a flowchart of a shooting method according to an embodiment of this application;
FIG. 8B is a flowchart of another shooting method according to an embodiment of this application;
FIG. 9 is a schematic structural diagram of a chip system according to an embodiment of this application.
Detailed Description
In the following, the terms "first" and "second" are used for description only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the technical features indicated. Thus, a feature limited by "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of this embodiment, unless otherwise stated, "multiple" means two or more.
To facilitate understanding of the solutions provided by the embodiments of this application, some terms involved in the embodiments are explained below:
First pixel-signal merging mode (Binning): when the electronic device shoots an image, the light reflected by the target object is collected by the camera and transmitted to the image sensor. The image sensor includes multiple photosensitive elements; the charge collected by each photosensitive element is one pixel, and an analog binning operation is performed on the pixel information. Specifically, binning can merge n×n pixels into one pixel. For example, binning can merge adjacent 3×3 pixels into one pixel; that is, the colors of the adjacent 3×3 pixels are presented as a single pixel.
For ease of understanding, the first pixel-signal merging mode may also be called the "first pixel arrangement mode", "first pixel combination mode", "first image readout mode", and so on.
For example, FIG. 1A is a schematic diagram of the process of reading out an image in binning mode after the electronic device captures it. Part (a) of FIG. 1A is a schematic diagram of 6×6 pixels, in which adjacent 3×3 pixels are merged into one pixel. Part (b) of FIG. 1A is a schematic diagram of the pixels read out in binning mode. For example, with binning, the 3×3 pixels in region 01 of part (a) of FIG. 1A form pixel G in part (b); the 3×3 pixels in region 02 form pixel B; the 3×3 pixels in region 03 form pixel R; and the 3×3 pixels in region 04 form pixel G.
Taking a Bayer-format output image as an example, a Bayer-format image is an image containing only red, blue, and green (the three primary colors). For example, pixel A formed from the 3×3 pixels in region 01 is red, pixel B formed from region 02 is green, pixel C formed from region 03 is green, and pixel D formed from region 04 is blue.
Second pixel-signal merging mode (Remosaic): when an image is read out in Remosaic mode, the pixels are rearranged into a Bayer-format image. For example, if one pixel in the image is composed of n×n pixels, Remosaic can rearrange that one pixel back into n×n pixels. For ease of understanding, the second pixel-signal merging mode may also be called the "second pixel arrangement mode", "second pixel combination mode", "second image readout mode", and so on.
For example, part (a) of FIG. 1B is a schematic diagram of pixels, each of which is merged from adjacent 3×3 pixels. Part (b) of FIG. 1B is a schematic diagram of the Bayer-format image read out in Remosaic mode. Specifically, in part (a) of FIG. 1B, pixel A is red, pixels B and C are green, and pixel D is blue. Each pixel in part (a) is divided into 3×3 pixels and rearranged separately; that is, when read out in Remosaic mode, the resulting image is the Bayer-format image shown in part (b) of FIG. 1B.
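To make the 3×3 readout above concrete, here is a minimal NumPy sketch of n×n binning. It is a rough stand-in only: real binning merges charge in the analog domain, which is approximated here by averaging, and the function name is invented for illustration:

```python
import numpy as np

def binning(raw: np.ndarray, n: int = 3) -> np.ndarray:
    """Merge each n*n block of pixels into one pixel (analog binning,
    approximated here by averaging the block's values)."""
    h, w = raw.shape
    return raw.reshape(h // n, n, w // n, n).mean(axis=(1, 3))

# Example: a 6x6 sensor readout (as in FIG. 1A) becomes a 2x2 binned image.
raw = np.arange(36, dtype=float).reshape(6, 6)
binned = binning(raw, 3)
print(binned.shape)  # (2, 2)
```

Remosaic goes the other way: one merged pixel is expanded back into its n×n Bayer cells, trading light sensitivity for resolution.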
Bokeh (out-of-focus imaging): an image contains sharp portions and blurred portions; the imaging of the blurred portions is called out-of-focus imaging, or bokeh. Specifically, the blurred portions of an image include foreground blur and background blur.
Focal point: the lens of an electronic device generally consists of at least one lens element, including convex and concave lenses. Taking a convex lens as an example, after the light beam reflected (or emitted) by the target object passes through the convex lens, the beam gradually converges to a point; this point is the focal point. After the light converges to this point, the beam diverges again as it continues to propagate.
Circle of confusion: taking a convex lens as an example, if the image plane (also called the projection plane) passes exactly through the focal point of the convex lens, the image formed on the image plane by the beam reflected from the target object is a sharp point. If the image plane does not pass through the focal point, whether the image plane lies between the focal point and the convex lens or beyond the focal point, the beam reflected by the target object does not image as a point on the image plane but as a circular region; that circular region is the circle of confusion. The circle of confusion is also called the blur circle, blur ring, blur spot, or scattering disk.
FIG. 2 is a schematic diagram of lens imaging. O1 is the optical axis of the lens L1, and the distance between the lens L1 and the focal point F is the focal length. During imaging, the electronic device may display a preview image containing a sharp circle-of-confusion region; when the human eye perceives the image within the circle of confusion as sharp, the circle of confusion is called the permissible circle of confusion. Between the focal point F and the lens L1 there is a front circle of confusion C1 (the largest permissible circle of confusion between the lens L1 and the focal point F), and behind the focal point F there is a rear circle of confusion C2 (the largest permissible circle of confusion on the side away from the lens L1 and the focal point F). The distance from the center of the front circle C1 to the center of the rear circle C2 is the depth of focus, that is, the distance S1 in FIG. 2.
Depth of field: during imaging, the light reflected by the target object propagates to the imaging plane, where it is collected. When the imaging plane passes through the focal point, the electronic device can obtain a sharp image. It can be understood that when the target object lies within a certain region before and after the focusing point, the electronic device can still obtain a sharp image of it. This region is called the depth of field; taking the lens side as the near side, the sharp imaging range from the focal point toward the near point is called the front depth of field, and the imaging range from the focal point to the farthest point that still images sharply is called the rear depth of field.
For example, FIG. 3A is a schematic diagram of an electronic device photographing a target object. O1 is the optical axis of the lens L1; suppose the imaging plane is located at the focal point F of the lens L1. f denotes the focusing point of the lens L1, and the distance S2 is the depth of field. That is, when the target object lies within the range between M1 and M2, the light it reflects can pass through the lens L1 to the imaging plane, so that the electronic device obtains a sharp image of the target object. As shown in FIG. 3A, there are a near point A and a far point B, and the light reflected from both can be projected through the lens L1 onto the imaging plane; A' is the image point of the near point A, and B' is the image point of the far point B.
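The near and far limits of the depth-of-field range S2 can be written with the standard thin-lens relations. These are textbook formulas, not given in the text itself; the symbols are assumptions: f is the focal length, N the f-number, c the permissible circle-of-confusion diameter, s the focusing distance, and H the hyperfocal distance.

```latex
H = \frac{f^{2}}{N\,c} + f, \qquad
D_{\mathrm{near}} = \frac{s\,(H - f)}{H + s - 2f}, \qquad
D_{\mathrm{far}} = \frac{s\,(H - f)}{H - s} \quad (s < H)
```

The depth of field is then D_far − D_near, which shrinks as the aperture opens (N decreases); this is why the large-aperture mode yields a shallow depth of field and a blurred background.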
Aperture: a device for controlling the light beam passing through the lens, generally disposed inside the lens. For example, the aperture may consist of several rolled metal blades. These blades form a hole of adjustable size; when the electronic device adjusts the aperture, the blades rotate to adjust the size of the hole, thereby adjusting the shooting aperture.
The shooting method provided by the embodiments of this application can be applied to an electronic device including multiple cameras. Multiple shooting modes can be set in the electronic device, such as portrait mode, large-aperture mode, professional mode, and night mode. When a user shoots an image with the electronic device and wants to highlight the subject in the image, the user can select the large-aperture mode. When the electronic device generates an image in the large-aperture mode, the depth of field in the captured image becomes shallow, so that the subject focused on by the lens (the in-focus object) is sharp, while other objects outside the focus range are blurred in the displayed frame to highlight the in-focus subject. That is, in an image shot in the large-aperture mode, the subject is sharp and the background is blurred.
For example, when a mobile phone shoots a target object in the large-aperture mode, the phone runs the camera application and, in response to the user's selection of the shooting mode, shoots in the large-aperture mode. FIG. 3B is a schematic diagram of the photo interface in the large-aperture mode. In FIG. 3B, 301 indicates the aperture size, that is, the sharply displayed portion of the generated large-aperture image; 302 indicates the aperture adjustment area, and FIG. 3B shows the current aperture as f/4. If the user wants to modify the aperture, the user can slide the numbers or dot markers in area 302; when the phone receives the user's aperture adjustment, the aperture size indicated by 301 in the displayed shooting interface changes. As shown in FIG. 3B, the aperture of the phone can be adjusted between f/0.95 and f/16; the smaller the aperture number, the larger the aperture.
It should be noted that when an electronic device with multiple cameras generates a captured image in the large-aperture mode, it may use dual cameras (two cameras working simultaneously) to generate the image. One of the two cameras is the main camera (hereinafter "main camera") and the other is the auxiliary camera (hereinafter "auxiliary camera"). When the electronic device shoots in the large-aperture mode, both the main camera and the auxiliary camera are turned on and working. The electronic device can then obtain the images captured by both; since they are different cameras, their focal points differ, and the fields of view (Field of View, FOV) of the images they capture also differ.
In the specific image-generation process, the electronic device can use an image-cropping algorithm to determine the overlapping portion of the first image captured by the main camera and the second image captured by the auxiliary camera. Further, the electronic device determines the target object from the main camera's focus and thus the sharply displayed portion of the image; it computes the depth of field from the auxiliary camera's focus to determine the blurred portion of the image. Based on a blurring algorithm, the electronic device blurs the portion that should be displayed blurred, so that part of the displayed image has a blur effect. In this way, the electronic device forms a display image with large-aperture background blur, in which the in-focus target object is sharp and the background is blurred.
The implementation of the embodiments of this application will be described below with reference to the accompanying drawings.
Refer to FIG. 4, which is a schematic structural diagram of an electronic device according to an embodiment of this application.
The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, a key 170, a motor 171, an indicator 172, a display screen 173, a camera module 180, a sensor module 190, and the like. The sensor module 190 may include a pressure sensor, a gyroscope sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a temperature sensor, a touch sensor, an ambient light sensor, and the like.
The camera module 180 may include 2 to N cameras; for example, the camera module 180 includes a first camera 181 and a second camera 182, where the first camera 181 is the main camera and the second camera 182 is the auxiliary camera. When the electronic device generates an image, it can invoke the first camera 181 and the second camera 182: the electronic device can compute the depth of field from the image captured by the second camera 182 and generate the preview image (or captured image) from the image captured by the first camera 181.
It can be understood that the structure illustrated in this embodiment of the present invention does not constitute a specific limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and the like. Different processing units may be independent devices or may be integrated in one or more processors.
The controller may be the nerve center and command center of the electronic device 100. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache. The memory may store instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs the instructions or data again, it can call them directly from the memory, avoiding repeated access and reducing the waiting time of the processor 110, thus improving system efficiency.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, and the like.
It can be understood that the interface connection relationships between the modules illustrated in this embodiment are only schematic and do not constitute a structural limitation on the electronic device 100. In other embodiments of this application, the electronic device 100 may also adopt interface connection methods different from those in the above embodiments, or a combination of multiple interface connection methods.
The charging management module 140 is configured to receive charging input from a charger. The charger may be a wireless charger or a wired charger. In some wired charging embodiments, the charging management module 140 may receive charging input from a wired charger through the USB interface 130. In some wireless charging embodiments, the charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While charging the battery 142, the charging management module 140 may also supply power to the electronic device through the power management module 141.
The power management module 141 is configured to connect the battery 142 and the charging management module 140 to the processor 110. The power management module 141 receives input from the battery 142 and/or the charging management module 140 and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 173, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor parameters such as battery capacity, battery cycle count, and battery health (leakage, impedance).
The wireless communication function of the electronic device 100 may be implemented through the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and the like.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 100 may be used to cover a single communication band or multiple communication bands. Different antennas may also be multiplexed to improve antenna utilization; for example, the antenna 1 may be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 150 can provide wireless communication solutions applied to the electronic device 100, including 2G/3G/4G/5G. The mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like. The mobile communication module 150 may receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 150 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna 1.
The modem processor may include a modulator and a demodulator. The modulator is configured to modulate the low-frequency baseband signal to be sent into a medium-high-frequency signal. The demodulator is configured to demodulate the received electromagnetic wave signal into a low-frequency baseband signal, which is then transmitted to the baseband processor for processing. After processing by the baseband processor, the low-frequency baseband signal is passed to the application processor, which outputs sound signals through audio devices (not limited to the speaker 170A and the receiver 170B) or displays images or video through the display screen 173. In some embodiments, the modem processor may be an independent device.
The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as Wi-Fi networks), Bluetooth (BT), global navigation satellite systems (GNSS), frequency modulation (FM), NFC, infrared (IR), and the like. The wireless communication module 160 may be one or more devices integrating at least one communication processing module.
In some embodiments, the antenna 1 of the electronic device 100 is coupled to the mobile communication module 150 and the antenna 2 is coupled to the wireless communication module 160, so that the electronic device 100 can communicate with networks and other devices through wireless communication technologies.
The electronic device 100 implements the display function through the GPU, the display screen 173, the application processor, and the like. The GPU is a microprocessor for image processing, connecting the display screen 173 and the application processor. The GPU performs mathematical and geometric computation for graphics rendering. The processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 173 is configured to display images, video, and the like, and includes a display panel. The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, quantum dot light-emitting diodes (QLED), and the like. In some embodiments, the electronic device 100 may include 1 or N display screens 173, where N is a positive integer greater than 1.
The electronic device 100 implements the shooting function through the ISP, the camera module 180, the video codec, the GPU, the display screen 173, the application processor, and the like.
The ISP is mainly used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the optical signal is converted into an electrical signal, and the photosensitive element passes the electrical signal to the ISP, which converts it into an image visible to the naked eye. The ISP can also apply algorithmic optimization to the image's noise, brightness, and skin tone, as well as to parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
The camera module 180 is used to capture static images or video. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the optical signal into an electrical signal and passes it to the ISP to convert into a digital image signal. The ISP outputs the digital image signal to the DSP for processing, and the DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV. In some embodiments, the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
For example, the first camera 181 and the second camera 182 capture raw images, and the camera module 180 performs image processing on the captured raw images to obtain a first image and a second image, where the first image is generated from the raw image captured by the first camera 181 and the second image is generated from the raw image captured by the second camera 182. With the second camera 182 serving as the auxiliary camera, the ISP can process the second image and the data fed back by the second camera 182 to compute the depth of field of the current shooting scene. With the first camera 181 serving as the main camera, the ISP can determine, from the computed depth of field, the portion of the first image to be blurred. Further, the ISP can determine the target object focused by the main camera from the overlapping portion of the first image and the second image. In this way, the ISP can process the first image based on a preset algorithm so that the target object in the first image images more sharply and the background is more blurred. On this basis, the ISP processes the first image and the second image to generate a display image and transmits it to the display screen, so that the display screen displays it.
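The main/auxiliary composition described above can be caricatured in a few lines. This is purely illustrative and not the ISP's actual algorithm: the box blur stands in for the blurring algorithm, and the per-pixel depth map stands in for the depth of field computed from the auxiliary camera's image.

```python
import numpy as np

def box_blur(img: np.ndarray, k: int = 5) -> np.ndarray:
    """Naive box blur standing in for the ISP's blurring algorithm."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def compose_bokeh(main_img: np.ndarray, depth: np.ndarray,
                  focus_depth: float, dof: float) -> np.ndarray:
    """Keep pixels near the focus depth sharp; blur the rest.
    `depth` plays the role of the depth information derived from the
    auxiliary camera (here just an array of per-pixel distances)."""
    in_focus = np.abs(depth - focus_depth) <= dof
    blurred = box_blur(main_img)
    return np.where(in_focus, main_img, blurred)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
depth = np.full((64, 64), 2.0)
depth[:, 32:] = 0.5              # right half is near the camera (in focus)
out = compose_bokeh(img, depth, focus_depth=0.5, dof=0.2)
assert np.allclose(out[:, 32:], img[:, 32:])  # in-focus half unchanged
```

The real pipeline additionally crops the two views to their overlap and feathers the blur with distance from the focus plane; this sketch only shows the hard in-focus/out-of-focus split.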
The digital signal processor is configured to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor performs Fourier transform and the like on the frequency point energy.
The video codec is configured to compress or decompress digital video. The electronic device 100 may support one or more video codecs, so that it can play or record video in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The NPU is a neural-network (NN) computing processor. By drawing on the structure of biological neural networks, for example the transfer pattern between neurons in the human brain, it processes input information quickly and can continuously self-learn. Applications such as intelligent cognition of the electronic device 100, for example image recognition, face recognition, speech recognition, and text understanding, can be implemented through the NPU.
The external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100. The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and video in the external memory card.
The internal memory 121 may be used to store computer-executable program code, which includes instructions. By running the instructions stored in the internal memory 121, the processor 110 executes the various functional applications and data processing of the electronic device 100. The internal memory 121 may include a program storage area and a data storage area. The program storage area may store the operating system and applications required by at least one function (such as a sound playback function and an image playback function). The data storage area may store data created during the use of the electronic device 100 (such as audio data and a phone book). In addition, the internal memory 121 may include a high-speed random access memory and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or a universal flash storage (UFS).
The keys 170 include a power key, volume keys, and the like. The keys 170 may be mechanical keys or touch keys. The electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
The motor 171 can generate vibration alerts. The motor 171 may be used for incoming-call vibration alerts and for touch vibration feedback. For example, touch operations acting on different applications (such as photographing and audio playback) may correspond to different vibration feedback effects.
The indicator 172 may be an indicator light and may be used to indicate the charging status and battery level changes, as well as messages, missed calls, notifications, and the like.
It should be noted that the electronic device in the embodiments of this application may be a mobile phone with a photographing function, an action camera (GoPro), a digital camera, a tablet computer, a desktop, laptop, or handheld computer, a notebook computer, a vehicle-mounted device, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) / virtual reality (VR) device, or the like; the embodiments of this application place no particular restriction on the specific form of the electronic device.
The shooting method provided by the embodiments of this application is described below taking a mobile phone with the structure shown in FIG. 4 as the electronic device. The touchscreen of the phone may include a display panel and a touch panel. The display panel displays the interface; the touch panel detects the user's touch operations and reports them to the phone's processor for corresponding processing.
The mobile phone provided in the embodiments of this application is equipped with cameras of different focal lengths. For example, the back of the phone includes four cameras and the front includes two. The four rear cameras include a rear main camera, a macro camera, a wide-angle camera, and a depth camera. Their positions are shown in part (a) of FIG. 5: the first rear camera 501 is the rear main camera, the second rear camera 502 is the macro camera, the third rear camera 503 is the wide-angle camera, and the fourth rear camera 504 is the depth camera. The two front cameras include a front main camera and a secondary camera; their positions are shown in part (b) of FIG. 5: the first front camera 505 is the front main camera and the second front camera 506 is the secondary camera.
It should be noted that when the phone generates an image with the large-aperture blur effect, two cameras are working. Specifically, when the phone uses the rear cameras to generate the image, it can put any two of the four rear cameras to work: for example, the rear main camera as the main camera and the wide-angle camera as the auxiliary camera; or the rear main camera as the main camera and the depth camera as the auxiliary camera. In addition, the phone can also use the two front cameras to generate an image with the large-aperture background-blur effect: either the front main camera as the main camera and the secondary camera as the auxiliary camera, or the secondary camera as the main camera and the front main camera as the auxiliary camera.
For example, a camera application (or another application with the permission to start the phone's camera) is installed in the phone, and the phone runs the camera application to generate images. The phone may be instructed to start the camera application by a touch operation, a key operation, a gesture operation, a voice operation, or the like. While running the camera application, the phone can display a preview image in real time. The phone can generate photos in multiple shooting modes, for example portrait mode, large-aperture mode, slow motion, and panorama mode. An image generated by the camera application in the large-aperture mode has the target object displayed sharply and the background blurred.
When the phone generates an image in the large-aperture mode, two of its cameras are working. The focal length of each camera is different, and the field of view of each lens is different. The greater the distance between the two cameras, the more pronounced the difference in field of view between the images they capture. It should be noted that, to generate an image in the large-aperture mode, the phone needs to generate the background-blurred image from the two images captured by the main camera and the auxiliary camera. In this case, the greater the difference in field of view between the two captured images, the more accurately the phone computes the depth of field of a medium-distance scene. However, if the difference in field of view is too great, the phone may, because of a depth-of-field blind zone, be unable to determine the depth of field of a close-range scene when computing it, causing its computation for close-range scenes to go wrong.
On this basis, when the phone generates an image in the large-aperture mode, it can first determine the distance between the current target object and the phone. The phone can then determine whether the current shooting scenario is close-range, medium-range, or long-range, and select the main camera and the auxiliary camera to use according to the current scenario, so as to generate the image with the background-blur effect from the images captured by the main and auxiliary cameras.
In some implementations, the image sensor included in the phone may be a Quadra sensor (an image sensor of larger size). The Quadra sensor can be connected to the camera and used to process the images captured by the camera; the Quadra sensor can output images in the first pixel-signal merging mode or the second pixel-signal merging mode. While the phone generates images in the large-aperture mode, if it receives a zoom operation from the user, the Quadra sensor can output images of higher sharpness while still producing the background-blurred image.
For example, among the rear cameras of the phone, the rear main camera has a specification of 108M (108 megapixels) and uses a 3x3 Quadra sensor; the wide-angle camera (also called the ultra-wide-angle camera) is 8M (8 megapixels); the depth camera is 2M (a maximum imaging resolution of 2 megapixels); and the macro camera is 2M (2 megapixels). The two front cameras may be configured as follows: the front main camera is 32M and uses a 2x2 Quadra sensor; the secondary camera is a wide-angle camera of 12M.
As shown in part (a) of FIG. 5, among the four rear cameras of the phone, the distance between the rear main camera and the wide-angle camera is medium. Relative to the wide-angle camera, the depth camera and the macro camera are distributed equidistantly on its two sides. The depth camera and the macro camera are both relatively far from the rear main camera, and the distance from the rear main camera to the depth camera equals the distance from the rear main camera to the macro camera. Therefore, the distance between the rear main camera and the depth camera (or macro camera) can be called a long distance.
For example, take a preset distance (or first distance value) of 50 centimeters (cm). When the distance between the phone and the photographed target object is within 50 cm (which can be understood as a close-range shooting scenario), the combination of the rear main camera and the wide-angle camera can be used. As shown in part (a) of FIG. 6A, the distance between the phone 601 and the photographed target object 602 is 45 cm; part (b) of FIG. 6A illustrates the image generated by the phone. Specifically, the phone runs the camera application, the large-aperture mode is selected, and the phone determines that a large-aperture image is to be generated. When the phone displays the shooting interface, 61 is the aperture indicator; part (b) of FIG. 6A shows the current aperture as f/4 (the default aperture). As shown in part (b) of FIG. 6A, in the photo interface the current aperture of the phone is f/4 and the zoom ratio is 1×. It can be understood that when the phone is in the large-aperture shooting mode, the default aperture of the phone is f/4 and the zoom ratio is 1×. In other implementations, the phone may also set the default aperture to f/4 and the zoom ratio to 2×.
Specifically, the phone obtains the distance value to the target object. When the phone determines that the distance value to the target object (the photographed object) is 45 cm, it determines a close-range scenario, so the phone can select the rear main camera as the main camera and the wide-angle camera as the auxiliary camera. The rear main camera captures the first image and the wide-angle camera captures the second image. The phone computes the depth of field from the second image and determines the overlapping region of the two images from the first and second images; the overlapping region of both images includes the photographed target object 602. The main camera focuses on the target object 602, and based on the overlapping region the phone displays the photographed target object sharply and blurs the background portion.
It can be understood that when the phone is in the large-aperture mode and displaying the large-aperture preview image, the user can view the large-aperture effect on the phone's display and may want to adjust its display effect. For example, to adjust the aperture size, the user can tap the aperture indicator 61 in part (b) of FIG. 6A. In response to the user's tap on the aperture indicator 61, the phone can display the photo preview interface shown in FIG. 6B. As shown in FIG. 6B, the shooting interface includes an aperture adjustment axis, which the user can slide to adjust the aperture size in the displayed large-aperture image.
When the distance between the phone and the photographed target object exceeds 50 cm (which can be understood as a long-range shooting scenario), the combination of the rear main camera and the depth camera, or of the rear main camera and the macro camera, can be used. Specifically, FIG. 7 is a schematic diagram of a shooting scenario in which the distance between the phone 601 and the photographed target object 602 is 90 cm. The phone runs the camera application, the large-aperture mode is selected, and the phone determines to generate the image with the large aperture. The phone first obtains the distance value to the target object; when it determines the distance to the target object to be 90 cm (greater than the preset distance value), it determines a long-range scenario. The rear main camera serves as the main camera and the depth camera as the auxiliary camera; the rear main camera captures the first image and the depth camera captures the second image. From the first and second images, the phone can generate an image with the background-blur display effect.
In some implementations, close range means within 50 cm; in this shooting scenario, it is important that the phone not fall into a depth-of-field blind zone when capturing images. Since the rear main camera and the wide-angle camera are close together, the phone can use their combination to avoid the situation where depth of field cannot be derived. Long range means beyond 50 cm; in this shooting scenario, obvious parallax between the two cameras may occur when capturing images. The rear main camera is at the same distance from the depth camera as from the macro camera. In such a scenario, using the combination of the rear main camera and the depth camera (or the macro camera) can mitigate the problem of obvious parallax between the two cameras, and the depth-of-field computation is accurate.
On this basis, the phone selects the main camera and the auxiliary camera, and the image output mode of the main camera's sensor, according to the shooting scenario (that is, the distance to the target object). For details, refer to the correspondence in Table 1 below.
Table 1: Correspondence between shooting scenarios and camera selection when the phone generates a large-aperture image
It should be noted that the phone may determine the shooting scenario as follows: the phone's camera autofocuses to generate an image, and the phone determines the distance value between the phone and the target object from the camera's focusing code. Alternatively, the phone can compute the distance value between the phone and the target object using a sensor (for example, a laser sensor). For example, when the phone is in the shooting state and in the large-aperture mode, it starts the sensor to compute the distance value between the phone and the target object.
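The distance-based camera selection described above can be sketched as follows. This is a minimal illustration, not the phone's actual implementation; the 50 cm threshold comes from the example above, and the function and camera names are invented for the sketch:

```python
CLOSE_RANGE_CM = 50  # preset first distance value from the example above

def pick_cameras(distance_cm: float) -> tuple:
    """Return (main_camera, auxiliary_camera) for large-aperture mode."""
    if distance_cm <= CLOSE_RANGE_CM:
        # Close range: rear main + wide-angle are physically close,
        # avoiding the depth-of-field blind zone.
        return ("rear_main", "wide_angle")
    # Long range: the larger baseline to the depth camera mitigates
    # parallax problems and keeps the depth computation accurate.
    return ("rear_main", "depth")

print(pick_cameras(45))  # ('rear_main', 'wide_angle')
print(pick_cameras(90))  # ('rear_main', 'depth')
```

The auxiliary camera could equally be the macro camera beyond 50 cm, as Table 1 indicates; the sketch fixes one choice for brevity.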
It can be understood that, during shooting, increasing the zoom ratio affects the sharpness of the image. In medium- or low-brightness shooting scenarios, using binning to merge multiple pixels into one can improve the light sensitivity of the image sensor and increase the signal-to-noise ratio. In high-brightness shooting scenarios, using Remosaic to rearrange one pixel into a Bayer-format image can improve the sharpness of the image.
When the electronic device outputs an image in the first pixel-signal merging mode, the image sensor captures the pixel image and transmits it to the ISP; the ISP can process the captured pixel image in the first pixel-signal merging mode, which also facilitates the ISP's subsequent algorithmic processing of the image. For example, with Remosaic, one pixel is rearranged into a Bayer-format image, and a Bayer-format image is also a kind of pixel image; likewise, with binning, multiple pixels can be merged into one to obtain a pixel-processed image, and an image in this format is also a kind of pixel image.
In a possible implementation, the phone's rear main camera processes the pixel image in the first pixel-signal merging mode to generate a raw pixel image, and the phone can further process the raw pixel image to generate the large-aperture-mode image including the target object.
In the example scenario above, when the phone determines that the shooting scenario is within 50 cm, it uses the rear main camera as the main camera and the wide-angle camera as the auxiliary camera. When outputting images, the phone's main camera may use either the first pixel-signal merging mode or the second pixel-signal merging mode.
In some implementations, if the brightness of the current shooting scene is lower than a preset brightness threshold, the phone's main camera outputs images in the first pixel-signal merging mode. If the phone determines that the brightness of the current shooting scene is higher than the preset brightness threshold, the main camera can output images in the second pixel-signal merging mode. When the main camera outputs images in Remosaic mode, the pixel count of the image increases, improving the sharpness of the image. When outputting images based on Remosaic, the phone also needs to crop the output image so that the displayed image meets the display requirements.
Following the above, if the phone is under a zoom operation, the second image output mode can still guarantee that the output image has a good display effect. Therefore, in other implementations, if the phone is beyond 50 cm and there is no zoom operation, the phone can use the rear main camera as the main camera and the depth camera as the auxiliary camera, outputting images in binning mode. If the phone is beyond 50 cm and receives a zoom operation (for example, zooming in by 3×, "3× zoom"), the phone can use the rear main camera as the main camera and the macro camera as the auxiliary camera, output images in Remosaic mode, and crop the output images.
It can be understood that when generating images in the large-aperture mode, the user may also need to zoom to adjust the FOV. In this case, with the phone in the large-aperture mode and receiving the user's zoom operation, the phone can output the image using the main camera's Remosaic mode, increasing the effective pixels of the main camera and effectively improving the sharpness of the base large-aperture image. In addition, the non-bokeh region (that is, the sharply displayed region) becomes sharper, making the contrast between the non-bokeh region and the blurred background region in the image more pronounced.
When the phone generates images with the large aperture, it displays the preview image in real time. Before the phone generates the large-aperture image, it can detect the current scenario in real time, that is, the distance value between the phone and the target object. It should be noted that if the distance value between the phone and the target object is near 50 cm, and the distance changes while the preview image is displayed because the user moves the phone, the scenario detected by the phone may ping-pong between the close-range scenario (within 50 cm) and the medium/long-range scenario (beyond 50 cm), causing the working main and auxiliary cameras to ping-pong-switch as well.
On this basis, the phone can set a threshold protection range, for example from a third distance value to a second distance value, where the third distance value is smaller than the first distance value (the preset distance value) and the first distance value is smaller than the second distance value. For example, the threshold protection range may be between 45 cm and 60 cm. Specifically, when the phone's current scenario is close-range, that is, the distance between the phone and the target object is within 50 cm, and the phone detects that the distance value to the target object is 60 cm, the phone determines that the current scenario has changed and switches the working cameras. For example, within 50 cm, the phone's rear main camera serves as the main camera and the wide-angle camera as the auxiliary camera; when the distance value between the phone and the target object becomes 60 cm, the phone switches the working cameras, setting the rear main camera as the main camera and the depth camera as the auxiliary camera. That is, in this case the phone's current shooting scenario is medium/long-range. If the distance value between the phone and the target object then changes from 60 cm to 45 cm, the phone can switch the working cameras again, with the rear main camera as the main camera and the wide-angle camera as the auxiliary camera.
In the example above, when switching from the close-range shooting scenario to the medium/long-range shooting scenario, the distance value between the phone and the target object is above the preset distance threshold (60 cm is above 50 cm); when switching from the medium/long-range shooting scenario to the close-range shooting scenario, the distance value is below the preset distance value (45 cm is below 50 cm).
Generally, a lens in a phone is composed of multiple lens elements, so an effective focal length (EFL) is often specified to represent the focal length of the lens in the phone.
In some implementations, when the lens autofocuses (Automatic Focus, AF) to 60 cm, the depth of field of the image the phone generates is 53 cm to 70 cm; when the lens AFs to 50 cm, the depth of field is 45 cm to 56 cm; when the lens AFs to 45 cm, the depth of field is 41 cm to 50 cm. In other words, among the data around the preset distance of 50 cm, when the lens AF is at 45 cm and at 60 cm, the depth-of-field ranges of the generated images do not overlap; on this basis, the phone can set the 45 cm to 60 cm interval as the threshold protection range.
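The ping-pong protection above is a classic hysteresis band. A minimal sketch, assuming the 45/60 cm bounds from the example (class and attribute names are illustrative):

```python
NEAR_EXIT_CM = 60   # second distance value: leave close range above this
NEAR_ENTER_CM = 45  # third distance value: re-enter close range below this

class CameraSwitcher:
    """Switch camera pairs with hysteresis so small distance jitter
    around the 50 cm preset does not cause ping-pong switching."""

    def __init__(self):
        self.mode = "close"  # start in close range: main + wide-angle

    def update(self, distance_cm: float) -> str:
        if self.mode == "close" and distance_cm > NEAR_EXIT_CM:
            self.mode = "far"    # switch to main + depth camera
        elif self.mode == "far" and distance_cm < NEAR_ENTER_CM:
            self.mode = "close"  # switch back to main + wide-angle
        return self.mode

s = CameraSwitcher()
print([s.update(d) for d in (48, 55, 62, 55, 48, 44)])
# ['close', 'close', 'far', 'far', 'far', 'close']
```

Because the 45-60 cm band is wider than the non-overlapping depth-of-field ranges quoted above, a subject hovering near 50 cm never toggles the pair on every frame.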
An embodiment of this application further provides a shooting method; here the method is applied to a mobile phone as an example. The implementation flow is shown in FIG. 8A, and the method may include step 801 to step 805.
Step 801: the phone starts the camera application.
Step 802: in response to a mode selection operation, the phone displays the preview image in the large-aperture mode.
In the large-aperture mode, the phone can generate images with the bokeh effect. Specifically, both the preview image displayed by the phone and the image generated in response to the shutter key are images with the bokeh effect.
In addition, in an image generated by the phone's "portrait mode", the "portrait" part is sharp and the background part is blurred; that is, the "portrait mode" in the camera application is also a kind of large-aperture mode as referred to in the embodiments of this application.
Step 803: the phone determines the distance value between the target object and the phone and judges whether the distance value is greater than the preset distance value. If the distance value is greater than the preset distance value, step 804 is performed; if the distance value is less than or equal to the preset distance value, step 805 is performed.
Since multiple cameras are provided in phones and the FOVs of the cameras differ, the distance between cameras affects the parallax of the captured images. Adjusting which cameras are used according to the distance value between the phone and the target object can improve the quality of the images the phone generates.
Step 804: the phone uses the first camera as the main camera and the second camera as the auxiliary camera to capture images, and displays the preview image.
Step 805: the phone uses the first camera as the main camera and the third camera as the auxiliary camera to capture images, and displays the preview image.
The distance value between the first camera and the second camera is greater than the distance value between the first camera and the third camera.
An embodiment of this application further provides a shooting method, taking a mobile phone implementing the method as an example. As shown in FIG. 8B, the method includes step 8-1 to step 8-6.
Step 8-1: the phone receives a first operation, where the first operation triggers the phone to enter the large-aperture mode.
In some implementations, the phone runs the camera application and displays the photo preview interface. The photo preview interface of the phone includes multiple shooting modes, including the large-aperture mode. The first operation is the user's tap on the "large-aperture mode" among the shooting modes, causing the phone to enter the large-aperture mode.
In other implementations, the phone is running a first application (a non-camera application) that has the permission to start the camera. In response to an operation invoking the camera, the first application invokes the camera, and the phone displays the shooting interface corresponding to the camera. The shooting interface includes the multiple shooting modes provided by the phone, including the large-aperture mode. The first operation is the user's tap on the "large-aperture mode" among the shooting modes, causing the phone to enter the large-aperture mode.
Step 8-2: in response to the first operation, the distance value between the phone and the photographed target object is obtained.
Multiple cameras are generally installed in a phone; while the phone shoots in the large-aperture mode, different distance values to the photographed target object lead to different cameras being started. Therefore, before the phone displays the large-aperture preview image, the phone, in response to the first operation, obtains the distance value to the photographed target object.
In some implementations, the phone includes a laser sensor, through which it can obtain the distance value between the phone and the target object. In other implementations, the phone's main camera is started; the phone obtains the image captured by the main camera and, based on the focusing in that image, can compute the distance value to the target object from the image captured by the main camera.
Step 8-3: judge whether the distance value exceeds the first distance value. If the distance value exceeds the first distance value, step 8-4 is performed; if the distance value does not exceed the first distance value, step 8-5 is performed.
The shooting distance is related to which cameras the phone starts; the specific correspondence and implementation details have already been described and are not repeated here.
It should be noted that in the embodiments of this application, "exceeds" may mean that the distance value is greater than the first distance value, or greater than or equal to it. For example: if the distance value is greater than the first distance value, the phone performs step 8-4, and if less than or equal to it, step 8-5. Alternatively: if the distance value is greater than or equal to the first distance value, the phone performs step 8-4, and if less than it, step 8-5.
Step 8-4: the phone starts the rear main camera and the depth camera to capture images of the target object.
Step 8-5: the phone starts the rear main camera and the wide-angle camera to capture images of the target object.
Step 8-6: the phone generates the preview image including the target object from the images of the target object, and displays the preview image.
The preview image is the preview image corresponding to the large-aperture mode.
It should be noted that when the phone displays the preview image corresponding to the large-aperture mode, the image of the target object is the sharply displayed part of the image, and the non-target objects are the blurred part. As in FIG. 3B, inside the aperture 301 is the sharply displayed target object, and everywhere outside the aperture 301 is the blurred display region. If the user wants to change the photographed target object, the user can adjust the phone's shooting angle and so on. For example, the user taps a region of the preview image; in response to the user's tap, the phone changes the target object to the image of whatever is in the tapped region. The phone then needs to recompute the distance value to the photographed target object and adjust the currently started cameras.
In some implementations, when the phone determines that the target object has changed, it detects the current distance value to the target object so as to adjust the started cameras according to the current distance value. During shooting, frequently switching cameras may make the image displayed by the phone flicker; to keep the large-aperture mode running normally and stably, a threshold protection range can be set in the phone. The setting of the threshold protection range has been described in detail in the scenario switching above and is not repeated here.
The above describes the method provided by the embodiments of this application taking a mobile phone as the electronic device; when the electronic device is another device, the above method can also be used, and details are not repeated here.
It can be understood that, to implement the above functions, the electronic device provided by the embodiments of this application includes corresponding hardware structures and/or software modules for performing the functions. Those skilled in the art should easily realize that, in combination with the units and algorithm steps of the examples described in the embodiments disclosed herein, the embodiments of this application can be implemented in hardware or in a combination of hardware and computer software. Whether a function is executed by hardware or by computer software driving hardware depends on the specific application and design constraints of the technical solution. Skilled professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the embodiments of this application.
The embodiments of this application may divide the above electronic device into functional modules according to the above method examples; for example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of this application is schematic and is only a logical function division; there may be other division methods in actual implementation.
An embodiment of this application further provides a chip system. As shown in FIG. 9, the chip system includes at least one processor 901 and at least one interface circuit 902. The processor 901 and the interface circuit 902 may be interconnected through wires. For example, the interface circuit 902 may be used to receive signals from other apparatuses (such as the memory of the electronic device); for another example, the interface circuit 902 may be used to send signals to other apparatuses (such as the processor 901). For instance, the interface circuit 902 can read the instructions stored in the memory and send the instructions to the processor 901. When the instructions are executed by the processor 901, the electronic device can be caused to perform the steps in the above embodiments. Of course, the chip system may also include other discrete devices, which is not specifically limited in the embodiments of this application.
An embodiment of this application further provides a computer storage medium. The computer storage medium includes computer instructions that, when run on the above electronic device, cause the electronic device to perform the functions or steps performed by the mobile phone in the above method embodiments.
An embodiment of this application further provides a computer program product that, when run on a computer, causes the computer to perform the functions or steps performed by the mobile phone in the above method embodiments.
From the description of the above implementations, those skilled in the art can clearly understand that, for convenience and brevity of description, only the division of the above functional modules is used as an example; in practical applications, the above functions can be allocated to different functional modules as needed, that is, the internal structure of the apparatus is divided into different functional modules to complete all or part of the functions described above.
In the several embodiments provided by this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are only illustrative; for example, the division of the modules or units is only a logical function division, and there may be other division methods in actual implementation; for example, multiple units or components may be combined or integrated into another apparatus, or some features may be omitted or not implemented. In addition, the mutual coupling or direct coupling or communication connection shown or discussed may be indirect coupling or communication connection through some interfaces, apparatuses, or units, and may be in electrical, mechanical, or other forms.
A unit described as a separate component may or may not be physically separate, and a component displayed as a unit may be one physical unit or multiple physical units; that is, it may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in each embodiment of this application may be integrated into one processing unit, each unit may exist physically separately, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium. Based on such an understanding, the technical solutions of the embodiments of this application, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the various embodiments of this application. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above is only the specific implementation of this application, but the protection scope of this application is not limited thereto; any variation or replacement within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (19)

  1. A shooting method, applied to an electronic device, wherein the electronic device comprises a first camera, a second camera, and a third camera; the method comprises:
    the electronic device receives a first operation, wherein the first operation triggers the electronic device to enter a large-aperture mode;
    in response to the first operation, the electronic device obtains a distance value to a photographed target object;
    if the distance value does not exceed a first distance value, the electronic device starts the first camera and the second camera to capture images of the target object;
    if the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object;
    the electronic device displays a preview image comprising the target object, wherein the preview image is the preview image corresponding to the large-aperture mode.
  2. The method according to claim 1, wherein before the electronic device receives the first operation, the method further comprises:
    the electronic device starts the first camera to capture images of the target object;
    obtaining, by the electronic device, the distance value to the photographed target object comprises:
    the electronic device obtains an autofocus code of the first camera, wherein the autofocus code indicates the distance value between the first camera and the target object;
    using the autofocus code as the distance value between the electronic device and the target object.
  3. The method according to claim 1, wherein the electronic device further comprises a distance sensor;
    obtaining, by the electronic device, the distance value to the photographed target object comprises:
    the electronic device starts the distance sensor to determine the distance value between the electronic device and the target object.
  4. The method according to any one of claims 1-3, wherein a zoom ratio of the electronic device is a first ratio; and
    displaying, by the electronic device, an image including the target object comprises:
    outputting, by the electronic device, a raw pixel image in a first pixel-signal merging mode to generate the image of the target object, and displaying the image including the target object.
  5. The method according to any one of claims 1-3, wherein a zoom ratio of the electronic device is a first ratio; and
    after the electronic device starts the first camera and the second camera to capture images of the target object when the distance value does not exceed the first distance value, displaying, by the electronic device, an image including the target object comprises:
    outputting, by the electronic device, a raw pixel image in the first pixel-signal merging mode to generate the image of the target object, and displaying the image including the target object.
  6. The method according to claim 5, wherein after the electronic device displays the image including the target object, the method further comprises:
    receiving, by the electronic device, a second operation, wherein the second operation instructs the electronic device to adjust the zoom ratio to a second ratio, and the second ratio is greater than the first ratio; and
    in response to the second operation, outputting, by the electronic device, a raw pixel image in a second pixel-signal merging mode to generate the image of the target object, and displaying the image including the target object.
  7. The method according to any one of claims 1-6, wherein the first camera, the second camera, and the third camera are disposed on a first face of the electronic device; and
    the distance from the first camera to the second camera is smaller than the distance from the first camera to the third camera.
  8. The method according to claim 6, wherein the first camera, the second camera, and the third camera are all rear cameras of the electronic device; and
    the first camera is a rear main camera, the second camera is a wide-angle camera, and the third camera is a telephoto or depth camera.
  9. The method according to claim 1, wherein the first camera, the second camera, and the third camera are disposed on a first face of the electronic device, and when the electronic device captures an image, the cameras disposed on the first face are turned on;
    the electronic device further comprises a fourth camera and a fifth camera, and the fourth camera and the fifth camera are disposed on a second face of the electronic device; and
    after the electronic device receives the first operation used to trigger the electronic device to enter the large-aperture mode, the method further comprises:
    receiving, by the electronic device, a third operation, wherein the third operation is used to trigger the electronic device to turn on the cameras disposed on the second face;
    in response to the third operation, starting, by the electronic device, the fourth camera and the fifth camera to capture images of the target object, wherein the fourth camera serves as a main camera and the fifth camera serves as an auxiliary camera, the main camera is used to focus on the target object, and the auxiliary camera is used to calculate depth of field;
    displaying, by the electronic device, an image including the target object, wherein the image is a preview image corresponding to the large-aperture mode, and the zoom ratio corresponding to the preview image is a first ratio;
    receiving, by the electronic device, a second operation, wherein the second operation instructs the electronic device to adjust the zoom ratio to a second ratio, and the second ratio is greater than the first ratio;
    in response to the second operation, setting, by the electronic device, the fifth camera as the main camera and the fourth camera as the auxiliary camera; and
    displaying, by the electronic device, an image including the target object, wherein the image is a preview image corresponding to the large-aperture mode, and the zoom ratio corresponding to the preview image is the second ratio.
  10. The method according to claim 9, wherein
    starting, by the electronic device, the fourth camera and the fifth camera to capture images of the target object in response to the third operation further comprises: outputting, by the electronic device, a raw pixel image in the first pixel-signal merging mode to generate the image of the target object, and displaying the image including the target object; and
    displaying, by the electronic device, the image including the target object, wherein the image is a preview image corresponding to the large-aperture mode and the zoom ratio corresponding to the preview image is the second ratio, comprises: outputting, by the electronic device, a raw pixel image in the second pixel-signal merging mode to generate the image of the target object, and displaying the image including the target object.
  11. The method according to any one of claims 1-7, wherein when the distance value does not exceed the first distance value, the electronic device starts the first camera and the second camera to capture images of the target object; and
    after the electronic device displays the image including the target object, the method further comprises:
    obtaining, by the electronic device, a current distance value to the target object; and
    if the current distance value exceeds a second distance value, starting, by the electronic device, the first camera and the third camera to capture images of the target object, wherein the second distance value is greater than the first distance value.
  12. The method according to any one of claims 1-8, wherein when the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object; and
    after the electronic device displays the image including the target object, the method further comprises:
    obtaining, by the electronic device, a current distance value to the target object; and
    if the current distance value does not exceed a third distance value, starting, by the electronic device, the first camera and the second camera to capture images of the target object, wherein the third distance value is smaller than the first distance value.
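Claims 11 and 12 together describe hysteretic switching: the device moves to the far camera pair only above a larger second threshold and back to the near pair only below a smaller third threshold, which prevents oscillation around the first threshold. A minimal sketch follows; all three threshold values are illustrative assumptions, and the claims require only that third < first < second.

```python
class CameraSwitcher:
    """Hysteretic camera-pair selection sketched from claims 1, 11, and 12."""

    def __init__(self, first=500.0, second=600.0, third=400.0):
        assert third < first < second  # ordering implied by claims 11-12
        self.first, self.second, self.third = first, second, third
        self.aux = None  # near pair uses cam2, far pair uses cam3

    def update(self, distance):
        if self.aux is None:
            # Initial choice per claim 1, using the first distance value.
            self.aux = "cam2" if distance <= self.first else "cam3"
        elif self.aux == "cam2" and distance > self.second:
            self.aux = "cam3"  # claim 11: switch far only past the larger threshold
        elif self.aux == "cam3" and distance <= self.third:
            self.aux = "cam2"  # claim 12: switch back only below the smaller threshold
        return ("cam1", self.aux)
```

A subject drifting from 450 to 550 units stays on the near pair; only crossing 600 triggers the far pair, and only dropping below 400 triggers the way back.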
  13. The method according to any one of claims 1-9, wherein the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion;
    when the distance value does not exceed the first distance value, the electronic device starts the first camera and the second camera to capture images of the target object, wherein the first camera serves as a main camera and the second camera serves as an auxiliary camera; and
    displaying, by the electronic device, the image including the target object comprises:
    capturing, by the main camera, a first image, and capturing, by the auxiliary camera, a second image;
    determining, by the electronic device, the target object from the first image, and determining the target object as the sharp display portion;
    calculating, by the electronic device, the depth of field from the second image to determine the blurred display portion; and
    generating, by the electronic device, the preview image corresponding to the large-aperture mode from the first image and the second image, and displaying it.
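The composition step of claim 13 can be sketched numerically: the main camera's image supplies the sharp portion, a depth map derived from the auxiliary camera decides what to blur, and the two are blended into the preview. The depth threshold, the box blur, and the array shapes are all assumptions standing in for the real bokeh rendering pipeline.

```python
import numpy as np

def compose_large_aperture_preview(main_img, depth_map, focus_depth, tolerance=0.1):
    """Blend a sharp subject with a blurred background (sketch of claim 13).

    main_img:  H x W x 3 float array from the main camera (first image).
    depth_map: H x W float array computed from the auxiliary camera's
               second image (e.g. by stereo matching).
    """
    in_focus = np.abs(depth_map - focus_depth) <= tolerance  # sharp display portion
    # Crude 3-tap box blur along each axis stands in for real bokeh rendering.
    blurred = main_img.copy()
    for axis in (0, 1):
        blurred = (np.roll(blurred, 1, axis) + blurred + np.roll(blurred, -1, axis)) / 3.0
    mask = in_focus[..., None].astype(main_img.dtype)
    # In-focus pixels keep the main image; the rest form the blurred portion.
    return mask * main_img + (1.0 - mask) * blurred
```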
  14. The method according to any one of claims 1-9, wherein the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion;
    when the distance value exceeds the first distance value, the electronic device starts the first camera and the third camera to capture images of the target object, wherein the first camera serves as a main camera and the third camera serves as an auxiliary camera; and
    displaying, by the electronic device, the image including the target object comprises:
    capturing, by the main camera, a first image, and capturing, by the auxiliary camera, a second image;
    outputting, by the main camera, a raw pixel image in the first pixel-signal merging mode, and obtaining the first image from the raw pixel image;
    determining, by the electronic device, the target object from the raw pixel image, to determine the target object as the sharp display portion;
    calculating, by the electronic device, the depth of field from the second image to determine the blurred display portion; and
    generating, by the electronic device, the preview image corresponding to the large-aperture mode from the first image and the second image, and displaying it.
  15. The method according to claim 9, wherein the preview image corresponding to the large-aperture mode includes a blurred display portion and a sharp display portion;
    starting, by the electronic device, the fourth camera and the fifth camera to capture images of the target object in response to the third operation comprises:
    capturing, by the main camera, a first image, and capturing, by the auxiliary camera, a second image; outputting, by the main camera, a raw pixel image in the first pixel-signal merging mode, and obtaining the first image from the raw pixel image; determining, by the electronic device, the target object from the raw pixel image, to determine the target object as the sharp display portion; calculating, by the electronic device, the depth of field from the second image to determine the blurred display portion; and generating, by the electronic device, the image of the target object from the first image and the second image; and
    setting, by the electronic device, the fifth camera as the main camera and the fourth camera as the auxiliary camera in response to the second operation comprises:
    capturing, by the main camera, a first image, and capturing, by the auxiliary camera, a second image; outputting, by the main camera, a raw pixel image in the second pixel-signal merging mode, and cropping the raw pixel image to obtain the first image; determining, by the electronic device, the target object from the raw pixel image, to determine the target object as the sharp display portion; calculating, by the electronic device, the depth of field from the second image to determine the blurred display portion; and generating, by the electronic device, the image of the target object from the first image and the second image.
  16. The method according to any one of claims 4-6 and 13-15, wherein
    the first pixel-signal merging mode comprises: obtaining, by the electronic device, a pixel image of the target object, and performing an analog merging operation on the pixel information in the pixel image to output an image including the target object; and
    the second pixel-signal merging mode comprises: obtaining, by the electronic device, a pixel image of the target object, and performing a rearrangement operation on the pixels in the pixel image to output an image including the target object.
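The two merging modes of claim 16 correspond to pixel binning (merging neighboring same-color pixels for sensitivity at low zoom) and a rearrangement/remosaic mode that keeps full resolution so the raw image can be cropped at higher zoom, as claim 17 describes. A numeric sketch under the simplifying assumption of a plain 2x2 grouping on a single-channel raw array; real sensors operate on a quad-Bayer color filter array, which this sketch omits.

```python
import numpy as np

def first_binning_mode(raw):
    """First merging mode: average each 2x2 block into one pixel,
    quartering resolution (stand-in for analog binning)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def second_binning_mode(raw, zoom=2.0):
    """Second merging mode plus claim 17's crop: keep full resolution,
    then crop the centre so the field of view matches the zoom ratio.
    The actual pixel rearrangement (remosaic) is omitted here."""
    h, w = raw.shape
    ch, cw = int(h / zoom), int(w / zoom)
    top, left = (h - ch) // 2, (w - cw) // 2
    return raw[top:top + ch, left:left + cw]
```

On a 4x4 raw frame, the first mode yields a 2x2 binned image, while the second mode at 2x zoom yields the centre 2x2 region at native resolution.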
  17. The method according to claim 6 or 10, wherein generating, by the electronic device, the image of the target object in the second pixel-signal merging mode, and displaying the image including the target object, comprises:
    outputting, by the electronic device, a raw image in the second pixel-signal merging mode;
    cropping, by the electronic device, the raw image to generate the image of the target object; and
    displaying, by the electronic device, the image including the target object.
  18. An electronic device, comprising:
    one or more processors; a memory storing code; and a touchscreen configured to detect touch operations and display an interface;
    wherein when the code is executed by the one or more processors, the electronic device is caused to perform the method according to any one of claims 1-17.
  19. A computer-readable storage medium, comprising computer instructions, wherein when the computer instructions run on an electronic device, the electronic device is caused to perform the photographing method according to any one of claims 1-17.
PCT/CN2022/081755 2021-06-15 2022-03-18 Photographing method and electronic device WO2022262344A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP22823835.8A EP4195650A1 (en) 2021-06-15 2022-03-18 Photographing method and electronic device
US18/043,373 US20230247286A1 (en) 2021-06-15 2022-03-18 Photographing Method and Electronic Device

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
CN202110662928 2021-06-15
CN202110662928.6 2021-06-15
CN202111081692.3 2021-09-15
CN202111081692.3A CN113747028B (zh) 2021-06-15 2021-09-15 Photographing method and electronic device

Publications (1)

Publication Number Publication Date
WO2022262344A1 true WO2022262344A1 (zh) 2022-12-22

Family

ID=78739107

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/081755 WO2022262344A1 (zh) 2021-06-15 2022-03-18 一种拍摄方法及电子设备

Country Status (4)

Country Link
US (1) US20230247286A1 (zh)
EP (1) EP4195650A1 (zh)
CN (1) CN113747028B (zh)
WO (1) WO2022262344A1 (zh)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113747028B (zh) * 2021-06-15 2024-03-15 荣耀终端有限公司 一种拍摄方法及电子设备
CN118138876A (zh) * 2022-01-25 2024-06-04 荣耀终端有限公司 切换摄像头的方法与电子设备
CN116709042B (zh) * 2022-02-24 2024-07-09 荣耀终端有限公司 一种图像处理方法和电子设备
US11871107B2 (en) * 2022-05-04 2024-01-09 Qualcomm Incorporated Automatic camera selection
CN115802158B (zh) * 2022-10-24 2023-09-01 荣耀终端有限公司 切换摄像头的方法与电子设备
CN115500740B (zh) * 2022-11-18 2023-04-18 科大讯飞股份有限公司 清洁机器人及清洁机器人控制方法
CN117354624B (zh) * 2023-12-06 2024-06-21 荣耀终端有限公司 摄像头切换方法、设备以及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107295256A (zh) * 2017-06-23 2017-10-24 华为技术有限公司 一种图像处理方法、装置与设备
CN107977940A (zh) * 2017-11-30 2018-05-01 广东欧珀移动通信有限公司 背景虚化处理方法、装置及设备
CN110677621A (zh) * 2019-09-03 2020-01-10 RealMe重庆移动通信有限公司 摄像头调用方法、装置、存储介质及电子设备
CN110691193A (zh) * 2019-09-03 2020-01-14 RealMe重庆移动通信有限公司 摄像头切换方法、装置、存储介质及电子设备
CN111064895A (zh) * 2019-12-31 2020-04-24 维沃移动通信有限公司 一种虚化拍摄方法和电子设备
CN111183632A (zh) * 2018-10-12 2020-05-19 华为技术有限公司 图像捕捉方法及电子设备
WO2021031819A1 (zh) * 2019-08-22 2021-02-25 华为技术有限公司 一种图像处理方法和电子设备
CN113747028A (zh) * 2021-06-15 2021-12-03 荣耀终端有限公司 一种拍摄方法及电子设备

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101165810B1 (ko) * 2011-04-27 2012-07-16 IVNet Co., Ltd. Method and apparatus for extracting image depth information using a stereo camera
KR102206866B1 (ko) * 2014-05-02 2021-01-25 Samsung Electronics Co., Ltd. Electronic device and method for photographing in an electronic device
US20150334309A1 (en) * 2014-05-16 2015-11-19 Htc Corporation Handheld electronic apparatus, image capturing apparatus and image capturing method thereof
CN107948520A (zh) * 2017-11-30 2018-04-20 Guangdong OPPO Mobile Telecommunications Corp., Ltd. Image processing method and apparatus
CN109614909B (zh) * 2018-12-04 2020-11-20 Beijing IrisKing Co., Ltd. Iris acquisition device and method with extended acquisition distance
CN109993785B (zh) * 2019-03-27 2020-11-17 Qingdao Pico Technology Co., Ltd. Method for measuring the volume of cargo loaded in a container, and depth camera module
CN114223192A (zh) * 2019-08-26 2022-03-22 Samsung Electronics Co., Ltd. System and method for content enhancement using a quad color filter array sensor
CN111031278B (zh) * 2019-11-25 2021-02-05 Guangzhou Henglong Information Technology Co., Ltd. Monitoring method and system based on structured light and ToF
CN111274959B (zh) * 2019-12-04 2022-09-16 Beihang University Method for accurately measuring the pose of a refueling drogue based on a variable field of view

Also Published As

Publication number Publication date
CN113747028A (zh) 2021-12-03
CN113747028B (zh) 2024-03-15
EP4195650A1 (en) 2023-06-14
US20230247286A1 (en) 2023-08-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22823835

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022823835

Country of ref document: EP

Effective date: 20230306

NENP Non-entry into the national phase

Ref country code: DE