WO2024067071A1 - Photographing method, electronic device and medium - Google Patents

Photographing method, electronic device and medium

Info

Publication number
WO2024067071A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
fused image
fused
telephoto
Prior art date
Application number
PCT/CN2023/118306
Other languages
English (en)
French (fr)
Inventor
孙涛
Original Assignee
华为技术有限公司
Priority date
Filing date
Publication date
Application filed by 华为技术有限公司
Publication of WO2024067071A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60 Control of cameras or camera modules
    • H04N 23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N 23/67 Focus control based on electronic image sensor signals
    • H04N 23/70 Circuitry for compensating brightness variation in the scene
    • H04N 23/74 Circuitry for compensating brightness variation in the scene by influencing the scene brightness using illuminating means
    • H04N 23/90 Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the present application relates to the field of photographing technology, and in particular to a photographing method, electronic equipment and medium.
  • the multiple cameras may include cameras with multiple focal lengths, for example, a short-focus (wide-angle) camera, a medium-focus camera (main camera) and a telephoto camera.
  • Different cameras correspond to different viewing ranges and zoom ratios.
  • users can switch cameras with different focal lengths to shoot by adjusting the zoom ratio.
  • the user can adjust the shooting focal length or shooting lens of the preview image in the magnification adjustment area 002 of the preview interface 001 of the camera application of the mobile phone.
  • when it is adjusted to "1.0×", the preview image is obtained through the main camera of the mobile phone (such as a camera with a focal length of 27 mm), and "5.0×" means that the preview image is obtained through the telephoto lens of the mobile phone (such as a camera with a focal length of 125 mm).
  • when shooting night scenes, the main camera is generally used for zoom shooting, but as shown in FIG. 2a the clarity of the obtained image is poor; some users therefore use a telephoto camera for night-scene shooting, but as shown in FIG. 2b the sensitivity of the obtained image is poor.
  • a shooting method, an electronic device and a medium are provided in an embodiment of the present application.
  • an embodiment of the present application provides a shooting method for an electronic device, wherein the electronic device includes a first camera and a second camera, and the method includes: detecting that the first camera of the electronic device is turned on; when it is determined that the current shooting scene is a low-light shooting scene, in response to a user's shooting instruction, obtaining a first image shot based on the first camera and a second image shot based on the second camera; wherein the focal length of the first camera is greater than the focal length of the second camera; and generating a shot image based on brightness feature information corresponding to the first image and the second image.
  • the first camera can be a telephoto camera
  • the second camera can be a main camera.
  • the main camera image has good sensitivity, and the details in the telephoto image are clearer. Therefore, in the embodiment of the present application, after the brightness feature information in the main camera image and the telephoto image are fused, a clear image with good sensitivity can be obtained. That is, based on the above shooting method, the sensitivity of the image obtained in the telephoto low-light shooting scene can be effectively improved, and the image quality can be improved.
  • the method includes: upon detecting that a user selects a shooting parameter greater than or equal to a set magnification, the electronic device turns on the first camera.
  • a method for determining that a current shooting scene is a low-light shooting scene includes: obtaining ambient light brightness; and determining that the current shooting scene is a low-light shooting scene when the ambient light brightness is lower than a set value.
  • a method for determining whether a current shooting scene is a low-light shooting scene includes: acquiring a preview image based on the first camera; determining an average value of pixels in the preview image; and determining that the current shooting scene is a low-light shooting scene when the average value is less than a set value.
  • a method for determining that a current shooting scene is a low-light shooting scene includes: obtaining an exposure parameter value, where the exposure parameter value includes an exposure time value and a sensitivity value; and when the exposure time value is greater than a first set value and the sensitivity value is greater than a second set value, determining that the current shooting scene is a low-light shooting scene.
  • obtaining a first image captured by the first camera and a second image captured by the second camera includes: cropping the focal length of the second camera to the same focal length as that of the first camera; and acquiring multiple frames of first sub-images captured by the first camera and multiple frames of second sub-images captured by the cropped second camera.
  • the focal length of the main camera is digitally cropped to the same focal length as the telephoto camera, so that the main camera can obtain an image with the same field of view as the telephoto camera.
  • the method of generating a captured image based on the brightness feature information corresponding to the first image and the second image includes: performing image fusion on multiple frames of first sub-images to obtain a first fused image, and performing image fusion on multiple frames of second sub-images to obtain a second fused image; performing image registration on the first fused image and the second fused image to obtain the registered first fused image and second fused image; obtaining high-frequency detail information of the registered first fused image and the brightness feature information in the second fused image; and generating a captured image based on the high-frequency detail information of the registered first fused image and the brightness feature information in the second fused image.
  • the first fused image can be a telephoto fused image
  • the second fused image can be a main camera fused image. Aligning the main camera fused image and the telephoto fused image can effectively eliminate the image deviation caused by the difference in camera position.
  • the image fusion of multiple frames of first sub-images to obtain the first fused image, and the image fusion of multiple frames of second sub-images to obtain the second fused image, include: performing registration processing on the multiple frames of first sub-images to obtain multiple frames of registered first sub-images, and performing fusion processing on the registered first sub-images to obtain the first fused image; and performing registration processing on the multiple frames of second sub-images to obtain multiple frames of registered second sub-images, and performing fusion processing on the registered second sub-images to obtain the second fused image.
  • the registering the second fused image and the first fused image to obtain the registered second fused image and the first fused image includes: registering the first fused image with the second fused image as a reference image to obtain the registered first fused image; or registering the second fused image with the first fused image as a reference image to obtain the registered second fused image.
  • the first fused image is used as a reference image, and the second fused image is registered to obtain the registered second fused image; this includes: extracting features from both the first fused image and the second fused image to obtain multiple feature points in the first fused image and multiple feature points in the second fused image, and determining matching feature points of the multiple feature points in the first fused image among the multiple feature points in the second fused image; determining an image affine transformation matrix according to the offsets between the multiple feature points in the first fused image and the matching feature points in the second fused image; and transforming the second fused image according to the image affine transformation matrix to obtain the registered second fused image.
  • the brightness feature information includes brightness information and color information
  • generating a captured image based on the high-frequency detail information of the first fused image after the alignment and the brightness feature information in the second fused image includes: superimposing the brightness information in the second fused image with the high-frequency detail information in the first fused image to obtain first fusion information; and merging the first fusion information with the color information of the second fused image to generate a captured image.
  • the present application provides an electronic device, comprising: a memory for storing instructions executed by one or more processors of the electronic device, and the processor, which is one of the one or more processors of the electronic device, for executing the shooting method mentioned in an embodiment of the present application.
  • the present application provides a readable storage medium on which instructions are stored; when the instructions are executed on an electronic device, the electronic device executes the shooting method mentioned in the embodiments of the present application.
  • the present application provides a computer program product, comprising: execution instructions, the execution instructions are stored in a readable storage medium, at least one processor of an electronic device can read the execution instructions from the readable storage medium, and the at least one processor executes the execution instructions so that the electronic device executes the shooting method mentioned in an embodiment of the present application.
  • FIG. 1 shows a schematic diagram of a shooting scene according to some embodiments of the present application.
  • FIGS. 2a-2b are schematic diagrams showing comparisons of images captured by different cameras according to some embodiments of the present application.
  • FIG. 3 shows a schematic structural diagram of an electronic device according to some embodiments of the present application.
  • FIG. 4 is a schematic diagram showing a photographing process of an electronic device according to some embodiments of the present application.
  • FIG. 5 is a schematic diagram showing a flow chart of a shooting method according to some embodiments of the present application.
  • FIG. 6a is a schematic diagram showing a method flow of fusing a telephoto image and a main camera image according to some embodiments of the present application;
  • FIG. 6b is a schematic diagram showing a process of fusing a telephoto image and a main camera image according to some embodiments of the present application.
  • FIG. 7 is a schematic diagram showing a flow chart of a shooting method according to some embodiments of the present application.
  • FIGS. 8a-8b are schematic diagrams showing a comparison of an image captured by a telephoto camera and a fused image obtained based on the shooting method of the present application according to some embodiments of the present application.
  • the illustrative embodiments of the present application include but are not limited to a photographing method, an electronic device, and a medium.
  • an embodiment of the present application provides a shooting method for an electronic device, the method comprising: detecting that the electronic device turns on the telephoto shooting mode (i.e., turns on the telephoto camera); determining whether the current shooting scene is a low-light shooting scene, and if so, turning on the main camera while maintaining the telephoto mode.
  • upon detecting a user's shooting command (for example, the user triggers the shooting control), a main camera image shot by the main camera and a telephoto image shot by the telephoto camera are obtained, and the telephoto image and the main camera image are fused to obtain a final fused image; that is, brightness information and color information are obtained from the main camera image, high-frequency detail information is obtained from the telephoto image, and a fused image is obtained based on the brightness information and color information in the main camera image and the high-frequency detail information in the telephoto image.
  • the main camera image has good sensitivity and the details in the telephoto image are clearer. Therefore, in the embodiment of the present application, after the brightness information and color information in the main camera image and the high-frequency detail information in the telephoto image are fused, a clear image with good sensitivity can be obtained. That is, based on the above shooting method, the sensitivity of the image obtained in the telephoto low-light shooting scene can be effectively improved, thereby improving the image quality.
  • the method of obtaining whether the current scene is a dark-light shooting scene may include: obtaining the ambient light brightness based on an ambient light sensor, and when the ambient light brightness is lower than a set value, determining that the current shooting scene is a dark-light shooting scene.
  • alternatively, a preview image in telephoto mode is obtained, and an average value of pixels in the preview image is determined; when the average value is less than a set value, it is determined that the current shooting scene is a low-light shooting scene.
  • alternatively, exposure parameter values are obtained, for example an exposure time value and a sensitivity value; when the exposure time value is greater than a first set value and the sensitivity value is greater than a second set value, it is determined that the current shooting scene is a low-light shooting scene.
  • after the main camera is turned on, the focal length of the main camera can be digitally cropped to the same focal length as the telephoto camera, so that the main camera can obtain an image with the same field of view as the telephoto camera.
  • the main image obtained by the main camera and the telephoto image obtained by the telephoto camera can both be multi-frame images.
  • the method of fusing the telephoto image and the main image to obtain the fused image may include:
  • the telephoto fusion image can be used as a reference image to register the main camera fusion image and obtain a registered main camera image; brightness information and color information are extracted from the registered main camera image, as well as high-frequency detail information from the telephoto fusion image; a fused image is obtained based on the brightness information and color information in the registered main camera image and the high-frequency detail information in the telephoto fusion image.
  • the main camera fusion image and the telephoto fusion image can also be registered by using the main camera fusion image as a reference image, registering the telephoto fusion image, and obtaining the registered telephoto image. Registering the main camera fusion image and the telephoto fusion image can effectively eliminate the image deviation caused by the difference in camera position.
  • the electronic device in the embodiment of the present application can be a tablet computer (portable android device, PAD), a personal digital assistant (PDA), a handheld device with wireless communication function, a computing device, a vehicle-mounted device or a wearable device, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and other mobile terminals or fixed terminals.
  • the form of the electronic device in the embodiment of the present application is not specifically limited.
  • the mobile phone 10 may include a processor 110, a power module 140, a memory 180, a mobile communication module 130, a wireless communication module 120, a sensor module 190, an audio module 150, a camera 170, an interface module 160, a button 101 and a display screen 102, etc.
  • the structure illustrated in the embodiment of the present invention does not constitute a specific limitation on the mobile phone 10.
  • the mobile phone 10 may include more or fewer components than shown in the figure, or combine some components, or separate some components, or arrange the components differently.
  • the components shown in the figure may be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units, for example, a processing module or processing circuit including a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), an artificial intelligence (AI) processor, or a field-programmable gate array (FPGA). Different processing units may be independent devices or integrated in one or more processors.
  • a storage unit may be provided in the processor 110 for storing instructions and data. In some embodiments, the storage unit in the processor 110 is a cache memory 180.
  • the processor can be used to execute the shooting method provided in the embodiment of the present application.
  • the power module 140 may include a power source, a power management component, etc.
  • the power source may be a battery.
  • the power management component is used to manage the charging of the power source and the power supply of the power source to other modules.
  • the power management component includes a charging management module and a power management module.
  • the charging management module is used to receive charging input from the charger; the power management module is used to connect the power source, the charging management module and the processor 110.
  • the power management module receives input from the power source and/or the charging management module, and supplies power to the processor 110, the display screen 102, the camera 170, and the wireless communication module 120.
  • the mobile communication module 130 may include, but is not limited to, an antenna, a power amplifier, a filter, a low-noise amplifier (LNA), etc.
  • the mobile communication module 130 may provide a solution for wireless communications including 2G/3G/4G/5G, etc., applied to the mobile phone 10.
  • the mobile communication module 130 may receive electromagnetic waves through an antenna, filter, amplify, and process the received electromagnetic waves, and transmit them to a modulation and demodulation processor for demodulation.
  • the mobile communication module 130 may also amplify the signal modulated by the modulation and demodulation processor, and convert it into electromagnetic waves for radiation through an antenna.
  • at least some of the functional modules of the mobile communication module 130 may be disposed in the processor 110.
  • Wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth (BT), global navigation satellite system (GNSS), wireless local area networks (WLAN), near field communication (NFC), frequency modulation (FM), infrared technology (IR), etc.
  • the GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a Beidou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS) and/or a satellite based augmentation system (SBAS).
  • the wireless communication module 120 may include an antenna, and transmit and receive electromagnetic waves via the antenna.
  • the wireless communication module 120 may provide wireless communication solutions including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc., which are applied to the mobile phone 10.
  • the mobile communication module 130 and the wireless communication module 120 of the mobile phone 10 may also be located in the same module.
  • the display screen 102 is used to display a human-computer interaction interface, images, videos, etc.
  • the display screen 102 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the sensor module 190 may include a proximity light sensor, a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc.
  • the gyroscope sensor can be used to determine information such as mobile phone shaking.
  • the audio module 150 is used to convert digital audio information into analog audio signal output, or convert analog audio input into digital audio signal.
  • the audio module 150 can also be used to encode and decode audio signals.
  • the audio module 150 can be arranged in the processor 110, or some functional modules of the audio module 150 can be arranged in the processor 110.
  • the audio module 150 can include a speaker, an earpiece, a microphone, and an earphone interface.
  • the camera 170 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP (Image Signal Processing) to convert it into a digital image signal.
  • the mobile phone 10 can realize the shooting function through the ISP, the camera 170, the video codec, the GPU (Graphic Processing Unit), the display screen 102 and the application processor.
  • the camera 170 may include a main camera and a telephoto camera, and may also include other cameras, wherein the main camera generally has a lens with a focal length of about 27 mm, which is used to shoot medium-angle scenes; the telephoto camera generally has a lens with a focal length of more than 50 mm, which is used to shoot close-up scenes.
  • the interface module 160 includes an external memory interface, a universal serial bus (USB) interface, and a subscriber identification module (SIM) card interface.
  • the external memory interface can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 10.
  • the external memory card communicates with the processor 110 through the external memory interface to implement a data storage function.
  • the universal serial bus interface is used for the mobile phone 10 to communicate with other electronic devices.
  • the subscriber identification module card interface is used to communicate with the SIM card installed in the mobile phone 10, for example, to read the phone number stored in the SIM card, or to write the phone number into the SIM card.
  • the mobile phone 10 further includes a button 101, a motor, and an indicator.
  • the button 101 may include a volume button, a power on/off button, etc.
  • the motor is used to make the mobile phone 10 vibrate, for example, vibrate when the user's mobile phone 10 is called, so as to prompt the user to answer the call.
  • the indicator may include a laser indicator, a radio frequency indicator, an LED indicator, etc.
  • the processor can control the telephoto camera to turn on when detecting the user's instruction to turn on the telephoto camera, for example, when detecting that the user adjusts the zoom ratio to be greater than or equal to the set ratio. The processor can determine whether the shooting scene is a low-light shooting scene; if so, the main camera is controlled to turn on while the telephoto mode is kept on. When detecting the user's shooting instruction, the processor can control the telephoto camera and the main camera to shoot, so as to obtain corresponding image data, for example, a telephoto image and a main camera image. In addition, the processor can fuse the image data to obtain a fused image (i.e., a photo) and store it in the memory, and the processor can also control the display to show the corresponding image based on the user's instruction.
  • FIG5 shows a flow chart of a shooting method in the embodiment of the present application.
  • the shooting method in FIG5 can be executed by the electronic device, and the shooting method includes:
  • the electronic device can turn on the telephoto camera when it is detected that the user has selected a shooting parameter with a zoom ratio greater than or equal to a set ratio.
  • the set ratio may be different in different electronic devices. For example, some electronic devices can turn on the telephoto camera when it is detected that the user has selected a zoom ratio greater than or equal to 2.0x; other electronic devices can turn on the telephoto camera when it is detected that the user has selected a zoom ratio greater than or equal to 3.0x.
  • the method of obtaining whether the current scene is a dark-light shooting scene may include: obtaining the ambient light brightness based on the ambient light sensor, and when the ambient light brightness lux is lower than a set value, for example, lower than 10 lux, it is determined that the current shooting scene is a dark-light shooting scene.
  • a preview image in telephoto mode is obtained, and an average value lum of pixels in the preview image is determined.
  • the average value lum is less than a set value, for example, less than 10, it is determined that the current shooting scene is a low-light shooting scene.
  • exposure parameter values are obtained, for example, an exposure time (expo) value and a sensitivity (ISO) value; when the expo value is greater than a first set value, for example expo > 50 ms, and the ISO value is greater than a second set value, for example ISO > 5000, it is determined that the current shooting scene is a low-light shooting scene.
  • the telephoto camera and the main camera can be turned on at the same time to obtain images taken by the telephoto camera and the main camera respectively.
  • the focal length of the main camera can be digitally cropped to the same focal length or field of view as the telephoto camera, so as to obtain an image with the same field of view as the telephoto camera.
  • a shooting instruction of the user is detected, and a main image shot by the main camera and a telephoto image shot by the telephoto camera are acquired.
  • the user's shooting instruction can be the user clicking a photo control, or issuing a remote control instruction such as a voice command or body movement corresponding to the photo operation.
  • the main camera image obtained based on the main camera and the telephoto image obtained based on the telephoto camera can both be multi-frame images.
  • the telephoto image and the main camera image are fused to obtain the fused image in a manner as shown in FIG. 6a and FIG. 6b, including:
  • the method of fusing multiple main camera images to obtain the corresponding main camera fused image may be: using the first frame image of the multiple main camera images captured by the main camera as a reference image, registering the other frames to obtain the registered images corresponding to the other frames. Then, the first frame image is fused with the registered images corresponding to the other frames to obtain the final main camera fused image.
  • the fusion processing method can be: fusing the first frame image and the registered image corresponding to the second frame image, that is, averaging the corresponding pixel values in the registered image corresponding to the first frame image and the second frame image to obtain a first fused image, and then performing the same fusion processing on the first fused image and the registered image corresponding to the third frame image to obtain a second fused image, and continuing the above fusion processing until the last frame image is fused to obtain the final main camera fusion image.
  • the fusion processing method may also be: averaging the first frame image and the registered images corresponding to the other frame images at the same time to obtain the final main-photographed fusion image.
  • any one of the multiple frames of main camera images captured by the main camera can be used as a reference image against which the other frames are registered, to obtain the registered images corresponding to the other frames.
  • aligning the multiple frames of images captured by the same camera can effectively eliminate the differences between the multiple frames of images caused by camera shaking and the like.
  • feature extraction is performed on both the first frame image and the second frame image to obtain multiple feature points in the first frame image and multiple feature points in the second frame image.
  • a SURF operator can be used to extract features from the first frame image and the second frame image.
  • the image affine transformation (warp) matrix H is determined according to the offset between each feature point in the first frame image and each corresponding matching feature point in the second frame image.
  • the second frame image is warped according to the warp matrix H to obtain the registered second frame image.
  • the first frame image remains unchanged.
  • the fusion method can be as follows:
  • taking the first frame image I1 as a reference, the second frame image I2, the third frame image I3 and the fourth frame image I4 are registered to obtain the registered images I2', I3' and I4' corresponding to the second frame image I2, the third frame image I3 and the fourth frame image I4 respectively.
  • the pixel values at corresponding positions of the first frame image I1 and the registered image I2' corresponding to the second frame image I2 are averaged to obtain the first fused image I01, that is, I01(x, y) = 0.5*I1(x, y) + 0.5*I2'(x, y), where I1(x, y) is the pixel value of the pixel (x, y) in the first frame image I1 and I2'(x, y) is the pixel value of the pixel (x, y) in the registered image I2' corresponding to the second frame image I2.
  • the first fused image I01 and the registered image I3' corresponding to the third frame image I3 are averaged in the same way to obtain the second fused image I02, that is, I02(x, y) = 0.5*I01(x, y) + 0.5*I3'(x, y), where I3'(x, y) is the pixel value of the pixel (x, y) in the registered image I3' corresponding to the third frame image I3.
  • the second fused image I02 and the registered image I4' corresponding to the fourth frame image I4 are then averaged to obtain the final main camera fusion image I03, that is, I03(x, y) = 0.5*I02(x, y) + 0.5*I4'(x, y), where I4'(x, y) is the pixel value of the pixel (x, y) in the registered image I4' corresponding to the fourth frame image I4.
  • the method of fusing multiple frames of main camera images may also be described as follows:
  • the second frame image I2, the third frame image I3 and the fourth frame image I4 are registered with the first frame image I1 as a reference to obtain the registered images I2', I3' and I4' corresponding to the second frame image I2, the third frame image I3 and the fourth frame image I4.
  • the pixel values of the corresponding positions of the first frame image I1, the registered image I2' corresponding to the second frame image I2, the registered image I3' corresponding to the third frame image I3 and the registered image I4' corresponding to the fourth frame image I4 are averaged to obtain the main camera fusion image.
  • I03(x, y) = 0.25*I1(x, y) + 0.25*I2'(x, y) + 0.25*I3'(x, y) + 0.25*I4'(x, y).
  • I1(x, y) is the pixel value of the pixel point (x, y) in the first frame image I1
  • I2’(x, y) is the pixel value of the pixel point (x, y) in the registered image I2’ corresponding to the second frame image I2
  • I3’(x, y) is the pixel value of the pixel point (x, y) in the registered image I3’ corresponding to the third frame image I3
  • I4’(x, y) is the pixel value of the pixel point (x, y) in the registered image I4’ corresponding to the fourth frame image I4.
  • the image fusion of multiple frames of telephoto images to obtain a fused telephoto fusion image is similar to the above-mentioned method of fusing multiple frames of main camera images to obtain a corresponding main camera fusion image, and will not be repeated here.
  • the registration method of the telephoto fusion image and the main camera fusion image can be to use the telephoto fusion image as a reference image to register the main camera fusion image; or it can be to use the main camera fusion image as a reference image to register the telephoto fusion image. It can be understood that in the embodiment of the present application, registering the main camera fusion image and the telephoto fusion image can effectively eliminate the image deviation caused by the difference in camera position.
  • feature extraction is performed on both the telephoto fusion image and the main camera fusion image to obtain multiple feature points in the telephoto fusion image and multiple feature points in the main camera fusion image.
  • a SURF operator can be used to extract features from the telephoto fusion image and the main camera fusion image.
  • the image affine transformation (warp) matrix H is determined according to the offset between each feature point in the telephoto fusion image and each corresponding matching feature point in the main camera fusion image.
  • the main camera fusion image is warped according to the warp matrix H to obtain the registered main camera fusion image.
  • the telephoto fusion image remains unchanged.
  • the brightness information may include a brightness component
  • the color information may include a chrominance component
  • the high-frequency detail information may be a high-frequency detail component (high-frequency signal).
  • the brightness component Y and the chrominance component UV in the main camera fusion image can be obtained according to the RGB information and YUV information in the main camera fusion image.
  • the high-frequency detail component I2_hpf of the telephoto fusion image can be obtained by subtracting the low-frequency detail component (low-frequency signal) from the overall telephoto image signal, that is, I2_hpf = I2a - I2_lpf, where I2_lpf is the low-frequency detail component of the telephoto fusion image and I2a is the overall detail signal of the telephoto image.
  • the low-frequency detail component I2_lpf of the telephoto fusion image may be obtained by downsampling the telephoto fusion image to a 1/4 size and then upsampling it to the original size to obtain I2_lpf.
  • the fusion image can be generated in the following manner: the brightness component of the main camera fusion image and the high-frequency detail component of the telephoto fusion image are superimposed to obtain the fused brightness component, for example, Y' = Y + I2_hpf, where Y is the brightness component of the main camera fusion image and I2_hpf is the high-frequency detail component of the telephoto fusion image.
  • the fused brightness component and the color component are then combined to obtain the final fused image, for example, the fused image Io = YUV2RGB(Y', UV); a compact sketch of this whole pipeline follows this section.
  • the final shot image can be generated based on the telephoto image taken by the telephoto camera.
  • the telephoto image captured by the telephoto camera may include multiple frames of images
  • the way of generating the final captured image based on the telephoto images may be: performing image fusion on multiple frames of telephoto images to obtain the final captured image.
  • the method of performing image fusion on multiple frames of telephoto images to obtain a fused telephoto fusion image is similar to the method of fusing multiple frames of main camera images to obtain the corresponding main camera fusion image, which will not be repeated here.
  • the sensitivity of images captured in telephoto low-light shooting can be effectively improved.
  • multiple frames of main camera images and telephoto images are collected at the same time, and then image registration and brightness and detail fusion are performed to effectively improve the brightness and clarity of night scene telephoto images.
  • the method may include:
  • 701 is similar to 501 and will not be repeated here.
  • the preview image can be a preview image in the telephoto mode.
  • 703 Determine whether the average pixel value of the preview image is less than the set value. If so, go to 704 to turn on the main camera while maintaining the telephoto mode; if not, go to 710 to obtain multiple frames of telephoto images taken by the telephoto camera.
  • 704 is similar to 503 and will not be repeated here.
  • upon detecting a user shooting instruction, a plurality of frames of main camera images shot by the main camera and a plurality of frames of telephoto images shot by the telephoto camera are obtained.
  • the user's shooting instruction can be the user clicking a photo control, or issuing a remote control instruction such as a voice command or body movement corresponding to the photo operation.
  • the main camera image obtained based on the main camera and the telephoto image obtained based on the telephoto camera can both be multi-frame images.
  • 705 is similar to 504 and will not be repeated here.
  • 706 is similar to 5051 and will not be repeated here.
  • 707 is similar to 5052 and will not be repeated here.
  • 708 is similar to 5053 and will not be repeated here.
  • 709 is similar to 5054 and will not be repeated here.
  • the image fusion of multiple frames of telephoto images to obtain a fused telephoto fusion image is similar to the above-mentioned method of fusing multiple frames of main camera images to obtain a corresponding main camera fusion image, and will not be repeated here.
  • Figures 8a and 8b respectively show an image shot using the telephoto mode alone in a low-light scene and an image shot using the shooting method of an embodiment of the present application. It can be seen from Figures 8a and 8b that when taking high-magnification low-light photos, capturing multiple main camera frames and multiple telephoto frames simultaneously and then performing image registration and brightness and detail fusion can effectively improve the brightness and clarity of night-scene telephoto images.
  • the various embodiments disclosed in the present application may be implemented in hardware, software, firmware, or a combination of these implementation methods.
  • the embodiments of the present application may be implemented as a computer program or program code executed on a programmable system, the programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
  • Program code can be applied to input instructions to perform the functions described in this application and generate output information.
  • the output information can be applied to one or more output devices in a known manner.
  • a processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application specific integrated circuit (ASIC), or a microprocessor.
  • Program code can be implemented in a high-level programming language or an object-oriented programming language to communicate with the processing system.
  • program code can also be implemented with assembly language or machine language.
  • the mechanism described in this application is not limited to the scope of any specific programming language. In either case, the language can be a compiled language or an interpreted language.
  • the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof.
  • the disclosed embodiments may also be implemented as instructions carried or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors.
  • instructions may be distributed over a network or through other computer-readable media.
  • machine-readable media may include any mechanism for storing or transmitting information in a machine (e.g., computer) readable form, including, but not limited to, floppy disks, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), erasable programmable read-only memories (EPROMs), electrically erasable programmable read-only memories (EEPROMs), magnetic or optical cards, flash memory, or tangible machine-readable memories used to transmit information over the Internet via electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Therefore, machine-readable media include any type of machine-readable media suitable for storing or transmitting electronic instructions or information in a machine (e.g., computer) readable form.
  • a logical unit/module can be a physical unit/module, or a part of a physical unit/module, or can be implemented as a combination of multiple physical units/modules.
  • the physical implementation method of these logical units/modules themselves is not the most important.
  • the combination of functions implemented by these logical units/modules is the key to solving the technical problems proposed by the present application.
  • the above-mentioned device embodiments of the present application do not introduce units/modules that are not closely related to solving the technical problems proposed by the present application, which does not mean that there are no other units/modules in the above-mentioned device embodiments.
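  • read as a whole, the claimed pipeline is: detect telephoto mode plus low light, capture bursts from both cameras, fuse each burst, register the main camera fusion to the telephoto fusion, and combine main-camera luminance and chrominance with telephoto high-frequency detail. The following is a minimal, hypothetical Python/OpenCV sketch of that flow, not the patent's actual implementation: the function name is our own, burst registration is omitted for brevity, and plain averaging stands in for the multi-frame fusion detailed in the Description below.

```python
import cv2
import numpy as np

def night_telephoto_shot(tele_frames, main_frames):
    """Fuse main-camera brightness/color with telephoto high-frequency detail."""
    # Stand-in multi-frame fusion: plain averaging of (assumed pre-registered) bursts.
    tele = np.mean(np.stack(tele_frames), axis=0).astype(np.uint8)
    main = np.mean(np.stack(main_frames), axis=0).astype(np.uint8)
    # Luminance and chrominance come from the main-camera fusion...
    y, u, v = cv2.split(cv2.cvtColor(main, cv2.COLOR_BGR2YUV))
    # ...and high-frequency detail from the telephoto fusion (full minus low-pass).
    y_tele = cv2.cvtColor(tele, cv2.COLOR_BGR2GRAY)
    h, w = y_tele.shape
    lpf = cv2.resize(cv2.resize(y_tele, (w // 4, h // 4)), (w, h))
    hpf = y_tele.astype(np.int16) - lpf.astype(np.int16)
    # Y' = Y + high-frequency detail, then merge back with the main-camera chroma.
    y_out = np.clip(y.astype(np.int16) + hpf, 0, 255).astype(np.uint8)
    return cv2.cvtColor(cv2.merge([y_out, u, v]), cv2.COLOR_YUV2BGR)
```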

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

This application relates to the field of photographing technology and discloses a photographing method, an electronic device and a medium. The photographing method includes: detecting that the electronic device turns on a first camera; when it is determined that the current shooting scene is a low-light shooting scene, in response to a user's shooting instruction, obtaining a first image shot by the first camera and a second image shot by a second camera, where the focal length of the first camera is greater than that of the second camera; and generating a captured image based on brightness feature information corresponding to the first image and the second image. The first camera may be a telephoto camera and the second camera may be a main camera. Based on the above photographing method, the sensitivity of images obtained in telephoto low-light shooting scenes can be effectively improved, improving image quality.

Description

Photographing method, electronic device and medium
This application claims priority to Chinese Patent Application No. 202211184777.9, filed with the Chinese Patent Office on September 27, 2022 and entitled "Photographing method, electronic device and medium", which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
This application relates to the field of photographing technology, and in particular to a photographing method, an electronic device and a medium.
BACKGROUND
At present, mobile phones are usually equipped with multiple cameras to meet users' various shooting scenarios. The multiple cameras may include cameras of multiple focal lengths, for example a short-focus (wide-angle) camera, a medium-focus camera (main camera) and a telephoto camera, where different cameras correspond to different viewing ranges and zoom ratios. When shooting, the user can switch between cameras of different focal lengths by adjusting the zoom ratio.
For example, as shown in FIG. 1, the user can adjust the shooting focal length or shooting lens of the preview image in the magnification adjustment area 002 of the preview interface 001 of the camera application of the mobile phone. For example, when it is adjusted to "1.0×", the preview image is obtained through the main camera of the mobile phone (for example, a camera with a focal length of 27 mm), and "5.0×" means that the preview image is obtained through the telephoto lens of the mobile phone (for example, a camera with a focal length of 125 mm).
At present, when shooting night scenes, the main camera is generally used for zoom shooting, but as shown in FIG. 2a, the clarity of the obtained image is poor. Some users therefore use a telephoto camera for night-scene shooting, but as shown in FIG. 2b, the sensitivity of the obtained image is poor.
To improve the effect of shooting night scenes with a telephoto camera, some solutions increase the size of the sensor corresponding to the telephoto camera or enlarge the telephoto camera's aperture to improve the sensitivity of the captured image, but such solutions increase the overall device size while improving the shooting effect only slightly. In other solutions, long exposure is used in telephoto mode to improve the sensitivity of the captured image, but long-exposure images are prone to blur.
SUMMARY
To solve the above problems, embodiments of the present application provide a photographing method, an electronic device and a medium.
In a first aspect, an embodiment of the present application provides a photographing method for an electronic device, where the electronic device includes a first camera and a second camera, and the method includes: detecting that the electronic device turns on the first camera; when it is determined that the current shooting scene is a low-light shooting scene, in response to a user's shooting instruction, obtaining a first image shot by the first camera and a second image shot by the second camera, where the focal length of the first camera is greater than that of the second camera; and generating a captured image based on brightness feature information corresponding to the first image and the second image.
It can be understood that in the embodiments of the present application the first camera may be a telephoto camera and the second camera may be a main camera. The main camera image has good sensitivity, and the details in the telephoto image are clearer; therefore, after the brightness feature information in the main camera image and the telephoto image are fused, a clear image with good sensitivity can be obtained. That is, based on the above photographing method, the sensitivity of images obtained in telephoto low-light shooting scenes can be effectively improved, improving image quality.
In a possible implementation, the method includes: upon detecting that the user selects a shooting parameter greater than or equal to a set magnification, the electronic device turns on the first camera.
In a possible implementation, determining that the current shooting scene is a low-light shooting scene includes: obtaining the ambient light brightness; and determining that the current shooting scene is a low-light shooting scene when the ambient light brightness is lower than a set value.
In a possible implementation, determining that the current shooting scene is a low-light shooting scene includes: obtaining a preview image based on the first camera; determining the average value of pixels in the preview image; and determining that the current shooting scene is a low-light shooting scene when the average value is less than a set value.
In a possible implementation, determining that the current shooting scene is a low-light shooting scene includes: obtaining exposure parameter values, where the exposure parameter values include an exposure time value and a sensitivity value; and determining that the current shooting scene is a low-light shooting scene when the exposure time value is greater than a first set value and the sensitivity value is greater than a second set value.
In a possible implementation, obtaining the first image shot by the first camera and the second image shot by the second camera includes: cropping the focal length of the second camera to the same focal length as that of the first camera; and obtaining multiple frames of first sub-images shot by the first camera and multiple frames of second sub-images shot by the cropped second camera.
It can be understood that in the embodiments of the present application, after the main camera is turned on, its focal length is digitally cropped to the same focal length as the telephoto camera, so that the main camera can obtain an image with the same field of view as the telephoto camera.
In a possible implementation, generating the captured image based on the brightness feature information corresponding to the first image and the second image includes: performing image fusion on the multiple frames of first sub-images to obtain a first fused image, and performing image fusion on the multiple frames of second sub-images to obtain a second fused image; registering the first fused image and the second fused image to obtain a registered first fused image and second fused image; obtaining high-frequency detail information of the registered first fused image and brightness feature information in the second fused image; and generating the captured image based on the high-frequency detail information of the registered first fused image and the brightness feature information in the second fused image.
It can be understood that in the embodiments of the present application, the first fused image may be a telephoto fused image and the second fused image may be a main camera fused image. Registering the main camera fused image and the telephoto fused image can effectively eliminate the image deviation caused by the difference in camera positions.
In a possible implementation, performing image fusion on the multiple frames of first sub-images to obtain the first fused image, and performing image fusion on the multiple frames of second sub-images to obtain the second fused image, includes: registering the multiple frames of first sub-images to obtain registered first sub-images, and fusing the registered first sub-images to obtain the first fused image; and registering the multiple frames of second sub-images to obtain registered second sub-images, and fusing the registered second sub-images to obtain the second fused image.
In a possible implementation, registering the second fused image and the first fused image to obtain the registered second fused image and first fused image includes: using the second fused image as a reference image to register the first fused image, obtaining the registered first fused image; or using the first fused image as a reference image to register the second fused image, obtaining the registered second fused image.
In a possible implementation, using the first fused image as a reference image to register the second fused image and obtain the registered second fused image includes: performing feature extraction on both the first fused image and the second fused image to obtain multiple feature points in each, and determining the matching feature points of the feature points of the first fused image among the feature points of the second fused image; determining an image affine transformation matrix according to the offsets between the feature points in the first fused image and the corresponding matching feature points in the second fused image; and transforming the second fused image according to the affine transformation matrix to obtain the registered second fused image.
In a possible implementation, the brightness feature information includes brightness information and color information, and generating the captured image based on the high-frequency detail information of the registered first fused image and the brightness feature information in the second fused image includes: superimposing the brightness information in the second fused image with the high-frequency detail information in the first fused image to obtain first fusion information; and merging the first fusion information with the color information of the second fused image to generate the captured image.
In a second aspect, the present application provides an electronic device, including: a memory for storing instructions to be executed by one or more processors of the electronic device; and a processor, which is one of the one or more processors of the electronic device, for executing the photographing method mentioned in the embodiments of the present application.
In a third aspect, the present application provides a readable storage medium storing instructions that, when executed on an electronic device, cause the electronic device to execute the photographing method mentioned in the embodiments of the present application.
In a fourth aspect, the present application provides a computer program product, including execution instructions stored in a readable storage medium; at least one processor of an electronic device can read the execution instructions from the readable storage medium, and the at least one processor executes the execution instructions so that the electronic device performs the photographing method mentioned in the embodiments of the present application.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 shows a schematic diagram of a shooting scene according to some embodiments of the present application;
FIGS. 2a-2b show a schematic comparison of images captured by different cameras according to some embodiments of the present application;
FIG. 3 shows a schematic structural diagram of an electronic device according to some embodiments of the present application;
FIG. 4 shows a schematic diagram of a photographing process of an electronic device according to some embodiments of the present application;
FIG. 5 shows a schematic flowchart of a photographing method according to some embodiments of the present application;
FIG. 6a shows a schematic flowchart of a method for fusing a telephoto image and a main camera image according to some embodiments of the present application;
FIG. 6b shows a schematic diagram of a process of fusing a telephoto image and a main camera image according to some embodiments of the present application;
FIG. 7 shows a schematic flowchart of a photographing method according to some embodiments of the present application;
FIGS. 8a-8b show a schematic comparison of an image captured by a telephoto camera and a fused image obtained by the photographing method of the present application, according to some embodiments of the present application.
DETAILED DESCRIPTION
Illustrative embodiments of the present application include, but are not limited to, a photographing method, an electronic device and a medium.
To solve the above problems, an embodiment of the present application provides a photographing method for an electronic device. The method includes: detecting that the electronic device turns on the telephoto shooting mode (i.e., turns on the telephoto camera); determining whether the current shooting scene is a low-light shooting scene, and if so, turning on the main camera while maintaining the telephoto mode. After a user's shooting instruction is detected (for example, the user triggers the shooting control), a main camera image shot by the main camera and a telephoto image shot by the telephoto camera are obtained, and the telephoto image and the main camera image are fused to obtain a final fused image; that is, brightness information and color information are obtained from the main camera image, high-frequency detail information is obtained from the telephoto image, and a fused image is obtained based on the brightness information and color information in the main camera image and the high-frequency detail information in the telephoto image.
It can be understood that in the embodiments of the present application the main camera image has good sensitivity and the details in the telephoto image are clearer; therefore, after the brightness information and color information in the main camera image and the high-frequency detail information in the telephoto image are fused, a clear image with good sensitivity can be obtained. That is, based on the above photographing method, the sensitivity of images obtained in telephoto low-light shooting scenes can be effectively improved, improving image quality.
The way of determining whether the current scene is a low-light shooting scene may include: obtaining the ambient light brightness based on an ambient light sensor, and determining that the current shooting scene is a low-light shooting scene when the ambient light brightness is lower than a set value.
Alternatively, a preview image in telephoto mode is obtained and the average value of pixels in the preview image is determined; when the average value is less than a set value, it is determined that the current shooting scene is a low-light shooting scene.
Alternatively, exposure parameter values are obtained, for example an exposure time value and a sensitivity value; when the exposure time value is greater than a first set value and the sensitivity value is greater than a second set value, it is determined that the current shooting scene is a low-light shooting scene.
It can be understood that in the embodiments of the present application, after the main camera is turned on, its focal length can be digitally cropped to the same focal length as the telephoto camera, so that the main camera obtains an image with the same field of view as the telephoto camera.
It can be understood that in the embodiments of the present application, the main camera images obtained by the main camera and the telephoto images obtained by the telephoto camera may both be multi-frame images. Fusing the telephoto images and the main camera images to obtain the fused image may include:
performing image fusion on the multiple frames of main camera images to obtain a fused main camera fusion image, and performing image fusion on the multiple frames of telephoto images to obtain a fused telephoto fusion image; registering the main camera fusion image and the telephoto fusion image, for example using the telephoto fusion image as a reference image to register the main camera fusion image and obtain a registered main camera image; extracting the brightness information and color information from the registered main camera image and the high-frequency detail information from the telephoto fusion image; and obtaining the fused image based on the brightness information and color information in the registered main camera image and the high-frequency detail information in the telephoto fusion image.
It can be understood that in the embodiments of the present application, the main camera fusion image and the telephoto fusion image may also be registered by using the main camera fusion image as the reference image and registering the telephoto fusion image, obtaining a registered telephoto image. Registering the main camera fusion image and the telephoto fusion image can effectively eliminate the image deviation caused by the difference in camera positions.
Before the photographing method in the embodiments of the present application is described in detail, the electronic device in the embodiments of the present application is first described. It can be understood that the electronic device may be a tablet computer (portable android device, PAD), a personal digital assistant (PDA), a handheld device with a wireless communication function, a computing device, a vehicle-mounted device or wearable device, a virtual reality (VR) terminal device, an augmented reality (AR) terminal device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, or another mobile or fixed terminal. The form of the electronic device is not specifically limited in the embodiments of the present application.
The hardware structure of the electronic device provided in the embodiments of the present application is described below taking a mobile phone as an example. As shown in FIG. 3, the mobile phone 10 may include a processor 110, a power module 140, a memory 180, a mobile communication module 130, a wireless communication module 120, a sensor module 190, an audio module 150, a camera 170, an interface module 160, a button 101 and a display screen 102, etc.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the mobile phone 10. In other embodiments of the present application, the mobile phone 10 may include more or fewer components than shown, or combine some components, or split some components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units, for example processing modules or processing circuits including a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a microcontroller unit (MCU), an artificial intelligence (AI) processor or a field-programmable gate array (FPGA). Different processing units may be independent devices or may be integrated in one or more processors. A storage unit may be provided in the processor 110 for storing instructions and data. In some embodiments, the storage unit in the processor 110 is a cache 180.
The processor may be used to execute the photographing method provided in the embodiments of the present application.
The power module 140 may include a power source, power management components, etc. The power source may be a battery. The power management components are used to manage the charging of the power source and its power supply to other modules. In some embodiments, the power management components include a charging management module and a power management module. The charging management module is used to receive charging input from a charger; the power management module is used to connect the power source and the charging management module to the processor 110. The power management module receives input from the power source and/or the charging management module and supplies power to the processor 110, the display screen 102, the camera 170, the wireless communication module 120, and so on.
The mobile communication module 130 may include, but is not limited to, an antenna, a power amplifier, a filter, a low-noise amplifier (LNA), etc. The mobile communication module 130 may provide wireless communication solutions applied to the mobile phone 10, including 2G/3G/4G/5G. The mobile communication module 130 may receive electromagnetic waves through the antenna, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation. The mobile communication module 130 may also amplify signals modulated by the modem processor and convert them into electromagnetic waves for radiation through the antenna. In some embodiments, at least some functional modules of the mobile communication module 130 may be disposed in the processor 110. In some embodiments, at least some functional modules of the mobile communication module 130 and at least some modules of the processor 110 may be disposed in the same device. Wireless communication technologies may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), Bluetooth (BT), global navigation satellite system (GNSS), wireless local area networks (WLAN), near field communication (NFC), frequency modulation (FM), infrared (IR) technology, etc. The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS) and/or satellite based augmentation systems (SBAS).
The wireless communication module 120 may include an antenna and transmit and receive electromagnetic waves via the antenna. The wireless communication module 120 may provide wireless communication solutions applied to the mobile phone 10, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology, etc. The mobile phone 10 can communicate with networks and other devices through wireless communication technology.
In some embodiments, the mobile communication module 130 and the wireless communication module 120 of the mobile phone 10 may also be located in the same module.
The display screen 102 is used to display a human-computer interaction interface, images, videos, etc. The display screen 102 includes a display panel. The display panel may be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
The sensor module 190 may include a proximity light sensor, a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, a bone conduction sensor, etc. The gyroscope sensor can be used to determine information such as shaking of the mobile phone.
The audio module 150 is used to convert digital audio information into an analog audio signal output, or convert an analog audio input into a digital audio signal. The audio module 150 may also be used to encode and decode audio signals. In some embodiments, the audio module 150 may be disposed in the processor 110, or some functional modules of the audio module 150 may be disposed in the processor 110. In some embodiments, the audio module 150 may include a speaker, an earpiece, a microphone and an earphone interface.
The camera 170 is used to capture still images or videos. An object generates an optical image through the lens, which is projected onto the photosensitive element. The photosensitive element converts the optical signal into an electrical signal and then transmits the electrical signal to the ISP (Image Signal Processing) to be converted into a digital image signal. The mobile phone 10 can implement the shooting function through the ISP, the camera 170, the video codec, the GPU (Graphics Processing Unit), the display screen 102, the application processor, and so on.
It can be understood that in the embodiments of the present application the camera 170 may include a main camera and a telephoto camera, and may also include other cameras. The main camera generally has a lens with a focal length of about 27 mm for shooting medium-angle scenes; the telephoto camera generally has a lens with a focal length of more than 50 mm for shooting close-up scenes.
The interface module 160 includes an external memory interface, a universal serial bus (USB) interface, a subscriber identity module (SIM) card interface, etc. The external memory interface can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the mobile phone 10. The external memory card communicates with the processor 110 through the external memory interface to implement a data storage function. The universal serial bus interface is used for the mobile phone 10 to communicate with other electronic devices. The SIM card interface is used to communicate with a SIM card installed in the mobile phone 10, for example to read a phone number stored in the SIM card or to write a phone number into the SIM card.
In some embodiments, the mobile phone 10 further includes a button 101, a motor, an indicator, etc. The button 101 may include a volume button, a power button, etc. The motor is used to make the mobile phone 10 vibrate, for example when the user's mobile phone 10 receives a call, so as to prompt the user to answer it. The indicator may include a laser indicator, a radio-frequency indicator, an LED indicator, etc.
To facilitate understanding of the solution of the present application, the following embodiments take part of the structure of the electronic device shown in FIG. 3 as an example to briefly introduce the photographing process of an electronic device in the embodiments of the present application.
As shown in FIG. 4, when the processor detects a user instruction to turn on the telephoto camera, for example when it detects that the user adjusts the zoom ratio to be greater than or equal to a set ratio, it controls the telephoto camera to turn on. The processor may determine whether the shooting scene is a low-light shooting scene; if so, it controls the main camera to turn on while keeping the telephoto mode on. When the processor detects the user's shooting instruction, it controls the telephoto camera and the main camera to shoot, so as to obtain corresponding image data, for example including a telephoto image and a main camera image. In addition, the processor can fuse the image data to obtain a fused image (i.e., a photo) and store it in the memory. The processor can also control the display to show the corresponding image based on a user instruction.
The photographing method mentioned in the embodiments of the present application is described below with reference to the above electronic device. FIG. 5 shows a schematic flowchart of a photographing method in an embodiment of the present application. The photographing method in FIG. 5 may be executed by the electronic device and includes:
501: Detect that the electronic device turns on the telephoto shooting mode.
It can be understood that in the embodiments of the present application, the electronic device may turn on the telephoto camera upon detecting that the user has selected a shooting parameter with a zoom ratio greater than or equal to a set ratio. The set ratio may differ between electronic devices; for example, some electronic devices may turn on the telephoto camera upon detecting that the user has selected a zoom ratio greater than or equal to 2.0x, while others may do so at a zoom ratio greater than or equal to 3.0x.
It can be understood that in the embodiments of the present application, when the electronic device detects that the telephoto camera has been turned on, it determines that the telephoto shooting mode is enabled.
502: Determine whether the current shooting scene is a low-light shooting scene; if so, go to 503 and turn on the main camera while maintaining the telephoto mode; if not, go to 506, obtain the telephoto images shot by the telephoto camera, and generate the captured image based on the telephoto images.
It can be understood that in the embodiments of the present application, the way of determining whether the current scene is a low-light shooting scene may include: obtaining the ambient light brightness based on the ambient light sensor, and determining that the current shooting scene is a low-light shooting scene when the ambient light brightness lux is lower than a set value, for example lower than 10 lux.
Alternatively, a preview image in telephoto mode is obtained and the average value lum of the pixels in the preview image is determined; when the average value lum is less than a set value, for example less than 10, it is determined that the current shooting scene is a low-light shooting scene.
Alternatively, exposure parameter values are obtained, for example an exposure time (expo) value and a sensitivity (ISO) value; when the expo value is greater than a first set value, for example expo > 50 ms, and the ISO value is greater than a second set value, for example ISO > 5000, it is determined that the current shooting scene is a low-light shooting scene.
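A minimal sketch of these three checks, assuming the example thresholds above (10 lux, mean pixel value 10, expo > 50 ms, ISO > 5000); the function name and the way the inputs are obtained are illustrative assumptions, and OR-ing the tests together is our own choice, since the text presents them as alternatives:

```python
import numpy as np

def is_low_light(ambient_lux=None, preview_gray=None, expo_ms=None, iso=None):
    """Apply whichever of the three low-light tests has data available."""
    if ambient_lux is not None and ambient_lux < 10:
        return True                                    # ambient light sensor path
    if preview_gray is not None and float(np.mean(preview_gray)) < 10:
        return True                                    # telephoto preview mean (lum) path
    if expo_ms is not None and iso is not None and expo_ms > 50 and iso > 5000:
        return True                                    # exposure parameter (expo/ISO) path
    return False

# For example: is_low_light(preview_gray=np.full((270, 480), 6, np.uint8)) returns True.
```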
503: While maintaining the telephoto mode, turn on the main camera and digitally crop the focal length of the main camera to the same focal length as the telephoto camera.
It can be understood that in the embodiments of the present application, when the shooting scene is determined to be a low-light shooting scene, the telephoto camera and the main camera may be turned on at the same time, so as to obtain images shot by the telephoto camera and the main camera respectively.
It can be understood that in the embodiments of the present application, the focal length of the main camera may be digitally cropped to the same focal length or field of view as the telephoto camera, so as to obtain an image with the same field of view as the telephoto camera.
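Digitally cropping the main camera to the telephoto field of view amounts to a center crop by the ratio of the two focal lengths followed by an upscale to the original resolution. A rough sketch under that assumption (the 27 mm and 125 mm defaults echo the example focal lengths earlier in this document; the interpolation choice is ours):

```python
import cv2

def crop_to_telephoto_fov(main_img, main_focal_mm=27.0, tele_focal_mm=125.0):
    # Field of view scales roughly inversely with focal length, so keep the
    # central (main/tele) fraction of each dimension, then resize back up.
    h, w = main_img.shape[:2]
    scale = main_focal_mm / tele_focal_mm
    ch, cw = int(h * scale), int(w * scale)
    y0, x0 = (h - ch) // 2, (w - cw) // 2
    crop = main_img[y0:y0 + ch, x0:x0 + cw]
    return cv2.resize(crop, (w, h), interpolation=cv2.INTER_LINEAR)
```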
504: Upon detecting the user's shooting instruction, obtain the main camera images shot by the main camera and the telephoto images shot by the telephoto camera.
It can be understood that in the embodiments of the present application, the user's shooting instruction may be the user clicking the photo control, or issuing a remote-control instruction such as a voice command or body movement corresponding to the photo operation.
It can be understood that in the embodiments of the present application, the main camera images obtained by the main camera and the telephoto images obtained by the telephoto camera may both be multi-frame images.
505: Fuse the telephoto images and the main camera images to obtain the fused image.
It can be understood that in the embodiments of the present application, the telephoto images and the main camera images may be fused to obtain the fused image as shown in FIG. 6a and FIG. 6b, including:
5051: Perform image fusion on the multiple frames of main camera images to obtain a fused main camera fusion image, and perform image fusion on the multiple frames of telephoto images to obtain a fused telephoto fusion image.
The way of fusing the multiple frames of main camera images to obtain the corresponding main camera fusion image may be: using the first frame of the multiple main camera images captured by the main camera as a reference image, registering the other frames to obtain the registered images corresponding to the other frames, and then fusing the first frame with the registered images corresponding to the other frames to obtain the final main camera fusion image.
The fusion processing may be: fusing the first frame image with the registered image corresponding to the second frame image, i.e., averaging the corresponding pixel values of the first frame image and of the registered image corresponding to the second frame image, to obtain a first fused image; then performing the same fusion processing on the first fused image and the registered image corresponding to the third frame image to obtain a second fused image; and continuing this fusion processing until the last frame image has been fused, to obtain the final main camera fusion image.
In some embodiments, the fusion processing may also be: averaging the first frame image and the registered images corresponding to the other frames at the same time to obtain the final main camera fusion image.
It can be understood that in the embodiments of the present application, any one of the multiple frames of main camera images captured by the main camera can be used as the reference image against which the other frames are registered, to obtain the registered images corresponding to the other frames. Registering the multiple frames of images obtained by the same camera can effectively eliminate the differences between the frames caused by camera shake and the like.
The registration method in the embodiments of the present application is described below, taking the registration of the second frame image against the first frame image as an example:
First, feature extraction is performed on both the first frame image and the second frame image to obtain multiple feature points in the first frame image and multiple feature points in the second frame image. In some embodiments, a SURF operator may be used to extract features from the first frame image and the second frame image.
For the feature points in the first frame image, the corresponding matching feature points are found in the second frame image.
An image affine transformation (warp) matrix H is determined according to the offsets between the feature points in the first frame image and the corresponding matching feature points in the second frame image.
The second frame image is warped according to the warp matrix H to obtain the registered second frame image. The first frame image remains unchanged.
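A sketch of this feature-based registration with OpenCV: the text names a SURF operator, but SURF ships only in opencv-contrib, so ORB is used below as a freely available stand-in, and estimateAffine2D plays the role of computing the warp matrix H from the matched-point offsets:

```python
import cv2
import numpy as np

def register_frame(ref, moving):
    """Warp `moving` onto `ref` using a feature-matched affine transform."""
    orb = cv2.ORB_create(nfeatures=2000)            # stand-in for the SURF operator
    g_ref = cv2.cvtColor(ref, cv2.COLOR_BGR2GRAY)
    g_mov = cv2.cvtColor(moving, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(g_ref, None)
    kp2, des2 = orb.detectAndCompute(g_mov, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    pts_ref = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts_mov = np.float32([kp2[m.trainIdx].pt for m in matches])
    # Offsets between matched feature points determine the affine warp matrix H.
    H, _ = cv2.estimateAffine2D(pts_mov, pts_ref, method=cv2.RANSAC)
    return cv2.warpAffine(moving, H, (ref.shape[1], ref.shape[0]))
```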
The fusion of multiple main-camera frames into a main-camera fused image is illustrated below, with the frames captured by the main camera denoted, in order, as the first frame I1, the second frame I2, the third frame I3, and the fourth frame I4. In one possible implementation, the fusion may proceed as follows:
First, register the second frame I2, the third frame I3, and the fourth frame I4 against the first frame I1 as the reference, to obtain their registered images I2', I3', and I4'.
Average the pixel values at corresponding positions of the first frame I1 and the registered image I2' of the second frame I2 to obtain the first fused image. Each pixel of the first fused image I01, for example pixel (x,y), takes the value I01(x,y) = 0.5*I1(x,y) + 0.5*I2'(x,y), where I1(x,y) is the value of pixel (x,y) in the first frame I1 and I2'(x,y) is the value of pixel (x,y) in the registered image I2'.
Then average the pixels at corresponding positions of the first fused image I01 and the registered image I3' of the third frame I3 to obtain the second fused image I02. Each pixel of the second fused image I02, for example pixel (x,y), takes the value I02(x,y) = 0.5*I01(x,y) + 0.5*I3'(x,y), where I01(x,y) is the value of pixel (x,y) in the first fused image I01 and I3'(x,y) is the value of pixel (x,y) in the registered image I3'.
Then average the pixels at corresponding positions of the second fused image I02 and the registered image I4' of the fourth frame I4 to obtain the final main-camera fused image I03. Each pixel of the main-camera fused image I03, for example pixel (x,y), takes the value I03(x,y) = 0.5*I02(x,y) + 0.5*I4'(x,y), where I02(x,y) is the value of pixel (x,y) in the second fused image I02 and I4'(x,y) is the value of pixel (x,y) in the registered image I4'.
In another possible implementation, the multiple main-camera frames may be fused as follows:
First, register the second frame I2, the third frame I3, and the fourth frame I4 against the first frame I1 as the reference, to obtain their registered images I2', I3', and I4'. Then average the pixel values at corresponding positions of the first frame I1 and the registered images I2', I3', and I4' to obtain the main-camera fused image. Each pixel of the main-camera fused image I03, for example pixel (x,y), takes the value I03(x,y) = 0.25*I1(x,y) + 0.25*I2'(x,y) + 0.25*I3'(x,y) + 0.25*I4'(x,y).
Here I1(x,y) is the value of pixel (x,y) in the first frame I1, and I2'(x,y), I3'(x,y), and I4'(x,y) are the values of pixel (x,y) in the registered images of the second, third, and fourth frames, respectively. A sketch of both fusion variants follows.
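For illustration, a sketch of the two fusion variants just described, reusing the hypothetical register helper from the sketch above. Note that the sequential variant weights later frames more heavily (after three steps, I1 contributes only 1/8), whereas the single-step variant weights all four frames equally at 0.25:

```python
import numpy as np

def fuse_sequential(frames):
    """Variant 1: register every frame to the first, then fold them in
    pairwise: I0k = 0.5 * I0(k-1) + 0.5 * Ik'."""
    fused = frames[0].astype(np.float32)
    for frame in frames[1:]:
        registered = register(frames[0], frame).astype(np.float32)
        fused = 0.5 * fused + 0.5 * registered
    return fused.astype(np.uint8)

def fuse_mean(frames):
    """Variant 2: register every frame to the first, then average once."""
    stack = [frames[0].astype(np.float32)]
    stack += [register(frames[0], f).astype(np.float32) for f in frames[1:]]
    return np.mean(stack, axis=0).astype(np.uint8)
```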
It can be understood that, in embodiments of the present application, fusing the multiple telephoto frames into a telephoto fused image is done in the same way as fusing the multiple main-camera frames into the main-camera fused image described above, and is not repeated here.
5052: Using the telephoto fused image as the reference image, register the main-camera fused image to obtain a registered main-camera fused image.
It can be understood that, in embodiments of the present application, the telephoto fused image and the main-camera fused image may be registered either by taking the telephoto fused image as the reference image and registering the main-camera fused image against it, or by taking the main-camera fused image as the reference image and registering the telephoto fused image against it. Registering the main-camera fused image and the telephoto fused image effectively removes the image offset caused by the difference in the cameras' positions.
The registration in the embodiments of the present application is described below, taking the telephoto fused image as the reference image against which the main-camera fused image is registered:
First, perform feature extraction on both the telephoto fused image and the main-camera fused image to obtain multiple feature points in each. In some embodiments, the SURF operator may be used for the feature extraction.
For the feature points in the telephoto fused image, find the corresponding matching feature points in the main-camera fused image.
Determine the image affine transformation (warp) matrix H from the offsets between each feature point in the telephoto fused image and its matching feature point in the main-camera fused image.
Warp the main-camera fused image with the warp matrix H to obtain the registered main-camera fused image; the telephoto fused image remains unchanged. This is the same procedure as the single-camera registration sketched above, applied across the two cameras.
5053: Obtain the luminance information and color information of the registered main-camera fused image, and the high-frequency detail information of the telephoto fused image.
It can be understood that, in embodiments of the present application, the luminance information may include a luma component, the color information may include chroma components, and the high-frequency detail information may be a high-frequency detail component (high-frequency signal).
In embodiments of the present application, the luma component Y and the chroma components UV of the main-camera fused image may be obtained from the RGB information and the YUV information of the main-camera fused image.
For the telephoto fused image, the high-frequency detail component I2_hpf may be obtained by subtracting the low-frequency detail component (low-frequency signal) of the telephoto image from the overall telephoto image signal, that is, I2_hpf = I2a - I2_lpf, where I2_lpf is the low-frequency detail component of the telephoto fused image and I2a is the overall telephoto image signal.
The low-frequency detail component I2_lpf of the telephoto fused image may be obtained by downsampling the telephoto fused image to 1/4 of its size and then upsampling it back to the original size. A code sketch of this high-pass extraction follows.
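For illustration, a sketch of this high-pass extraction, assuming "1/4 size" means a quarter of the pixel count (half of each dimension); the application does not spell out the resampling filter, so bilinear interpolation is assumed:

```python
import cv2
import numpy as np

def high_frequency_detail(tele_fused):
    """I2_hpf = I2a - I2_lpf, where I2_lpf is obtained by downsampling the
    telephoto fused image and upsampling it back to the original size."""
    h, w = tele_fused.shape[:2]
    i2a = tele_fused.astype(np.float32)              # overall telephoto signal
    small = cv2.resize(i2a, (w // 2, h // 2), interpolation=cv2.INTER_LINEAR)
    i2_lpf = cv2.resize(small, (w, h), interpolation=cv2.INTER_LINEAR)
    return i2a - i2_lpf                              # may be negative: detail only
```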
5054: Generate the fused image based on the luminance information and color information of the registered main-camera fused image and the high-frequency detail information of the telephoto fused image.
It can be understood that, in embodiments of the present application, the fused image may be generated from this information as follows:
Superimpose the high-frequency detail component of the telephoto fused image onto the luma component of the main-camera fused image to obtain a fused luma component. For example, the fused luma component Y' is computed as Y' = Y + I2_hpf, where Y is the luma component of the main-camera fused image and I2_hpf is the high-frequency detail component of the telephoto fused image.
Merge the fused luma component with the chroma components to obtain the final fused image, for example Io = YUV2RGB(Y', UV). A code sketch of this final fusion follows.
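For illustration, a sketch of steps 5053 and 5054 end to end. OpenCV's YCrCb conversion stands in for the YUV space named in the text (an assumption; the exact color pipeline is not specified), and high_frequency_detail is the hypothetical helper from the sketch above:

```python
import cv2
import numpy as np

def fuse_luma_detail(main_fused_bgr, tele_fused_bgr):
    """Y' = Y + I2_hpf, then merge Y' with the main camera's chroma."""
    ycrcb = cv2.cvtColor(main_fused_bgr, cv2.COLOR_BGR2YCrCb).astype(np.float32)
    y, cr, cb = cv2.split(ycrcb)

    tele_gray = cv2.cvtColor(tele_fused_bgr, cv2.COLOR_BGR2GRAY)
    i2_hpf = high_frequency_detail(tele_gray)        # telephoto detail layer

    y_fused = np.clip(y + i2_hpf, 0, 255)            # fused luma component Y'
    merged = cv2.merge([y_fused, cr, cb]).astype(np.uint8)
    return cv2.cvtColor(merged, cv2.COLOR_YCrCb2BGR) # Io = YUV2RGB(Y', UV)
```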
506: Obtain the telephoto image captured by the telephoto camera, and generate the captured image based on the telephoto image.
It can be understood that, in embodiments of the present application, if the current shooting scene is not a dark-light shooting scene, the final captured image may be generated from the telephoto image captured by the telephoto camera.
It can be understood that, in some embodiments, the telephoto image captured by the telephoto camera may include multiple frames, in which case the final captured image may be generated by fusing the multiple telephoto frames. This fusion is done in the same way as the fusion of multiple main-camera frames into the main-camera fused image described above, and is not repeated here.
The above solution effectively improves the sensitivity of images obtained in telephoto dark-light shooting. When photographing at high zoom ratios in dark light, multiple main-camera frames and multiple telephoto frames are captured at the same time and then put through image registration and luminance-and-detail fusion, which effectively improves the brightness and sharpness of night-scene telephoto images. An end-to-end sketch combining the preceding sketches follows.
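Tying the pieces together, a hypothetical end-to-end sketch of the flow in FIG. 5 built from the helpers above (all names are assumptions introduced in the earlier sketches; the main-camera frames are assumed to be FOV-cropped already, as in step 503):

```python
def shoot(tele_frames, main_frames, dark):
    """502-506: telephoto-only in normal light, fused path in dark light."""
    tele_fused = fuse_sequential(tele_frames)        # 5051 (telephoto branch)
    if not dark:
        return tele_fused                            # 506: telephoto image only
    main_fused = fuse_sequential(main_frames)        # 5051 (main branch)
    main_reg = register(tele_fused, main_fused)      # 5052: telephoto is reference
    return fuse_luma_detail(main_reg, tele_fused)    # 5053-5054
```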
A photographing method according to an embodiment of the present application is described below, taking the case where the current shooting scene is judged from the preview image as an example. As shown in FIG. 7, the method may include:
701: Detect that the electronic device has enabled the telephoto shooting mode.
It can be understood that, in embodiments of the present application, 701 is similar to 501 and is not repeated here.
702: Obtain a preview image.
It can be understood that, in embodiments of the present application, the preview image may be the preview image in telephoto mode.
703: Determine whether the average pixel value of the preview image is less than a set value. If so, go to 704: turn on the main camera while keeping the telephoto mode enabled. If not, go to 710: obtain the multiple telephoto frames captured by the telephoto camera.
704: While keeping the telephoto mode enabled, turn on the main camera and digitally crop the main camera's focal range to the same focal range as the telephoto camera.
It can be understood that, in embodiments of the present application, 704 is similar to 503 and is not repeated here.
705: Upon detecting a user shooting instruction, obtain the multiple main-camera frames captured by the main camera and the multiple telephoto frames captured by the telephoto camera.
It can be understood that, in embodiments of the present application, the user shooting instruction may be the user tapping the shutter control, or a remote-control instruction corresponding to the photographing operation, such as a voice command or a body gesture.
It can be understood that, in embodiments of the present application, the main-camera image obtained by the main camera and the telephoto image obtained by the telephoto camera may each consist of multiple frames.
It can be understood that, in embodiments of the present application, 705 is similar to 504 and is not repeated here.
706: Perform image fusion on the multiple main-camera frames to obtain a main-camera fused image, and perform image fusion on the multiple telephoto frames to obtain a telephoto fused image.
It can be understood that, in embodiments of the present application, 706 is similar to 5051 and is not repeated here.
707: Using the telephoto fused image as the reference image, register the main-camera fused image to obtain a registered main-camera fused image.
It can be understood that, in embodiments of the present application, 707 is similar to 5052 and is not repeated here.
708: Obtain the luminance information and color information of the registered main-camera fused image, and the high-frequency detail information of the telephoto fused image.
It can be understood that, in embodiments of the present application, 708 is similar to 5053 and is not repeated here.
709: Generate the captured image based on the luminance information and color information of the registered main-camera fused image and the high-frequency detail information of the telephoto fused image.
It can be understood that, in embodiments of the present application, 709 is similar to 5054 and is not repeated here.
710: Obtain the multiple telephoto frames captured by the telephoto camera.
711: Perform image fusion on the multiple telephoto frames to obtain the captured image.
It can be understood that, in embodiments of the present application, fusing the multiple telephoto frames into a telephoto fused image is done in the same way as fusing the multiple main-camera frames into the main-camera fused image described above, and is not repeated here.
FIG. 8a and FIG. 8b respectively show, for a dark-light scene, an image shot with the telephoto mode alone and an image shot with the photographing method of the present application. As can be seen from FIG. 8a and FIG. 8b, when photographing at high zoom ratios in dark light, capturing multiple main-camera frames and multiple telephoto frames at the same time and then performing image registration and luminance-and-detail fusion effectively improves the brightness and sharpness of night-scene telephoto images.
The embodiments disclosed in the present application may be implemented in hardware, software, firmware, or a combination of these approaches. Embodiments of the present application may be implemented as computer programs or program code executed on a programmable system comprising at least one processor, a storage system (including volatile and non-volatile memory and/or storage elements), at least one input device, and at least one output device.
Program code may be applied to input instructions to perform the functions described in the present application and to generate output information. The output information may be applied to one or more output devices in a known manner. For the purposes of the present application, a processing system includes any system having a processor such as, for example, a digital signal processor (DSP), a microcontroller, an application-specific integrated circuit (ASIC), or a microprocessor.
The program code may be implemented in a high-level procedural or object-oriented programming language to communicate with the processing system. The program code may also be implemented in assembly or machine language if desired. Indeed, the mechanisms described in the present application are not limited in scope to any particular programming language. In either case, the language may be compiled or interpreted.
In some cases, the disclosed embodiments may be implemented in hardware, firmware, software, or any combination thereof. The disclosed embodiments may also be implemented as instructions carried by or stored on one or more transitory or non-transitory machine-readable (e.g., computer-readable) storage media, which may be read and executed by one or more processors. For example, the instructions may be distributed over a network or via other computer-readable media. Thus, a machine-readable medium may include any mechanism for storing or transmitting information in a form readable by a machine (e.g., a computer), including, but not limited to, floppy disks, optical disks, compact disc read-only memories (CD-ROMs), magneto-optical disks, read-only memory (ROM), random access memory (RAM), erasable programmable read-only memory (EPROM), electrically erasable programmable read-only memory (EEPROM), magnetic or optical cards, flash memory, or tangible machine-readable storage used to transmit information over the Internet via electrical, optical, acoustic, or other forms of propagated signals (e.g., carrier waves, infrared signals, digital signals, etc.). Thus, a machine-readable medium includes any type of machine-readable medium suitable for storing or transmitting electronic instructions or information in a form readable by a machine (e.g., a computer).
In the drawings, some structural or methodological features may be shown in specific arrangements and/or orders. It should be understood, however, that such specific arrangements and/or orderings may not be required. Rather, in some embodiments, these features may be arranged in a manner and/or order different from that shown in the illustrative figures. In addition, the inclusion of a structural or methodological feature in a particular figure does not imply that such a feature is required in all embodiments; in some embodiments it may be omitted or combined with other features.
It should be noted that the units/modules mentioned in the device embodiments of the present application are all logical units/modules. Physically, a logical unit/module may be one physical unit/module, a part of one physical unit/module, or a combination of several physical units/modules. The physical implementation of these logical units/modules is not itself what matters most; the combination of the functions they implement is what is key to solving the technical problem raised by the present application. In addition, in order to highlight the innovative parts of the present application, the above device embodiments do not introduce units/modules that are not closely related to solving that technical problem, which does not mean that those embodiments contain no other units/modules.
It should be noted that, in the examples and description of this patent, relational terms such as first and second are used only to distinguish one entity or operation from another, and do not necessarily require or imply any such actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any variant thereof are intended to cover non-exclusive inclusion, such that a process, method, article, or device that includes a list of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article, or device that includes it.
Although the present application has been illustrated and described with reference to certain preferred embodiments thereof, those of ordinary skill in the art will understand that various changes may be made in form and detail without departing from the scope of the present application.

Claims (14)

  1. A photographing method for an electronic device, the electronic device comprising a first camera and a second camera, the method comprising:
    detecting that the electronic device has turned on the first camera;
    when the current shooting scene is determined to be a dark-light shooting scene, in response to a user shooting instruction, obtaining a first image captured by the first camera and a second image captured by the second camera, wherein the focal length of the first camera is greater than the focal length of the second camera;
    generating a captured image based on luminance feature information corresponding to the first image and the second image.
  2. The method according to claim 1, wherein the electronic device turns on the first camera upon detecting that the user has selected a shooting parameter with a zoom ratio greater than or equal to a set ratio.
  3. The method according to any one of claims 1-2, wherein determining that the current shooting scene is a dark-light shooting scene comprises:
    obtaining the ambient light level;
    when the ambient light level is below a set value, determining that the current shooting scene is a dark-light shooting scene.
  4. The method according to any one of claims 1-2, wherein determining that the current shooting scene is a dark-light shooting scene comprises:
    obtaining a preview image based on the first camera;
    determining the average pixel value of the preview image;
    when the average value is less than a set value, determining that the current shooting scene is a dark-light shooting scene.
  5. The method according to any one of claims 1-2, wherein determining that the current shooting scene is a dark-light shooting scene comprises:
    obtaining exposure parameter values, the exposure parameter values comprising an exposure time value and a sensitivity value;
    when the exposure time value is greater than a first set value and the sensitivity value is greater than a second set value, determining that the current shooting scene is a dark-light shooting scene.
  6. The method according to any one of claims 1-2, wherein obtaining a first image captured by the first camera and a second image captured by the second camera comprises:
    cropping the focal range of the second camera to the same focal range as the first camera;
    obtaining multiple first sub-image frames captured by the first camera and multiple second sub-image frames captured by the cropped second camera.
  7. The method according to claim 6, wherein generating a captured image based on luminance feature information corresponding to the first image and the second image comprises:
    performing image fusion on the multiple first sub-image frames to obtain a first fused image, and performing image fusion on the multiple second sub-image frames to obtain a second fused image;
    registering the first fused image and the second fused image to obtain a registered first fused image and second fused image;
    obtaining high-frequency detail information of the registered first fused image and luminance feature information of the second fused image;
    generating a captured image based on the high-frequency detail information of the registered first fused image and the luminance feature information of the second fused image.
  8. The method according to claim 7, wherein performing image fusion on the multiple first sub-image frames to obtain a first fused image and performing image fusion on the multiple second sub-image frames to obtain a second fused image comprises:
    registering the multiple first sub-image frames to obtain registered first sub-image frames, and fusing the registered first sub-image frames to obtain the first fused image;
    registering the multiple second sub-image frames to obtain registered second sub-image frames, and fusing the registered second sub-image frames to obtain the second fused image.
  9. The method according to claim 7, wherein registering the second fused image and the first fused image to obtain a registered second fused image and first fused image comprises:
    registering the first fused image with the second fused image as the reference image, to obtain a registered first fused image;
    or registering the second fused image with the first fused image as the reference image, to obtain a registered second fused image.
  10. The method according to claim 9, wherein registering the second fused image with the first fused image as the reference image to obtain a registered second fused image comprises:
    performing feature extraction on both the first fused image and the second fused image to obtain multiple feature points in the first fused image and multiple feature points in the second fused image;
    determining, among the feature points of the second fused image, the matching feature points of the feature points in the first fused image;
    determining an image affine transformation matrix from the offsets between the feature points in the first fused image and the corresponding matching feature points in the second fused image;
    transforming the second fused image according to the image affine transformation matrix to obtain the registered second fused image.
  11. The method according to claim 7, wherein the luminance feature information comprises luminance information and color information, and generating a captured image based on the high-frequency detail information of the registered first fused image and the luminance feature information of the second fused image comprises:
    superimposing the luminance information of the second fused image and the high-frequency detail information of the first fused image to obtain first fused information;
    merging the first fused information with the color information of the second fused image to generate the captured image.
  12. An electronic device, comprising: a memory configured to store instructions to be executed by one or more processors of the electronic device, and a processor, being one of the one or more processors of the electronic device, configured to perform the photographing method according to any one of claims 1-11.
  13. A readable storage medium having instructions stored thereon that, when executed on an electronic device, cause the electronic device to perform the photographing method according to any one of claims 1-11.
  14. A computer program product, comprising execution instructions stored in a readable storage medium, wherein at least one processor of an electronic device can read the execution instructions from the readable storage medium, and execution of the execution instructions by the at least one processor causes the electronic device to perform the photographing method according to any one of claims 1-11.
PCT/CN2023/118306 2022-09-27 2023-09-12 Photographing method, electronic device and medium WO2024067071A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202211184777.9 2022-09-27
CN202211184777.9A CN117835077A (zh) Photographing method, electronic device and medium

Publications (1)

Publication Number Publication Date
WO2024067071A1 true WO2024067071A1 (zh) 2024-04-04

Family

ID=90476045

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/118306 2022-09-27 2023-09-12 Photographing method, electronic device and medium WO2024067071A1 (zh)

Country Status (2)

Country Link
CN (1) CN117835077A (zh)
WO (1) WO2024067071A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001091820A (ja) * 1999-09-17 2001-04-06 Olympus Optical Co Ltd 測距装置
CN110198418A (zh) * 2019-06-28 2019-09-03 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN111314603A (zh) * 2018-03-27 2020-06-19 华为技术有限公司 拍照方法、拍照装置和移动终端
CN112087580A (zh) * 2019-06-14 2020-12-15 Oppo广东移动通信有限公司 图像采集方法和装置、电子设备、计算机可读存储介质
CN112565589A (zh) * 2020-11-13 2021-03-26 北京爱芯科技有限公司 一种拍照预览方法、装置、存储介质和电子设备
CN113810598A (zh) * 2021-08-11 2021-12-17 荣耀终端有限公司 一种拍照方法及设备

Also Published As

Publication number Publication date
CN117835077A (zh) 2024-04-05

Similar Documents

Publication Publication Date Title
US11765463B2 (en) Multi-channel video recording method and device
CN114092364B (zh) Image processing method and related device
WO2022262260A1 (zh) Photographing method and electronic device
WO2020073959A1 (zh) Image capture method and electronic device
CN113810600B (zh) Image processing method and apparatus for terminal, and terminal device
WO2021129198A1 (zh) Photographing method in telephoto scenario and terminal
CN113810601B (zh) Image processing method and apparatus for terminal, and terminal device
CN113810598B (zh) Photographing method, electronic device, and storage medium
US20240119566A1 (en) Image processing method and apparatus, and electronic device
WO2022057723A1 (zh) 一种视频的防抖处理方法及电子设备
US10187566B2 (en) Method and device for generating images
CN113810590A (zh) Image processing method, electronic device, medium, and system
US20140210941A1 (en) Image capture apparatus, image capture method, and image capture program
WO2023077939A1 (zh) Camera switching method and apparatus, electronic device, and storage medium
CN112637481B (zh) Image scaling method and apparatus
WO2024067071A1 (zh) Photographing method, electronic device and medium
CN115706869A (zh) Image processing method and apparatus for terminal, and terminal device
CN116437198A (zh) Image processing method and electronic device
US20230076534A1 (en) Image processing method and device, camera component, electronic device and storage medium
CN115696067B (zh) Image processing method for terminal, terminal device, and computer-readable storage medium
CN116347212B (zh) Automatic photographing method and electronic device
CN115705663B (zh) Image processing method and electronic device
WO2023160220A1 (zh) Image processing method and electronic device
CN115631250B (zh) Image processing method and electronic device
CN115150543B (zh) Photographing method and apparatus, electronic device, and readable storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23870309

Country of ref document: EP

Kind code of ref document: A1