WO2020073959A1 - Image capturing method and electronic device

Image capturing method and electronic device

Info

Publication number
WO2020073959A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
electronic device
wide
image
color camera
Prior art date
Application number
PCT/CN2019/110387
Other languages
English (en)
Chinese (zh)
Inventor
苏蔚 (Su Wei)
罗巍 (Luo Wei)
李远友 (Li Yuanyou)
杜成 (Du Cheng)
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN201980004268.3A (CN111183632A)
Publication of WO2020073959A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/62 Control of parameters via user interfaces
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/63 Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/667 Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 Control of cameras or camera modules
    • H04N23/67 Focus control based on electronic image sensor signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • H04N23/84 Camera processing pipelines; Components thereof for processing colour signals

Definitions

  • This embodiment relates to the field of electronic technology, and in particular, to an image capturing method and an electronic device.
  • Owing to the complementary advantages of two differentiated cameras, dual-camera systems can improve imaging quality and diversify imaging functions while maintaining a low module height.
  • For mobile phone manufacturers, dual cameras not only preserve the thin and light form factor of mobile phones, but also improve photo quality and enable a variety of interesting applications. Therefore, the dual-camera system is increasingly popular with mobile phone manufacturers and is gradually becoming a standard configuration of current mainstream phones.
  • At present, when an electronic device such as a mobile phone captures an image through dual cameras,
  • the images obtained by the two cameras are simply fused to obtain the final image.
  • However, shooting scenes are diverse and changeable, and simply fusing the images obtained by the two cameras cannot produce high-quality images across such varied shooting scenes.
  • This embodiment provides an image capturing method and an electronic device, which can switch to different cameras for shooting under different shooting scenes to improve the shooting effect of the image.
  • this embodiment uses the following technical solutions:
  • In one aspect, the technical solution provides an image capturing method, which is applied to an electronic device with a touch screen.
  • the electronic device includes a wide-angle color camera, an ultra-wide-angle color camera, and a telephoto color camera.
  • the method includes: after the electronic device detects the first operation of the user for turning on the camera, in response to the first operation, the electronic device displays the shooting interface on the touch screen.
  • the electronic device captures the image through the wide-angle color camera, and displays the image captured by the wide-angle color camera on the touch screen.
  • After the electronic device detects a second operation of the user for instructing zooming, in response to the second operation, the electronic device switches to capturing an image through the ultra-wide-angle color camera or the telephoto color camera, and displays on the touch screen the image captured by the ultra-wide-angle color camera or the telephoto color camera.
  • In this way, the user can issue a zoom instruction according to the current shooting scene, and the electronic device can automatically switch among the wide-angle color camera, the ultra-wide-angle color camera, and the telephoto color camera according to the user's zoom instruction, so as to capture and shoot images with the camera adapted to the current shooting scene and obtain a better shooting result.
  • the shooting interface includes at least one first control for indicating a zoom magnification
  • the second operation is a user's preset operation for the first control. Therefore, it is convenient for the user to instruct zooming through the first control.
  • the shooting interface includes at least one second control for instructing the camera, and the second operation is a user's preset operation for the second control. Therefore, it is convenient for the user to instruct zooming through the second control.
  • The switching, by the electronic device in response to the second operation, to capturing an image through the ultra-wide-angle color camera or the telephoto color camera includes: in response to the second operation, the electronic device determines the zoom magnification K. If M ≤ K < 1, the electronic device switches to capturing images through the ultra-wide-angle color camera. If K ≥ N, the electronic device switches to capturing the image through the telephoto color camera. In addition, if 1 ≤ K < N, the electronic device captures the image through the wide-angle color camera.
  • M = tan(B/2) / tan(A/2)
  • N = tan(B/2) / tan(C/2)
  • A is the angle of view of the ultra-wide-angle color camera
  • B is the angle of view of the wide-angle color camera
  • C is the angle of view of the telephoto color camera
  • A is greater than B
  • B is greater than C.
  • In this way, the electronic device may determine the zoom magnification according to the user's second operation, and select the corresponding camera according to the zoom magnification, so as to switch to that camera for shooting (see the sketch below).
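  • A minimal sketch of this zoom-based camera selection, assuming hypothetical fields of view (A = 120°, B = 78°, C = 30°) purely for illustration. The thresholds follow from the formulas above: assuming similar sensors, the equivalent zoom of a camera with field of view X relative to the wide-angle camera is tan(B/2)/tan(X/2), which is where M and N come from.

```python
import math

# Hypothetical horizontal fields of view (degrees); A > B > C as in the text.
A, B, C = 120.0, 78.0, 30.0  # ultra-wide, wide, telephoto

def threshold(fov_deg: float, ref_deg: float = B) -> float:
    """Equivalent zoom of a camera with field of view fov_deg,
    relative to the wide-angle camera (field of view ref_deg)."""
    return math.tan(math.radians(ref_deg) / 2) / math.tan(math.radians(fov_deg) / 2)

M = threshold(A)  # < 1, roughly 0.47 for these angles
N = threshold(C)  # > 1, roughly 3.02 for these angles

def select_camera(k: float) -> str:
    """Map a zoom magnification K to the camera named in the method."""
    if M <= k < 1:
        return "ultra-wide-angle color camera"
    if k >= N:
        return "telephoto color camera"
    return "wide-angle color camera"  # 1 <= K < N

print(select_camera(0.6), select_camera(1.5), select_camera(4.0))
```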
  • In another possible design, the switching, by the electronic device in response to the second operation, to capturing an image through the ultra-wide-angle color camera or the telephoto color camera includes: in response to the second operation, the electronic device determines the zoom magnification K. If M ≤ K < 1, the electronic device switches to capturing images through the ultra-wide-angle color camera while also capturing images through the wide-angle color camera and/or the telephoto color camera. If K ≥ N, the electronic device switches to capturing images through the telephoto color camera while also capturing images through the wide-angle color camera and/or the ultra-wide-angle color camera.
  • the electronic device may determine the zoom magnification according to the second operation of the user, and determine the corresponding multiple cameras according to the zoom magnification, so as to shoot through the multiple cameras.
  • Before the electronic device detects the second operation of the user for instructing zooming, the method further includes: if the camera currently used to capture images is the wide-angle color camera or the telephoto color camera, and the distance between the object to be photographed and the electronic device is less than or equal to a first preset value, the electronic device automatically switches to capturing the image through the ultra-wide-angle color camera. If the camera currently used to capture images is the wide-angle color camera or the ultra-wide-angle color camera, and the distance between the object to be photographed and the electronic device is greater than or equal to a second preset value, the electronic device automatically switches to capturing the image through the telephoto color camera.
  • the second preset value is greater than the first preset value.
  • the electronic device can automatically switch to the camera that matches the current distance for shooting according to the distance between the object to be photographed and the electronic device, so as to obtain a better shooting effect.
  • When the distance between the object to be photographed and the electronic device is less than or equal to the first preset value, the device may currently be in a macro (close-up) mode; when the distance is greater than or equal to the second preset value, the device may currently be in a distant-view mode.
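  • A minimal sketch of this distance-based automatic switching, with hypothetical preset distances (0.1 m and 5 m) standing in for the first and second preset values; in practice the distance could come from a component such as the distance sensor 180F described later.

```python
FIRST_PRESET_M = 0.1   # hypothetical "first preset value" (macro range)
SECOND_PRESET_M = 5.0  # hypothetical "second preset value" (distant view)

def auto_switch(current: str, subject_distance_m: float) -> str:
    """Return the camera to use, given the current camera and the measured
    distance between the object to be photographed and the device."""
    if current in ("wide", "tele") and subject_distance_m <= FIRST_PRESET_M:
        return "ultra-wide"   # macro mode: switch to the ultra-wide-angle camera
    if current in ("wide", "ultra-wide") and subject_distance_m >= SECOND_PRESET_M:
        return "tele"         # distant view: switch to the telephoto camera
    return current            # otherwise keep the current camera

print(auto_switch("wide", 0.05))  # -> ultra-wide
print(auto_switch("wide", 12.0))  # -> tele
```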
  • Before the electronic device detects the user's second operation for instructing zooming, the method further includes: if the camera currently used to capture images is the wide-angle color camera or the telephoto color camera, and the captured image displayed on the touch screen includes only a part of the object to be photographed, the electronic device automatically switches to capturing the image through the ultra-wide-angle color camera.
  • In this way, when the object to be photographed cannot be fully framed, the electronic device can automatically switch to the ultra-wide-angle color camera, which has a larger field of view, to capture images.
  • Before the electronic device detects the user's second operation for instructing zooming, the method further includes: the electronic device generates a first composition according to a first image captured by the camera currently used to capture images. If the camera currently used to capture images is the wide-angle color camera or the telephoto color camera, the electronic device also captures a second image through the ultra-wide-angle color camera. The electronic device generates a second composition based on the second image. If the first composition does not match the second composition, the electronic device prompts the user whether to switch the camera. If an operation of the user for instructing to switch the camera is detected, the electronic device prompts the user to shoot according to the second image and the second composition.
  • In this way, the electronic device may use the ultra-wide-angle color camera, whose captured picture has the largest field of view, to generate a composition and verify the first composition against it. If the first composition matches the second composition, it indicates that the first composition is reasonable, and the user can be prompted to shoot according to the first composition; if the first composition does not match the second composition, it indicates that the first composition is unreasonable, and the electronic device can prompt the user to shoot according to the second composition.
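  • The method does not specify how compositions are generated or compared; as a hedged illustration, the sketch below models each composition as a suggested subject box in a shared normalized frame and uses intersection-over-union as a stand-in for the unspecified "match" test.

```python
# A composition is modeled as a suggested subject box (x0, y0, x1, y1)
# in a shared normalized frame; IoU is a hypothetical match criterion.
Box = tuple[float, float, float, float]

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two normalized boxes."""
    x0, y0 = max(a[0], b[0]), max(a[1], b[1])
    x1, y1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x1 - x0) * max(0.0, y1 - y0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def compositions_match(first: Box, second: Box, threshold: float = 0.5) -> bool:
    return iou(first, second) >= threshold

first_comp = (0.30, 0.30, 0.70, 0.80)   # from the current camera's image
second_comp = (0.10, 0.25, 0.55, 0.85)  # from the ultra-wide-angle image
if not compositions_match(first_comp, second_comp):
    print("Prompt the user whether to switch the camera")
```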
  • In another possible design, the method further includes: if the camera currently used to capture images is the wide-angle color camera or the telephoto color camera, and the object to be photographed is a landscape, the electronic device captures images through both the current camera and the ultra-wide-angle color camera.
  • The electronic device synthesizes the image captured by the camera currently used to capture images with the image captured by the ultra-wide-angle color camera, and displays the synthesized image on the touch screen.
  • In this way, while capturing images with the current camera, the electronic device can exploit the wider field of view of the ultra-wide-angle color camera to capture landscape pictures over a larger range, thus helping users record more expansive landscape scenes. A sketch of one possible synthesis follows.
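  • The synthesis step itself is unspecified; the sketch below assumes, for illustration only, that the narrower camera's frame corresponds to a centered crop of the ultra-wide frame (ignoring parallax and calibration) and pastes the sharper narrow image into that region.

```python
import numpy as np

def synthesize(ultra_wide: np.ndarray, narrow: np.ndarray, m: float) -> np.ndarray:
    """Paste the narrower camera's (sharper) image into the centered region
    of the ultra-wide frame that its field of view covers. `m` is the size
    ratio tan(X/2)/tan(A/2) of the narrow FOV X to the ultra-wide FOV A.
    A real pipeline would also correct parallax and calibration offsets."""
    h, w = ultra_wide.shape[:2]
    ch, cw = int(h * m), int(w * m)        # size of the overlapping region
    y0, x0 = (h - ch) // 2, (w - cw) // 2  # centered placement
    # Naive nearest-neighbor resize of the narrow image into the region.
    ys = np.arange(ch) * narrow.shape[0] // ch
    xs = np.arange(cw) * narrow.shape[1] // cw
    out = ultra_wide.copy()
    out[y0:y0 + ch, x0:x0 + cw] = narrow[ys][:, xs]
    return out
```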
  • In another possible design, the method further includes: after the electronic device detects a third operation of the user for instructing recording, the electronic device displays the shooting interface in recording mode in response to the third operation.
  • the electronic device automatically zooms according to the size of the object to be photographed on the shooting interface, or according to the distance between the object to be photographed and the electronic device, and the electronic device automatically switches the camera used to capture the image according to the zoom result.
  • the electronic device can automatically zoom in real time according to the object to be photographed during the recording process, and switch the camera used to capture the image in real time.
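  • A minimal sketch of such automatic zooming driven by the subject's size on the shooting interface; the target fraction and zoom range are hypothetical. The resulting zoom magnification K would then drive the camera switch exactly as in the zoom-selection sketch above.

```python
TARGET_FRACTION = 0.5   # hypothetical: keep the subject at ~50% of frame height
MIN_ZOOM, MAX_ZOOM = 0.5, 10.0  # hypothetical supported zoom range

def auto_zoom(current_k: float, subject_frac: float) -> float:
    """Scale the zoom so the subject occupies the target fraction of the
    frame; subject_frac is the subject's height fraction at zoom current_k."""
    if subject_frac <= 0:
        return current_k  # no subject detected: keep the current zoom
    k = current_k * TARGET_FRACTION / subject_frac
    return max(MIN_ZOOM, min(k, MAX_ZOOM))

print(auto_zoom(current_k=1.0, subject_frac=0.15))  # small subject -> ~3.3x
```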
  • The method further includes: the electronic device prompts the user, on the shooting interface, with the camera currently used to capture images, the current zoom magnification, or the current shooting mode.
  • In another aspect, the technical solution provides an image capturing method, which is applied to an electronic device with a touch screen.
  • the electronic device includes a wide-angle color camera, an ultra-wide-angle color camera, and a telephoto color camera.
  • the method includes: after detecting the first operation for opening the camera by the user, the electronic device displays a shooting interface on the touch screen in response to the first operation.
  • Electronic devices use wide-angle color cameras to capture images.
  • the electronic device displays the image captured by the wide-angle color camera on the touch screen.
  • If a second operation of the user for instructing the ultra-wide-angle color camera is detected, then in response to the second operation, the electronic device switches to the ultra-wide-angle color camera to capture an image.
  • the electronic device displays the image captured by the ultra-wide-angle color camera on the touch screen. If a third operation by the user for instructing the telephoto color camera is detected, in response to the third operation, the electronic device switches to the telephoto color camera to capture the image. The electronic device displays the image captured by the telephoto color camera on the touch screen.
  • In this way, the user can, according to the current shooting scene, instruct the camera suited to that scene, and the electronic device can switch among the wide-angle color camera, the ultra-wide-angle color camera, and the telephoto color camera according to the user's instruction, so that the camera suited to the shooting scene captures and shoots images, yielding a better shooting result.
  • In another aspect, the technical solution provides an image capturing method.
  • the electronic device includes a wide-angle color camera, an ultra-wide-angle color camera, and a telephoto color camera.
  • the method includes: the electronic device detects a first operation by the user for turning on the camera.
  • the electronic device displays the shooting interface on the touch screen in response to the first operation. If the distance between the object to be photographed and the electronic device is less than or equal to the first preset value, the electronic device captures the image through the ultra-wide-angle color camera.
  • the electronic device displays the image captured by the ultra-wide-angle color camera on the touch screen.
  • If the distance between the object to be photographed and the electronic device is greater than or equal to a second preset value, the electronic device captures the image through the telephoto color camera; the second preset value is greater than the first preset value.
  • The electronic device displays the image captured by the telephoto color camera on the touch screen. If the distance between the object to be photographed and the electronic device is greater than the first preset value and less than the second preset value, the electronic device captures the image through the wide-angle color camera. The electronic device displays the image captured by the wide-angle color camera on the touch screen.
  • In this way, the electronic device can automatically switch among the wide-angle color camera, the ultra-wide-angle color camera, and the telephoto color camera according to the distance between the object to be photographed and the electronic device in the current shooting scene, thereby using the camera adapted to that distance to capture and shoot images and obtain a better shooting result.
  • In another aspect, the technical solution provides an image capturing method, which is applied to an electronic device with a touch screen.
  • the electronic device includes a wide-angle color camera, an ultra-wide-angle color camera, and a telephoto color camera.
  • the method includes: the electronic device detects a first operation by the user for turning on the camera. In response to the first operation, the electronic device displays the shooting interface on the touch screen. The electronic device captures the image through the wide-angle color camera, and displays the image captured by the wide-angle color camera on the touch screen.
  • If the camera currently used to capture images is the wide-angle color camera or the telephoto color camera, and the distance between the object to be photographed and the electronic device is less than or equal to a first preset value, the electronic device automatically switches to capturing the image through the ultra-wide-angle color camera. If the camera currently used to capture images is the wide-angle color camera or the ultra-wide-angle color camera, and the distance between the object to be photographed and the electronic device is greater than or equal to a second preset value, the electronic device automatically switches to capturing the image through the telephoto color camera.
  • the second preset value is greater than the first preset value.
  • In a possible design, if the camera currently used to capture images is the wide-angle color camera or the telephoto color camera, and the captured image displayed on the touch screen includes only a part of the object to be photographed, the electronic device automatically switches to capturing the image through the ultra-wide-angle color camera.
  • the electronic device generates the first composition according to the first image captured by the camera currently used to capture the image. If the camera currently used to capture the image is a wide-angle color camera or a telephoto color camera, the electronic device also captures the second image through the ultra-wide-angle color camera.
  • the electronic device generates a second composition based on the second image. If the first composition does not match the second composition, the electronic device prompts the user whether to switch the camera.
  • If an operation of the user for instructing to switch the camera is detected, the electronic device prompts the user to shoot according to the second image and the second composition.
  • the electronic device detects the second operation of the user to instruct zooming.
  • In response to the second operation, the electronic device determines the zoom magnification K. If M ≤ K < 1, the electronic device switches to capturing images through the ultra-wide-angle color camera; the electronic device displays the images captured by the ultra-wide-angle color camera on the touch screen. If K ≥ N, the electronic device switches to capturing the image through the telephoto color camera; the electronic device displays the image captured by the telephoto color camera on the touch screen. If 1 ≤ K < N, the electronic device continues to capture images through the wide-angle color camera.
  • M = tan(B/2) / tan(A/2)
  • N = tan(B/2) / tan(C/2)
  • A is the angle of view of the ultra-wide-angle color camera
  • B is the angle of view of the wide-angle color camera
  • C is the angle of view of the telephoto color camera
  • A is greater than B
  • B is greater than C.
  • the electronic device displays the image captured by the ultra-wide-angle color camera or the telephoto color camera on the touch screen.
  • the electronic device detects a third operation that the user uses to instruct recording.
  • the electronic device displays the shooting interface in the recording mode in response to the third operation.
  • the electronic device automatically zooms according to the size of the object to be photographed on the shooting interface, or according to the distance between the object to be photographed and the electronic device.
  • the electronic device automatically switches the camera used to capture the image according to the zoom result.
  • In this way, the electronic device can switch among the wide-angle color camera, the ultra-wide-angle color camera, and the telephoto color camera according to the user's instruction or the current shooting scene, so as to capture and shoot images with the camera adapted to the current shooting scene and obtain better shooting results.
  • In another aspect, the present technical solution provides an apparatus for capturing an image, which is included in an electronic device, and the apparatus has the function of realizing the foregoing aspects and the possible implementation manners of the foregoing aspects.
  • the function can be realized by hardware, and can also be realized by hardware executing corresponding software.
  • the hardware or software includes one or more modules or units corresponding to the above functions. For example, a detection module or unit, a display module or unit, a processing module or unit, etc.
  • In another aspect, the technical solution provides an electronic device, including: a touch screen, where the touch screen includes a touch-sensitive surface and a display; a wide-angle color camera, an ultra-wide-angle color camera, and a telephoto color camera; one or more processors; a memory; multiple applications; and one or more computer programs.
  • one or more computer programs are stored in the memory, and the one or more computer programs include instructions.
  • When the instructions are executed by the electronic device, the electronic device is caused to execute the foregoing aspects and the various possible implementation manners of the foregoing aspects.
  • In another aspect, the technical solution provides an electronic device, including one or more processors and one or more memories.
  • the one or more memories are coupled to one or more processors.
  • the one or more memories are used to store computer program code.
  • the computer program codes include computer instructions.
  • When the one or more processors execute the computer instructions, the electronic device executes the image capturing method in any possible implementation of any of the foregoing aspects.
  • In another aspect, the present technical solution provides a computer storage medium including computer instructions, which, when run on an electronic device, cause the electronic device to execute the image capturing method in any possible implementation of any of the foregoing aspects.
  • In another aspect, the technical solution provides a computer program product which, when run on an electronic device, causes the electronic device to perform the image capturing method in any possible design of any of the foregoing aspects.
  • FIG. 1 is a schematic diagram of a hardware structure of an electronic device provided by this embodiment.
  • FIG. 3 is a schematic diagram of a software structure of an electronic device provided by this embodiment.
  • FIGS. 4A-4E are schematic diagrams of a set of interfaces of the electronic device provided by this embodiment.
  • FIGS. 5A-5F are schematic diagrams of another set of interfaces of the electronic device provided by this embodiment.
  • FIGS. 6A-6B are schematic diagrams of another set of interfaces of the electronic device provided by this embodiment.
  • FIGS. 7A-7B are schematic diagrams of another set of interfaces of the electronic device provided by this embodiment.
  • FIG. 8 is a diagram of the correspondence between the image collected by the camera provided by this embodiment and the cropped part.
  • FIGS. 9A-9B are schematic diagrams of another set of interfaces of the electronic device provided by this embodiment.
  • FIGS. 10A-10B are schematic diagrams of another set of interfaces of the electronic device provided by this embodiment.
  • FIGS. 11A-11B are schematic diagrams of another set of interfaces of the electronic device provided by this embodiment.
  • FIGS. 13A-13D are schematic diagrams of another set of interfaces of the electronic device provided by this embodiment.
  • FIG. 16 is another schematic structural diagram of an electronic device provided by this embodiment.
  • The terms "first" and "second" are used for description purposes only, and cannot be understood as indicating or implying relative importance or implicitly indicating the number of indicated technical features.
  • the features defined as “first” and “second” may explicitly or implicitly include one or more of the features.
  • the meaning of “plurality” is two or more.
  • The image capturing method provided in this embodiment can be applied to electronic devices such as mobile phones, tablet computers, cameras, wearable devices, in-vehicle devices, augmented reality (AR)/virtual reality (VR) devices, notebook computers, ultra-mobile personal computers (UMPCs), netbooks, and personal digital assistants (PDAs).
  • this embodiment does not limit the specific types of electronic devices.
  • FIG. 1 shows a schematic structural diagram of an electronic device 100.
  • The electronic device 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone jack 170D, a sensor module 180, a key 190, a motor 191, an indicator 192, a camera 193, a display screen 194, a subscriber identification module (SIM) card interface 195, etc.
  • The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, etc.
  • the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 100.
  • the electronic device 100 may include more or fewer components than illustrated, or combine certain components, or split certain components, or arrange different components.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the processor 110 may be one or more processors; different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 100.
  • the controller can generate the operation control signal according to the instruction operation code and the timing signal to complete the control of fetching instructions and executing instructions.
  • the processor 110 may also be provided with a memory for storing instructions and data.
  • the memory in the processor 110 is a cache memory.
  • The memory may store instructions or data that the processor 110 has just used or used cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from this memory. This avoids repeated access, reduces the waiting time of the processor 110, and thus improves system efficiency.
  • the processor 110 may include one or more interfaces.
  • Interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • The I2C interface is a bidirectional synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor 110 may include multiple sets of I2C buses.
  • the processor 110 may be coupled to the touch sensor 180K, the charger, the flash, the camera 193, etc. through different I2C bus interfaces.
  • the processor 110 may couple the touch sensor 180K through the I2C interface, so that the processor 110 and the touch sensor 180K communicate through the I2C bus interface to realize the touch function of the electronic device 100.
  • the I2S interface can be used for audio communication.
  • the processor 110 may include multiple sets of I2S buses.
  • the processor 110 may be coupled to the audio module 170 through an I2S bus to implement communication between the processor 110 and the audio module 170.
  • the audio module 170 can transmit audio signals to the wireless communication module 160 through the I2S interface, to realize the function of answering the phone call through the Bluetooth headset.
  • the PCM interface can also be used for audio communication, sampling, quantizing and encoding analog signals.
  • the audio module 170 and the wireless communication module 160 may be coupled through a PCM bus interface.
  • the audio module 170 can also transmit audio signals to the wireless communication module 160 through the PCM interface to realize the function of answering the call through the Bluetooth headset. Both I2S interface and PCM interface can be used for audio communication.
  • the UART interface is a universal serial data bus used for asynchronous communication.
  • the bus may be a bidirectional communication bus. It converts the data to be transmitted between serial communication and parallel communication.
  • the UART interface is generally used to connect the processor 110 and the wireless communication module 160.
  • the processor 110 communicates with the Bluetooth module in the wireless communication module 160 through the UART interface to implement the Bluetooth function.
  • the audio module 170 may transmit audio signals to the wireless communication module 160 through the UART interface, so as to realize the function of playing music through the Bluetooth headset.
  • the MIPI interface can be used to connect the processor 110 to peripheral devices such as the display screen 194 and the camera 193.
  • MIPI interface includes camera serial interface (camera serial interface, CSI), display serial interface (display serial interface, DSI) and so on.
  • the processor 110 and the camera 193 communicate through a CSI interface to implement the shooting function of the electronic device 100.
  • the processor 110 and the display screen 194 communicate through the DSI interface to realize the display function of the electronic device 100.
  • the GPIO interface can be configured via software.
  • the GPIO interface can be configured as a control signal or a data signal.
  • the GPIO interface may be used to connect the processor 110 to the camera 193, the display screen 194, the wireless communication module 160, the audio module 170, the sensor module 180, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 is an interface that conforms to the USB standard, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type C interface, etc.
  • the USB interface 130 can be used to connect a charger to charge the electronic device 100, and can also be used to transfer data between the electronic device 100 and peripheral devices. It can also be used to connect headphones and play audio through the headphones.
  • the interface can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in this embodiment is only a schematic description, and does not constitute a limitation on the structure of the electronic device 100.
  • the electronic device 100 may also use different interface connection methods in the foregoing embodiments, or a combination of multiple interface connection methods.
  • The charging management module 140 is used to receive charging input from the charger.
  • the charger can be a wireless charger or a wired charger.
  • The charging management module 140 may receive the charging input of the wired charger through the USB interface 130.
  • The charging management module 140 may receive wireless charging input through the wireless charging coil of the electronic device 100. While the charging management module 140 charges the battery 142, it can also supply power to the electronic device through the power management module 141.
  • The power management module 141 is used to connect the battery 142, the charging management module 140, and the processor 110.
  • The power management module 141 receives input from the battery 142 and/or the charging management module 140, and supplies power to the processor 110, the internal memory 121, the external memory, the display screen 194, the camera 193, and the wireless communication module 160.
  • the power management module 141 can also be used to monitor battery capacity, battery cycle times, battery health status (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • The power management module 141 and the charging management module 140 may also be set in the same device.
  • the wireless communication function of the electronic device 100 can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, and the baseband processor.
  • Antenna 1 and antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in the electronic device 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be reused to improve antenna utilization.
  • the antenna 1 can be multiplexed as a diversity antenna of a wireless local area network. In other embodiments, the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 150 can provide a wireless communication solution including 2G / 3G / 4G / 5G and the like applied to the electronic device 100.
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (LNA), and the like.
  • the mobile communication module 150 can receive the electromagnetic wave from the antenna 1, filter and amplify the received electromagnetic wave, and transmit it to the modem processor for demodulation.
  • the mobile communication module 150 can also amplify the signal modulated by the modulation and demodulation processor and convert it to electromagnetic wave radiation through the antenna 1.
  • at least part of the functional modules of the mobile communication module 150 may be provided in the processor 110.
  • at least part of the functional modules of the mobile communication module 150 and at least part of the modules of the processor 110 may be provided in the same device.
  • the modem processor may include a modulator and a demodulator.
  • the modulator is used to modulate the low-frequency baseband signal to be transmitted into a high-frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs a sound signal through an audio device (not limited to a speaker 170A, a receiver 170B, etc.), or displays an image or video through a display screen 194.
  • the modem processor may be an independent device.
  • the modem processor may be independent of the processor 110, and may be set in the same device as the mobile communication module 150 or other functional modules.
  • The wireless communication module 160 can provide wireless communication solutions applied to the electronic device 100, including wireless local area networks (WLAN) (such as wireless fidelity (Wi-Fi) networks), Bluetooth (BT), the global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), etc.
  • the wireless communication module 160 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 160 receives the electromagnetic wave via the antenna 2, frequency-modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor 110.
  • the wireless communication module 160 may also receive the signal to be transmitted from the processor 110, frequency-modulate it, amplify it, and convert it to electromagnetic waves through the antenna 2 to radiate it out.
  • The wireless communication technology may include the global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division synchronous code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology, etc.
  • The GNSS may include the global positioning system (GPS), the global navigation satellite system (GLONASS), the BeiDou navigation satellite system (BDS), the quasi-zenith satellite system (QZSS), and/or satellite-based augmentation systems (SBAS).
  • the electronic device 100 realizes a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, connecting the display screen 194 and the application processor.
  • the GPU is used to perform mathematical and geometric calculations, and is used for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • The display panel may use a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the electronic device 100 may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the electronic device 100 can realize a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen 194, an application processor, and the like.
  • the ISP processes the data fed back by the camera 193. For example, when taking a picture, the shutter is opened, the light is transmitted to the camera photosensitive element through the lens, the optical signal is converted into an electrical signal, and the camera photosensitive element transmits the electrical signal to the ISP for processing, which is converted into a visible image.
  • The ISP can also perform algorithm optimization on image noise, brightness, and skin color. The ISP can also optimize parameters such as the exposure and color temperature of the shooting scene. In some embodiments, the ISP may be set in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • the object generates an optical image through the lens and projects it onto the photosensitive element.
  • the photosensitive element may be a charge coupled device (charge coupled device, CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • the electronic device 100 may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • In this embodiment, when N is greater than or equal to 3, the camera 193 may specifically include an ultra-wide-angle camera, a wide-angle camera, and a telephoto camera.
  • the wide-angle camera may be a color camera
  • the ultra-wide-angle camera may be a color camera or a black-and-white camera
  • the telephoto camera may be a color camera or a black-and-white camera.
  • the camera 193 includes an ultra-wide-angle color camera, a wide-angle color camera, and a telephoto color camera.
  • each camera may be concentratedly arranged at a certain position on the electronic device 100, or may be distributed at different positions on the electronic device 100.
  • FIG. 2 shows some possible arrangements between the three cameras.
  • the position of each camera is not limited to the several arrangements listed in FIG. 2 and can be determined according to the actual situation.
  • multiple cameras may be located on the same horizontal plane, that is, there is no height difference between the multiple cameras. In this way, the electronic device 100 can be prevented from being too thick, which helps to improve the user experience.
  • The digital signal processor is used to process digital signals. In addition to digital image signals, it can also process other digital signals. For example, when the electronic device 100 selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency point energy.
  • Video codec is used to compress or decompress digital video.
  • the electronic device 100 may support one or more video codecs. In this way, the electronic device 100 can play or record videos in various encoding formats, for example: moving picture experts group (MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • NPU is a neural-network (NN) computing processor.
  • the NPU can realize applications such as intelligent recognition of the electronic device 100, such as image recognition, face recognition, voice recognition, and text understanding.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 100.
  • the external memory card communicates with the processor 110 through the external memory interface 120 to realize the data storage function. For example, save music, video and other files in an external memory card.
  • the internal memory 121 may be used to store one or more computer programs, which may include instructions.
  • the processor 110 causes the electronic device 100 to execute various functional applications and data processing by executing instructions stored in the internal memory 121.
  • the internal memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system, and may also store one or more application programs (such as a camera, Facebook, etc.).
  • the storage data area may store data (such as photos, contacts, etc.) created during use of the electronic device 100 and the like.
  • the internal memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and so on.
  • the electronic device 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. For example, music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and also used to convert analog audio input into digital audio signal.
  • the audio module 170 can also be used to encode and decode audio signals.
  • the audio module 170 may be disposed in the processor 110, or some functional modules of the audio module 170 may be disposed in the processor 110.
  • The speaker 170A, also called a "horn", is used to convert audio electrical signals into sound signals.
  • the electronic device 100 can listen to music through the speaker 170A, or listen to a hands-free call.
  • The receiver 170B, also known as an "earpiece", is used to convert audio electrical signals into sound signals.
  • the voice can be received by bringing the receiver 170B close to the ear.
  • The microphone 170C, also known as a "mike" or "mic", is used to convert sound signals into electrical signals.
  • The user can make a sound with the mouth close to the microphone 170C, inputting the sound signal into the microphone 170C.
  • the electronic device 100 may be provided with at least one microphone 170C. In other embodiments, the electronic device 100 may be provided with two microphones 170C. In addition to collecting sound signals, it may also implement a noise reduction function. In other embodiments, the electronic device 100 may also be provided with three, four, or more microphones 170C to collect sound signals, reduce noise, identify sound sources, and implement directional recording functions.
  • the headset interface 170D is used to connect wired headsets.
  • The earphone interface 170D may be the USB interface 130, or a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a cellular telecommunications industry association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense the pressure signal and can convert the pressure signal into an electrical signal.
  • the pressure sensor 180A may be provided on the display screen 194.
  • A capacitive pressure sensor may include at least two parallel plates made of conductive material. When a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes.
  • the electronic device 100 determines the intensity of the pressure according to the change in capacitance.
  • the electronic device 100 detects the intensity of the touch operation according to the pressure sensor 180A.
  • the electronic device 100 may also calculate the touched position based on the detection signal of the pressure sensor 180A.
  • touch operations that act on the same touch position but have different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity less than the first pressure threshold acts on the short message application icon, an instruction to view the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
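  • A minimal sketch of this pressure-dependent dispatch, with a hypothetical first pressure threshold in normalized units; the two short-message actions are the examples given above.

```python
FIRST_PRESSURE_THRESHOLD = 0.6  # hypothetical, normalized pressure units

def on_touch(icon: str, pressure: float) -> str:
    """Dispatch different instructions for the same touch position
    depending on the touch intensity, as in the SMS icon example."""
    if icon == "sms":
        if pressure < FIRST_PRESSURE_THRESHOLD:
            return "view short messages"
        return "create a new short message"
    return "open " + icon

print(on_touch("sms", 0.3))  # -> view short messages
print(on_touch("sms", 0.9))  # -> create a new short message
```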
  • the gyro sensor 180B may be used to determine the movement posture of the electronic device 100.
  • In some embodiments, the angular velocity of the electronic device 100 around three axes (i.e., the x, y, and z axes) may be determined by the gyro sensor 180B.
  • the gyro sensor 180B can be used for shooting anti-shake.
  • the gyro sensor 180B detects the shaking angle of the electronic device 100, calculates the distance that the lens module needs to compensate based on the angle, and allows the lens to counteract the shaking of the electronic device 100 through reverse movement to achieve anti-shake.
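  • As a rough illustration of the compensation calculation: for a rectilinear lens, a shake angle θ shifts the image on the sensor by approximately f·tan(θ), so the lens is moved by the same amount in the opposite direction. The focal length and angle below are hypothetical.

```python
import math

def ois_compensation_mm(focal_length_mm: float, shake_angle_deg: float) -> float:
    """Approximate lens displacement needed to counteract a shake angle:
    the image shifts by about f * tan(theta) on the sensor, so the lens
    moves the same amount in the opposite direction."""
    return focal_length_mm * math.tan(math.radians(shake_angle_deg))

# Hypothetical 5.6 mm focal length module shaken by 0.5 degrees:
print(f"{ois_compensation_mm(5.6, 0.5):.4f} mm")  # ~0.0489 mm
```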
  • the gyro sensor 180B can also be used for navigation and somatosensory game scenes.
  • the air pressure sensor 180C is used to measure air pressure.
  • the electronic device 100 calculates the altitude by using the air pressure value measured by the air pressure sensor 180C to assist positioning and navigation.
  • the magnetic sensor 180D includes a Hall sensor.
  • the electronic device 100 can detect the opening and closing of the flip holster using the magnetic sensor 180D.
  • In some embodiments, when the electronic device 100 is a clamshell phone, the electronic device 100 may detect the opening and closing of the clamshell according to the magnetic sensor 180D.
  • Further, based on the detected opening or closing state of the holster or the clamshell, features such as automatic unlocking upon flipping open can be set.
  • the acceleration sensor 180E can detect the magnitude of acceleration of the electronic device 100 in various directions (generally three axes). When the electronic device 100 is stationary, the magnitude and direction of gravity can be detected. It can also be used to recognize the posture of electronic devices, and be used in applications such as horizontal and vertical screen switching and pedometers.
  • the distance sensor 180F is used to measure the distance.
  • the electronic device 100 can measure the distance by infrared or laser. In some embodiments, when shooting scenes, the electronic device 100 may use the distance sensor 180F to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode.
  • the electronic device 100 emits infrared light outward through the light emitting diode.
  • the electronic device 100 uses a photodiode to detect infrared reflected light from nearby objects. When sufficient reflected light is detected, it may be determined that there is an object near the electronic device 100. When insufficient reflected light is detected, the electronic device 100 may determine that there is no object near the electronic device 100.
  • the electronic device 100 can use the proximity light sensor 180G to detect that the user holds the electronic device 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor 180G can also be used in leather-case mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense the brightness of ambient light.
  • the electronic device 100 can adaptively adjust the brightness of the display screen 194 according to the perceived brightness of the ambient light.
  • the ambient light sensor 180L can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor 180L can also cooperate with the proximity light sensor 180G to detect whether the electronic device 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the electronic device 100 can use the collected fingerprint characteristics to realize fingerprint unlocking, access to application locks, fingerprint taking pictures, fingerprint answering calls, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the electronic device 100 uses the temperature detected by the temperature sensor 180J to execute a temperature processing strategy. For example, when the temperature reported by the temperature sensor 180J exceeds a threshold, the electronic device 100 performs performance reduction of the processor located near the temperature sensor 180J in order to reduce power consumption and implement thermal protection. In other embodiments, when the temperature is lower than another threshold, the electronic device 100 heats the battery 142 to avoid abnormal shutdown of the electronic device 100 due to low temperature. In some other embodiments, when the temperature is below another threshold, the electronic device 100 performs boosting on the output voltage of the battery 142 to avoid abnormal shutdown due to low temperature.
  • the touch sensor 180K may also be called a touch panel or a touch-sensitive surface.
  • The touch sensor 180K may be provided on the display screen 194; the touch sensor 180K and the display screen 194 together constitute the touch screen, also called a "touchscreen".
  • the touch sensor 180K is used to detect a touch operation acting on or near it.
  • the touch sensor can pass the detected touch operation to the application processor to determine the type of touch event.
  • the visual output related to the touch operation can be provided through the display screen 194.
  • the touch sensor 180K may also be disposed on the surface of the electronic device 100, which is different from the location where the display screen 194 is located.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor 180M can acquire the vibration signal of the vibrating bone mass of the human body part.
  • the bone conduction sensor 180M can also contact the pulse of the human body and receive a blood pressure beating signal.
  • the bone conduction sensor 180M may also be provided in the earphone and combined into a bone conduction earphone.
  • the audio module 170 may parse out the voice signal based on the vibration signal of the vibrating bone block of the voice part acquired by the bone conduction sensor 180M to realize the voice function.
  • the application processor can analyze the heart rate information based on the blood pressure beating signal acquired by the bone conduction sensor 180M, and realize the heart rate detection function.
  • the key 190 includes a power-on key, a volume key, and the like.
  • the key 190 may be a mechanical key. It can also be a touch button.
  • the electronic device 100 can receive key input and generate key signal input related to user settings and function control of the electronic device 100.
  • the motor 191 may generate a vibration prompt.
  • the motor 191 can be used for vibration notification of incoming calls and can also be used for touch vibration feedback.
  • touch operations applied to different applications may correspond to different vibration feedback effects.
  • for touch operations acting on different areas of the display screen 194, the motor 191 can also produce different vibration feedback effects.
  • different application scenarios (for example, time reminder, receiving information, alarm clock, and game) can also correspond to different vibration feedback effects.
  • the touch vibration feedback effect can also be customized.
  • the indicator 192 may be an indicator light, which may be used to indicate a charging state, a power change, and may also be used to indicate a message, a missed call, a notification, and the like.
  • the SIM card interface 195 is used to connect a SIM card.
  • the SIM card can be inserted into or removed from the SIM card interface 195 to achieve contact and separation with the electronic device 100.
  • the electronic device 100 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • the SIM card interface 195 can support Nano SIM cards, Micro SIM cards, SIM cards, etc.
  • the same SIM card interface 195 can insert multiple cards at the same time. The types of multiple cards can be the same or different.
  • the SIM card interface 195 can also be compatible with different types of SIM cards.
  • the SIM card interface 195 can also be compatible with external memory cards.
  • the electronic device 100 interacts with the network through a SIM card to realize functions such as call and data communication.
  • the electronic device 100 uses eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device 100 and cannot be separated from the electronic device 100.
  • the software system of the electronic device 100 may adopt a layered architecture, event-driven architecture, micro-core architecture, micro-service architecture, or cloud architecture.
  • This embodiment takes a layered architecture Android system as an example to exemplarily explain the software structure of the electronic device 100.
  • FIG. 3 is a block diagram of the software structure of the electronic device 100 of this embodiment.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor.
  • the layers communicate with each other through a software interface.
  • the Android system is divided into four layers, from top to bottom are the application layer, the application framework layer, the Android runtime and the system library, and the kernel layer.
  • the application layer may include a series of application packages.
  • the application package may include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (API) and a programming framework for the applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and so on.
  • the window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, and so on.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Data can include videos, images, audio, calls made and received, browsing history and bookmarks, phone book, etc.
  • the view system includes visual controls, such as controls for displaying text and controls for displaying pictures.
  • the view system can be used to build applications.
  • the display interface can be composed of one or more views.
  • a display interface including an SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide the communication functions of the electronic device 100, for example, management of call states (including connected, hung up, and the like).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables an application to display notification information in the status bar; it can be used to convey notification-type messages, which can automatically disappear after a short stay without user interaction.
  • for example, the notification manager is used to notify download completion, provide message reminders, and so on.
  • the notification manager may also display notifications in the status bar at the top of the system in the form of a graph or scroll bar text, for example, notifications of applications running in the background, or display notifications on the screen in the form of a dialog window.
  • for example, text information is prompted in the status bar, a prompt sound is emitted, the electronic device vibrates, or the indicator light flashes.
  • the Android runtime includes a core library and a virtual machine; the Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in the virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • the virtual machine is used to perform functions such as object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library may include multiple functional modules. For example: surface manager (surface manager), media library (Media library), 3D graphics processing library (for example: OpenGL ES), 2D graphics engine (for example: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of a variety of commonly used audio and video formats, as well as still image files, and so on.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to realize 3D graphics drawing, image rendering, synthesis, and layer processing.
  • the 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least the display driver, camera driver, audio driver, and sensor driver.
  • when the touch sensor 180K receives a touch operation, a corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes the touch operation into an original input event (including information such as the touch coordinates and time stamp of the touch operation).
  • the original input event is stored at the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer and identifies the control corresponding to the input event. Take the touch operation being a tap and the control corresponding to the tap being the control of the camera application icon as an example: the camera application calls an interface of the application framework layer to start the camera application and determines the one or more cameras 193 to be used; then the driver of the one or more cameras 193 is started by calling the kernel layer, and still images or videos are captured by the one or more cameras 193.
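  • As a non-authoritative illustration of this flow, the following sketch shows how an application can ask the framework layer to open a camera through the public Android Camera2 API, which in turn causes the camera driver to start; the actual interface called in this embodiment is not specified:

```kotlin
import android.content.Context
import android.hardware.camera2.CameraDevice
import android.hardware.camera2.CameraManager
import android.os.Handler

// Sketch: opening a camera via the framework layer (Camera2 API).
// Requires the CAMERA permission; error handling is reduced to a minimum.
fun openFirstAvailableCamera(context: Context, handler: Handler) {
    val manager = context.getSystemService(Context.CAMERA_SERVICE) as CameraManager
    val cameraId = manager.cameraIdList.first() // e.g. the main (wide-angle) camera
    manager.openCamera(cameraId, object : CameraDevice.StateCallback() {
        override fun onOpened(camera: CameraDevice) {
            // The camera driver is now running; capture sessions can be created
            // to collect preview frames, still images, or video.
        }
        override fun onDisconnected(camera: CameraDevice) = camera.close()
        override fun onError(camera: CameraDevice, error: Int) = camera.close()
    }, handler)
}
```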
  • the electronic device 100 can focus through the camera and preview the image based on the information captured by the camera. After receiving the user's instruction to "shoot", the electronic device 100 can generate the captured image according to the information captured by the camera.
  • the electronic device 100 can select, from the three cameras, a camera suitable for the current shooting scene to shoot, so as to obtain a better shooting effect in the current shooting scene.
  • compared with a solution in which the images obtained by two cameras of the electronic device are simply fused to obtain the captured image, the method provided in this embodiment can, in diverse and variable shooting scenes, use the image obtained by the camera suitable for the current shooting scene to generate the captured image, so that a high-quality captured image can be obtained in each shooting scene.
  • the electronic device 100 can switch between different cameras, and different cameras are applicable to different shooting scenes; therefore, the electronic device 100 is suitable for more shooting scenes, and the functions of the electronic device are more powerful and diverse.
  • in the following, the mobile phone having the structure shown in FIG. 1 and FIG. 3 is used as the electronic device 100, and the electronic device 100 including an ultra-wide-angle color camera, a wide-angle color camera, and a telephoto color camera is taken as an example to specifically describe the technical solution provided in this embodiment.
  • the wide-angle color camera is referred to as a first camera
  • the ultra-wide-angle color camera is referred to as a second camera
  • the telephoto color camera is referred to as a third camera.
  • the pixels and resolutions of different cameras can be the same or different.
  • the first camera may have the highest pixel and resolution.
  • the second camera has the largest angle of view and can capture the largest field of view.
  • the field of view of the first camera is smaller than that of the second camera and larger than that of the third camera; that is, the field of view of the first camera is intermediate between the two, and the first camera can be used to shoot scenery within a relatively large field of view. Moreover, compared with the other two cameras, the first camera has the best imaging quality (or picture quality).
  • the field of view of the third camera is smaller than that of the second camera and the first camera, but the focal length of the third camera is larger than that of the second camera and the first camera, and is suitable for capturing distant information and shooting distant objects.
  • the field of view is used to indicate the maximum angle range that the camera can capture during image capture on the mobile phone, and a scene within this angle range can be captured by the camera. If the object to be photographed is within this angle range, the object will be captured by the mobile phone; if the object to be photographed is outside this angle range, it will not be captured by the mobile phone.
  • generally, the larger the field of view of the mobile phone, the larger the shooting range and the shorter the focal length; the smaller the field of view of the mobile phone, the smaller the shooting range and the longer the focal length.
  • the term "angle of view" may also be replaced by words such as "field of view", "field of view range", "field of view area", "imaging range", or "imaging field of view"; this article does not limit the name, as long as the above concept is expressed. "Angle of view" is merely a term used in this embodiment, its meaning has been recorded in this embodiment, and its name does not constitute any limitation on this embodiment.
  • in addition, the first camera has the strongest resolving power.
  • resolving power can be understood as the ability to distinguish the details of the photographed object. For example, if the object being photographed is a piece of paper covered with many lines, an image captured by a mobile phone with strong resolving power may distinguish 100 lines, while an image captured by a mobile phone with weak resolving power may only distinguish 10 lines.
  • the greater the resolving power of the mobile phone, the stronger its ability to restore the details of the object after capturing an image of it; for example, if the user enlarges the captured image, the sharpness of the enlarged image is higher.
  • resolving power is related to pixel count, resolution, and the like: the higher the pixel count or the resolution, the stronger the resolving power of the mobile phone. Imaging quality or picture quality may include aspects such as sharpness, clarity, resolving power, color gamut range, color purity, and color balance.
  • since the imaging quality of the first camera is higher than that of the other two cameras, the field of view of the first camera is intermediate so that a relatively large field of view can be captured, and the resolving power of the first camera is strong, the comprehensive capability of the first camera is the strongest; therefore, the first camera can be used as the main camera, and the other two cameras can be used as auxiliary cameras.
  • the position of the first camera in the mobile phone may be between the second camera and the third camera. In this way, the first camera as the main camera can capture the information of the object to be photographed in the main field of view.
  • the mobile phone can use different cameras in these cameras to shoot in different shooting scenarios. That is, the mobile phone can select a camera suitable for the current shooting scene from the second camera, the first camera, and the third camera to shoot, so that a better shooting effect can be obtained in the current shooting scene.
  • in other words, the mobile phone can shoot through different cameras in different shooting scenarios.
  • the camera application can be started to display a graphical user interface (GUI) as shown in FIG. 4A, which can be called a shooting interface.
  • a viewfinder frame 400 and controls for indicating the camera are displayed on the shooting interface.
  • a preview image collected by the camera (the preview image is not shown in the drawing) can be displayed in the viewfinder frame 400.
  • the control for indicating the camera may include a control 401 for indicating the second camera, a control 402 for indicating the first camera, and a control 403 for indicating the third camera.
  • in some embodiments, the user may determine the camera to be used according to the current shooting scene, tap the control corresponding to that camera on the shooting interface, and then aim the camera at the object to be photographed to shoot; after the mobile phone detects that the user taps the control of a camera on the shooting interface, it shoots with the camera selected by the user. In other embodiments, the camera of the mobile phone is aimed at the object to be photographed, and a preview image of the object is displayed in the viewfinder frame 400; the user determines the camera to be used according to the current shooting scene, the size of the preview image displayed in the viewfinder frame 400, and the field of view of the preview image, and taps the control corresponding to that camera on the shooting interface.
  • the mobile phone then shoots with the camera selected by the user.
  • the user can manually select the camera to be used according to the current shooting scene, and the mobile phone shoots according to the camera selected by the user.
  • after the camera application is opened, the mobile phone uses the first camera to shoot by default.
  • when the volume of the object to be photographed is large, for example, when the object to be photographed is a tall building, a group of people, or a long bridge, the imaging range needed for shooting is also large; in this case, the mobile phone can use the second camera to shoot.
  • since the imaging range of the second camera is the largest, the user can actively designate the second camera, which is suitable for the current shooting scene, so as to capture the large object to be photographed as completely as possible.
  • the above description uses the example in which the first camera is used for shooting by default after the camera is opened; alternatively, the second camera or the third camera may be used for shooting by default after the camera is opened, or the camera that was used to capture images when the camera application last exited may be used for shooting.
  • this is not limited in the embodiments of the present application.
  • a control 404 for indicating the second camera, a control 405 for indicating the first camera, and a control 406 for indicating the third camera are displayed on the shooting interface.
  • the object to be photographed is closer to the mobile phone and the volume of the object to be photographed is smaller.
  • the object to be photographed is a person near the mobile phone.
  • the user can click the control 405 according to the current shooting scene to select the first camera for shooting.
  • since the imaging range of the first camera is intermediate and its imaging quality is the best, the user can actively designate the first camera, which is suitable for the current shooting scene, so as to obtain a better shooting effect.
  • a control 407 for indicating the second camera, a control 408 for indicating the first camera, and a control 409 for indicating the third camera are displayed on the shooting interface.
  • the object to be photographed is far away from the mobile phone.
  • the object to be photographed is a car in the distance.
  • the user can click the control 409 according to the current shooting scene to select the third camera for shooting.
  • since the third camera has the largest focal length and is suitable for capturing distant information, the user can actively designate the third camera, which is suitable for the current shooting scene, to shoot distant objects and obtain a better shooting effect.
  • in other embodiments, the user can also click the control 409 and the control 408 according to the current shooting scene, so as to select the two cameras, the third camera and the first camera, to shoot.
  • the mobile phone can fuse the images collected by the third camera suitable for shooting distant scenes and the first camera with the best imaging quality, thereby obtaining a better shooting effect.
  • a control 410 for indicating shooting options is displayed on the shooting interface.
  • the mobile phone may display a GUI as shown in FIG. 4E.
  • the mobile phone uses the at least one camera selected by the user to shoot.
  • the user can select at least one camera suitable for the shooting scene to shoot according to the current specific shooting scene, so as to improve the shooting effect in the current shooting scene.
  • compared with a solution in which the images obtained by two cameras of the mobile phone are simply merged to obtain the final captured image, the method provided in this embodiment can, in diverse and variable shooting scenes, switch in real time to the camera or camera combination suitable for the current shooting scene to obtain the final captured image, so that a captured image with a better effect can be obtained in each shooting scene.
  • controls for indicating the second camera, the first camera, and the third camera may be different icons, or may be the same icon.
  • the above controls 401, 402, 403 may be different icons or the same icon.
  • the shooting interface may also include a control for instructing to automatically switch the camera.
  • the control for instructing to automatically switch the camera may be the control 414 in FIG. 4A.
  • the mobile phone may automatically determine the camera suitable for the current shooting scene and automatically switch to the determined camera for shooting.
  • the above is mainly based on the example that the user actively designates different cameras to shoot according to different shooting scenarios to illustrate that the mobile phone can turn on different cameras to shoot in different shooting scenarios.
  • the mobile phone can be switched to different cameras for shooting in different shooting scenarios.
  • the zoom magnification K is the reduction/enlargement ratio of a first focal length relative to a second focal length.
  • the second focal length is the focal length of the main camera, that is, the focal length of the first camera.
  • the angle of view imaged on the photosensitive element of another camera can be converted into the focal length of a lens that would produce the same imaging angle of view on the first camera; that is, the focal length of a camera other than the first camera can be converted into a focal length of the first camera, and the converted focal length is the equivalent focal length.
  • when the zoom magnification is K, it is equivalent to the first camera of the mobile phone shooting with the first focal length corresponding to the zoom magnification K.
  • for example, shooting with a zoom magnification K of 0.5 is equivalent to the effect of the user moving the mobile phone to a position 5 meters away from the object to shoot it, and is equivalent to reducing the second focal length by a factor of 2; that is, the first focal length is 0.5 times the second focal length.
  • when the zoom magnification K is less than 1, K can be understood as the reduction ratio of the first focal length relative to the second focal length; when the zoom magnification K is greater than 1, K can be understood as the enlargement ratio of the first focal length relative to the second focal length; when the zoom magnification K is equal to 1, the first focal length is equal to the second focal length.
  • the value range of the zoom magnification K of the mobile phone may be M ≤ K ≤ N, where M = tan(B/2) / tan(A/2) and N = tan(B/2) / tan(C/2); A is the angle of view of the second camera, B is the angle of view of the first camera, and C is the angle of view of the third camera, with A greater than B and B greater than C.
  • for example, A may be 120°, B may be 75°, and C may be 30°.
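  • As a minimal sketch of the formulas above, the zoom bounds M and N can be computed from these example angles as follows (the function and variable names are illustrative):

```kotlin
import kotlin.math.tan

// Computes the zoom-magnification bounds M = tan(B/2)/tan(A/2) and
// N = tan(B/2)/tan(C/2) from the angles of view in degrees.
fun zoomBounds(aDeg: Double, bDeg: Double, cDeg: Double): Pair<Double, Double> {
    fun tanHalf(deg: Double) = tan(Math.toRadians(deg / 2))
    val m = tanHalf(bDeg) / tanHalf(aDeg) // lower bound, less than 1
    val n = tanHalf(bDeg) / tanHalf(cDeg) // upper bound, greater than 1
    return m to n
}

fun main() {
    val (m, n) = zoomBounds(120.0, 75.0, 30.0)
    println("M = %.2f, N = %.2f".format(m, n)) // prints M = 0.44, N = 2.86
}
```

  • With these example angles the computed range is roughly 0.44 ≤ K ≤ 2.86; the actual bounds depend on the cameras' real angles of view.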
  • the zoom magnification K may be less than 1, equal to 1, or greater than 1, and the numerical range of the zoom magnification K is large.
  • when the zoom magnification K is equal to 1, the scene may be referred to as a 1× shooting scene, that is, a shooting scene at the 1× focal length (that is, 1× the main camera focal length).
  • the mobile phone uses the first camera to shoot without zooming.
  • when the zoom magnification K is not equal to 1, the scene can be called a zoom shooting scene.
  • mobile phones often need to change the focal length during image capture. For example, when a user shoots a distant object (such as a person, flowers, or a building) through the mobile phone in a 1× shooting scene, the object displayed in the preview image on the mobile phone is small.
  • the mobile phone can perform zoom shooting with a zoom magnification K greater than 1.
  • in other cases, zoom shooting can also be performed with a zoom magnification K less than 1. That is to say, the zoom magnification K can be understood as the reduction/enlargement magnification K (or reduction/enlargement ratio K) of the image: the larger the zoom magnification K, the larger the enlargement of the image and the larger the size of the object in the preview image; the smaller the zoom magnification K, the smaller the image reduction/enlargement factor K and the smaller the size of the object in the preview image.
  • since the value range of the zoom magnification K is large, the reducible/enlargeable range of the image is also large.
  • each camera of the mobile phone has a focal length, and each camera uses its own focal length to collect images.
  • the focal length of the second camera may be 16 mm
  • the focal length of the first camera may be 27 mm
  • the focal length of the third camera may be 80 mm.
  • when the mobile phone uses the second camera to collect images, the second camera uses the 16 mm focal length, the zoom magnification K is equal to M, and the equivalent focal length is M × 27 mm; when the mobile phone uses the first camera to collect images, the first camera uses the 27 mm focal length, and the zoom magnification K is equal to 1; when the mobile phone uses the third camera to collect images, the third camera uses the 80 mm focal length, the zoom magnification K is equal to N, and the equivalent focal length is N × 27 mm.
  • the image collected by the mobile phone using the second camera may be shown in FIG. 5A; the image collected by the mobile phone using the first camera may be shown in FIG. 5B; the image collected by the mobile phone using the third camera may be shown in FIG. 5C.
  • the mobile phone can crop and enlarge the image collected by the second camera (for example, FIG. 5A) to obtain an image corresponding to a zoom magnification K greater than M and less than 1 (for example, FIG. 5D); crop and enlarge the image collected by the first camera (for example, FIG. 5B) to obtain an image corresponding to a zoom magnification K greater than 1 and less than N (for example, FIG. 5E); and crop and enlarge the image collected by the third camera to obtain an image corresponding to a zoom magnification K greater than N (for example, FIG. 5F).
  • in a 1× shooting scene where the zoom magnification K is equal to 1, the mobile phone can use the first camera to shoot, so that the image collected by the first camera, which has the highest imaging quality, can be used to generate the preview image and the captured image, thereby obtaining a better shooting result.
  • when the zoom magnification K satisfies M ≤ K < 1, the mobile phone can use the second camera to perform zoom shooting. The mobile phone crops and enlarges the image collected by the second camera according to the zoom magnification K to generate an image corresponding to K (for example, a preview image or a captured image). In this way, the mobile phone shoots with the second camera, which has the largest imaging range among the three cameras, to meet the user's requirement for the imaging range.
  • when the zoom magnification K satisfies 1 ≤ K < N, the mobile phone may use the first camera to shoot. The mobile phone crops and enlarges the image collected by the first camera according to the zoom magnification K to generate an image corresponding to K. In this way, the mobile phone shoots with the first camera, which has the highest imaging quality, the strongest resolving power, and the most pixels among the three cameras, to obtain the best image quality.
  • when the zoom magnification K satisfies K ≥ N, the mobile phone can use the third camera to shoot. The mobile phone crops and enlarges the image collected by the third camera according to the zoom magnification K to generate an image corresponding to K. In this way, the mobile phone shoots with the third camera, which among the three cameras is suitable for capturing distant information, to meet the user's need for shooting distant objects at a large zoom magnification.
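  • The single-camera selection rule in the three cases above can be sketched as follows; the enum and function names are illustrative assumptions, not interfaces of this embodiment:

```kotlin
// One camera per zoom range (the Table 1 style of selection): the chosen
// camera's image is then cropped and enlarged according to K.
enum class Cam { SECOND_ULTRA_WIDE, FIRST_WIDE, THIRD_TELEPHOTO }

fun selectCamera(k: Double, m: Double, n: Double): Cam {
    require(k >= m) { "K is below the minimum zoom magnification M" }
    return when {
        k < 1.0 -> Cam.SECOND_ULTRA_WIDE // M <= K < 1: largest imaging range
        k < n   -> Cam.FIRST_WIDE        // 1 <= K < N: best imaging quality
        else    -> Cam.THIRD_TELEPHOTO   // K >= N: longest focal length
    }
}
```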
  • in this embodiment, that the mobile phone uses a certain camera to shoot means that the mobile phone turns on that camera and uses the information collected by that camera to generate an image (for example, a preview image or a captured image).
  • in one case, the mobile phone can also turn on the cameras other than that camera, but does not use the information collected by the other cameras to generate images; in this way, when the mobile phone needs to switch to another camera for shooting, it can switch quickly without delay.
  • in another case, the mobile phone can turn off the cameras other than that camera to save the power consumption of the mobile phone.
  • the mobile phone uses the first camera to shoot means that the mobile phone turns on the first camera and uses the information collected by the first camera to generate an image.
  • in one case, the mobile phone can also turn on the second camera and the third camera, but does not use the information collected by the second camera and the third camera to generate images. In this way, when the mobile phone needs to switch from the first camera to the second camera or to the third camera for shooting, it can switch quickly without delay. In another case, the mobile phone can turn off the second camera and the third camera to save the power consumption of the mobile phone.
  • Table 1 above illustrates the case where the mobile phone uses a camera to shoot under the shooting scenarios corresponding to different zoom magnifications.
  • the mobile phone can also use a combination of different cameras to shoot.
  • image fusion may adopt methods such as the wavelet transform fusion method or the pyramid transform fusion method, which is not specifically limited in this embodiment.
  • in some embodiments, when the zoom magnification satisfies M ≤ K < 1, the mobile phone can also use the first camera and/or the third camera on the basis of shooting with the second camera, so as to generate an image with higher imaging quality through image fusion. That is, the mobile phone can use at least the second camera to shoot.
  • when the zoom magnification satisfies 1 ≤ K < N, the mobile phone can also use the second camera and/or the third camera on the basis of shooting with the first camera, so as to generate an image with higher imaging quality through image fusion. That is, the mobile phone can use at least the first camera to shoot.
  • when the zoom magnification satisfies K ≥ N, the mobile phone can also use the second camera and/or the first camera on the basis of shooting with the third camera, so as to generate an image with higher imaging quality through image fusion. That is, the mobile phone can use at least the third camera to shoot.
  • when the mobile phone determines the camera and shoots according to the correspondence between the zoom magnification and the camera shown in Table 1, only one camera suitable for the current shooting scene is used in each shooting scenario, which saves the power consumption and battery of the mobile phone; when the mobile phone determines the camera and shoots according to the correspondence between the zoom magnification and the camera shown in Table 2, it needs to use multiple cameras to collect information and generates the image by fusing the information collected by the multiple cameras, so the imaging quality is higher and the shooting effect is better.
  • the mobile phone may determine the target zoom magnification according to the user's operation for indicating the zoom magnification, determine the corresponding target camera according to the target zoom magnification, and use the target camera corresponding to the target zoom magnification to shoot.
  • for example, controls for indicating the zoom magnification (or reduction/enlargement magnification) are displayed on the shooting interface, such as the 0.6× control 601, the 1× control 602, the 2× control, the 5× control, and the 10× control shown in FIG. 6A, and the current zoom magnification is 1×.
  • when the user clicks the control 601 to indicate a target zoom magnification of 0.6× (or a target reduction/enlargement magnification of 0.6), the mobile phone can determine, according to the correspondence between the zoom magnification and the camera shown in Table 1 or Table 2, the target camera corresponding to the target zoom magnification of 0.6× (for example, the second camera according to Table 1), and then generate an image based on the target zoom magnification and the information collected by the target camera.
  • in other embodiments, a zoom magnification scale (or reduction/enlargement scale) 701 is displayed on the shooting interface, and the current zoom magnification is 2×.
  • when the mobile phone detects that the user makes a gesture (for example, a pinch) with fingers on the shooting interface (or on the zoom magnification scale), in response to the gesture, the mobile phone can determine a target zoom magnification according to the gesture (for example, 1×), and the zoom magnification scale can be used to display, on the shooting interface, the target zoom magnification currently determined by the mobile phone according to the gesture.
  • the mobile phone can then determine the target camera corresponding to the target zoom magnification according to the correspondence between the zoom magnification and the camera shown in Table 1 or Table 2 (for example, the first camera according to Table 1), so that an image can be generated based on the target zoom magnification and the information collected by the target camera.
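  • A sketch of mapping a pinch gesture to a target zoom magnification and a target camera, using Android's ScaleGestureDetector; the clamping range [M, N] follows the zoom range above, and selectCamera() is the helper from the earlier sketch:

```kotlin
import android.view.ScaleGestureDetector

// Accumulates pinch scaling into a zoom magnification K clamped to [m, n],
// then reports K together with the camera selected for it.
class ZoomGestureListener(
    private val m: Double,
    private val n: Double,
    private val onZoom: (k: Double, cam: Cam) -> Unit
) : ScaleGestureDetector.SimpleOnScaleGestureListener() {

    private var k = 1.0 // current zoom magnification, starts at 1x

    override fun onScale(detector: ScaleGestureDetector): Boolean {
        k = (k * detector.scaleFactor).coerceIn(m, n)
        onZoom(k, selectCamera(k, m, n)) // update scale display and target camera
        return true
    }
}
```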
  • the correspondence between the zoom magnification K and the camera is preset in the mobile phone.
  • the mobile phone may select different cameras or a combination of different cameras corresponding to the zoom magnification K to shoot according to the correspondence shown in Table 1 or Table 2.
  • the mobile phone can determine the target camera corresponding to the target zoom magnification according to the correspondence between the zoom magnification and the camera shown in Table 1.
  • in some embodiments, the mobile phone can determine the target camera corresponding to the target zoom magnification according to the correspondence between the zoom magnification and the camera shown in the second column and the penultimate column of Table 2; in the super shooting mode, the mobile phone can shoot with the three cameras shown in the last column of Table 2.
  • in the high-quality shooting mode, the mobile phone can determine the target camera corresponding to the target zoom magnification according to the correspondence between the zoom magnification and the camera shown in the second column and the penultimate column of Table 2, and can also determine the target camera according to the content of the preview image displayed in the viewfinder frame.
  • the mobile phone may determine that the target camera is the second camera and the third camera. That is, on the basis of shooting with the second camera, the mobile phone can perform image fusion in conjunction with the third camera.
  • the field of view of the third camera is small, which can be used to capture the information of the person in the middle position, so that the image of the person part is clearer.
  • alternatively, the mobile phone may determine that the target camera is the second camera and the first camera; that is, on the basis of shooting with the second camera, the mobile phone can perform image fusion in conjunction with the first camera, whose field of view is intermediate. The fused part has high image quality, and the field of view of the first camera is larger than that of the third camera; therefore, compared with shooting with the second camera and the third camera, shooting with the second camera and the first camera allows a larger field of view to be fused, so the image within the larger field of view can be made clearer.
  • the mobile phone may determine that the target camera is the first camera and the second camera. That is, on the basis of shooting with the first camera, the mobile phone can also take advantage of the large field of view of the second camera to acquire a landscape image over a larger range, and fuse it with the image acquired by the first camera, so that the imaging of the entire landscape image is clearer.
  • alternatively, the mobile phone can determine that the target camera is the first camera and the third camera; the third camera, with its smaller field of view, can be used to capture the information of a person in the middle position, making the image of the person part clearer.
  • the mobile phone may determine that the target camera is a third camera and a second camera. That is, on the basis of shooting with the third camera, the mobile phone can also take advantage of the large field of view characteristic of the second camera to acquire landscape images in a larger range.
  • the mobile phone may determine that the target camera is the third camera and the first camera. That is, on the basis of shooting with the third camera, the mobile phone can also take advantage of the high imaging quality of the first camera to obtain a clearer image after fusion.
  • in other embodiments, the mobile phone can also automatically determine the target zoom magnification at the time of shooting according to the object to be photographed, the imaging size of the object to be photographed, or the distance of the object to be photographed, and then determine the target camera corresponding to the target zoom magnification according to the correspondence between the zoom magnification and the camera shown in Table 1 or Table 2, so that an image can be generated according to the target zoom magnification and the information collected by the target camera.
  • for example, when taking pictures, the mobile phone can automatically determine the target zoom magnification according to the distance between the object to be photographed and the mobile phone, then determine the target camera according to the target zoom magnification, and generate an image based on the target zoom magnification and the information collected by the target camera.
  • there are many ways for the mobile phone to determine the distance of the object to be photographed.
  • the mobile phone can determine the distance of the object to be shot through the parallax of the images collected by the two cameras.
  • for example, when the distance between the object to be photographed and the mobile phone is greater than or equal to a preset value 1 and less than a preset value 2, the mobile phone determines that the target zoom magnification is greater than M and less than 1; when the distance between the object to be photographed and the mobile phone is greater than or equal to the preset value 2 and less than a preset value 3, the mobile phone determines that the target zoom magnification is greater than or equal to 1 and less than N; when the distance between the object to be photographed and the mobile phone is greater than or equal to the preset value 3, the mobile phone determines that the target zoom magnification is greater than or equal to N. The preset value 1 is less than the preset value 2, and the preset value 2 is less than the preset value 3 (see the sketch after this item).
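  • The sketch referenced above; the preset distance values are placeholders, since this embodiment does not specify them:

```kotlin
// Assumed placeholder distances, in meters (preset values 1, 2, and 3).
const val PRESET_1_M = 0.5
const val PRESET_2_M = 2.0
const val PRESET_3_M = 8.0

// Returns the (lower, upper) bounds of the target zoom magnification for a
// given subject distance; the distant case is open-ended at the top.
fun targetZoomBounds(distanceM: Double, m: Double, n: Double): Pair<Double, Double>? =
    when {
        distanceM >= PRESET_3_M -> n to Double.MAX_VALUE // target K >= N
        distanceM >= PRESET_2_M -> 1.0 to n              // 1 <= target K < N
        distanceM >= PRESET_1_M -> m to 1.0              // M < target K < 1
        else -> null // closer than preset value 1: not covered by this rule
    }
```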
  • in other embodiments, when recording a video, the mobile phone can also identify the object to be photographed according to the image collected by the camera, determine the zoom magnification according to the size of the object in the image, determine the camera according to the zoom magnification, and automatically switch to the camera corresponding to the zoom magnification to shoot.
  • the mobile phone generates a composition according to the image collected by the switched-to camera, and crops and reduces/enlarges the image collected by that camera according to the composition and the zoom magnification, thereby generating the next frame of the video.
  • that is, in the process of shooting a video, the mobile phone can zoom automatically and switch in real time to the camera corresponding to the changed zoom magnification to shoot.
  • if the object to be photographed moves closer to the mobile phone, the size of the object on the image becomes larger; if the object to be photographed moves farther from the mobile phone, the size of the object on the image becomes smaller, and the user may not be able to see the object clearly.
  • the distance between a moving object and the mobile phone usually varies between near and far, so the size of the object on the image also varies between large and small, which is not convenient for the user to observe and record the object to be photographed.
  • in this embodiment, the mobile phone can set a reference size of the object to be photographed on the screen (for example, when the object to be photographed is a person, the reference size of the person on the image may be 1/3 of the entire image size), and the reference size is fixed during the video shooting process. It can be understood that different objects to be photographed may correspond to different reference sizes.
  • the mobile phone can determine the zoom magnification according to the ratio between the reference size and the actual size of the object to be photographed in the image captured in the current frame, and determine, according to the zoom magnification, the camera to be used to capture the next frame of image.
  • the mobile phone collects images with the determined camera, generates a composition according to the collected image, and crops and reduces/enlarges the collected image according to the composition and the determined zoom magnification, so that the actual size of the object to be photographed on the image is basically the same as the reference size.
  • if the object to be photographed moves closer to the mobile phone, the actual size of the object on the image collected by the camera is larger, and the ratio of the reference size to the actual size, that is, the image reduction/enlargement ratio K (the zoom magnification K), is less than 1. The mobile phone can crop the image according to the zoom magnification K, so that the size of the object to be photographed on the image remains basically unchanged at the reference size.
  • if the object to be photographed moves farther from the mobile phone, the actual size of the object collected by the camera on the image is small, and the ratio of the reference size to the actual size, that is, the zoom magnification K, is greater than 1. The mobile phone can crop the picture according to the zoom magnification K, so that the size of the object to be photographed on the image remains basically unchanged.
  • as the object to be photographed moves, the mobile phone determines the new zoom magnification in real time, switches in real time to the camera corresponding to the new zoom magnification to collect images, and processes the collected images in real time according to the new zoom magnification, so that the size of the object to be photographed in the video images remains basically the same, which is convenient for the user to observe and record; a sketch of this reference-size logic follows.
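  • The sketch referenced above: the zoom magnification for the next frame is the ratio of the fixed reference size to the object's actual size in the current frame; the names and the pixel-based size measure are illustrative assumptions:

```kotlin
// Object size measured as a fraction of the frame height, in pixels.
data class Frame(val objectHeightPx: Int, val frameHeightPx: Int)

// K = reference size / actual size: K < 1 when the object got too big,
// K > 1 when it got too small; clamped to the zoom range [m, n].
fun nextZoom(frame: Frame, referenceFraction: Double, m: Double, n: Double): Double {
    val actualFraction = frame.objectHeightPx.toDouble() / frame.frameHeightPx
    return (referenceFraction / actualFraction).coerceIn(m, n)
}

fun main() {
    // e.g. a person whose reference size is 1/3 of the image, as above.
    val k = nextZoom(Frame(objectHeightPx = 480, frameHeightPx = 1080),
                     referenceFraction = 1.0 / 3, m = 0.44, n = 2.86)
    println("next zoom magnification: %.2f".format(k)) // about 0.75
}
```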
  • in this embodiment, the zoom magnification can range from M to N, so the zoom range is large and the zoom freedom is high.
  • in addition, this embodiment crops the picture based on the composition. Since the composition may change dynamically, the picture to be cropped requires a large field of view to meet the needs of cropping according to the composition.
  • the second camera in this embodiment can provide a picture to be cropped with a sufficiently large field of view to meet the cropping requirement.
  • the mobile phone may keep the second camera turned on.
  • for example, if the object to be photographed moves to the left or right, the second camera, which can capture information within a larger field of view, can still capture the object after it moves. Therefore, the mobile phone can perform cropping according to the picture information captured by the second camera, so that the picture recorded during video shooting can always include the object to be photographed. It can be seen that the second camera provides the mobile phone with a larger cropping range and more freedom.
  • the mobile phone when the mobile phone records the little girl playing football, the field of view that the second camera can shoot is relatively large.
  • the second camera can collect the image information of the little girl.
  • the mobile phone can crop a part of the image collected by the second camera and enlarge it, thereby generating the image during recording.
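  • A sketch of cropping a region from the second camera's large-field-of-view frame so that the recorded picture keeps the subject; Size and Rect here are plain illustrative data classes, and the logic that locates the subject's center is assumed to exist elsewhere:

```kotlin
data class Size(val w: Int, val h: Int)
data class Rect(val left: Int, val top: Int, val right: Int, val bottom: Int)

// zoom >= 1 is the enlargement relative to the full ultra-wide frame;
// the crop window shrinks as zoom grows and is clamped to the frame edges.
fun cropAround(frame: Size, centerX: Int, centerY: Int, zoom: Double): Rect {
    require(zoom >= 1.0) { "crop window cannot exceed the full frame" }
    val cw = (frame.w / zoom).toInt()
    val ch = (frame.h / zoom).toInt()
    val left = (centerX - cw / 2).coerceIn(0, frame.w - cw)
    val top = (centerY - ch / 2).coerceIn(0, frame.h - ch)
    return Rect(left, top, left + cw, top + ch)
}
```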
  • compared with a solution in which the images obtained by two cameras of the mobile phone are simply fused to obtain the final captured image, the method provided in this embodiment can, in diverse and changeable shooting scenes, use the images collected by one or more cameras suitable for the current shooting scene to obtain the final captured image, so that a high-quality captured image can be obtained in each shooting scene. Therefore, when the shooting scene changes, the mobile phone can switch in real time to the images captured by the one or more cameras suitable for the changed shooting scene to obtain the final captured image, so that a higher-quality captured image can be obtained in any shooting scene.
  • the above is mainly from the perspective of zoom magnification to illustrate that the mobile phone can use different cameras to shoot in different shooting scenarios.
  • the following mainly uses an example in which the mobile phone automatically identifies the shooting scene and, according to the identified shooting scene, automatically switches (or prompts the user to switch) to the camera suitable for the current shooting scene, so as to illustrate that the mobile phone can use different cameras to shoot in different shooting scenarios.
  • when the volume of the object to be photographed is large, for example, when the object to be photographed is a tall building, a long bridge, a group of people, or a group of buildings, the mobile phone needs a large imaging field of view when photographing.
  • since the second camera has the largest imaging field of view, when the mobile phone determines that it needs a large imaging field of view or needs to expand the imaging field of view, it can automatically switch to the second camera for ultra-wide-angle shooting, or it can prompt the user to switch to the second camera for ultra-wide-angle shooting.
  • the zoom magnification K in the ultra-wide-angle shooting mode is less than 1.
  • for example, the object to be photographed is the Eiffel Tower, and the control 901 is used to indicate that the mobile phone is currently shooting in the standard mode, that is, the 1× shooting mode of the first camera. The mobile phone recognizes that the Eiffel Tower is not fully captured: the collected image shows only a part of the Eiffel Tower rather than the complete tower, so the imaging field of view needs to be expanded. The mobile phone can therefore automatically switch to the second camera for shooting; and, referring to FIG. 9B, the mobile phone can also prompt the user with "Automatically switched to the ultra-wide-angle shooting mode", where the control 902 is used to indicate that the mobile phone is currently shooting in the ultra-wide-angle mode. As can be seen from FIG. 9B, when the second camera is used for shooting, the imaging field of view is larger, and the entire Eiffel Tower can be captured completely.
  • for another example, the objects to be photographed are a plurality of people, and the control 1001 is used to indicate that the mobile phone is currently shooting in the standard mode (that is, the 1× shooting mode of the first camera); the mobile phone recognizes that the image collected by the first camera does not contain all of the people.
  • in some embodiments, the mobile phone may prompt the user: "The current field of view is small. Switch to the second camera with a larger field of view?" When the user taps "Yes", the mobile phone switches to the second camera to shoot.
  • in other embodiments, the mobile phone may prompt the user through the prompt box 1002 on the shooting interface: "The current field of view is small. Switch to the ultra-wide-angle shooting mode?"
  • after detecting the user's confirmation, the mobile phone switches to the second camera for ultra-wide-angle shooting; and, referring to FIG. 10B, the mobile phone indicates through the control 1003 that it has currently switched to the ultra-wide-angle shooting mode.
  • the control 1003 may disappear automatically after being displayed for a period of time.
  • the mobile phone automatically displays the ultra-wide-angle shooting mode control on the shooting interface, so that the user can switch to the second camera for shooting by clicking the ultra-wide-angle shooting mode control.
  • in other embodiments, the mobile phone automatically displays a zoom magnification control on the shooting interface, and the zoom magnification indicated by the control is less than 1 (for example, 0.6×), so that the user can conveniently switch to the second camera to shoot.
  • in other embodiments, the mobile phone can prompt the user by voice, "Switch to the second camera to take a larger range of pictures?"; if the user inputs "Yes" by voice, or inputs a sentence indicating switching such as "switch", the mobile phone switches to the second camera to shoot.
  • the mobile phone turns on the first camera to shoot.
  • the ultra-wide-angle shooting mode control is displayed on the shooting interface.
  • if the user clicks the ultra-wide-angle shooting mode control, the mobile phone switches to the second camera to shoot.
  • after switching to the second camera, the mobile phone can also automatically switch back to using the first camera for shooting. For example, when the camera of the mobile phone moves away from the Eiffel Tower and the object to be photographed changes to a smaller object, the mobile phone determines that it no longer needs a large imaging field of view, and can automatically switch back to the first camera to shoot. Alternatively, after the mobile phone finishes this shooting with the second camera, it can automatically switch back to using the first camera for shooting.
  • the mobile phone uses the second camera to shoot in the ultra-wide-angle shooting scene.
  • the mobile phone may also shoot with the second camera and other cameras.
  • the mobile phone can at least use the second camera to shoot.
  • in other embodiments, when the mobile phone detects that the distance between the object to be photographed and the mobile phone is less than or equal to a preset value 4 (for example, the preset value 4 may be 10 cm), the object to be photographed is very close to the mobile phone, and the current shooting scene is a macro shooting scene.
  • in the macro shooting scene, the size of the object to be photographed on the preview image is large, and the mobile phone can capture information within a larger field of view through the second camera, so as to capture the complete information of the object to be photographed as far as possible.
  • the mobile phone can determine the distance between the object to be photographed and the mobile phone through the distance sensor 180F.
  • the mobile phone may determine the distance between the object to be photographed and the mobile phone through a preset focusing algorithm. When the distance is less than or equal to the preset value 4, the mobile phone can determine that the distance between the object to be photographed and the mobile phone is short, so it can automatically switch to the second camera to shoot, or prompt the user to switch to the second camera to shoot.
  • for example, the object to be photographed is a flower, and the control 1101 is used to indicate that the mobile phone is currently shooting in the standard mode (that is, the 1× shooting mode of the first camera). When the mobile phone determines that the distance between the object to be photographed and the mobile phone is less than or equal to the preset value 4, it displays the prompt box 1102: "The shooting distance is very close. Switch to macro mode?" If the user selects "Yes", as shown in FIG. 11B, the mobile phone switches to the macro mode and uses the second camera to shoot.
  • the control 1103 is used to indicate that it is currently in the macro shooting mode.
  • a close control 1104 is also displayed on the shooting interface. When the user clicks the close control 1104, the mobile phone no longer displays the control 1103.
  • in other embodiments, the mobile phone uses the first camera to shoot, and a macro shooting mode control is displayed on the shooting interface; when the user clicks the control, the mobile phone switches to the second camera for macro shooting.
  • after the camera of the mobile phone moves away from the object to be photographed, the mobile phone can switch back to using the first camera to shoot; alternatively, after the mobile phone finishes this shooting with the second camera, it can automatically switch back to using the first camera for shooting.
  • in addition, in the macro shooting scene, the mobile phone can move the position of the photosensitive element in the second camera, or move the position of the lens of the second camera, through a built-in motor, so as to increase the image distance and allow the second camera to focus, so that the macro shooting scene can be imaged clearly.
  • the mobile phone uses a second camera to shoot in a macro shooting scene.
  • the mobile phone may also shoot in combination with the second camera and other cameras. That is to say, in the macro shooting scene, the mobile phone can at least use the second camera to shoot, which will not be repeated here.
  • in other embodiments, when the mobile phone determines that the distance to the object to be photographed is greater than or equal to a preset value 5 (for example, 5 meters), it can determine that the current scene is a distant shooting scene, and can therefore shoot with the third camera, which has a smaller field of view and a longer focal length and is suitable for shooting distant objects. Specifically, the mobile phone may switch automatically when it determines that the distance to the object is greater than or equal to the preset value 5, or prompt the user to switch, or switch to the third camera for distant shooting after receiving the user's instruction; a sketch of this distance-based switching is given after this discussion.
  • in the distant shooting scene, the zoom magnification K is large; for example, the zoom magnification K may be greater than N.
  • the object to be shot is a football on the other end of the football field.
  • when the mobile phone determines that the distance between the object to be photographed and the mobile phone is far, it can automatically switch to using the third camera for distant shooting.
  • the control 1201 is used to indicate that the current shooting scene is a distant shooting mode.
  • after the camera of the mobile phone moves away from the distant object to be photographed, the mobile phone can switch back to using the first camera to shoot; alternatively, after the mobile phone finishes this shooting with the third camera, it can automatically switch back to using the first camera for shooting.
  • the mobile phone uses a third camera to shoot in a distant shooting scene.
  • the mobile phone may also shoot with the third camera and other cameras. That is to say, in the distant shooting scene, the mobile phone can at least use the third camera to shoot, which will not be repeated here.
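  • The distance-based switching in the macro and distant examples above can be sketched as follows, reusing the Cam enum from the earlier selection sketch; the thresholds use the example values given in the text (10 cm for the preset value 4 and 5 meters for the preset value 5):

```kotlin
const val PRESET_4_M = 0.10 // macro threshold (example value: 10 cm)
const val PRESET_5_M = 5.0  // distant-scene threshold (example value: 5 m)

// Picks the camera to switch to (or to suggest) from the subject distance.
fun cameraForDistance(distanceM: Double): Cam = when {
    distanceM <= PRESET_4_M -> Cam.SECOND_ULTRA_WIDE // macro: at least the second camera
    distanceM >= PRESET_5_M -> Cam.THIRD_TELEPHOTO   // distant: at least the third camera
    else -> Cam.FIRST_WIDE                           // standard 1x shooting
}
```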
  • the above uses as examples the switching between the 1× standard shooting scene and the ultra-wide-angle shooting scene, the switching between the 1× standard shooting scene and the distant shooting scene, and the switching between the 1× standard shooting scene and the macro shooting scene.
  • in other embodiments, the mobile phone can also switch among the 1× standard shooting scene, the ultra-wide-angle shooting scene, the distant shooting scene, and the macro shooting scene, switching in real time to the camera suitable for the current shooting scene, which will not be repeated here.
  • compared with a solution in which the images obtained by two cameras of the mobile phone are simply merged to obtain the final captured image, the method provided in this embodiment can, in diverse and variable shooting scenes, switch in real time to the camera or camera combination suitable for the current shooting scene to collect images and generate the final captured image, so that a high-quality captured image can be obtained in each shooting scene.
  • in some embodiments, when taking a picture, the mobile phone can automatically recognize the object to be photographed, generate a composition according to the object to be photographed, and prompt the user to perform shooting operations according to the composition, so as to obtain a better shooting effect.
  • for example, according to the composition, the mobile phone can instruct the user to pan the mobile phone up, down, left, right, forward, or backward, or prompt the user to rotate the mobile phone up, down, left, or right, or prompt the user to tilt the mobile phone forward or backward.
  • the mobile phone may prompt the user so that the user moves the mobile phone to the target position according to the prompt; referring to FIG. 13B, the mobile phone may prompt the user to tilt the mobile phone forward.
  • in some embodiments, if the mobile phone currently uses the camera 1 for shooting, a first composition is generated based on a first image collected by the camera 1 and a preset composition algorithm, and an image preview is performed. If the camera 1 is not the second camera, the mobile phone can also turn on the second camera; after the second camera is turned on, the mobile phone can check the first composition against the image with a larger field of view collected by the second camera to determine whether the first composition is reasonable.
  • the field of view of the second image collected by the second camera since the field of view of the second image collected by the second camera is larger, it may include more information than the image collected by the camera 1, and the second composition generated based on more information is more accurate, that is, the mobile phone according to The composition generated by the second image is more accurate.
  • the mobile phone determines that the part of the second composition that corresponds to the first image is consistent with the first composition, the mobile phone determines that the first composition is reasonable, the first composition matches the second composition, and the mobile phone uses the first composition to prompt the user to shoot operating.
  • the mobile phone determines that the part of the composition corresponding to the first image in the second composition is inconsistent with the first composition, determines that the first composition is unreasonable, and the mobile phone may prompt the user to perform a shooting operation according to the second composition. Or, if the mobile phone determines that the first composition is unreasonable, the mobile phone automatically switches to use the second camera for composition and shooting. Or, if the mobile phone determines that the first composition is unreasonable, the mobile phone may prompt the user whether to switch to the second camera with a larger field of view for composition and shooting. For example, referring to FIG. 13C, the mobile phone may prompt the user through a prompt box 1301 whether to switch to a camera with a larger field of view for composition and shooting.
  • the phone switches to the second camera for composition and shooting, so that the range of the field of view can be expanded without changing the position; if the user selects "No", the phone uses the camera 1 for composition And shooting. That is to say, using the feature of the second camera having a large field of view, the mobile phone can obtain a more reasonable composition, thereby prompting the user to shoot according to a more reasonable composition to obtain a better shooting effect.
  • the second composition generated from the image collected by the second camera includes the small Girls, flowers, moms and kittens are their respective positions, and the position of the little girl, flowers, mom in the image is more to the right, so the mobile phone determines that the first composition generated from the image collected by the second camera is unreasonable, and the mobile phone switches To the second camera, the second composition generated based on the image collected by the second camera prompts the user to perform a shooting operation, and finally shoots to obtain the image as shown in FIG. 13D.
  • In some embodiments, the mobile phone may acquire the images collected by the cameras and process them to obtain the image of the object to be photographed.
  • For example, the process may include:
  • First, the mobile phone determines the number of frames to be captured by each of the three cameras.
  • In one method, the mobile phone sets the number of frames each camera collects according to the brightness of the environment in which the object to be photographed is located. For example, when the brightness is low, the mobile phone can have each camera collect more frames; when the brightness is high, fewer frames. This is because the ambient brightness affects the signal-to-noise ratio of the images the camera collects: if the brightness is low, the signal-to-noise ratio of the collected image is relatively low, that is, the proportion of effective signal is low and noise accounts for a large component.
  • In that case, the mobile phone controls the camera to collect more frames. If the brightness is high, the signal-to-noise ratio of the collected image is relatively high, that is, noise accounts for a small component; there is then no need to collect more images to accumulate enough effective signal, so the mobile phone can have the camera collect fewer frames.
  • The mobile phone can also determine the number of frames captured by each camera in other ways, which is not specifically limited in this embodiment; a minimal sketch of the brightness-based rule follows.
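  • As a minimal sketch of the brightness-based rule, the Kotlin snippet below returns more frames in darker scenes. The lux thresholds and frame counts are invented for illustration and are not taken from this embodiment.

      // Darker scenes yield noisier frames, so more frames are collected
      // for multi-frame fusion; thresholds and counts are placeholders.
      fun framesForBrightness(lux: Double): Int = when {
          lux < 10.0  -> 8   // very dark scene
          lux < 100.0 -> 4   // dim indoor light
          else        -> 2   // bright scene
      }

      fun main() =
          listOf(5.0, 50.0, 500.0).forEach { println("$it lux -> ${framesForBrightness(it)} frames") }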
  • Next, each camera collects the corresponding number of frames.
  • The mobile phone can then preprocess the images collected by each camera separately.
  • The preprocessing process is as follows:
  • The mobile phone can perform noise reduction on the frames collected by each camera and fuse all the noise-reduced frames to obtain an image with a relatively high signal-to-noise ratio.
  • After this step, each camera corresponds to one image with a relatively high signal-to-noise ratio.
  • The mobile phone can then fuse the images corresponding to the cameras to initially obtain an image of the object to be photographed; this image can be used directly as the final image of the object to be photographed. A minimal sketch of this two-stage fusion follows.
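  • The following Kotlin sketch shows the shape of this two-stage fusion under strong simplifying assumptions: images are flat luminance arrays, per-camera noise reduction is plain frame averaging, and cross-camera fusion is a pixel-wise mean. A real pipeline would align frames and weight the cameras, so this is only illustrative.

      import kotlin.random.Random

      // Stage 1: average each camera's frames (stand-in for denoising).
      // Stage 2: average the per-camera results (stand-in for fusion).
      fun average(images: List<DoubleArray>): DoubleArray {
          val out = DoubleArray(images[0].size)
          for (img in images) for (i in out.indices) out[i] += img[i] / images.size
          return out
      }

      fun main() {
          fun noisyFrames() = List(4) { DoubleArray(4) { i -> i + Random.nextDouble(-0.1, 0.1) } }
          val perCamera = listOf(average(noisyFrames()), average(noisyFrames()))
          val initialImage = average(perCamera)  // initially obtained image
          println(initialImage.joinToString { "%.3f".format(it) })
      }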
  • Alternatively, the mobile phone may further process the initially obtained image of the object to be photographed.
  • For example, the mobile phone may perform image enhancement on the initially obtained image according to scene depth information, so as to improve the quality of the final image of the object to be photographed.
  • The mobile phone may determine the scene depth information in various ways.
  • For example, the mobile phone can determine the scene depth information from the parallax between the images collected by two cameras: when K is greater than or equal to 1 and less than N, the phone can generate depth information based on the images collected by the second camera and the first camera; when K is greater than or equal to N, the phone can generate depth information based on the images collected by the third camera and the first camera. A short sketch of the underlying stereo relation follows.
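  • For reference, the classic stereo relation behind depth from parallax is depth = focal length × baseline / disparity. The Kotlin helper below applies it with invented focal-length and baseline values; the real parameters depend on the device's camera geometry.

      // depth (mm) = focal length (px) * baseline (mm) / disparity (px)
      fun depthMm(disparityPx: Double, focalPx: Double = 1500.0, baselineMm: Double = 12.0): Double {
          require(disparityPx > 0.0) { "disparity must be positive" }
          return focalPx * baselineMm / disparityPx
      }

      fun main() = println("disparity 30 px -> ${depthMm(30.0)} mm")  // prints 600.0 mm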
  • Other ways are not specifically limited in this embodiment.
  • The mobile phone may then perform image enhancement on the initially obtained image according to the scene depth information.
  • For example, based on the scene depth information, the mobile phone may perform contrast and sharpness enhancement only on the overall initially obtained image, so as to improve the three-dimensional, layered look of the final image of the object to be photographed.
  • Alternatively, based on the scene depth information, the mobile phone can enhance only the foreground, for example the sharpness of a portrait, without processing the sharpness of the background, or can blur the background, similar to taking pictures with a large aperture, to achieve personalized shooting.
  • The mobile phone can also, according to the scene depth information, enhance the clarity of an area the user selects on the initially obtained image, leaving other areas unprocessed or blurring them, similar to a shoot-first, focus-later effect, to achieve personalized shooting; a simple sketch of this depth-guided processing follows.
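  • The Kotlin sketch below illustrates depth-guided, region-selective processing in the simplest possible form: pixels nearer than a threshold get a gain as a stand-in for sharpening, and the rest are attenuated as a stand-in for blurring. The threshold and gains are invented and do not reproduce the enhancement used in this embodiment.

      // Foreground/background split by a depth threshold; the gain and
      // attenuation are placeholders for real sharpening and blurring.
      fun enhance(pixels: DoubleArray, depthMm: DoubleArray, thresholdMm: Double): DoubleArray =
          DoubleArray(pixels.size) { i ->
              if (depthMm[i] < thresholdMm) (pixels[i] * 1.2).coerceIn(0.0, 1.0)  // foreground
              else pixels[i] * 0.8                                                // background
          }

      fun main() {
          val px = doubleArrayOf(0.5, 0.6, 0.7)
          val depth = doubleArrayOf(400.0, 1200.0, 300.0)
          println(enhance(px, depth, 800.0).joinToString())
      }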
  • Finally, the mobile phone may output the image of the object to be photographed, for example by displaying it on the mobile phone's display screen.
  • The output manner is not specifically limited in this embodiment.
  • This embodiment also provides a shooting method that can be implemented in an electronic device with a touch screen (such as a mobile phone or a tablet computer) as shown in FIG. 1 and FIG. 3.
  • the electronic device includes a wide-angle color camera (that is, the first camera), an ultra-wide-angle color camera (that is, the second camera), and a telephoto color camera (that is, the third camera).
  • the method may include the following steps:
  • Step 1401: The electronic device detects a first operation used by the user to open the camera.
  • For example, the first operation may be the user tapping the camera app icon on the electronic device.
  • Step 1402: In response to the first operation, the electronic device displays a shooting interface on the touch screen.
  • That is, in response to the first operation used to open the camera, the electronic device enters the shooting mode and displays the shooting interface.
  • For example, the shooting interface may be the interface shown in FIG. 4A, FIG. 4B, FIG. 4C, or FIG. 6A.
  • Step 1403: The electronic device captures images through the wide-angle color camera.
  • Among the three cameras, the wide-angle color camera has the best imaging quality and a large field of view, so it can serve as the main camera for shooting, and the electronic device captures images through it by default.
  • Step 1404: The electronic device displays the image captured by the wide-angle color camera on the touch screen.
  • For example, the image captured by the wide-angle color camera and displayed on the touch screen may be the image of the Eiffel Tower shown in FIG. 6A.
  • Step 1405: The electronic device detects a second operation used by the user to instruct zooming.
  • For example, as shown in FIG. 6B, the second operation may be the user tapping a control used to indicate a zoom magnification (e.g., control 601).
  • The second operation may also be a pinch gesture by the user, as shown in FIG. 7B.
  • Step 1406: In response to the second operation, the electronic device switches to capturing images through the ultra-wide-angle color camera or the telephoto color camera.
  • For example, the electronic device may switch to the ultra-wide-angle color camera to capture images.
  • Step 1407: The electronic device displays the image captured by the ultra-wide-angle color camera or the telephoto color camera on the touch screen.
  • For example, the image displayed after switching to the ultra-wide-angle color camera may be the image of the Eiffel Tower shown in FIG. 6B.
  • In this way, the user can issue zoom instructions according to the current shooting scene, and the electronic device automatically switches among the wide-angle, ultra-wide-angle, and telephoto color cameras to match the user's zoom indication, so that the camera in use suits the current shooting scene and better shooting results are obtained.
  • In some embodiments, the shooting interface includes at least one first control used to indicate the zoom magnification,
  • and the second operation is a preset operation performed by the user on the first control.
  • For example, the first control may be control 601 or control 602 shown in FIG. 6A and FIG. 6B, or the zoom magnification scale in FIG. 7A and FIG. 7B.
  • In other embodiments, the shooting interface includes at least one second control used to indicate a camera, and the second operation is a preset operation performed by the user on the second control.
  • For example, the second control may be controls 401-403 in FIG. 4A, or controls 404-406 shown in FIG. 4B.
  • In this way, the electronic device can first determine the zoom magnification and then, according to the correspondence between zoom magnification and camera shown in Table 1, automatically switch to the camera corresponding to that magnification to capture images.
  • In some embodiments, step 1406 may include: in response to the second operation, the electronic device determines the zoom magnification K; if M ≤ K < 1, the electronic device switches to capturing images through the ultra-wide-angle color camera while also capturing images through the wide-angle color camera and/or the telephoto color camera; if K ≥ N, the electronic device switches to capturing images through the telephoto color camera while also capturing images through the wide-angle color camera and/or the ultra-wide-angle color camera.
  • In this way, the electronic device can first determine the zoom magnification and then, according to the correspondence between zoom magnification and cameras shown in Table 2, automatically switch to the multiple cameras corresponding to that magnification to capture images, improving imaging quality; a minimal sketch of this selection logic follows.
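  • A minimal Kotlin sketch of this zoom-to-camera selection is shown below. M and N are the boundary magnifications from Tables 1 and 2; their values here are invented, and the auxiliary camera chosen for each range is only one of the "and/or" options the text allows.

      const val M = 0.5  // assumed lower bound of the ultra-wide range
      const val N = 3.0  // assumed lower bound of the telephoto range

      fun camerasForZoom(k: Double): List<String> = when {
          k >= M && k < 1.0 -> listOf("ultra-wide-angle", "wide-angle")  // M <= K < 1
          k >= N            -> listOf("telephoto", "wide-angle")         // K >= N
          else              -> listOf("wide-angle")                      // 1 <= K < N
      }

      fun main() = listOf(0.6, 1.0, 2.0, 5.0).forEach { println("K=$it -> ${camerasForZoom(it)}") }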
  • In some other embodiments, the electronic device may also switch cameras automatically.
  • For example, the method may further include:
  • If the distance between the object to be photographed and the electronic device is less than or equal to a first preset value, the electronic device automatically switches to capturing images through the ultra-wide-angle color camera.
  • For example, the first preset value may be preset value 4. As shown in FIGS. 11A-11B, if the distance between the object to be photographed and the electronic device is less than or equal to preset value 4, the current shooting mode is the macro mode, and the electronic device automatically switches to the ultra-wide-angle color camera to capture images.
  • If the distance between the object to be photographed and the electronic device is greater than or equal to a second preset value, the electronic device automatically switches to capturing images through the telephoto color camera,
  • where the second preset value is greater than the first preset value.
  • For example, the second preset value may be preset value 5. As shown in FIG. 12, if the distance between the object to be photographed and the electronic device is greater than or equal to preset value 5, the current shooting mode is the distant view mode, and the electronic device automatically switches to the telephoto color camera to capture images; a sketch of this distance rule follows.
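  • The distance rule can be sketched in Kotlin as follows. The two thresholds stand in for preset value 4 and preset value 5, whose actual values are not given in this embodiment.

      const val PRESET_4_MM = 100.0    // first preset value (assumed)
      const val PRESET_5_MM = 5000.0   // second preset value (assumed)

      fun cameraForDistance(distanceMm: Double): String = when {
          distanceMm <= PRESET_4_MM -> "ultra-wide-angle (macro mode)"
          distanceMm >= PRESET_5_MM -> "telephoto (distant view mode)"
          else                      -> "wide-angle"
      }

      fun main() = listOf(50.0, 1000.0, 8000.0).forEach { println("$it mm -> ${cameraForDistance(it)}") }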
  • As another example, the method may further include:
  • If the image captured by the camera currently in use cannot contain the complete object to be photographed, the electronic device automatically switches to capturing images through the ultra-wide-angle color camera.
  • For example, the camera currently used to capture images is the wide-angle color camera,
  • and the object to be photographed is the Eiffel Tower.
  • The preview image of the shooting interface shown in FIG. 9A shows only part of the object to be photographed and cannot display it completely,
  • so the electronic device can automatically switch to the ultra-wide-angle color camera, with its wider field of view, to capture images.
  • As another example, the camera currently used to capture images is the wide-angle color camera,
  • and the object to be photographed is a group of people.
  • The preview image of the shooting interface shown in FIG. 10A shows only part of the group and cannot display it completely,
  • so the electronic device can automatically switch to the ultra-wide-angle color camera, with its wider field of view, to capture images.
  • As another example, the method may further include:
  • The electronic device generates a first composition according to the first image captured by the camera currently used to capture images.
  • For example, the camera currently used to capture images is the wide-angle color camera,
  • and the first image may be the preview image shown in FIG. 13A or FIG. 13B.
  • If the camera currently used to capture images is the wide-angle color camera or the telephoto color camera, the electronic device also captures a second image through the ultra-wide-angle color camera.
  • For example, since the camera currently in use is the wide-angle color camera rather than the ultra-wide-angle color camera, the electronic device also captures the second image through the ultra-wide-angle color camera.
  • The electronic device generates a second composition according to the second image captured by the ultra-wide-angle color camera.
  • If the first composition does not match the second composition, the electronic device may prompt the user whether to switch the camera.
  • For example, the electronic device may use the prompt box 1301 shown in FIG. 13C to ask the user whether to switch to a camera with a larger field of view for composition and shooting.
  • If an operation used by the user to instruct switching the camera is detected, the electronic device stops capturing images through the camera currently in use.
  • For example, the operation may be the user selecting "Yes" in prompt box 1301; in response, the electronic device stops capturing images through the wide-angle color camera.
  • The electronic device then prompts the user to shoot according to the second image captured by the ultra-wide-angle color camera and the second composition.
  • The picture finally captured with the ultra-wide-angle color camera can be seen in FIG. 13D.
  • If an operation used by the user to instruct not to switch the camera is detected, or if no operation instructing a switch is detected, the electronic device may continue to prompt the user to shoot according to the first image captured by the wide-angle color camera and the first composition. A minimal sketch of the composition-consistency check follows.
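  • One way to picture the consistency check is to reduce each composition to a suggested crop rectangle and compare the first composition with the part of the second composition that overlaps the first camera's field of view, as in the Kotlin sketch below. The Rect type, the 10% tolerance, and the comparison itself are invented illustrations of the idea, not the patented algorithm.

      data class Rect(val x: Double, val y: Double, val w: Double, val h: Double)

      // The first composition counts as "reasonable" when it is close to
      // the overlapping portion of the wider second composition.
      fun compositionsMatch(first: Rect, secondOverlap: Rect, tol: Double = 0.1): Boolean {
          fun close(a: Double, b: Double, scale: Double) = kotlin.math.abs(a - b) <= tol * scale
          return close(first.x, secondOverlap.x, first.w) && close(first.y, secondOverlap.y, first.h) &&
                 close(first.w, secondOverlap.w, first.w) && close(first.h, secondOverlap.h, first.h)
      }

      fun main() {
          val first = Rect(0.0, 0.0, 100.0, 80.0)
          val fromSecond = Rect(12.0, 0.0, 100.0, 80.0)  // subject actually sits further right
          println(if (compositionsMatch(first, fromSecond)) "first composition is reasonable"
                  else "prompt the user to switch to the ultra-wide-angle camera")
      }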
  • As another example, the method may further include:
  • The electronic device captures images through both the camera currently used to capture images and the ultra-wide-angle color camera.
  • The electronic device synthesizes the image captured by the camera currently in use with the image captured by the ultra-wide-angle color camera.
  • The electronic device displays the synthesized image on the touch screen.
  • In this way, the electronic device can continue to capture images through the camera currently in use while also exploiting the ultra-wide-angle color camera's wider field of view to capture the scenery within a larger range, merging the pictures from the two cameras to help the user record more open landscapes.
  • the method may further include:
  • the electronic device detects a third operation that the user uses to instruct recording.
  • the third operation may be an operation in which the user clicks the recording control in FIG. 4A.
  • In response to the third operation, the electronic device displays the shooting interface in recording mode.
  • During recording, the electronic device automatically zooms according to the size of the object to be photographed on the shooting interface, or according to the distance between the object to be photographed and the electronic device.
  • For example, the electronic device may set a reference size for the object to be photographed on the screen, the reference size being fixed during the shooting process.
  • The electronic device can then determine the zoom magnification and zoom automatically according to the ratio between the object's actual size in the current frame and the reference size.
  • As another example, the electronic device may set a preset value for the distance between the object to be photographed and the electronic device (e.g., the aforementioned preset value 1, preset value 2, or preset value 3).
  • The electronic device may then determine the zoom magnification according to how the current distance between the object and the electronic device compares with the preset value, and zoom automatically.
  • After zooming, the electronic device automatically switches the camera used to capture images according to the zoom result.
  • That is, the electronic device determines the camera to be used for the next image according to the zoom magnification and automatically switches to the camera corresponding to that magnification to capture images; a sketch of the reference-size rule follows.
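  • The reference-size rule can be sketched as one line of Kotlin: scale the current zoom by the ratio of the reference size to the measured on-screen size each frame. The clamp range is an invented device limit.

      // Keeps the subject near a fixed on-screen size during recording.
      fun nextZoom(currentZoom: Double, measuredPx: Double, referencePx: Double): Double =
          (currentZoom * referencePx / measuredPx).coerceIn(0.5, 10.0)

      fun main() {
          // Subject shrank from 200 px to 100 px at 1x zoom -> zoom doubles.
          println(nextZoom(1.0, measuredPx = 100.0, referencePx = 200.0))  // 2.0
      }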
  • In some embodiments, the method further includes: the electronic device indicates on the touch screen the camera currently used to capture images, the current zoom magnification, or the current shooting mode, so that the user knows in real time which camera is in use and/or the current zoom ratio.
  • For example, the electronic device informs the user through display control 702 that the current zoom magnification is 2×.
  • For example, the electronic device informs the user through display control 902 that the ultra-wide-angle shooting mode is currently in use.
  • For example, the electronic device indicates that the camera currently used to capture images is the wide-angle color camera by bolding the border of control 405 or highlighting control 405.
  • It can be understood that, to implement the above functions, the electronic device includes corresponding hardware and/or software modules for performing each function.
  • The present application can be implemented in the form of hardware or a combination of hardware and computer software. Whether a function is executed by hardware or by computer-software-driven hardware depends on the specific application and design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each specific application in combination with the embodiments, but such implementations should not be considered beyond the scope of the present application.
  • the electronic device may be divided into function modules according to the above method examples.
  • each function module may be divided corresponding to each function, or two or more functions may be integrated into one processing module.
  • the above integrated module can be implemented in the form of hardware. It should be noted that the division of the modules in this embodiment is schematic, and is only a division of logical functions. In actual implementation, there may be another division manner.
  • FIG. 16 shows a schematic diagram of a possible composition of the electronic device 1600 involved in the above embodiment.
  • the electronic device 1600 may include: a detection unit 1601, a display unit 1602, and a processing unit 1603.
  • The detection unit 1601 may be used to support the electronic device 1600 in performing the above step 1401, step 1405, step 1420, and the like, and/or other processes of the technology described herein.
  • The display unit 1602 may be used to support the electronic device 1600 in performing the above step 1402, step 1404, step 1407, step 1419, step 1421, and the like, and/or other processes of the technology described herein.
  • The processing unit 1603 may be used to support the electronic device 1600 in performing the above step 1403, step 1406, steps 1408-1418, step 1422, step 1423, and the like, and/or other processes of the technology described herein.
  • the electronic device provided in this embodiment is used to execute the above image capturing method, and therefore can achieve the same effect as the above implementation method.
  • the electronic device may include a processing module, a storage module, and a communication module.
  • the processing module may be used to control and manage the actions of the electronic device. For example, it may be used to support the electronic device to execute the steps performed by the detection unit 1601, the display unit 1602, and the processing unit 1603.
  • the storage module can be used to support electronic devices to store images captured by the camera, as well as store program codes and data.
  • the communication module can be used to support communication between electronic devices and other devices.
  • the processing module may be a processor or a controller. It can implement or execute various exemplary logical blocks, modules, and circuits described in conjunction with the disclosure of the present application.
  • The processor may also be a combination implementing computing functions, for example a combination of one or more microprocessors, or a combination of a digital signal processor (DSP) and a microprocessor, and so on.
  • the storage module may be a memory.
  • the communication module may specifically be a device that interacts with other electronic devices, such as a radio frequency circuit, a Bluetooth chip, or a Wi-Fi chip.
  • the electronic device involved in this embodiment may be a device having the structure shown in FIG. 1.
  • one or more computer programs are stored in the internal memory 121 and configured to be executed by one or more processors.
  • the one or more computer programs include instructions.
  • the above instructions can be used to execute various steps performed by the electronic device as shown in FIG. 14 and the corresponding embodiments. In some other embodiments, the above instructions may also be used to perform various steps performed by the electronic device in FIG. 15 and the corresponding embodiments.
  • This embodiment also provides a computer storage medium in which computer instructions are stored.
  • When the computer instructions are run on an electronic device, the electronic device executes the above related method steps to implement the image capturing method in the above embodiments.
  • This embodiment also provides a computer program product.
  • When the computer program product is run on a computer, the computer is caused to perform the above related steps to implement the image capturing method in the above embodiments.
  • the embodiments of the present application also provide an apparatus.
  • the apparatus may specifically be a chip, a component, or a module.
  • The apparatus may include a processor and a memory that are connected, where the memory is used to store computer-executable instructions.
  • When the apparatus runs, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the image capturing method in each of the above method embodiments.
  • The electronic device, computer storage medium, computer program product, or chip provided in this embodiment is used to perform the corresponding method provided above; therefore, for the beneficial effects it can achieve, refer to the beneficial effects of the corresponding method provided above, which are not repeated here.
  • the disclosed device and method may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the modules or units is only a division of logical functions.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, and may be in electrical, mechanical or other forms.
  • The units described as separate components may or may not be physically separated, and the components displayed as units may be one physical unit or multiple physical units; that is, they may be located in one place or distributed across multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in the above embodiments may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or software function unit.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • Based on this understanding, the technical solution of this embodiment, in essence, or the part that contributes to the existing technology, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions that enable a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to execute all or part of the steps of the methods described in the above embodiments.
  • the foregoing storage media include various media that can store program codes, such as a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present invention relate to the technical field of electronics and provide an image capturing method and an electronic device, the device being able to switch to different cameras for shooting in different shooting scenes so as to improve the image shooting effect. The solution comprises: an electronic device comprising a wide-angle color camera, an ultra-wide-angle color camera, and a telephoto color camera; after detecting a first operation used by a user to turn on a camera, the electronic device displays a shooting interface on a touch screen in response to the first operation; the electronic device captures an image by means of the wide-angle color camera and displays the captured image on the touch screen; and after detecting a second operation used by the user to indicate zooming, the electronic device switches, in response to the second operation, to capturing an image by means of the ultra-wide-angle color camera or the telephoto color camera, and displays the captured image on the touch screen. The embodiments of the present invention are used in a photographing process.
PCT/CN2019/110387 2018-10-12 2019-10-10 Procédé de capture d'image et dispositif électronique WO2020073959A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201980004268.3A CN111183632A (zh) 2018-10-12 2019-10-10 图像捕捉方法及电子设备

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201811190767.XA CN110248081A (zh) 2018-10-12 2018-10-12 图像捕捉方法及电子设备
CN201811190767.X 2018-10-12

Publications (1)

Publication Number Publication Date
WO2020073959A1 true WO2020073959A1 (fr) 2020-04-16

Family

ID=67882390

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/110387 WO2020073959A1 (fr) 2018-10-12 2019-10-10 Procédé de capture d'image et dispositif électronique

Country Status (2)

Country Link
CN (2) CN110248081A (fr)
WO (1) WO2020073959A1 (fr)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111881845A (zh) * 2020-07-30 2020-11-03 安徽兰臣信息科技有限公司 一种商务会员系统的面部影像捕捉终端
CN113873142A (zh) * 2020-06-30 2021-12-31 Oppo广东移动通信有限公司 多媒体处理芯片、电子设备和动态图像处理方法
CN114531578A (zh) * 2020-11-23 2022-05-24 华为技术有限公司 光源光谱获取方法和设备
CN114650364A (zh) * 2020-12-19 2022-06-21 Oppo广东移动通信有限公司 设备控制方法、装置、可穿戴设备和存储介质
CN114694242A (zh) * 2020-12-25 2022-07-01 华为技术有限公司 Ai识别方法、电子设备和摄像头
EP4030745A1 (fr) * 2021-01-14 2022-07-20 Beijing Xiaomi Mobile Software Co., Ltd. Système à caméras multiples et procédé pour faire fonctionner le système à caméras multiples
CN114866680A (zh) * 2021-02-03 2022-08-05 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN116095460A (zh) * 2022-05-25 2023-05-09 荣耀终端有限公司 录像方法、装置及存储介质
CN116347224A (zh) * 2022-10-31 2023-06-27 荣耀终端有限公司 拍摄帧率控制方法、电子设备、芯片系统及可读存储介质
CN116708751A (zh) * 2022-09-30 2023-09-05 荣耀终端有限公司 一种拍照时长的确定方法、装置和电子设备
CN117177062A (zh) * 2022-05-30 2023-12-05 荣耀终端有限公司 一种摄像头切换方法及电子设备
CN114694242B (zh) * 2020-12-25 2024-06-04 华为技术有限公司 Ai识别方法、电子设备和摄像头

Families Citing this family (42)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110248081A (zh) * 2018-10-12 2019-09-17 华为技术有限公司 图像捕捉方法及电子设备
CN109889751B (zh) * 2019-04-18 2020-09-15 东北大学 基于光学变焦的演讲内容便携式拍摄记录装置
CN110351487A (zh) * 2019-08-26 2019-10-18 Oppo广东移动通信有限公司 控制方法、控制装置、电子设备和存储介质
CN110677580B (zh) * 2019-09-24 2021-09-28 捷开通讯(深圳)有限公司 拍摄方法、装置、存储介质及终端
CN110767145B (zh) * 2019-10-24 2022-07-26 武汉天马微电子有限公司 显示装置及其驱动方法
CN111027374B (zh) * 2019-10-28 2023-06-30 华为终端有限公司 一种图像识别方法及电子设备
CN110809101B (zh) * 2019-11-04 2022-05-17 RealMe重庆移动通信有限公司 图像变焦处理方法及装置、电子设备、存储介质
CN112991242A (zh) * 2019-12-13 2021-06-18 RealMe重庆移动通信有限公司 图像处理方法、图像处理装置、存储介质与终端设备
CN110993633B (zh) * 2019-12-17 2023-06-20 武汉芯盈科技有限公司 一种用于屏下指纹传感器的可调整像素尺寸的成像装置
WO2021134311A1 (fr) * 2019-12-30 2021-07-08 苏州臻迪智能科技有限公司 Procédé et appareil de commutation d'objet à photographier, et procédé et appareil de traitement d'image
CN111147755B (zh) * 2020-01-02 2021-12-31 普联技术有限公司 双摄像头的变焦处理方法、装置及终端设备
CN113194242B (zh) * 2020-01-14 2022-09-20 荣耀终端有限公司 一种长焦场景下的拍摄方法及移动终端
CN115104299A (zh) * 2020-02-21 2022-09-23 索尼集团公司 程序、信息处理方法和电子设备
CN111294517B (zh) * 2020-03-03 2021-12-17 荣耀终端有限公司 一种图像处理的方法及移动终端
CN113497890B (zh) * 2020-03-20 2023-04-07 华为技术有限公司 一种拍摄方法及设备
CN111614891B (zh) * 2020-04-24 2022-09-16 深圳传音控股股份有限公司 一种相机应用功能切换方法、终端及计算机可读存储介质
CN113596316B (zh) * 2020-04-30 2023-12-08 华为技术有限公司 拍照方法及电子设备
CN111464752A (zh) * 2020-05-18 2020-07-28 Oppo广东移动通信有限公司 电子装置的变焦控制方法及电子装置
CN111654629B (zh) * 2020-06-11 2022-06-24 展讯通信(上海)有限公司 摄像头切换方法、装置、电子设备及可读存储介质
CN113890984B (zh) * 2020-07-03 2022-12-27 华为技术有限公司 拍照方法、图像处理方法和电子设备
CN111787224B (zh) * 2020-07-10 2022-07-12 深圳传音控股股份有限公司 图像的获取方法、终端设备和计算机可读存储介质
CN111866388B (zh) * 2020-07-29 2022-07-12 努比亚技术有限公司 一种多重曝光拍摄方法、设备及计算机可读存储介质
WO2022022715A1 (fr) * 2020-07-30 2022-02-03 华为技术有限公司 Procédé et dispositif photographique
CN111970438B (zh) * 2020-08-03 2022-06-28 Oppo广东移动通信有限公司 变焦处理方法及装置、设备、存储介质
CN112073642B (zh) * 2020-09-18 2021-09-14 展讯通信(上海)有限公司 多摄像头设备的视频录制方法及装置、存储介质、终端
CN112333388B (zh) * 2020-10-30 2022-04-12 维沃移动通信(杭州)有限公司 图像显示方法、装置和电子设备
CN118075615A (zh) * 2020-11-30 2024-05-24 华为技术有限公司 一种拍摄视频的方法及电子设备
CN112866576B (zh) * 2021-01-18 2022-05-17 Oppo广东移动通信有限公司 图像预览方法、存储介质及显示设备
CN112784819A (zh) * 2021-03-05 2021-05-11 上海钜星科技有限公司 一种增加警用头盔人脸识别和车牌识别有效距离的方法
CN113242383B (zh) * 2021-03-11 2022-04-29 海信视像科技股份有限公司 显示设备及显示设备自动对焦成像的图像校准方法
CN115150540B (zh) * 2021-03-30 2023-10-24 华为技术有限公司 拍摄方法、终端设备及计算机可读存储介质
CN115150543B (zh) * 2021-03-31 2024-04-16 华为技术有限公司 拍摄方法、装置、电子设备及可读存储介质
CN115225800B (zh) * 2021-04-14 2024-03-05 华为技术有限公司 多摄像头变焦方法、装置及设备
CN113364975B (zh) * 2021-05-10 2022-05-20 荣耀终端有限公司 一种图像的融合方法及电子设备
CN113747028B (zh) * 2021-06-15 2024-03-15 荣耀终端有限公司 一种拍摄方法及电子设备
CN113747065B (zh) * 2021-09-03 2023-12-26 维沃移动通信(杭州)有限公司 拍摄方法、装置及电子设备
CN116546316B (zh) * 2022-01-25 2023-12-08 荣耀终端有限公司 切换摄像头的方法与电子设备
CN116939357A (zh) * 2022-04-02 2023-10-24 华为技术有限公司 微距拍摄方法、电子设备和计算机可读存储介质
CN116703718B (zh) * 2022-09-08 2024-03-22 荣耀终端有限公司 一种图像放大方法及电子设备
CN116709018B (zh) * 2022-10-14 2024-04-09 荣耀终端有限公司 一种变焦条分割方法及电子设备
CN115802158B (zh) * 2022-10-24 2023-09-01 荣耀终端有限公司 切换摄像头的方法与电子设备
CN117354624A (zh) * 2023-12-06 2024-01-05 荣耀终端有限公司 摄像头切换方法、设备以及存储介质

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150002724A1 (en) * 2013-06-27 2015-01-01 Altek Semiconductor Corp. Method for adjusting focus position and electronic apparatus
CN104363379A (zh) * 2014-11-28 2015-02-18 广东欧珀移动通信有限公司 使用不同焦距摄像头拍照的方法和终端
CN204392356U (zh) * 2015-03-12 2015-06-10 王欣东 一种带有多个不同固定焦距摄像头的手机
CN106454132A (zh) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106527943A (zh) * 2016-11-04 2017-03-22 上海传英信息技术有限公司 摄像头切换方法和移动终端
CN107800951A (zh) * 2016-09-07 2018-03-13 深圳富泰宏精密工业有限公司 电子装置及其镜头切换方法
CN108540711A (zh) * 2017-03-03 2018-09-14 上海传英信息技术有限公司 移动终端及其拍摄方法
CN110248081A (zh) * 2018-10-12 2019-09-17 华为技术有限公司 图像捕捉方法及电子设备

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6249202B2 (ja) * 2013-05-10 2017-12-20 ソニー株式会社 画像表示装置、画像表示方法、およびプログラム
JP2016191754A (ja) * 2015-03-31 2016-11-10 キヤノン株式会社 ズームレンズ
CN105049711B (zh) * 2015-06-30 2018-09-04 广东欧珀移动通信有限公司 一种拍照方法及用户终端
CN106851107A (zh) * 2017-03-09 2017-06-13 广东欧珀移动通信有限公司 切换摄像头辅助构图的控制方法、控制装置及电子装置
CN107509032A (zh) * 2017-09-08 2017-12-22 维沃移动通信有限公司 一种拍照提示方法及移动终端
CN107835364A (zh) * 2017-10-30 2018-03-23 维沃移动通信有限公司 一种拍照辅助方法及移动终端

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150002724A1 (en) * 2013-06-27 2015-01-01 Altek Semiconductor Corp. Method for adjusting focus position and electronic apparatus
CN104363379A (zh) * 2014-11-28 2015-02-18 广东欧珀移动通信有限公司 使用不同焦距摄像头拍照的方法和终端
CN204392356U (zh) * 2015-03-12 2015-06-10 王欣东 一种带有多个不同固定焦距摄像头的手机
CN107800951A (zh) * 2016-09-07 2018-03-13 深圳富泰宏精密工业有限公司 电子装置及其镜头切换方法
CN106527943A (zh) * 2016-11-04 2017-03-22 上海传英信息技术有限公司 摄像头切换方法和移动终端
CN106454132A (zh) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN108540711A (zh) * 2017-03-03 2018-09-14 上海传英信息技术有限公司 移动终端及其拍摄方法
CN110248081A (zh) * 2018-10-12 2019-09-17 华为技术有限公司 图像捕捉方法及电子设备

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113873142A (zh) * 2020-06-30 2021-12-31 Oppo广东移动通信有限公司 多媒体处理芯片、电子设备和动态图像处理方法
CN113873142B (zh) * 2020-06-30 2023-07-25 Oppo广东移动通信有限公司 多媒体处理芯片、电子设备和动态图像处理方法
CN111881845A (zh) * 2020-07-30 2020-11-03 安徽兰臣信息科技有限公司 一种商务会员系统的面部影像捕捉终端
CN111881845B (zh) * 2020-07-30 2024-03-01 苏州玥林信息科技有限公司 一种商务会员系统的面部影像捕捉终端
CN114531578A (zh) * 2020-11-23 2022-05-24 华为技术有限公司 光源光谱获取方法和设备
CN114531578B (zh) * 2020-11-23 2023-11-07 华为技术有限公司 光源光谱获取方法和设备
CN114650364A (zh) * 2020-12-19 2022-06-21 Oppo广东移动通信有限公司 设备控制方法、装置、可穿戴设备和存储介质
CN114694242A (zh) * 2020-12-25 2022-07-01 华为技术有限公司 Ai识别方法、电子设备和摄像头
CN114694242B (zh) * 2020-12-25 2024-06-04 华为技术有限公司 Ai识别方法、电子设备和摄像头
EP4030745A1 (fr) * 2021-01-14 2022-07-20 Beijing Xiaomi Mobile Software Co., Ltd. Système à caméras multiples et procédé pour faire fonctionner le système à caméras multiples
CN114866680A (zh) * 2021-02-03 2022-08-05 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN114866680B (zh) * 2021-02-03 2024-02-02 Oppo广东移动通信有限公司 图像处理方法、装置、存储介质及电子设备
CN116095460A (zh) * 2022-05-25 2023-05-09 荣耀终端有限公司 录像方法、装置及存储介质
CN116095460B (zh) * 2022-05-25 2023-11-21 荣耀终端有限公司 录像方法、装置及存储介质
CN117177062A (zh) * 2022-05-30 2023-12-05 荣耀终端有限公司 一种摄像头切换方法及电子设备
CN116708751A (zh) * 2022-09-30 2023-09-05 荣耀终端有限公司 一种拍照时长的确定方法、装置和电子设备
CN116708751B (zh) * 2022-09-30 2024-02-27 荣耀终端有限公司 一种拍照时长的确定方法、装置和电子设备
CN116347224A (zh) * 2022-10-31 2023-06-27 荣耀终端有限公司 拍摄帧率控制方法、电子设备、芯片系统及可读存储介质
CN116347224B (zh) * 2022-10-31 2023-11-21 荣耀终端有限公司 拍摄帧率控制方法、电子设备、芯片系统及可读存储介质

Also Published As

Publication number Publication date
CN110248081A (zh) 2019-09-17
CN111183632A (zh) 2020-05-19

Similar Documents

Publication Publication Date Title
WO2020073959A1 (fr) Procédé de capture d'image et dispositif électronique
WO2021093793A1 (fr) Procédé de capture et dispositif électronique
US11785329B2 (en) Camera switching method for terminal, and terminal
WO2020168956A1 (fr) Procédé pour photographier la lune, et dispositif électronique
CN110072070B (zh) 一种多路录像方法及设备、介质
CN114679537B (zh) 一种拍摄方法及终端
CN112887583B (zh) 一种拍摄方法及电子设备
JP7355941B2 (ja) 長焦点シナリオにおける撮影方法および端末
WO2020029306A1 (fr) Procédé de capture d'image et dispositif électronique
CN116055874B (zh) 一种对焦方法和电子设备
CN115967851A (zh) 快速拍照方法、电子设备及计算机可读存储介质
CN115484380A (zh) 拍摄方法、图形用户界面及电子设备
CN112532508B (zh) 一种视频通信方法及视频通信装置
CN116055872B (zh) 图像获取方法、电子设备和计算机可读存储介质
CN117479008A (zh) 一种视频处理方法、电子设备及芯片系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19872056

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19872056

Country of ref document: EP

Kind code of ref document: A1