WO2020019356A1 - Method for switching camera by terminal, and terminal - Google Patents

Method for switching camera by terminal, and terminal

Info

Publication number
WO2020019356A1
WO2020019356A1 (PCT/CN2018/097676)
Authority
WO
WIPO (PCT)
Prior art keywords
shooting
camera
target
terminal
picture
Prior art date
Application number
PCT/CN2018/097676
Other languages
English (en)
French (fr)
Inventor
秦超
李远友
贾志平
Original Assignee
华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 华为技术有限公司 (Huawei Technologies Co., Ltd.)
Priority to CN202111196803.5A priority Critical patent/CN113905179B/zh
Priority to CN201880023140.7A priority patent/CN110506416B/zh
Priority to EP18928001.9A priority patent/EP3800876B1/en
Priority to US17/262,742 priority patent/US11412132B2/en
Priority to PCT/CN2018/097676 priority patent/WO2020019356A1/zh
Publication of WO2020019356A1 publication Critical patent/WO2020019356A1/zh
Priority to US17/854,324 priority patent/US11595566B2/en
Priority to US18/162,761 priority patent/US11785329B2/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/61Control of cameras or camera modules based on recognised objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/03Arrangements for converting the position or the displacement of a member into a coded form
    • G06F3/041Digitisers, e.g. for touch screens or touch pads, characterised by the transducing means
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M1/00Substation equipment, e.g. for use by subscribers
    • H04M1/72Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
    • H04M1/724User interfaces specially adapted for cordless or mobile telephones
    • H04M1/72403User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
    • H04M1/7243User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
    • H04M1/72439User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/67Focus control based on electronic image sensor signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/69Control of means for changing angle of the field of view, e.g. optical zoom objectives or electronic zooming
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/22Details of telephonic subscriber devices including a touch pad, a touch sensor or a touch detector
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M2250/00Details of telephonic subscriber devices
    • H04M2250/52Details of telephonic subscriber devices including functional features of a camera

Definitions

  • the present application relates to the field of terminals, and in particular, to a method and terminal for switching a camera by a terminal.
  • The field of view (FOV) may also be called the angle of view; its size determines the viewing range of an optical instrument (such as a camera). When the FOV of a camera is larger, the viewing range of the shooting picture is larger; when the FOV is smaller, the viewing range of the shooting picture is smaller.
  • a wide-angle camera 1 with a large FOV and a telephoto camera 2 with a small FOV can be installed on a mobile phone.
  • Users can trigger the phone to switch cameras by manual zoom. For example, if it is detected that the user has increased the zoom ratio to 2x or more, the mobile phone can automatically switch to the telephoto camera 2.
  • the phone can switch the camera according to the intensity of the ambient light. For example, when the intensity of ambient light is weak, the mobile phone can automatically switch to the wide-angle camera 1 to get more light.
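The two conventional strategies described above — switching on manual zoom ratio and switching on ambient light intensity — amount to a pair of simple rules. A minimal sketch in Python; the function name and the light threshold are illustrative assumptions, not values from the patent:

```python
def conventional_switch(current: str, zoom_ratio: float, ambient_lux: float) -> str:
    """Pick a camera using only zoom ratio and ambient light (background art).

    Returns 'tele' (telephoto camera 2, small FOV) or 'wide' (wide-angle
    camera 1, large FOV). The 2x zoom trigger comes from the text; the
    lux threshold is an illustrative assumption.
    """
    if zoom_ratio >= 2.0:       # manual zoom of 2x or more -> telephoto
        return "tele"
    if ambient_lux < 50.0:      # weak ambient light -> wide-angle gathers more light
        return "wide"
    return current              # otherwise keep the current camera
```

Note that neither rule looks at the shooting target itself, which is exactly the shortcoming the application addresses next.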
  • In actual shooting, however, the user cares most about how the shooting target is rendered in the shooting picture, and neither of the above camera switching methods can guarantee the shooting effect of the target, so the shooting effect may be poor.
  • the present application provides a method and a terminal for switching a camera by a terminal.
  • the camera can be automatically switched according to the dynamic of the shooting target, thereby improving the shooting effect of the shooting target.
  • The present application provides a method for a terminal to switch cameras, including: in response to a first operation of a user turning on a shooting function, the terminal displays a first shooting picture captured by a first camera, the first shooting picture including a shooting target. If the shooting target cannot be completely displayed in the first shooting picture, the FOV of the first camera currently in use is too small; the terminal therefore switches the first shooting picture to a second shooting picture captured by a second camera (whose FOV is greater than the FOV of the first camera) and turns off the first camera. That is, the terminal can determine, from the position and size of the shooting target in the shooting picture, a camera that can guarantee the shooting effect of the target. During shooting, the user is thus automatically and intelligently assisted in switching to the appropriate camera, thereby improving the shooting effect of the shooting target.
  • The method further includes: the terminal determines whether the shooting target, or an occupied area in which the shooting target is located, coincides with a boundary line of the first shooting picture; if they coincide, the terminal determines that the shooting target cannot be completely displayed in the first shooting picture; if they do not coincide, the terminal determines that the shooting target can be completely displayed in the first shooting picture.
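The boundary-coincidence test described here reduces to comparing a bounding box against the frame edges. A minimal sketch, assuming the occupied area is an axis-aligned box in the pixel coordinates of the first shooting picture (the representation is an assumption; the patent does not fix one):

```python
def fully_displayed(box, frame_w, frame_h):
    """Return True if the target's occupied area lies strictly inside the frame.

    box: (left, top, right, bottom) of the shooting target's occupied area.
    If any side of the box coincides with (or crosses) a frame boundary,
    the target cannot be completely displayed in the picture.
    """
    left, top, right, bottom = box
    return left > 0 and top > 0 and right < frame_w and bottom < frame_h
```

A target whose box touches the left edge (left == 0) fails the test, which is the condition that triggers switching to the larger-FOV camera.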
  • the method further includes: the terminal determining a shooting target in the first shooting picture.
  • The method further includes: the terminal receives a second operation input by the user in the first shooting picture, the second operation being used to select the shooting target in the first shooting picture; determining the shooting target includes: in response to the second operation, the terminal extracts an image feature at the position selected by the user in the first shooting picture and determines the shooting target according to the image feature.
  • The terminal switching the first shooting picture to the second shooting picture captured by the second camera includes: the terminal turns on the second camera in the background to capture the second shooting picture; the terminal gradually enlarges the content of the second shooting picture through digital zoom; when the enlarged content of the second shooting picture is consistent with the content of the first shooting picture, the terminal turns off the first camera and displays the enlarged second shooting picture in the foreground; the terminal then gradually restores the second shooting picture to the second camera's standard focal length until the second shooting picture is completely displayed.
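The transition works because the wider second camera's picture, digitally zoomed by the FOV ratio between the two cameras, shows the same content as the first picture; the cameras swap at that matching zoom level and the zoom then eases back to 1x. A sketch of the zoom schedule only (the linear ramp and step count are assumptions; the patent only requires a gradual change):

```python
def wide_switch_zoom_schedule(fov_ratio: float, steps: int = 5):
    """Digital-zoom factors applied to the second (wider) camera's picture.

    Phase 1: ramp the zoom from 1.0 up to `fov_ratio`, the factor at which
    the wider picture's content matches the first picture; the first camera
    is turned off at the peak and the zoomed second picture is shown in the
    foreground.
    Phase 2: ramp back down to 1.0 (the second camera's standard focal
    length) so the full second picture is displayed.
    """
    up = [1.0 + (fov_ratio - 1.0) * i / steps for i in range(steps + 1)]
    down = up[-2::-1]  # same factors in reverse, omitting the peak
    return up + down
```

Rendering successive frames with these crop factors is what hides the cut between the two physical cameras.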
  • In this way, the terminal makes a smooth transition while switching from the first shooting picture to the second shooting picture, avoiding the abrupt visual jump that a direct cut between the two pictures would cause, so as to improve the user experience.
  • The method further includes: the terminal determines, according to the size and/or position of the shooting target in the first shooting picture, a target camera to be switched to; the terminal then switches the first shooting picture to a shooting picture captured by the target camera.
  • the present application provides a method for switching a camera by a terminal.
  • the terminal includes at least two cameras.
  • the method includes: in response to a user's first operation of turning on a shooting function, the terminal displays a first shooting screen captured by the first camera.
  • the first shooting screen includes a shooting target; if the shooting target can be completely displayed in the first shooting screen, the terminal determines a target camera with a better shooting effect according to the size and / or position of the shooting target in the first shooting screen. Further, the terminal may switch the first shooting picture to a shooting picture captured by the target camera, and turn off the first camera.
  • The terminal determining the target camera to be switched to according to the position of the shooting target in the first shooting picture includes: the terminal calculates whether the shooting target is completely displayed in a preset best shooting area of the first shooting picture, the best shooting area being located at the center of the first shooting picture; if the shooting target cannot be completely displayed in the best shooting area, the terminal determines the second camera as the target camera, the FOV of the second camera being greater than the FOV of the first camera.
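The best-shooting-area test can be sketched as containment of the target's box in a centered sub-rectangle of the first picture. The fraction of the frame that the best shooting area covers is an assumption; the patent only says the area is preset and centered:

```python
def needs_wider_camera(box, frame_w, frame_h, area_frac=0.8):
    """True if the target is not completely inside the preset best shooting
    area, i.e. the terminal should switch to the larger-FOV second camera.

    The best shooting area is modelled as a centered rectangle covering
    `area_frac` of each frame dimension (the fraction is an assumption).
    box: (left, top, right, bottom) of the shooting target.
    """
    margin_x = frame_w * (1 - area_frac) / 2
    margin_y = frame_h * (1 - area_frac) / 2
    left, top, right, bottom = box
    inside = (left >= margin_x and top >= margin_y and
              right <= frame_w - margin_x and bottom <= frame_h - margin_y)
    return not inside
```

A target drifting toward an edge of the picture leaves the centered area first, so the switch to the wider camera happens before the target is actually clipped.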
  • The terminal determining the target camera to be switched to according to the size of the shooting target in the first shooting picture includes: the terminal calculates the proportion of the shooting target in the first shooting picture; if this target proportion is greater than the upper limit of a preset target proportion range, the terminal determines the second camera as the target camera, the FOV of the second camera being greater than the FOV of the first camera; if the target proportion is less than the lower limit of the preset target proportion range, the terminal determines the third camera as the target camera, the FOV of the third camera being smaller than the FOV of the first camera.
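The proportion-based rule can be sketched directly; the [lo, hi] range below is an illustrative assumption, since the patent leaves the preset range unspecified:

```python
def camera_by_target_ratio(box, frame_w, frame_h, lo=0.2, hi=0.6):
    """Choose the target camera from the target's proportion of the picture.

    ratio > hi  -> second camera (larger FOV, so the target occupies less)
    ratio < lo  -> third camera (smaller FOV, so the target is magnified)
    otherwise   -> keep the first camera
    The [lo, hi] range is an illustrative assumption.
    """
    left, top, right, bottom = box
    ratio = ((right - left) * (bottom - top)) / (frame_w * frame_h)
    if ratio > hi:
        return "second"
    if ratio < lo:
        return "third"
    return "first"
```

The two thresholds give the rule hysteresis: a target that fills most of the frame forces a zoom-out, a tiny one forces a zoom-in, and anything in between leaves the camera alone.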
  • The terminal determining the target camera to be switched to according to the size and position of the shooting target in the first shooting picture includes: the terminal determines the positional relationship among the first shooting picture, the second shooting picture, and the third shooting picture, where the second shooting picture is the picture that would be captured with the second camera turned on and the third shooting picture is the picture that would be captured with the third camera turned on, both calculated from the first shooting picture; the FOV of the second camera is greater than the FOV of the first camera, and the FOV of the third camera is smaller than the FOV of the first camera; the terminal then determines the target camera to be switched to according to the size and position of the shooting target in the first, second, and third shooting pictures.
  • The terminal determining the target camera to be switched to according to the size and position of the shooting target in the first, second, and third shooting pictures includes: the terminal determines at least one candidate camera from among the first camera, the second camera, and the third camera, the shooting picture of a candidate camera being able to completely display the shooting target; the terminal then determines the target camera from the candidate cameras such that the proportion of the shooting target in the target camera's shooting picture satisfies a preset condition.
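The two-stage selection — first keep cameras whose picture fully contains the target, then pick among them by target proportion — can be sketched as follows. The shared coordinate system, box representations, and the "closest to an ideal proportion" form of the preset condition are all assumptions:

```python
def pick_target_camera(target_box, frames, ideal_ratio=0.4):
    """Two-stage camera selection.

    frames: dict mapping camera name -> (left, top, right, bottom) of that
    camera's shooting picture, all expressed in one coordinate system
    derived from the first shooting picture.
    Stage 1: candidates are cameras whose picture fully contains the target.
    Stage 2: among candidates, pick the one whose target proportion is
    closest to `ideal_ratio` (an assumed form of the preset condition).
    Returns the chosen camera name, or None if no camera can show the target.
    """
    tl, tt, tr, tb = target_box
    target_area = (tr - tl) * (tb - tt)

    def contains(f):
        fl, ft, fr, fb = f
        return fl <= tl and ft <= tt and fr >= tr and fb >= tb

    candidates = {name: f for name, f in frames.items() if contains(f)}
    if not candidates:
        return None

    def ratio(f):
        fl, ft, fr, fb = f
        return target_area / ((fr - fl) * (fb - ft))

    return min(candidates, key=lambda n: abs(ratio(candidates[n]) - ideal_ratio))
```

For example, a small centered target excluded from no frame ends up on the telephoto (third) camera, because that camera gives it the largest, closest-to-ideal proportion.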
  • When the target camera is the second camera and the FOV of the second camera is greater than the FOV of the first camera, the terminal switching the first shooting picture to the shooting picture captured by the target camera includes: the terminal turns on the second camera in the background to capture the second shooting picture; the terminal gradually enlarges the content of the second shooting picture through digital zoom; when the enlarged content of the second shooting picture is consistent with the content of the first shooting picture, the terminal turns off the first camera and displays the enlarged second shooting picture in the foreground; the terminal then gradually restores the second shooting picture to its standard focal length until the second shooting picture is completely displayed.
  • When the target camera is the third camera and the FOV of the third camera is smaller than the FOV of the first camera, the terminal switching the first shooting picture to the shooting picture captured by the target camera includes: the terminal turns on the third camera in the background to capture the third shooting picture; the terminal gradually enlarges the content of the first shooting picture through digital zoom; when the enlarged content of the first shooting picture is consistent with the content of the third shooting picture, the terminal turns off the first camera and displays the third shooting picture in the foreground.
  • The present application further provides a terminal, including a receiving unit, a display unit, a determining unit, and a switching unit. The receiving unit is configured to receive a first operation of a user turning on a shooting function; the display unit is configured to display a first shooting picture captured by a first camera, the first shooting picture including a shooting target; the switching unit is configured to: if the shooting target cannot be completely displayed in the first shooting picture, switch the first shooting picture to a second shooting picture captured by a second camera and turn off the first camera, the FOV of the second camera being greater than the FOV of the first camera.
  • The determining unit is configured to determine whether the shooting target, or the occupied area in which the shooting target is located, coincides with a boundary line of the first shooting picture; if they coincide, it is determined that the shooting target cannot be completely displayed in the first shooting picture; if they do not coincide, it is determined that the shooting target can be completely displayed in the first shooting picture.
  • the determining unit is further configured to determine a shooting target in the first shooting frame.
  • The receiving unit is further configured to receive a second operation input by the user in the first shooting picture, the second operation being used to select the shooting target in the first shooting picture; the determining unit is specifically configured to: in response to the second operation, extract an image feature at the position selected by the user in the first shooting picture and determine the shooting target according to the image feature.
  • The switching unit is specifically configured to: turn on the second camera in the background to capture the second shooting picture; gradually enlarge the content of the second shooting picture through digital zoom; when the enlarged content of the second shooting picture is consistent with the content of the first shooting picture, turn off the first camera and display the enlarged second shooting picture in the foreground; and gradually restore the second shooting picture to its standard focal length until the terminal completely displays the second shooting picture.
  • The determining unit is further configured to: if the shooting target can be completely displayed in the first shooting picture, determine, according to the size and/or position of the shooting target in the first shooting picture, a target camera to be switched to; the switching unit is further configured to switch the first shooting picture to a shooting picture captured by the target camera.
  • The present application further provides a terminal, including a receiving unit, a display unit, a determining unit, and a switching unit. The receiving unit is configured to receive a first operation of a user turning on a shooting function; the display unit is configured to display a first shooting picture captured by a first camera, the first shooting picture including a shooting target; the determining unit is configured to: if the shooting target can be completely displayed in the first shooting picture, determine a target camera to be switched to according to the size and/or position of the shooting target in the first shooting picture; the switching unit is configured to switch the first shooting picture to a shooting picture captured by the target camera and turn off the first camera.
  • the determining unit is specifically configured to calculate whether the shooting target is completely displayed in an optimal shooting area preset in the first shooting frame, and the optimal shooting area is located at the center of the first shooting frame; If the shooting target cannot be completely displayed in the optimal shooting area, the second camera is determined as the target camera, and the FOV of the second camera is greater than the FOV of the first camera.
  • the determining unit is specifically configured to calculate a target proportion of the shooting target in the first shooting frame; if the target proportion is greater than an upper limit of the preset target ratio range, the second The camera is determined as the target camera, and the FOV of the second camera is greater than the FOV of the first camera; if the target ratio is less than the lower limit of the preset target ratio range, the third camera is determined as the target camera, and the FOV of the third camera is less than FOV of the first camera.
  • The determining unit is specifically configured to: determine the positional relationship among the first shooting picture, the second shooting picture, and the third shooting picture, where the second shooting picture is the picture that would be captured with the second camera turned on and the third shooting picture is the picture that would be captured with the third camera turned on, both calculated from the first shooting picture; the FOV of the second camera is greater than the FOV of the first camera, and the FOV of the third camera is smaller than the FOV of the first camera; and determine the target camera to be switched to according to the size and position of the shooting target in the first, second, and third shooting pictures.
  • The determining unit is specifically configured to: determine at least one candidate camera from among the first camera, the second camera, and the third camera, the shooting picture of a candidate camera being able to completely display the shooting target; and determine the target camera from the candidate cameras such that the proportion of the shooting target in the target camera's shooting picture satisfies a preset condition.
  • The switching unit is specifically configured to: turn on the second camera in the background to capture the second shooting picture; gradually enlarge the content of the second shooting picture through digital zoom; when the enlarged content of the second shooting picture is consistent with the content of the first shooting picture, turn off the first camera and display the enlarged second shooting picture in the foreground; and gradually restore the second shooting picture to its standard focal length until the terminal completely displays the second shooting picture.
  • The switching unit is specifically configured to: turn on the third camera in the background to capture the third shooting picture; gradually enlarge the content of the first shooting picture through digital zoom; and when the enlarged content of the first shooting picture is consistent with the content of the third shooting picture, turn off the first camera and display the third shooting picture in the foreground.
  • The present application further provides a terminal, including: at least two cameras, a touch screen, one or more processors, a memory, and one or more programs; the processor is coupled to the memory, the one or more programs are stored in the memory, and when the terminal runs, the processor executes the one or more programs stored in the memory, so that the terminal executes the method for switching a camera by a terminal according to any one of the above.
  • The present application further provides a computer storage medium including computer instructions that, when run on a terminal, cause the terminal to execute the method for switching a camera by a terminal according to any one of the foregoing.
  • the present application provides a computer program product that, when the computer program product runs on a computer, causes the computer to execute the method for switching a camera by a terminal according to any one of the foregoing.
  • The terminals described in the third to fifth aspects, the computer storage medium described in the sixth aspect, and the computer program product described in the seventh aspect are all used to execute the corresponding methods provided above; for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding methods, which are not repeated here.
  • FIG. 1 is a first schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 2 is a second schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 3 is a first schematic diagram of a multi-camera shooting principle provided by an embodiment of the present application.
  • FIG. 4 is a schematic structural diagram of an operating system in a terminal according to an embodiment of the present application.
  • FIG. 5 is a second schematic diagram of a shooting principle of a multi-camera according to an embodiment of the present application.
  • FIG. 6A is a first schematic flowchart of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 6B is a second schematic flowchart of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 7 is a first schematic scenario view of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 8 is a second schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 9 is a third schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 10 is a fourth schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 11 is a fifth schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 12 is a sixth schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 13 is a seventh schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 14 is an eighth schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 15 is a ninth schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 16 is a tenth schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 17 is an eleventh schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 18 is a twelfth schematic scenario diagram of a method for switching a camera by a terminal according to an embodiment of the present application.
  • FIG. 19 is a third schematic structural diagram of a terminal according to an embodiment of the present application.
  • FIG. 20 is a fourth schematic structural diagram of a terminal according to an embodiment of the present application.
  • the method for switching cameras provided in the embodiments of the present application can be applied to a terminal.
  • The terminal may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, an Ultra-mobile Personal Computer (UMPC), a handheld computer, a netbook, a Personal Digital Assistant (PDA), a wearable electronic device, a virtual reality device, or the like.
  • the specific form of the terminal is not particularly limited in the embodiments of the present application.
  • FIG. 1 is a structural block diagram of a terminal 100 according to an embodiment of the present invention.
  • The terminal 100 may include a processor 110, an external memory interface 120, an internal memory 121, a USB interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a radio frequency module 150, a communication module 160, an audio module, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, a sensor module 180, a button 190, a motor 191, an indicator 192, a camera 193, a display 194, and a SIM card interface 195.
  • The sensor module 180 may include a pressure sensor 180A, a gyroscope sensor 180B, a barometric pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity light sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
  • The structure illustrated in the embodiment of the present invention does not constitute a limitation on the terminal 100; it may include more or fewer components than shown, or some components may be combined, split, or arranged differently.
  • the illustrated components can be implemented in hardware, software, or a combination of software and hardware.
  • the processor 110 may include one or more processing units.
  • The processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, a neural network processing unit (NPU), and the like.
  • different processing units may be independent devices or integrated in one or more processors.
  • the controller may be a decision maker that instructs the various components of the terminal 100 to coordinate work according to the instructions. It is the nerve center and command center of the terminal 100.
  • the controller generates operation control signals according to the instruction operation code and timing signals, and completes the control of fetching and executing the instructions.
  • the processor 110 may further include a memory for storing instructions and data.
  • The memory in the processor 110 is a cache memory, which can store instructions or data that the processor has just used or used cyclically. If the processor needs to use the instructions or data again, they can be called directly from this memory, which avoids repeated accesses, reduces the processor's waiting time, and improves system efficiency.
  • the processor 110 may include an interface.
  • The interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface.
  • the I2C interface is a two-way synchronous serial bus, including a serial data line (SDA) and a serial clock line (SCL).
  • the processor may include multiple sets of I2C buses.
  • the processor can be coupled to touch sensors, chargers, flashes, cameras, etc. through different I2C bus interfaces.
  • the processor may couple the touch sensor through the I2C interface, so that the processor and the touch sensor communicate through the I2C bus interface to implement the touch function of the terminal 100.
  • the I2S interface can be used for audio communication.
  • the processor may include multiple sets of I2S buses.
  • the processor may be coupled to the audio module through an I2S bus to implement communication between the processor and the audio module.
  • the audio module can transmit audio signals to the communication module through the I2S interface, so as to implement the function of receiving calls through a Bluetooth headset.
  • The PCM interface can also be used for audio communication, to sample, quantize, and encode an analog signal.
  • the audio module and the communication module may be coupled through a PCM bus interface.
  • the audio module can also transmit audio signals to the communication module through the PCM interface, so as to implement the function of receiving calls through a Bluetooth headset. Both the I2S interface and the PCM interface can be used for audio communication, and the sampling rates of the two interfaces are different.
  • the UART interface is a universal serial data bus for asynchronous communication. This bus is a two-way communication bus. It converts the data to be transferred between serial and parallel communications.
  • a UART interface is typically used to connect the processor and the communication module 160.
  • the processor communicates with the Bluetooth module through a UART interface to implement the Bluetooth function.
  • the audio module can transmit audio signals to the communication module through the UART interface, so as to implement the function of playing music through a Bluetooth headset.
  • the MIPI interface can be used to connect processors with peripheral devices such as displays, cameras, etc.
  • the MIPI interface includes a camera serial interface (CSI), a display serial interface (DSI), and the like.
  • the processor and the camera communicate through a CSI interface to implement a shooting function of the terminal 100.
  • the processor and the display screen communicate through a DSI interface to implement a display function of the terminal 100.
  • the GPIO interface can be configured by software.
  • the GPIO interface can be configured as a control signal or as a data signal.
  • the GPIO interface may be used to connect the processor with a camera, a display screen, a communication module, an audio module, a sensor, and the like.
  • GPIO interface can also be configured as I2C interface, I2S interface, UART interface, MIPI interface, etc.
  • the USB interface 130 may be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface can be used to connect a charger to charge the terminal 100, and can also be used to transfer data between the terminal 100 and peripheral devices. It can also be used to connect headphones and play audio through headphones. It can also be used to connect other electronic devices, such as AR devices.
  • the interface connection relationship between the modules illustrated in the embodiments of the present invention is only a schematic description, and does not constitute a limitation on the structure of the terminal 100.
  • the terminal 100 may use different interface connection modes or a combination of multiple interface connection modes in the embodiments of the present invention.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the charging management module may receive a charging input of a wired charger through a USB interface.
  • the charging management module may receive a wireless charging input through a wireless charging coil of the terminal 100. While the charging management module is charging the battery, it can also supply power to the terminal device through the power management module 141.
  • the power management module 141 is used to connect the battery 142, the charge management module 140 and the processor 110.
  • the power management module receives inputs from the battery and / or charge management module, and supplies power to a processor, an internal memory, an external memory, a display screen, a camera, and a communication module.
  • the power management module can also be used to monitor battery capacity, battery cycle times, battery health (leakage, impedance) and other parameters.
  • the power management module 141 may also be disposed in the processor 110.
  • the power management module 141 and the charge management module may also be provided in the same device.
  • The wireless communication function of the terminal 100 may be implemented by the antenna 1, the antenna 2, the radio frequency module 150, the communication module 160, the modem, the baseband processor, and the like.
  • the antenna 1 and the antenna 2 are used for transmitting and receiving electromagnetic wave signals.
  • Each antenna in the terminal 100 may be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization. For example, a cellular network antenna can be multiplexed into a wireless LAN diversity antenna. In some embodiments, the antenna may be used in conjunction with a tuning switch.
  • The radio frequency module 150 may provide a communication processing module, applied to the terminal 100, for wireless communication solutions including 2G/3G/4G/5G.
  • the radio frequency module may include at least one filter, a switch, a power amplifier, a Low Noise Amplifier (LNA), and the like.
  • the radio frequency module receives electromagnetic waves from the antenna 1, and processes the received electromagnetic waves by filtering, amplifying, etc., and transmitting them to the modem for demodulation.
  • the radio frequency module can also amplify the signal modulated by the modem and turn it into electromagnetic wave radiation through the antenna 1.
  • At least part of the functional modules of the radio frequency module 150 may be disposed in the processor 110.
  • at least part of the functional modules of the radio frequency module 150 may be provided in the same device as at least part of the modules of the processor 110.
  • the modem may include a modulator and a demodulator.
  • the modulator is used for modulating the low-frequency baseband signal to be transmitted into a medium-high frequency signal.
  • the demodulator is used to demodulate the received electromagnetic wave signal into a low-frequency baseband signal.
  • the demodulator then transmits the demodulated low-frequency baseband signal to the baseband processor for processing.
  • the low-frequency baseband signal is processed by the baseband processor and then passed to the application processor.
  • the application processor outputs sound signals through audio equipment (not limited to speakers, receivers, etc.), or displays images or videos through a display screen.
  • the modem may be a separate device.
  • the modem may be independent of the processor and disposed in the same device as the radio frequency module or other functional modules.
  • The communication module 160 can provide communication processing modules, applied to the terminal 100, for wireless communication solutions such as wireless local area networks (WLAN), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), and infrared (IR) technology.
  • the communication module 160 may be one or more devices that integrate at least one communication processing module.
  • the communication module receives the electromagnetic wave through the antenna 2, frequency-modulates and filters the electromagnetic wave signal, and sends the processed signal to the processor.
  • the communication module 160 may also receive a signal to be transmitted from the processor, frequency-modulate it, amplify it, and turn it into electromagnetic wave radiation through the antenna 2.
  • the antenna 1 of the terminal 100 is coupled to a radio frequency module, and the antenna 2 is coupled to a communication module, so that the terminal 100 can communicate with a network and other devices through a wireless communication technology.
  • The wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technology.
  • The GNSS may include a global positioning system (GPS), a global navigation satellite system (GLONASS), a BeiDou navigation satellite system (BDS), a quasi-zenith satellite system (QZSS), and/or a satellite-based augmentation system (SBAS).
  • the terminal 100 implements a display function through a GPU, a display screen 194, and an application processor.
  • the GPU is a microprocessor for image processing, which connects the display screen and the application processor.
  • the GPU is used to perform mathematical and geometric calculations for graphics rendering.
  • the processor 110 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 194 is used to display images, videos, and the like.
  • the display includes a display panel.
  • The display panel can adopt a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a Mini-LED, a Micro-LED, a Micro-OLED, a quantum dot light-emitting diode (QLED), etc.
  • the terminal 100 may include one or N display screens, where N is a positive integer greater than 1.
  • the terminal 100 may implement a shooting function through an ISP, a camera 193, a video codec, a GPU, a display screen, and an application processor.
  • The ISP is used to process the data fed back by the camera. For example, when taking a picture, the shutter is opened and light is transmitted through the lens to the photosensitive element of the camera; the optical signal is converted into an electrical signal, and the photosensitive element of the camera passes the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye. The ISP can also optimize the noise, brightness, and skin tone of the image, as well as the exposure, color temperature, and other parameters of the shooting scene. In some embodiments, the ISP may be provided in the camera 193.
  • the camera 193 is used to capture still images or videos.
  • An object generates an optical image through a lens and projects it onto a photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the optical signal into an electrical signal, and then passes the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs digital image signals to the DSP for processing.
  • DSP converts digital image signals into image signals in standard RGB, YUV and other formats.
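As an illustration of such a format conversion (not part of the embodiment; the BT.601 full-range matrix is one common convention, and the function name `yuv_to_rgb` is assumed for illustration; a real DSP pipeline may use different coefficients):

```python
def yuv_to_rgb(y: float, u: float, v: float) -> tuple:
    """Convert one 8-bit YUV sample to RGB using the BT.601 full-range
    matrix (one common convention; real ISP/DSP pipelines may differ)."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep values in 8-bit range
    return (clamp(r), clamp(g), clamp(b))
```

A neutral gray sample (Y=128, U=V=128) maps to equal RGB components, since the chroma terms vanish.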
  • the terminal 100 may include one or N cameras, where N is a positive integer greater than 1.
  • the terminal 100 may include multiple cameras with different FOVs.
  • a camera A, a camera B, and a camera C may be provided on the rear case of the terminal 100.
  • the camera A, the camera B, and the camera C may be arranged horizontally, vertically, or in a triangle.
  • the camera A, the camera B, and the camera C have different FOVs.
  • For example, the FOV of camera A is smaller than the FOV of camera B, and the FOV of camera B is smaller than the FOV of camera C.
  • Correspondingly, the shooting picture 3 captured by camera C is the largest, the shooting picture 2 captured by camera B is smaller than shooting picture 3, and the shooting picture 1 captured by camera A is the smallest; that is, shooting picture 3 may include shooting picture 2, and shooting picture 2 may include shooting picture 1.
  • The shooting target may dynamically change across the above-mentioned shooting picture 1 to shooting picture 3.
  • The terminal 100 may determine, according to the size and position of the shooting target in each shooting picture, the target camera with the best shooting effect for the shooting target. In this way, if the currently used camera is not the target camera, the terminal 100 may automatically switch to the target camera for shooting, thereby improving the shooting effect for the shooting target.
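The selection logic described above can be sketched as follows (a minimal illustration, not part of the embodiment: the axis-aligned rectangle model, the margin value, and names such as `Rect` and `pick_target_camera` are assumptions; here the "best" camera is taken to be the smallest-FOV camera whose shooting picture still fully contains the shooting target):

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

    def contains(self, other: "Rect", margin: float = 0.0) -> bool:
        # True if `other` lies inside this rectangle with `margin` to spare
        return (self.x + margin <= other.x
                and self.y + margin <= other.y
                and other.x + other.w <= self.x + self.w - margin
                and other.y + other.h <= self.y + self.h - margin)

def pick_target_camera(pictures: dict, target: Rect, margin: float = 10.0) -> str:
    """Return the smallest-FOV camera whose shooting picture still
    fully contains the shooting target (with a safety margin)."""
    for name in ("A", "B", "C"):  # ordered from smallest FOV to largest
        if pictures[name].contains(target, margin):
            return name
    return "C"  # target overflows even picture 2: fall back to the widest camera
```

If the currently active camera differs from the returned one, the terminal would switch to it, which matches the switching behavior described in the text.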
  • The digital signal processor is used to process digital signals; in addition to digital image signals, it can also process other digital signals. For example, when the terminal 100 selects a frequency point, the digital signal processor can be used to perform a Fourier transform on the frequency-point energy and the like.
  • Video codecs are used to compress or decompress digital video.
  • the terminal 100 may support one or more video codecs. In this way, the terminal 100 can play or record videos in multiple encoding formats, such as: MPEG1, MPEG2, MPEG3, MPEG4, and so on.
  • the NPU is a neural-network (NN) computing processor.
  • With the NPU, applications such as intelligent cognition of the terminal 100 can be implemented, for example: image recognition, face recognition, speech recognition, text understanding, and the like.
  • the external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to achieve the expansion of the storage capacity of the terminal 100.
  • the external memory card communicates with the processor through an external memory interface to implement a data storage function. For example, save music, videos and other files on an external memory card.
  • the internal memory 121 may be used to store computer executable program code, where the executable program code includes instructions.
  • the processor 110 executes various functional applications and data processing of the terminal 100 by executing instructions stored in the internal memory 121.
  • the memory 121 may include a storage program area and a storage data area.
  • the storage program area may store an operating system, at least one application required by a function (such as a sound playback function, an image playback function, etc.) and the like.
  • the storage data area may store data (such as audio data, phone book, etc.) created during the use of the terminal 100.
  • The memory 121 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, another volatile solid-state storage device, a universal flash storage (UFS), etc.
  • the terminal 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headphone interface 170D, and an application processor. Such as music playback, recording, etc.
  • the audio module is used to convert digital audio information into an analog audio signal output, and is also used to convert an analog audio input into a digital audio signal.
  • the audio module can also be used to encode and decode audio signals.
  • the audio module may be disposed in the processor 110, or some functional modules of the audio module may be disposed in the processor 110.
  • The speaker 170A, also called a "horn", is used to convert an audio electrical signal into a sound signal.
  • the terminal 100 can listen to music through a speaker or listen to a hands-free call.
  • The receiver 170B, also referred to as the "handset", is used to convert an audio electrical signal into a sound signal.
  • When the terminal 100 answers a call or receives a voice message, the voice can be heard by holding the receiver close to the ear.
  • The microphone 170C, also called a "mic" or "sound transmitter", is used to convert a sound signal into an electrical signal.
  • When making a call or sending a voice message, the user can input a sound signal into the microphone by speaking with the mouth close to the microphone.
  • the terminal 100 may be provided with at least one microphone.
  • the terminal 100 may be provided with two microphones, and in addition to collecting sound signals, a noise reduction function may also be implemented.
  • the terminal 100 may further be provided with three, four, or more microphones to collect sound signals, reduce noise, and also identify the source of the sound, and implement a directional recording function.
  • the headset interface 170D is used to connect a wired headset.
  • The earphone interface may be a USB interface, a 3.5 mm open mobile terminal platform (OMTP) standard interface, or a Cellular Telecommunications Industry Association of the USA (CTIA) standard interface.
  • the pressure sensor 180A is used to sense a pressure signal, and can convert the pressure signal into an electrical signal.
  • the pressure sensor may be disposed on the display screen.
  • A capacitive pressure sensor may include at least two parallel plates with a conductive material; when a force is applied to the pressure sensor, the capacitance between the electrodes changes.
  • the terminal 100 determines the intensity of the pressure according to the change in capacitance.
  • the terminal 100 detects the intensity of the touch operation according to a pressure sensor.
  • the terminal 100 may also calculate a touched position according to a detection signal of the pressure sensor.
  • touch operations acting on the same touch position but different touch operation intensities may correspond to different operation instructions. For example, when a touch operation with a touch operation intensity lower than the first pressure threshold is applied to the short message application icon, an instruction for viewing the short message is executed. When a touch operation with a touch operation intensity greater than or equal to the first pressure threshold is applied to the short message application icon, an instruction for creating a short message is executed.
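The threshold behavior in the example above can be sketched as follows (an illustration only; the normalized threshold value and the names `FIRST_PRESSURE_THRESHOLD` and `dispatch_sms_icon_touch` are assumptions, not part of the embodiment):

```python
FIRST_PRESSURE_THRESHOLD = 0.5  # assumed normalized pressure threshold

def dispatch_sms_icon_touch(intensity: float) -> str:
    """Map the intensity of a touch on the short-message icon to an
    operation instruction, as described for the pressure sensor 180A."""
    if intensity < FIRST_PRESSURE_THRESHOLD:
        return "view_sms"    # light press: execute the view-message instruction
    return "create_sms"      # firm press: execute the create-message instruction
```

The same touch position thus yields different instructions depending only on the measured touch intensity.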
  • the gyro sensor 180B may be used to determine a motion posture of the terminal 100.
  • the angular velocity of the terminal 100 about three axes may be determined by a gyro sensor.
  • A gyroscope sensor can be used for image stabilization. Exemplarily, when the shutter is pressed, the gyro sensor detects the shake angle of the terminal 100, calculates, according to the angle, the distance that the lens module needs to compensate, and lets the lens counteract the shake of the terminal 100 through reverse motion, thereby achieving image stabilization.
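The angle-to-distance compensation step can be sketched with a simple small-angle lens-shift model (an assumption for illustration; the relation `distance ≈ focal_length · tan(angle)` and the function name `lens_compensation` are not taken from the embodiment):

```python
import math

def lens_compensation(shake_angle_deg: float, focal_length_mm: float) -> float:
    """Distance (mm) the lens module moves, in the opposite direction,
    to offset a measured shake angle (small-angle pinhole model)."""
    return -focal_length_mm * math.tan(math.radians(shake_angle_deg))
```

The negative sign expresses the reverse motion: the lens shifts opposite to the detected shake.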
  • the gyroscope sensor can also be used for navigation and somatosensory game scenes.
  • the barometric pressure sensor 180C is used to measure air pressure.
  • In some embodiments, the terminal 100 calculates the altitude based on the air pressure value measured by the barometric pressure sensor, to assist in positioning and navigation.
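One common way to perform this calculation is the international barometric formula (a standard-atmosphere model; the embodiment does not specify which formula is used, so this is an illustrative assumption):

```python
def altitude_from_pressure(p_hpa: float, p0_hpa: float = 1013.25) -> float:
    """Estimate altitude (m) from the measured air pressure (hPa) using
    the international barometric formula, assuming a standard atmosphere
    with sea-level pressure p0_hpa."""
    return 44330.0 * (1.0 - (p_hpa / p0_hpa) ** (1.0 / 5.255))
```

For example, a reading of about 900 hPa corresponds to an altitude of roughly 1 km under this model.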
  • the magnetic sensor 180D includes a Hall sensor.
  • The terminal 100 can detect the opening and closing of a flip leather case by using the magnetic sensor.
  • In some embodiments, the terminal 100 may detect the opening and closing of the flip cover according to the magnetic sensor, and then set characteristics such as automatic unlocking upon flip-open according to the detected open or closed state of the holster or flip cover.
  • the acceleration sensor 180E can detect the magnitude of the acceleration of the terminal 100 in various directions (generally three axes).
  • the magnitude and direction of gravity can be detected when the terminal 100 is stationary. It can also be used to identify the posture of the terminal, and is used in applications such as switching between horizontal and vertical screens, and pedometers.
  • The distance sensor 180F is used to measure distance. The terminal 100 can measure distance by infrared or laser. In some embodiments, when shooting a scene, the terminal 100 may use the distance sensor to measure distance to achieve fast focusing.
  • the proximity light sensor 180G may include, for example, a light emitting diode (LED) and a light detector, such as a photodiode.
  • the light emitting diode may be an infrared light emitting diode. Infrared light is emitted outward through a light emitting diode.
  • the terminal 100 may use a proximity light sensor to detect that the user is holding the terminal 100 close to the ear to talk, so as to automatically turn off the screen to save power.
  • The proximity light sensor can also be used in holster mode and pocket mode to automatically unlock and lock the screen.
  • the ambient light sensor 180L is used to sense ambient light brightness.
  • the terminal 100 can adaptively adjust the brightness of the display screen according to the perceived ambient light brightness.
  • the ambient light sensor can also be used to automatically adjust the white balance when taking pictures.
  • the ambient light sensor can also cooperate with the proximity light sensor to detect whether the terminal 100 is in a pocket to prevent accidental touch.
  • the fingerprint sensor 180H is used to collect fingerprints.
  • the terminal 100 may use the collected fingerprint characteristics to realize fingerprint unlocking, access application lock, fingerprint photographing, fingerprint answering an incoming call, and the like.
  • the temperature sensor 180J is used to detect the temperature.
  • the terminal 100 executes a temperature processing strategy using the temperature detected by the temperature sensor. For example, when the temperature reported by the temperature sensor exceeds a threshold, the terminal 100 executes reducing the performance of a processor located near the temperature sensor in order to reduce power consumption and implement thermal protection.
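A temperature processing strategy of this kind can be sketched as a simple throttling curve (an illustration only; the threshold, the 5%-per-degree slope, and the name `cpu_performance_level` are assumptions, not values from the embodiment):

```python
def cpu_performance_level(temp_c: float, threshold_c: float = 45.0) -> float:
    """Return a performance scale factor for a processor located near the
    temperature sensor 180J: full speed below the threshold, progressively
    reduced above it (thermal protection, lower power consumption)."""
    if temp_c <= threshold_c:
        return 1.0
    # drop 5% of performance per degree above the threshold, floor at 20%
    return max(0.2, 1.0 - 0.05 * (temp_c - threshold_c))
```

Below the threshold the processor runs unthrottled; well above it, performance bottoms out at the floor value rather than dropping to zero.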
  • The touch sensor 180K, also called a "touch panel", can be disposed on the display screen and is used to detect a touch operation on or near it. The detected touch operation can be passed to the application processor to determine the type of the touch event, and a corresponding visual output is provided through the display screen.
  • the bone conduction sensor 180M can acquire vibration signals.
  • the bone conduction sensor may obtain a vibration signal of a human voice oscillating bone mass.
  • The bone conduction sensor can also contact the pulse of a human body and receive a blood pressure beating signal.
  • a bone conduction sensor may also be provided in the headset.
  • the audio module 170 may analyze a voice signal based on a vibration signal of a oscillating bone mass obtained by the bone conduction sensor to implement a voice function.
  • the application processor may analyze the heart rate information based on the blood pressure beating signal obtained by the bone conduction sensor to implement a heart rate detection function.
  • the keys 190 include a power-on key, a volume key, and the like.
  • The keys may be mechanical keys or touch keys.
  • the terminal 100 receives a key input, and generates a key signal input related to user settings and function control of the terminal 100.
  • the motor 191 may generate a vibration alert.
  • The motor can be used for incoming-call vibration alerts and for touch vibration feedback.
  • the touch operation applied to different applications can correspond to different vibration feedback effects.
  • Touch operations on different areas of the display can also correspond to different vibration feedback effects.
  • Different application scenarios (such as time reminders, receiving information, alarm clocks, games, etc.) can also correspond to different vibration feedback effects.
  • Touch vibration feedback effect can also support customization.
  • the indicator 192 can be an indicator light, which can be used to indicate the charging status, power change, and can also be used to indicate messages, missed calls, notifications, and so on.
  • the SIM card interface 195 is used to connect to a subscriber identity module (SIM).
  • The SIM card can be brought into contact with or separated from the terminal 100 by being inserted into or removed from the SIM card interface.
  • the terminal 100 may support one or N SIM card interfaces, and N is a positive integer greater than 1.
  • the SIM card interface can support Nano SIM cards, Micro SIM cards, SIM cards, etc. Multiple SIM cards can be inserted into the same SIM card interface at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface is also compatible with different types of SIM cards.
  • the SIM card interface is also compatible with external memory cards.
  • the terminal 100 interacts with the network through a SIM card to implement functions such as calling and data communication.
  • the terminal 100 uses an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the terminal 100 and cannot be separated from the terminal 100.
  • the software system of the terminal 100 may adopt a layered architecture, an event-driven architecture, a micro-core architecture, a micro-service architecture, or a cloud architecture.
  • the embodiment of the present invention takes the Android system with a layered architecture as an example, and exemplifies the software structure of the terminal 100.
  • FIG. 4 is a software structural block diagram of the terminal 100 according to an embodiment of the present invention.
  • the layered architecture divides the software into several layers, each of which has a clear role and division of labor. Layers communicate with each other through interfaces.
  • the Android system is divided into four layers, which are an application layer, an application framework layer, an Android runtime and a system library, and a kernel layer from top to bottom.
  • the application layer can include a series of application packages.
  • the application package can include applications such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and SMS.
  • These applications can be system-level applications of the operating system (for example, desktop, SMS, call, calendar, contacts, etc.), or can be ordinary-level applications (for example, WeChat, Taobao, etc.).
  • a system-level application generally means that the application has system-level permissions and can access various system resources.
  • Ordinary-level applications generally refer to: the application has ordinary permissions, may not be able to obtain some system resources, or needs user authorization to obtain some system resources.
  • System-level applications can be pre-installed applications in the phone.
  • Ordinary-level applications can be pre-installed applications on the phone, or can be installed by subsequent users.
  • the application framework layer provides an application programming interface (API) and a programming framework for applications at the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, and the like.
  • the window manager is used to manage window programs.
  • the window manager can obtain the display size, determine whether there is a status bar, lock the screen, take a screenshot, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • the data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on.
  • the view system includes visual controls, such as controls that display text, controls that display pictures, and so on.
  • the view system can be used to build applications.
  • the display interface can consist of one or more views.
  • the display interface including the SMS notification icon may include a view that displays text and a view that displays pictures.
  • the phone manager is used to provide a communication function of the terminal 100. For example, management of call status (including connection, hang up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages that can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the completion of downloading, message alert, etc.
  • the notification manager can also be a notification that appears in the status bar at the top of the system in the form of a chart or scroll bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window.
  • For example, a text message is displayed in the status bar, an alert tone is emitted, the terminal vibrates, and the indicator light flashes.
  • Android Runtime includes core libraries and virtual machines. Android runtime is responsible for the scheduling and management of the Android system.
  • the core library contains two parts: one is the functional functions that the Java language needs to call, and the other is the Android core library.
  • the application layer and the application framework layer run in a virtual machine.
  • the virtual machine executes the java files of the application layer and the application framework layer as binary files.
  • Virtual machines are used to perform object lifecycle management, stack management, thread management, security and exception management, and garbage collection.
  • the system library can include multiple functional modules, for example: a surface manager (Surface Manager), a media library (Media Library), a three-dimensional graphics processing library (OpenGL ES), and a 2D graphics engine (SGL).
  • the Surface Manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
  • the media library supports a variety of commonly used audio and video formats for playback and recording, as well as still image files.
  • the media library can support multiple audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • OpenGL ES is used to implement 3D graphics drawing, image rendering, compositing, and layer processing.
  • SGL is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • the following describes the workflow of the software and hardware of the terminal 100 by way of example in conjunction with capturing a photographing scene.
  • When a touch operation is received, the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer can process the touch operation into a raw input event (including touch coordinates, the time stamp of the touch operation, and other information). Raw input events are stored at the kernel layer.
  • the application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event. Taking a touch-and-click operation whose corresponding control is the camera application icon as an example, the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer, so that the camera captures each frame.
  • the terminal 100 may first open one of the cameras (for example, camera A). After the camera A is turned on, the terminal can capture each frame captured by the camera A (hereinafter referred to as the captured frame 1), and display the captured frame 1 on the display screen 194. At this time, the terminal may determine a shooting target in the shooting screen 1. For example, as shown in FIG. 5, a person identified in the shooting screen 1 may be used as the shooting target 501.
  • the terminal 100 can track changes of the shooting target 501 in the shooting screen 1 in real time, for example, parameters such as the position and size of the shooting target 501.
  • the terminal 100 can determine the position and size of the shooting target 501 in the shooting screen 1, in the shooting screen of the camera B (that is, the shooting screen 2), and in the shooting screen of the camera C (that is, the shooting screen 3).
  • the terminal 100 can determine a camera that can ensure the shooting effect of the shooting target 501 according to the position and size of the shooting target 501 in each shooting screen. For example, if the shooting target 501 has overflowed the shooting screen 1 of the camera A, the terminal 100 can switch the currently opened camera A to the camera B or camera C with a larger FOV, thereby automatically and intelligently helping the user to switch during the shooting process A suitable camera captures the shooting target to improve the shooting effect of the shooting target.
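The overflow-then-switch rule described above can be sketched in a few lines of Python. This is an illustrative sketch, not code from the patent: the rectangle representation (x, y, w, h), the camera names, and the FOV ordering A < B < C are assumptions taken from this example.

```python
# Hypothetical sketch of the selection rule: if the shooting target
# overflows the current camera's shooting screen, switch to a camera
# with a larger FOV. Rectangles are (x, y, w, h).

def target_overflows(frame, target):
    """Return True if the target rectangle is not fully inside the frame."""
    fx, fy, fw, fh = frame
    tx, ty, tw, th = target
    return tx < fx or ty < fy or tx + tw > fx + fw or ty + th > fy + fh

# Cameras ordered by FOV, smallest to largest (camera A < B < C).
CAMERAS_BY_FOV = ["A", "B", "C"]

def next_larger_fov(current):
    """Pick the next camera with a larger FOV, or stay if already largest."""
    i = CAMERAS_BY_FOV.index(current)
    return CAMERAS_BY_FOV[min(i + 1, len(CAMERAS_BY_FOV) - 1)]

frame_a = (0, 0, 640, 480)
target = (500, 100, 200, 150)          # overflows the right edge of frame_a
camera = "A"
if target_overflows(frame_a, target):
    camera = next_larger_fov(camera)   # switch A -> B
```

In practice the terminal would run this check on every refreshed frame, using the tracked position of the shooting target.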
  • shooting picture 1 is a shooting picture generated by camera A through all the photosensitive elements in camera A at the default standard focal length
  • shooting picture 2 is a shooting picture generated by the camera B through all the photosensitive elements in the camera B at the default standard focal length
  • the shooting screen 3 is a shooting screen generated by the camera C through all the photosensitive elements in the camera C under the default standard focal length.
  • the standard focal length of a camera is usually the minimum focal length supported by the camera. Taking the shooting frame 3 as an example, the user can manually increase the focal length of the camera C.
  • the terminal 100 can use digital zoom to intercept a part of the shooting frame 3 as a new shooting frame (for example, the shooting frame 3'), and display the shooting frame 3' in the preview interface. Since the shooting frame 3' is part of the shooting frame 3, although the field of view presented by the shooting frame 3' becomes smaller, the resolution of the shooting frame 3' also decreases accordingly.
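Digital zoom as described here, cropping a central portion of shooting frame 3, can be sketched as follows. The sensor dimensions and the zoom factor below are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of digital zoom: crop the central part of shooting
# frame 3 and keep it as the new frame 3'. Doubling the zoom factor halves
# the visible field of view; the crop is later upscaled for display, which
# is why the effective resolution drops.

def digital_zoom_crop(frame_w, frame_h, zoom):
    """Return the centered crop rectangle (x, y, w, h) for a zoom factor >= 1."""
    w, h = int(frame_w / zoom), int(frame_h / zoom)
    x, y = (frame_w - w) // 2, (frame_h - h) // 2
    return (x, y, w, h)

crop = digital_zoom_crop(4000, 3000, 2.0)   # 2x zoom on a 4000x3000 sensor
# crop == (1000, 750, 2000, 1500): a quarter of the original pixel area
```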
  • FIG. 6A is a schematic flowchart of a method for switching cameras according to an embodiment of the present application. As shown in FIG. 6A, the method for switching cameras may include:
  • the mobile phone receives a user's first operation on the camera.
  • the mobile phone is equipped with three cameras with different FOVs.
  • the first operation may be any operation that the user opens the camera.
  • the first operation may specifically be an operation of clicking the camera APP icon, or opening the camera APP from another APP.
  • the first operation may be any operation that the user starts to record a video.
  • the first operation may specifically be an operation of a user clicking a recording button after opening a camera APP, and this embodiment of the present application does not place any limitation on this.
  • the mobile phone may be equipped with N (N is an integer greater than 1) cameras.
  • the FOVs of these N cameras are different. That is, the field of view of the shooting pictures taken by the mobile phone using different cameras is different.
  • the N cameras can be set on the rear case of the mobile phone as a rear camera, or can be set on the front panel of the mobile phone as a front camera.
  • three cameras A, B, and C with different FOVs are used as examples for illustration.
  • the FOV of camera A is the smallest
  • the FOV of camera C is the largest
  • the FOV of camera B is between the FOV of camera A and the FOV of camera C.
  • the mobile phone uses the first camera to capture a first shooting picture.
  • the mobile phone may use a default camera or randomly select a camera (that is, the first camera) for shooting.
  • the captured shooting picture (the picture captured by the first camera is referred to as the first shooting picture in this application) is sent to the mobile phone, and the mobile phone displays the first shooting picture on the display screen.
  • the mobile phone can detect that the user has performed the first operation for opening the camera. Furthermore, the mobile phone may select one camera from the camera A, the camera B, and the camera C as the first camera to start working. For example, the mobile phone may use the camera B with the intermediate FOV as the first camera, and then call the camera driver to start the camera B and use the camera B to capture the first shooting picture. At this time, as shown in (b) of FIG. 7, the mobile phone may display the first shooting frame 702 of each frame captured by the camera B in the preview interface of the camera APP.
  • the mobile phone may also set a switch button 703 in the preview interface of the camera APP. After detecting that the user turns on the switch button 703, the mobile phone can automatically determine and switch the camera with better shooting effect according to the following steps S603-S606, otherwise, the mobile phone can continue to use the first camera to capture the first shooting picture.
  • the mobile phone determines a shooting target in the first shooting screen.
  • the user can manually select the desired shooting target in the first shooting picture 702.
  • a circle selection button 801 may be set in the preview interface of the camera APP. After the user clicks the circle selection button 801, the mobile phone can provide a circle selection frame 802 whose position and size can be changed to the user. The user can drag the circle selection frame 802 to select a shooting target in the first shooting frame 702.
  • the mobile phone may extract the image in the circle selection frame 802 for image recognition and obtain the image features in the circle selection frame 802, thereby determining that the shooting target in the first shooting screen 702 is a car.
  • the user can also select the desired shooting target in the first shooting screen 702 by operations such as clicking, double-clicking, long-pressing, or force-pressing.
  • the embodiment of the present application does not limit this.
  • the mobile phone may automatically perform image recognition on the first shooting picture, and then determine a shooting target in the first shooting picture according to the recognition result.
  • a mobile phone may use a face recognized in the first shooting frame as the shooting target, or use a person or an object located at the center of the first shooting frame as the shooting target, or use a person or an object occupying a certain area proportion of the first shooting frame as the shooting target; this embodiment of the present application does not place any limitation on this.
  • After the mobile phone determines the shooting target in the first shooting screen 702, it can prompt the user about the shooting target in the first shooting screen 702 by using prompt methods such as adding a frame, highlighting, voice, or text.
  • the user can also modify the shooting target. For example, the user can drag the border displayed by the mobile phone around the shooting target to a position desired by the user, and the image in the border after dragging can be used as a new shooting target.
  • the mobile phone can track the position and size of the shooting target in the first shooting frame in real time. For example, when the mobile phone determines that the shooting target in the first shooting picture 702 is a car, it can obtain the image features of the car. When the mobile phone uses the camera B to continuously refresh the first shooting picture 702, the mobile phone can determine, according to these image features, parameters such as the position and size of the car in the refreshed first shooting picture 702.
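The per-frame tracking result, a position and a size for the target, can be sketched as a bounding box over matched image features. A minimal illustration: the sample feature points are invented for the example, and the actual feature matching is not shown.

```python
# Hypothetical sketch: per frame, the phone re-locates the target's image
# features and reduces them to a position and size. Here the "features" are
# just sample points (x, y); the footprint is their axis-aligned bounding box.

def footprint(points):
    """Smallest axis-aligned rectangle (x, y, w, h) containing all points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))

def center(rect):
    """Center point of a rectangle, used as the target's position."""
    x, y, w, h = rect
    return (x + w / 2, y + h / 2)

car_points = [(120, 200), (300, 260), (180, 320)]   # matched feature points
rect = footprint(car_points)        # size of the target in this frame
pos = center(rect)                  # position of the target in this frame
```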
  • the mobile phone determines a target camera according to a position and a size of the shooting target in the first shooting picture.
  • the mobile phone may first determine whether the camera needs to be switched according to the position of the shooting target in the first shooting frame. If the camera needs to be switched, the mobile phone can further determine which camera to use as the target camera to be switched.
  • the mobile phone may first determine, according to the position of the shooting target in the first shooting screen, whether the shooting target is completely displayed in the first shooting screen. Take the first shooting picture as an example of the shooting picture 2 taken by the camera B. As shown in (a) of FIG. 9, after the mobile phone determines the shooting target 1001, it can further determine whether any boundary line of the shooting picture 2 coincides with the shooting target 1001. If coincidence occurs, it indicates that the shooting target cannot be completely displayed in the first shooting screen; at this time, the mobile phone may continue to execute step S611 in FIG. 6B. For example, if the boundary line 903 of the shooting frame 2 in FIG. 9 coincides with the shooting target 1001, the mobile phone can determine a camera with a larger FOV (for example, the camera C) as the target camera to be switched, so that the shooting target 1001 can be displayed more completely in the shooting screen 3 shot by the camera C.
  • the mobile phone can set a regular-shaped footprint 1002 for the shooting target 1001. The footprint 1002 is generally a rectangular area that can accommodate the shooting target 1001.
  • the mobile phone can determine whether the camera needs to be replaced by comparing the coincidence degrees of the boundaries of the footprint 1002 and the boundaries of the shooting frame 2. For example, as shown in (a) of FIG. 10, when a boundary line of the occupancy area 1002 and the boundary line 903 of the shooting frame 2 completely overlap, it indicates that the shooting target 1001 cannot be completely displayed in the shooting frame 2. Then, as shown in (b) of FIG. 10, the mobile phone may determine a camera with a larger FOV (for example, camera C) as the above target camera, so that the shooting target 1001 can be displayed closer to the center of the shooting frame 3.
  • If the first camera currently in use (for example, the camera B) is already the camera with the largest FOV, it means that the shooting target cannot be completely displayed within the maximum field of view of the mobile phone, and the mobile phone does not need to switch to another camera; the target camera is still the first camera in use.
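The boundary-coincidence test of step S611 can be sketched as an edge check between the target's footprint and the current shooting frame. The rectangles and coordinates below are illustrative assumptions.

```python
# Hypothetical sketch of the boundary-coincidence test from step S611:
# if any edge of the target's footprint reaches (coincides with) the
# corresponding edge of the shooting frame, the target is clipped and a
# larger-FOV camera is needed. Rectangles are (x, y, w, h).

def coinciding_edges(frame, box):
    """Return the set of frame edges that the box touches or crosses."""
    fx, fy, fw, fh = frame
    bx, by, bw, bh = box
    edges = set()
    if bx <= fx:
        edges.add("left")
    if by <= fy:
        edges.add("top")
    if bx + bw >= fx + fw:
        edges.add("right")
    if by + bh >= fy + fh:
        edges.add("bottom")
    return edges

frame2 = (0, 0, 1920, 1080)
footprint_1002 = (1500, 300, 420, 400)   # touches the right boundary line
clipped = bool(coinciding_edges(frame2, footprint_1002))   # True -> switch
```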
  • the mobile phone can continue to perform any of steps S612-S615. That is, when the shooting target can be completely displayed on the first shooting screen, the embodiments of the present application provide four ways to determine the target camera to be switched.
  • the mobile phone can continue to use the first camera (ie, camera B) in use as the target camera.
  • the mobile phone can further determine, according to the position of the shooting target 1001, whether the shooting target 1001 is located in the best shooting area of the shooting screen 2. If the shooting target 1001 is located in the best shooting area of the shooting screen 2, the shooting effect of the shooting target 1001 in the shooting screen 2 is better. Therefore, the mobile phone does not need to switch to another camera, and the target camera is still the camera B in use.
  • the mobile phone can determine a camera with a larger FOV (for example, camera C) as the target camera to be switched, so that the shooting target 1001 can be displayed in the shooting screen 3 with a larger optimal shooting area.
  • the mobile phone can further calculate the target proportion of the shooting target 1001 in the shooting screen 2 according to the size of the shooting target 1001.
  • the target ratio may be a size ratio between the shooting target 1001 and the shooting frame 2; or a size ratio between the shooting target 1001 and the best shooting area in the shooting frame 2; or a size ratio between the footprint 1002 of the shooting target 1001 and the shooting frame 2; or a size ratio between the footprint of the shooting target 1001 and the best shooting area in the shooting frame 2.
  • the mobile phone can also set a target proportion range in advance. For example, when the proportion of the shooting target in the shooting screen is between 50% and 80%, the shooting effect of the shooting target in the shooting screen best matches human visual perception. Therefore, the above target proportion range can be set to 50%-80%. In this way, if the target proportion of the shooting target 1001 in the shooting screen 2 falls within the above target proportion range, it indicates that the shooting effect of the shooting target 1001 in the shooting screen 2 is better, so the mobile phone does not need to switch to another camera, and the target camera is still the camera B in use.
  • If the target proportion of the shooting target 1001 in the shooting frame 2 is greater than the upper limit of the target proportion range, it means that the FOV of the currently used camera B is too small, and the mobile phone can determine a camera with a larger FOV (such as the camera C) as the above target camera. If the target proportion of the shooting target 1001 in the shooting screen 2 is smaller than the lower limit of the target proportion range, it means that the FOV of the currently used camera B is too large, and the mobile phone can determine a camera with a smaller FOV (such as the camera A) as the above target camera.
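The proportion-based rule above can be sketched as a small decision function. The 50%-80% range comes from the example in the text; defining "target proportion" as an area ratio is one of the several definitions the text allows, chosen here for simplicity.

```python
# Hypothetical sketch of the target-proportion rule: too large a proportion
# means the FOV is too small (switch to a larger-FOV camera), and too small
# a proportion means the FOV is too large (switch to a smaller-FOV camera).

RATIO_LOW, RATIO_HIGH = 0.5, 0.8     # assumed target proportion range

def area(rect):
    return rect[2] * rect[3]

def decide_switch(frame, target):
    """Return 'larger_fov', 'smaller_fov', or 'keep' for the current camera."""
    ratio = area(target) / area(frame)
    if ratio > RATIO_HIGH:
        return "larger_fov"     # e.g. camera B -> camera C
    if ratio < RATIO_LOW:
        return "smaller_fov"    # e.g. camera B -> camera A
    return "keep"

frame2 = (0, 0, 100, 100)
assert decide_switch(frame2, (10, 10, 30, 30)) == "smaller_fov"   # 9%
assert decide_switch(frame2, (5, 5, 90, 90)) == "larger_fov"      # 81%
assert decide_switch(frame2, (10, 10, 80, 80)) == "keep"          # 64%
```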
  • the mobile phone can determine, according to the FOVs of the camera A, the camera B, and the camera C, the relative positional relationship between the shooting picture 1 shot by the camera A, the shooting picture 2 shot by the camera B, and the shooting picture 3 shot by the camera C.
  • the shooting frame 1 is located at the center of the shooting frame 2 and is 70% of the size of the shooting frame 2
  • the shooting frame 2 is located at the center of the shooting frame 3 and is 70% of the size of the shooting frame 3.
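Under this example geometry, the phone can predict whether the target would fit in a smaller camera's picture without actually opening that camera, by expressing the inner picture as a rectangle in the outer picture's coordinates. A sketch under that assumed 70% concentric layout:

```python
# Hypothetical sketch of the relative-position relationship: the inner
# shooting picture sits centered inside the outer one at 70% of its linear
# size (picture 1 inside picture 2, picture 2 inside picture 3). Mapping a
# target rectangle from the outer picture's coordinates against the inner
# picture's rectangle predicts whether the target would fit there.

SCALE = 0.7   # assumed linear size of the inner picture vs the outer one

def inner_frame(outer_w, outer_h):
    """Rectangle of the inner picture, in the outer picture's coordinates."""
    w, h = round(outer_w * SCALE), round(outer_h * SCALE)
    return ((outer_w - w) // 2, (outer_h - h) // 2, w, h)

def fits_inside(inner, target):
    ix, iy, iw, ih = inner
    tx, ty, tw, th = target
    return tx >= ix and ty >= iy and tx + tw <= ix + iw and ty + th <= iy + ih

pic2_w, pic2_h = 1000, 1000
pic1 = inner_frame(pic2_w, pic2_h)         # picture 1 inside picture 2
target = (200, 200, 400, 300)              # target in picture-2 coordinates
would_fit_in_pic1 = fits_inside(pic1, target)
```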
  • the mobile phone may also determine the target camera to be switched in combination with the position and size of the shooting target 1001 in the shooting pictures of other cameras (for example, the above-mentioned shooting picture 1 and shooting picture 3).
  • step S615 may specifically include the following steps S901-S902.
  • the mobile phone determines a candidate camera according to the positions of the shooting targets in the first shooting picture, the second shooting picture, and the third shooting picture.
  • the second shooting screen refers to a shooting screen when shooting with a second camera
  • the third shooting screen refers to a shooting screen when shooting with a third camera.
  • the mobile phone may use, as a candidate camera, a camera capable of completely capturing the foregoing shooting target according to a position relationship of the shooting target in the first shooting screen, the second shooting screen, and the third shooting screen. Since the mobile phone has determined that the shooting target can be completely displayed in the first shooting screen, the first camera (for example, camera B) currently in use is one of the candidate cameras.
  • the mobile phone can also calculate whether the shooting target can be completely displayed in the second shooting screen and the third shooting screen, following the method used to calculate whether the shooting target can be completely displayed in the first shooting screen. For example, the mobile phone can calculate whether the boundary of the second shooting screen coincides with the shooting target (or the target's footprint). If they coincide, it means that the shooting target cannot be completely displayed in the second shooting screen; if they do not coincide, it means that the shooting target can be completely displayed in the second shooting screen, and the second camera corresponding to the second shooting screen can be used as one of the candidate cameras. Similarly, the mobile phone can also calculate whether the boundary of the third shooting frame coincides with the shooting target (or the target's footprint). If they coincide, the shooting target cannot be completely displayed in the third shooting frame; if they do not coincide, it indicates that the shooting target can be completely displayed in the third shooting screen, and the third camera corresponding to the third shooting screen can be used as one of the candidate cameras.
  • the mobile phone can determine whether the footprint of the shooting target in the first shooting frame coincides with the boundary lines of the second shooting frame (or the third shooting frame), so as to determine whether the shooting target can be completely displayed in the second shooting frame (or the third shooting frame). Alternatively, the mobile phone can determine whether the shooting target can be completely displayed in the second shooting screen by calculating whether the footprint of the shooting target in the second shooting screen coincides with the boundary line of the second shooting screen, and determine whether the shooting target can be completely displayed in the third shooting screen by calculating whether the footprint of the shooting target in the third shooting screen coincides with the boundary line of the third shooting screen; this is not limited in the embodiment of the present application.
  • Similarly, whether a camera with a larger FOV (for example, the camera C) or a camera with a smaller FOV (for example, the camera A) can be used as one of the candidate cameras can be determined in the above manner.
  • the mobile phone can determine that the shooting target 1001 can be completely displayed in the shooting screen 3 by performing the above step S611, and the camera C is one of the candidate cameras. Further, the mobile phone can also calculate whether the shooting target 1001 can be completely displayed in the shooting screen 2 shot by the camera B and the shooting screen 1 shot by the camera A.
  • the mobile phone may determine the camera C as the only candidate camera.
  • the mobile phone may determine the camera A, the camera B, and the camera C as candidate cameras.
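Step S901's candidate selection can be sketched by applying the same containment test to all three shooting pictures at once. The concentric 70% layout below is an assumption carried over from the earlier example, with all coordinates expressed in shooting picture 3's space.

```python
# Hypothetical sketch of step S901: a camera is a candidate if the target's
# footprint fits entirely inside that camera's shooting picture. The frame
# geometry (concentric pictures, camera A smallest) is an assumed example.

def contains(frame, box):
    fx, fy, fw, fh = frame
    bx, by, bw, bh = box
    return bx >= fx and by >= fy and bx + bw <= fx + fw and by + bh <= fy + fh

# Shooting pictures 1, 2, 3 in picture 3's coordinate system (x, y, w, h).
frames = {
    "A": (255, 255, 490, 490),   # picture 1: 70% of picture 2, centered
    "B": (150, 150, 700, 700),   # picture 2: 70% of picture 3, centered
    "C": (0, 0, 1000, 1000),     # picture 3
}

def candidate_cameras(target_box):
    """Cameras whose shooting picture fully contains the target footprint."""
    return [cam for cam, frame in frames.items() if contains(frame, target_box)]

assert candidate_cameras((300, 300, 300, 300)) == ["A", "B", "C"]
assert candidate_cameras((200, 200, 600, 600)) == ["B", "C"]
assert candidate_cameras((50, 50, 900, 900)) == ["C"]
```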
  • the first shooting picture is still taken as an example of shooting picture 3.
  • After the mobile phone determines the shooting target 1001, it can further determine the footprint 1002 of the shooting target 1001, and compare the positional relationship between the footprint 1002 and the shooting frame 3 by executing step S611.
  • Since the shooting frame 3 can completely display the footprint 1002 of the shooting target 1001, the mobile phone may determine the camera C as one of the candidate cameras.
  • the mobile phone can further compare the positional relationship between the footprint 1002 and the shooting frame 1 and the shooting frame 2.
  • the mobile phone may determine the camera C as the only candidate camera.
  • the mobile phone can determine the cameras C and B as candidate cameras.
  • each of the shooting frame 1, shooting frame 2, and shooting frame 3 has an optimal shooting area. Therefore, when the mobile phone determines the candidate camera, the camera that can completely display the shooting target 1001 in the optimal shooting area can be used as the candidate camera. Therefore, the problem of distortion caused by the subsequent display of the shooting target 1001 on the edge of the shooting screen is avoided, and the shooting effect of the shooting target 1001 is improved.
  • Since the candidate camera determined according to the position of the shooting target 1001 in (a) of FIG. 11 includes only the camera C corresponding to the shooting screen 3, in order to display the shooting target 1001 completely and improve the shooting effect of the shooting target 1001, the mobile phone may determine the camera C as the target camera.
  • the mobile phone can continue to perform the following step S902 to determine, from the multiple candidate cameras, the target camera with the best shooting effect.
  • the mobile phone determines one target camera from the multiple candidate cameras according to the size of the shooting target in the first shooting frame, the second shooting frame, and the third shooting frame.
  • the mobile phone can determine the candidate camera with the smallest FOV as the target camera, so as to increase the proportion of subsequent shooting targets in the shooting screen.
  • the mobile phone may calculate the target proportions of the shooting targets in the first shooting frame, the second shooting frame, and the third shooting frame, respectively.
  • the target ratio may be the size ratio between the shooting target and each of the first shooting picture, the second shooting picture, and the third shooting picture; or the size ratio between the shooting target and the best shooting area in each of the three shooting pictures; or the size ratio between the footprint of the shooting target and each of the three shooting pictures; or the size ratio between the footprint of the shooting target and the best shooting area in each of the three shooting pictures.
  • the mobile phone can determine the camera corresponding to the shooting screen with the largest shooting target ratio as the target camera.
  • the mobile phone can also set an optimal proportion range of the shooting target in advance. For example, when the proportion of the shooting target in the shooting screen is between 50% and 80%, the shooting effect of the shooting target in the shooting screen best matches human visual perception. Therefore, the above optimal proportion range can be set to 50%-80%. In this way, when there are multiple candidate cameras, the mobile phone can calculate the target proportions of the shooting target in the first, second, and third shooting frames respectively, and then determine, as the target camera, the camera corresponding to the shooting picture whose target proportion falls within the optimal proportion range.
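Step S902's choice among multiple candidates can be sketched as: prefer a candidate whose target proportion falls in the assumed optimal range (50%-80%), otherwise fall back to the candidate where the target appears largest. The ratio values below are invented for illustration.

```python
# Hypothetical sketch of step S902: among the candidate cameras, pick the
# one whose shooting picture gives the target a proportion inside the
# assumed optimal range; failing that, pick the candidate where the target's
# proportion is largest (i.e. the smallest usable FOV).

OPT_LOW, OPT_HIGH = 0.5, 0.8

def pick_target_camera(candidates, ratios):
    """candidates: camera names; ratios: camera -> target proportion."""
    in_range = [c for c in candidates if OPT_LOW <= ratios[c] <= OPT_HIGH]
    if in_range:
        return max(in_range, key=lambda c: ratios[c])
    return max(candidates, key=lambda c: ratios[c])

ratios = {"A": 0.9, "B": 0.6, "C": 0.3}   # invented proportions per picture
assert pick_target_camera(["B", "C"], ratios) == "B"   # 0.6 is in range
assert pick_target_camera(["C"], ratios) == "C"        # only candidate
```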
  • In this way, the mobile phone determines the target camera according to the position and size of the shooting target in the first shooting screen, the second shooting screen, and the third shooting screen, so that when the mobile phone subsequently uses the target camera to shoot, the shooting effect of the shooting target is the best.
  • If the target camera determined by the mobile phone through step S604 is consistent with the first camera currently in use, it means that the shooting target already shows the best shooting effect in the first shooting screen displayed by the mobile phone; the mobile phone does not need to switch cameras and may continue to use the first camera to capture the first shooting picture of the next frame, performing the above steps S603-S604 in a loop.
  • the mobile phone switches the first camera to a second camera.
  • Take the first camera as the above camera B as an example. If the target camera determined by the mobile phone in step S604 is the camera A with a smaller FOV (that is, the second camera), as shown in FIG. 13, while the mobile phone is using the camera B to display the shooting screen, it can first turn on the camera A in the background. Furthermore, the mobile phone can display the shooting screen 1 captured by the camera A in the preview interface of the camera APP in the foreground, and close the camera B to complete the process of switching from the camera B to the camera A. This avoids problems such as the interruption of the shooting screen caused by directly closing the camera B and then turning on the camera A, and improves the user experience when the mobile phone switches cameras.
  • In addition, the mobile phone can also perform a smooth transition during the process of switching from the shooting screen 2 to the shooting screen 1, avoiding a visual jump from the shooting screen 2 to the shooting screen 1. For example, the mobile phone can gradually enlarge the screen content of the shooting screen 2 by digital zoom.
  • the mobile phone may display the shooting screen 1 captured by the camera A in the preview interface of the camera APP to complete the switching process from the camera B to the camera A.
  • Generally, the resolution of the shooting screen 1 captured by the camera A is higher. Therefore, in the above switching process, after the mobile phone turns on the camera A in the background, the shooting picture 1 captured by the camera A can also be merged into the currently displayed shooting picture 2 by image fusion technology, and the fused image is then gradually enlarged until the fused image is the same as the content in the shooting picture 1. At this point, the mobile phone may display the shooting picture 1 captured by the camera A in the preview interface of the camera APP, completing the switching process from the camera B to the camera A.
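The smooth camera-B-to-camera-A hand-off can be sketched as a sequence of digital zoom factors applied to the shooting screen 2 while the camera A starts in the background. The 0.7 linear FOV ratio and the step count are assumptions taken from the earlier 70% example.

```python
# Hypothetical sketch of the smooth B -> A hand-off: while camera A warms up
# in the background, the phone digitally zooms shooting screen 2 step by
# step until its field of view matches shooting screen 1, then swaps in the
# real screen 1 and closes camera B.

FOV_RATIO = 0.7   # assumed linear FOV of screen 1 relative to screen 2
STEPS = 5         # assumed number of intermediate transition frames

def transition_zoom_factors(steps=STEPS, ratio=FOV_RATIO):
    """Zoom factors applied to screen 2, ending where it matches screen 1."""
    final_zoom = 1.0 / ratio          # about 1.43x for a 0.7 ratio
    return [1.0 + (final_zoom - 1.0) * i / steps for i in range(steps + 1)]

zooms = transition_zoom_factors()
# zooms[0] is 1.0 (unzoomed screen 2); zooms[-1] matches screen 1's FOV,
# at which point camera B is closed and screen 1 is displayed directly.
```

The reverse transition (B to the larger-FOV camera C) would run the analogous sequence on screen 3, starting zoomed in on the screen-2 region and relaxing back to 1.0.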
  • the mobile phone switches the first camera to a third camera.
  • Take the first camera as the aforementioned camera B as an example. If the target camera determined by the mobile phone in step S604 is the camera C with a larger FOV (that is, the third camera), as shown in FIG. 15, while the mobile phone is using the camera B to display the shooting screen 2, it can first turn on the camera C in the background. Furthermore, the mobile phone may display the shooting screen 3 captured by the camera C in the preview interface of the camera APP in the foreground, and close the camera B to complete the switching process from the camera B to the camera C. This avoids problems such as the interruption of the shooting screen caused by directly closing the camera B and then turning on the camera C, and improves the user experience when the mobile phone switches cameras.
  • the mobile phone can also perform a smooth transition during the process of switching from the shooting screen 2 to the shooting screen 3, so as to avoid a visual jump from the shooting screen 2 to the shooting screen 3 and enhance the user experience.
  • Since the shooting screen 3 includes the contents of the shooting screen 2, as shown in FIG. 16, after the mobile phone turns on the camera C, it can enlarge the shooting screen 3 captured by the camera C through digital zoom, so as to obtain, within the shooting screen 3, a screen 2' having the same content as the shooting screen 2.
  • the mobile phone can turn off the camera B being used.
  • the mobile phone can gradually restore the screen 2' to the standard focal length, that is, gradually reduce the focal length of the shooting screen 3 until the shooting screen 3 is completely displayed in the preview interface of the camera APP, completing the switching process from the camera B to the camera C.
  • Generally, the resolution of the shooting screen 2 captured by the camera B is higher. Therefore, during the above switching process, after the mobile phone turns on the camera C and obtains the shooting screen 3, it can also perform image fusion on the shooting screen 3 and the shooting screen 2, first display the part of the fused image that is the same as the shooting screen 2, and then gradually transition to the full content of the shooting screen 3, completing the switching process from the camera B to the camera C.
  • After switching, the new camera can be used as the first camera to continue performing the above steps S602-S606 in a loop, so as to dynamically select the appropriate camera for shooting according to the shooting target in the shooting screen.
  • the camera B may be turned on first, and the shooting screen 2 captured by the camera B is displayed. Furthermore, the mobile phone can recognize the shooting target in the shooting screen 2.
  • Taking the shooting target as the car 1701 as an example, after the mobile phone determines that the shooting target is the car 1701, it can detect the position and size of the car 1701 in each frame of the shooting picture 2 in real time. Then, during the movement of the car 1701, if the mobile phone detects that the car 1701 coincides with the boundary of the shooting picture 2, it means that the shooting picture 2 can no longer fully display the car 1701, so the mobile phone can switch the camera B to the camera C with a larger FOV. At this time, the mobile phone can display the shooting frame 3 captured by the camera C. Since the FOV of the camera C is greater than the FOV of the camera B, the car 1701 is displayed more completely in the shooting frame 3, thereby improving the shooting effect of the shooting target.
  • camera B may be turned on first, and shooting picture 2 captured by camera B is displayed. The mobile phone can then recognize the shooting target in shooting picture 2. Taking the user 1801 as the shooting target as an example, after the mobile phone determines that the shooting target is the user 1801, it can detect the position and size of the user 1801 in each frame of shooting picture 2 in real time.
  • if the mobile phone detects that the target ratio of the user 1801 in shooting picture 2 is below the lower limit of the target ratio range, the display area of the user 1801 in shooting picture 2 is too small, so the mobile phone can switch the currently used camera B to a camera with a smaller FOV.
  • the mobile phone can display shooting picture 1 captured by camera A. Since the FOV of camera A is smaller than that of camera B, the target ratio of the user 1801 in shooting picture 1 increases.
  • if the mobile phone detects that the target ratio of the user 1801 in shooting picture 2 exceeds the upper limit of the target ratio range, the display area of the user 1801 in shooting picture 2 is too large. The mobile phone can therefore switch from camera B to camera C, which has a larger FOV. Since the FOV of camera C is larger than that of camera B, the target ratio of the user 1801 in shooting picture 3 captured by camera C decreases, improving the shooting effect of the shooting target.
  • the switching process among the three cameras (the first camera, the second camera, and the third camera) is used as an example. It can be understood that the camera switching method provided in this embodiment of the application may be applied to a scenario in which two cameras are switched, or a scenario in which three or more cameras are switched; this is not limited in the embodiments of the present application.
  • the mobile phone can determine the current shooting target in the shooting scene, determine the target camera with the better shooting effect according to parameters such as the position and size of the shooting target, and then smoothly transition from the current shooting picture to the shooting picture captured by the target camera. This realizes target-guided automatic camera switching during shooting and improves the shooting effect of the shooting target.
  • an embodiment of the present application discloses a terminal.
  • the terminal is configured to implement the methods described in the foregoing method embodiments, and includes a display unit 1901, a receiving unit 1902, a determining unit 1903, and a switching unit 1904.
  • the display unit 1901 is used to support the terminal to execute the process S602 in FIG. 6A;
  • the receiving unit 1902 is used to support the terminal to execute the process S601 in FIG. 6A;
  • the determining unit 1903 is used to support the terminal in executing the processes S603-S604 in FIG. 6A and FIG. 6B;
  • the switching unit 1904 is used to support the terminal to execute the process S605 or S606 in FIG. 6A.
  • for all relevant content of each step involved in the above method embodiments, reference may be made to the functional descriptions of the corresponding functional modules; details are not repeated here.
  • the embodiments of the present application disclose a terminal including a processor, and a memory, an input device, and an output device connected to the processor.
  • the input device and the output device may be integrated into one device.
  • a touch-sensitive surface may be used as an input device
  • a display screen may be used as an output device
  • the touch-sensitive surface and the display screen may be integrated into a touch screen.
  • the foregoing terminal may include: at least two cameras 2000; a touch screen 2001, the touch screen 2001 including a touch-sensitive surface 2006 and a display screen 2007; one or more processors 2002; a memory 2003; one or more application programs (not shown); and one or more computer programs 2004, where the foregoing components may be connected through one or more communication buses 2005.
  • the one or more computer programs 2004 are stored in the memory 2003 and configured to be executed by the one or more processors 2002.
  • the one or more computer programs 2004 include instructions, and the instructions can be used to execute the steps in FIG. 6A, FIG. 6B, and the corresponding embodiments.
  • Each functional unit in each of the embodiments of the present application may be integrated into one processing unit, or each of the units may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • when the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium.
  • the technical solutions of the embodiments of the present application, or the part contributing to the existing technology, or all or part of the technical solutions, may essentially be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application.
  • the foregoing storage media include media that can store program code, such as flash memory, removable hard disks, read-only memories, random access memories, magnetic disks, or optical discs.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Studio Devices (AREA)

Abstract

Embodiments of the present application disclose a method for a terminal to switch cameras, and a terminal, relating to the field of terminals. When a terminal is provided with multiple cameras, the cameras can be switched automatically and dynamically according to the shooting target, thereby improving the shooting effect of the shooting target. The method includes: in response to a first operation by which a user enables a shooting function, the terminal displays a first shooting picture captured by a first camera, the first shooting picture including a shooting target; if the shooting target cannot be displayed in full in the first shooting picture, the terminal switches the first shooting picture to a second shooting picture captured by a second camera and turns off the first camera, the FOV of the second camera being larger than the FOV of the first camera.

Description

一种终端切换摄像头的方法及终端 技术领域
本申请涉及终端领域,尤其涉及一种终端切换摄像头的方法及终端。
背景技术
视场角(field of view,FOV)又可称为视场,视场角的大小决定了光学仪器(例如摄像头)的视野范围。当摄像头的FOV越大时,拍摄画面的视野范围越大;当摄像头的FOV越小时,拍摄画面的视野范围越小。
目前,许多手机等移动终端上都安装了双摄像头以提高拍摄质量。例如,可以在手机上安装FOV较大的广角摄像头1和FOV较小的长焦摄像头2。用户可以通过手动变焦触发手机切换摄像头。例如,如果检测到用户将拍摄时的焦距放大了2倍或2倍以上,则手机可自动切换为长焦摄像头2。又或者,手机还可以根据环境光的强度切换摄像头。例如,当环境光的强度较弱时,手机可自动切换为广角摄像头1获得更多的光线。而用户在拍摄时更关心的是拍摄画面中拍摄目标的拍摄效果,但上述摄像头的切换方法都无法保证拍摄目标的拍摄效果,使得拍摄效果不佳。
发明内容
本申请提供一种终端切换摄像头的方法及终端,当终端安装有多个摄像头时,可根据拍摄目标动态的自动切换摄像头,从而提高拍摄目标的拍摄效果。
为达到上述目的,本申请采用如下技术方案:
第一方面,本申请提供一种终端切换摄像头的方法,包括:响应于用户打开拍摄功能的第一操作,终端显示第一摄像头捕捉到的第一拍摄画面,第一拍摄画面中包括拍摄目标;若拍摄目标无法完整显示在第一拍摄画面中,说明当前使用的第一摄像头的FOV较小,因此,终端可将第一拍摄画面切换为第二摄像头(第二摄像头的FOV大于第一摄像头的FOV)捕捉到的第二拍摄画面,并关闭第一摄像头。也就是说,终端可以根据拍摄目标在拍摄画面中的位置和大小,确定出能够保证拍摄目标拍摄效果的摄像头。从而在拍摄过程中自动、智能的帮助用户切换合适的摄像头拍摄拍摄目标,提高拍摄目标的拍摄效果。
在一种可能的设计方法中,在终端确定第一拍摄画面中的拍摄目标之后,还包括:终端确定上述拍摄目标或拍摄目标的占位区是否与第一拍摄画面的边界线发生重合,拍摄目标位于所述占位区中;若发生重合,则终端确定拍摄目标无法完整显示在第一拍摄画面中;若未发生重合,则终端确定拍摄目标能够完整显示在第一拍摄画面中。
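The boundary-coincidence test described in the paragraph above (whether the shooting target, or the occupancy box containing it, coincides with a boundary line of the first shooting picture) can be sketched as a simple rectangle check. The (left, top, right, bottom) box layout, pixel values, and the function name are illustrative assumptions, not part of the patent:

```python
def fully_visible(target_box, frame_box):
    """Return True if the target's occupancy box lies strictly inside the
    frame, i.e. it does not coincide with (touch or cross) any boundary line.

    Boxes are (left, top, right, bottom) tuples; the layout is illustrative.
    """
    tl, tt, tr, tb = target_box
    fl, ft, fr, fb = frame_box
    # Any shared or crossed edge means the target cannot be displayed in full.
    return tl > fl and tt > ft and tr < fr and tb < fb


# Shooting picture 2 of camera B, in hypothetical pixel coordinates.
frame2 = (0, 0, 1920, 1080)
print(fully_visible((400, 300, 1500, 900), frame2))  # inside -> True
print(fully_visible((0, 300, 1500, 900), frame2))    # touches left edge -> False
```

When this check returns False, the terminal concludes the target cannot be displayed in full and considers switching to a larger-FOV camera.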
在一种可能的设计方法中,在终端显示第一摄像头捕捉到的第一拍摄画面之后,还包括:终端确定第一拍摄画面中的拍摄目标。
在一种可能的设计方法中,在终端确定第一拍摄画面中的拍摄目标之后,还包括:终端接收用户向第一拍摄画面中输入的第二操作,第二操作用于选中第一拍摄画面中的拍摄目标;其中,终端确定第一拍摄画面中的拍摄目标,包括:响应于第二操作,终端提取第 一拍摄画面中用户选中位置处的图像特征,并根据该图像特征确定出拍摄目标。
在一种可能的设计方法中,终端将第一拍摄画面切换为第二摄像头捕捉到的第二拍摄画面,包括:终端在后台打开第二摄像头以捕捉第二拍摄画面;终端通过数字变焦逐渐放大第二拍摄画面中的内容;当第二拍摄画面被放大后的内容与第一拍摄画面中的内容一致时,终端关闭第一摄像头,并在前台显示被放大的第二拍摄画面;终端逐渐恢复第二拍摄画面的标准焦距,直至终端完整的显示出第二拍摄画面。这样,终端在从第一拍摄画面切换至第二拍摄画面的过程中进行了平滑过渡,避免从第一拍摄画面跳转至第二拍摄画面带来的视觉突变,以提高用户体验的使用体验。
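A minimal sketch of the smooth hand-over described above, assuming (as a simplification) that the digital-zoom factor needed for picture 2 to match picture 1 is simply the ratio of the two FOVs: picture 2 is magnified step by step until its content matches picture 1, the first camera is closed, and the zoom is then eased back to the standard focal length (factor 1.0):

```python
def transition_zoom_steps(fov1_deg, fov2_deg, steps=5):
    """Digital-zoom factors for a smooth hand-over from camera 1 to a
    larger-FOV camera 2: first magnify picture 2 until its content matches
    picture 1 (ramp_up), then relax back to the standard focal length
    (ramp_down). The linear FOV ratio is a simplifying assumption.
    """
    match = fov2_deg / fov1_deg          # factor at which picture 2 ~ picture 1
    ramp_up = [1.0 + (match - 1.0) * i / steps for i in range(1, steps + 1)]
    ramp_down = [match - (match - 1.0) * i / steps for i in range(1, steps + 1)]
    return ramp_up, ramp_down


up, down = transition_zoom_steps(45.0, 90.0, steps=4)
# picture 2 is magnified 1.25x, 1.5x, 1.75x, 2.0x, then relaxed back to 1.0x
print(up, down)
```

The first camera would be closed between the two ramps, once the magnified picture 2 is what the preview interface already shows.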
在一种可能的设计方法中,若上述拍摄目标能够完整显示在第一拍摄画面中,则该方法还包括:终端根据拍摄目标在第一拍摄画面中的大小和/或位置,确定准备切换的目标摄像头;终端将第一拍摄画面切换为该目标摄像头捕捉到的拍摄画面。
第二方面,本申请提供一种终端切换摄像头的方法,终端包括至少两个摄像头,该方法包括:响应于用户打开拍摄功能的第一操作,终端显示第一摄像头捕捉到的第一拍摄画面,第一拍摄画面中包括拍摄目标;若该拍摄目标能够完整显示在第一拍摄画面中,则终端根据该拍摄目标在第一拍摄画面中的大小和/或位置,确定拍摄效果更优的目标摄像头;进而,终端可将第一拍摄画面切换为该目标摄像头捕捉到的拍摄画面,并关闭第一摄像头。
在一种可能的设计方法中,终端根据上述拍摄目标在第一拍摄画面中的位置,确定准备切换的目标摄像头,包括:终端计算该拍摄目标是否完整显示在第一拍摄画面中预设的最佳拍摄区中,该最佳拍摄区位于第一拍摄画面的中心;若该拍摄目标无法完整显示在该最佳拍摄区中,则终端将第二摄像头确定为目标摄像头,第二摄像头的FOV大于第一摄像头的FOV。
在一种可能的设计方法中,终端根据上述拍摄目标在第一拍摄画面中的大小,确定准备切换的目标摄像头,包括:终端计算该拍摄目标在第一拍摄画面中的目标占比;若该目标占比大于预设的目标占比范围的上限,则终端将第二摄像头确定为目标摄像头,第二摄像头的FOV大于第一摄像头的FOV;若该目标占比小于预设的目标占比范围的下限,则终端将第三摄像头确定为目标摄像头,第三摄像头的FOV小于第一摄像头的FOV。
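The target-ratio decision above can be sketched as a pair of threshold tests. The 50%-80% default window borrows the example range given later in the text, and the return labels are illustrative:

```python
def pick_target_camera(ratio, lower=0.5, upper=0.8):
    """Decide which camera to switch to from the target's ratio in the
    first shooting picture. 'wider' stands for the second camera (larger
    FOV), 'narrower' for the third camera (smaller FOV).
    """
    if ratio > upper:
        return "wider"     # target too large -> second camera, larger FOV
    if ratio < lower:
        return "narrower"  # target too small -> third camera, smaller FOV
    return "keep"          # ratio already in range -> stay on first camera


print(pick_target_camera(0.9))   # -> wider
print(pick_target_camera(0.3))   # -> narrower
print(pick_target_camera(0.6))   # -> keep
```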
在一种可能的设计方法中,终端根据上述拍摄目标在第一拍摄画面中的大小和位置,确定准备切换的目标摄像头,包括:终端确定第一拍摄画面、第二拍摄画面以及第三拍摄画面之间的位置关系,第二拍摄画面为根据第一拍摄画面计算的第二摄像头打开时拍摄的拍摄画面,第三拍摄画面为根据第一拍摄画面计算的第三摄像头打开时拍摄的拍摄画面,第二摄像头的FOV大于第一摄像头的FOV,第三摄像头的FOV小于第一摄像头的FOV;终端根据该拍摄目标在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的大小和位置,确定准备切换的目标摄像头。
在一种可能的设计方法中,终端根据上述拍摄目标在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的大小和位置,确定准备切换的目标摄像头,包括:终端从第一摄像头、第二摄像头以及第三摄像头中确定至少一个候选摄像头,该候选摄像头的拍摄画面中能够完整显示该拍摄目标;终端从该候选摄像头中确定目标摄像头,该拍摄目标在该目标摄像头的拍摄画面中的目标占比满足预设条件。
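The two-stage selection above — first keep only cameras whose pictures can display the whole target, then pick the candidate whose target ratio satisfies the preset condition — might look like the following sketch. The dictionary layout and the fallback to the largest-ratio candidate are assumptions:

```python
def choose_camera(cameras, lower=0.5, upper=0.8):
    """Two-stage selection: filter to candidate cameras that show the whole
    target, then prefer a candidate whose target ratio falls in the preset
    range; otherwise fall back to the candidate with the largest ratio.

    cameras: {name: {"visible": bool, "ratio": float}}  (illustrative shape)
    """
    candidates = {n: c for n, c in cameras.items() if c["visible"]}
    if not candidates:
        return None
    in_range = [n for n, c in candidates.items() if lower <= c["ratio"] <= upper]
    if in_range:
        return in_range[0]
    return max(candidates, key=lambda n: candidates[n]["ratio"])


cams = {
    "A": {"visible": False, "ratio": 0.95},  # smallest FOV: target clipped
    "B": {"visible": True,  "ratio": 0.65},
    "C": {"visible": True,  "ratio": 0.35},
}
print(choose_camera(cams))  # -> B
```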
在一种可能的设计方法中,上述目标摄像头为第二摄像头,第二摄像头的FOV大于第一摄像头的FOV;其中,终端将第一拍摄画面切换为该目标摄像头捕捉到的拍摄画面,包括:终端在后台打开第二摄像头以捕捉第二拍摄画面;终端通过数字变焦逐渐放大第二拍摄画面中的内容;当第二拍摄画面被放大后的内容与第一拍摄画面中的内容一致时,终端关闭第一摄像头,并在前台显示被放大的第二拍摄画面;终端逐渐恢复第二拍摄画面的标准焦距,直至终端完整的显示出第二拍摄画面。
在一种可能的设计方法中,上述目标摄像头为第三摄像头,第三摄像头的FOV小于第一摄像头的FOV;其中,终端将第一拍摄画面切换为该目标摄像头捕捉到的拍摄画面,包括:终端在后台打开第三摄像头以捕捉第三拍摄画面;终端通过数字变焦逐渐放大第一拍摄画面中的内容;当第一拍摄画面被放大后的内容与第三拍摄画面中的内容一致时,终端关闭第一摄像头,并在前台显示第三拍摄画面。
第三方面,本申请提供一种终端,包括:接收单元、显示单元、确定单元和切换单元,其中,接收单元用于:接收用户打开拍摄功能的第一操作,显示单元用于:显示第一摄像头捕捉到的第一拍摄画面,第一拍摄画面中包括拍摄目标;切换单元用于:若该拍摄目标无法完整显示在第一拍摄画面中,则将第一拍摄画面切换为第二摄像头捕捉到的第二拍摄画面,关闭第一摄像头,第二摄像头的FOV大于第一摄像头的FOV。
在一种可能的设计方法中,确定单元用于:确定该拍摄目标或该拍摄目标的占位区是否与第一拍摄画面的边界线发生重合,该拍摄目标位于该占位区中;若发生重合,则确定该拍摄目标无法完整显示在第一拍摄画面中;若未发生重合,则确定该拍摄目标能够完整显示在第一拍摄画面中。
在一种可能的设计方法中,确定单元还用于:确定第一拍摄画面中的拍摄目标。
在一种可能的设计方法中,接收单元还用于:接收用户向第一拍摄画面中输入的第二操作,第二操作用于选中第一拍摄画面中的拍摄目标;确定单元具体用于:响应于第二操作,提取第一拍摄画面中用户选中位置处的图像特征,并根据该图像特征确定该拍摄目标。
在一种可能的设计方法中,切换单元具体用于:在后台打开第二摄像头以捕捉第二拍摄画面;通过数字变焦逐渐放大第二拍摄画面中的内容;当第二拍摄画面被放大后的内容与第一拍摄画面中的内容一致时,关闭第一摄像头,并在前台显示被放大的第二拍摄画面;逐渐恢复第二拍摄画面的标准焦距,直至终端完整的显示出第二拍摄画面。
在一种可能的设计方法中,确定单元还用于:若该拍摄目标能够完整显示在第一拍摄画面中,根据该拍摄目标在第一拍摄画面中的大小和/或位置,确定准备切换的目标摄像头;切换单元还用于:将第一拍摄画面切换为该目标摄像头捕捉到的拍摄画面。
第四方面,本申请提供一种终端,包括:接收单元、显示单元、确定单元和切换单元,其中,接收单元用于:接收用户打开拍摄功能的第一操作;显示单元用于:显示第一摄像头捕捉到的第一拍摄画面,第一拍摄画面中包括拍摄目标;确定单元用于:若该拍摄目标能够完整显示在第一拍摄画面中,则根据该拍摄目标在第一拍摄画面中的大小和/或位置,确定准备切换的目标摄像头;切换单元用于:将第一拍摄画面切换为该目标摄像头捕捉到的拍摄画面,关闭第一摄像头。
在一种可能的设计方法中,确定单元具体用于:计算该拍摄目标是否完整显示在第一 拍摄画面中预设的最佳拍摄区中,该最佳拍摄区位于第一拍摄画面的中心;若该拍摄目标无法完整显示在该最佳拍摄区中,则将第二摄像头确定为目标摄像头,第二摄像头的FOV大于第一摄像头的FOV。
在一种可能的设计方法中,确定单元具体用于:计算该拍摄目标在第一拍摄画面中的目标占比;若该目标占比大于预设的目标占比范围的上限,则将第二摄像头确定为目标摄像头,第二摄像头的FOV大于第一摄像头的FOV;若该目标占比小于预设的目标占比范围的下限,则将第三摄像头确定为目标摄像头,第三摄像头的FOV小于第一摄像头的FOV。
在一种可能的设计方法中,确定单元具体用于:确定第一拍摄画面、第二拍摄画面以及第三拍摄画面之间的位置关系,第二拍摄画面为根据第一拍摄画面计算的第二摄像头打开时拍摄的拍摄画面,第三拍摄画面为根据第一拍摄画面计算的第三摄像头打开时拍摄的拍摄画面,第二摄像头的FOV大于第一摄像头的FOV,第三摄像头的FOV小于第一摄像头的FOV;根据该拍摄目标在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的大小和位置,确定准备切换的目标摄像头。
在一种可能的设计方法中,确定单元具体用于:从第一摄像头、第二摄像头以及第三摄像头中确定至少一个候选摄像头,该候选摄像头的拍摄画面中能够完整显示该拍摄目标;从该候选摄像头中确定目标摄像头,该拍摄目标在该目标摄像头的拍摄画面中的目标占比满足预设条件。
在一种可能的设计方法中,切换单元具体用于:在后台打开第二摄像头以捕捉第二拍摄画面;通过数字变焦逐渐放大第二拍摄画面中的内容;当第二拍摄画面被放大后的内容与第一拍摄画面中的内容一致时,关闭第一摄像头,并在前台显示被放大的第二拍摄画面;逐渐恢复第二拍摄画面的标准焦距,直至终端完整的显示出第二拍摄画面。
在一种可能的设计方法中,切换单元具体用于:在后台打开第三摄像头以捕捉第三拍摄画面;通过数字变焦逐渐放大第一拍摄画面中的内容;当第一拍摄画面被放大后的内容与第三拍摄画面中的内容一致时,关闭第一摄像头,并在前台显示第三拍摄画面。
第五方面,本申请提供一种终端,包括:至少两个摄像头、触摸屏、一个或多个处理器、存储器、以及一个或多个程序;其中,处理器与存储器耦合,上述一个或多个程序被存储在存储器中,当终端运行时,该处理器执行该存储器存储的一个或多个程序,以使终端执行上述任一项所述的终端切换摄像头的方法。
第六方面,本申请提供一种计算机存储介质,包括计算机指令,当计算机指令在终端上运行时,使得终端执行上述任一项所述的终端切换摄像头的方法。
第七方面,本申请提供一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行上述任一项所述的终端切换摄像头的方法。
可以理解地,上述提供的第三方面至第五方面所述的终端、第六方面所述的计算机存储介质,以及第七方面所述的计算机程序产品均用于执行上文所提供的对应的方法,因此,其所能达到的有益效果可参考上文所提供的对应的方法中的有益效果,此处不再赘述。
附图说明
图1为本申请实施例提供的一种终端的结构示意图一;
图2为本申请实施例提供的一种终端的结构示意图二;
图3为本申请实施例提供的一种多摄像头的拍摄原理示意图一;
图4为本申请实施例提供的一种终端内操作系统的结构示意图;
图5为本申请实施例提供的一种多摄像头的拍摄原理示意图二;
图6A为本申请实施例提供的一种终端切换摄像头的方法的流程示意图一;
图6B为本申请实施例提供的一种终端切换摄像头的方法的流程示意图二;
图7为本申请实施例提供的一种终端切换摄像头的方法的场景示意图一;
图8为本申请实施例提供的一种终端切换摄像头的方法的场景示意图二;
图9为本申请实施例提供的一种终端切换摄像头的方法的场景示意图三;
图10为本申请实施例提供的一种终端切换摄像头的方法的场景示意图四;
图11为本申请实施例提供的一种终端切换摄像头的方法的场景示意图五;
图12为本申请实施例提供的一种终端切换摄像头的方法的场景示意图六;
图13为本申请实施例提供的一种终端切换摄像头的方法的场景示意图七;
图14为本申请实施例提供的一种终端切换摄像头的方法的场景示意图八;
图15为本申请实施例提供的一种终端切换摄像头的方法的场景示意图九;
图16为本申请实施例提供的一种终端切换摄像头的方法的场景示意图十;
图17为本申请实施例提供的一种终端切换摄像头的方法的场景示意图十一;
图18为本申请实施例提供的一种终端切换摄像头的方法的场景示意图十二;
图19为本申请实施例提供的一种终端的结构示意图三;
图20为本申请实施例提供的一种终端的结构示意图四。
具体实施方式
下面将结合附图对本申请实施例的实施方式进行详细描述。
本申请实施例提供的一种切换摄像头的方法可应用于终端。示例性的,该终端可以为手机、平板电脑、桌面型、膝上型、笔记本电脑、超级移动个人计算机(Ultra-mobile Personal Computer,UMPC)、手持计算机、上网本、个人数字助理(Personal Digital Assistant,PDA)、可穿戴电子设备、虚拟现实设备等,本申请实施例中对终端的具体形式不做特殊限制。
图1是本发明实施例的终端100的结构框图。
终端100可以包括处理器110,外部存储器接口120,内部存储器121,USB接口130,充电管理模块140,电源管理模块141,电池142,天线1,天线2,射频模块150,通信模块160,音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,传感器模块180,按键190,马达191,指示器192,摄像头193,显示屏194,以及SIM卡接口195等。其中传感器模块可以包括压力传感器180A,陀螺仪传感器180B,气压传感器180C,磁传感器180D,加速度传感器180E,距离传感器180F,接近光传感器180G,指纹传感器180H,温度传感器180J,触摸传感器180K,环境光传感器180L,骨传导传感器180M等。
本发明实施例示意的结构并不构成对终端100的限定。可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器110可以包括一个或多个处理单元,例如:处理器110可以包括应用处理器 (application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(Neural-network Processing Unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是指挥终端100的各个部件按照指令协调工作的决策者。是终端100的神经中枢和指挥中心。控制器根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器110中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器中的存储器为高速缓冲存储器,可以保存处理器刚用过或循环使用的指令或数据。如果处理器需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器的等待时间,因而提高了系统的效率。
在一些实施例中,处理器110可以包括接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
I2C接口是一种双向同步串行总线,包括一根串行数据线(serial data line,SDA)和一根串行时钟线(derail clock line,SCL)。在一些实施例中,处理器可以包含多组I2C总线。处理器可以通过不同的I2C总线接口分别耦合触摸传感器,充电器,闪光灯,摄像头等。例如:处理器可以通过I2C接口耦合触摸传感器,使处理器与触摸传感器通过I2C总线接口通信,实现终端100的触摸功能。
I2S接口可以用于音频通信。在一些实施例中,处理器可以包含多组I2S总线。处理器可以通过I2S总线与音频模块耦合,实现处理器与音频模块之间的通信。在一些实施例中,音频模块可以通过I2S接口向通信模块传递音频信号,实现通过蓝牙耳机接听电话的功能。
PCM接口也可以用于音频通信,将模拟信号抽样,量化和编码。在一些实施例中,音频模块与通信模块可以通过PCM总线接口耦合。在一些实施例中,音频模块也可以通过PCM接口向通信模块传递音频信号,实现通过蓝牙耳机接听电话的功能。所述I2S接口和所述PCM接口都可以用于音频通信,两种接口的采样速率不同。
UART接口是一种通用串行数据总线,用于异步通信。该总线为双向通信总线。它将要传输的数据在串行通信与并行通信之间转换。在一些实施例中,UART接口通常被用于连接处理器与通信模块160。例如:处理器通过UART接口与蓝牙模块通信,实现蓝牙功能。在一些实施例中,音频模块可以通过UART接口向通信模块传递音频信号,实现通过蓝牙耳机播放音乐的功能。
MIPI接口可以被用于连接处理器与显示屏,摄像头等外围器件。MIPI接口包括摄像头串行接口(camera serial interface,CSI),显示屏串行接口(display serial interface,DSI) 等。在一些实施例中,处理器和摄像头通过CSI接口通信,实现终端100的拍摄功能。处理器和显示屏通过DSI接口通信,实现终端100的显示功能。
GPIO接口可以通过软件配置。GPIO接口可以配置为控制信号,也可配置为数据信号。在一些实施例中,GPIO接口可以用于连接处理器与摄像头,显示屏,通信模块,音频模块,传感器等。GPIO接口还可以被配置为I2C接口,I2S接口,UART接口,MIPI接口等。
USB接口130可以是Mini USB接口,Micro USB接口,USB Type C接口等。USB接口可以用于连接充电器为终端100充电,也可以用于终端100与外围设备之间传输数据。也可以用于连接耳机,通过耳机播放音频。还可以用于连接其他电子设备,例如AR设备等。
本发明实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对终端100的结构限定。终端100可以采用本发明实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块140用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。在一些有线充电的实施例中,充电管理模块可以通过USB接口接收有线充电器的充电输入。在一些无线充电的实施例中,充电管理模块可以通过终端100的无线充电线圈接收无线充电输入。充电管理模块为电池充电的同时,还可以通过电源管理模块141为终端设备供电。
电源管理模块141用于连接电池142,充电管理模块140与处理器110。电源管理模块接收所述电池和/或充电管理模块的输入,为处理器,内部存储器,外部存储器,显示屏,摄像头,和通信模块等供电。电源管理模块还可以用于监测电池容量,电池循环次数,电池健康状态(漏电,阻抗)等参数。在一些实施例中,电源管理模块141也可以设置于处理器110中。在一些实施例中,电源管理模块141和充电管理模块也可以设置于同一个器件中。
终端100的无线通信功能可以通过天线模块1,天线模块2射频模块150,通信模块160,调制解调器以及基带处理器等实现。
天线1和天线2用于发射和接收电磁波信号。终端100中的每个天线可用于覆盖单个或多个通信频带。不同的天线还可以复用,以提高天线的利用率。例如:可以将蜂窝网天线复用为无线局域网分集天线。在一些实施例中,天线可以和调谐开关结合使用。
射频模块150可以提供应用在终端100上的包括2G/3G/4G/5G等无线通信的解决方案的通信处理模块。射频模块可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(Low Noise Amplifier,LNA)等。射频模块由天线1接收电磁波,并对接收的电磁波进行滤波,放大等处理,传送至调制解调器进行解调。射频模块还可以对经调制解调器调制后的信号放大,经天线1转为电磁波辐射出去。在一些实施例中,射频模块150的至少部分功能模块可以被设置于处理器150中。在一些实施例中,射频模块150的至少部分功能模块可以与处理器110的至少部分模块被设置在同一个器件中。
调制解调器可以包括调制器和解调器。调制器用于将待发送的低频基带信号调制成中高频信号。解调器用于将接收的电磁波信号解调为低频基带信号。随后解调器将解调得到 的低频基带信号传送至基带处理器处理。低频基带信号经基带处理器处理后,被传递给应用处理器。应用处理器通过音频设备(不限于扬声器,受话器等)输出声音信号,或通过显示屏显示图像或视频。在一些实施例中,调制解调器可以是独立的器件。在一些实施例中,调制解调器可以独立于处理器,与射频模块或其他功能模块设置在同一个器件中。
通信模块160可以提供应用在终端100上的包括无线局域网(wireless local area networks,WLAN),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案的通信处理模块。通信模块160可以是集成至少一个通信处理模块的一个或多个器件。通信模块经由天线2接收电磁波,将电磁波信号调频以及滤波处理,将处理后的信号发送到处理器。通信模块160还可以从处理器接收待发送的信号,对其进行调频,放大,经天线2转为电磁波辐射出去。
在一些实施例中,终端100的天线1和射频模块耦合,天线2和通信模块耦合,使得终端100可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。所述GNSS可以包括全球卫星定位系统(global positioning system,GPS),全球导航卫星系统(global navigation satellite system,GLONASS),北斗卫星导航系统(beidou navigation satellite system,BDS),准天顶卫星系统(quasi-zenith satellite system,QZSS))和/或星基增强系统(satellite based augmentation systems,SBAS)。
终端100通过GPU,显示屏194,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器110可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏194用于显示图像,视频等。显示屏包括显示面板。显示面板可以采用LCD(liquid crystal display,液晶显示屏),OLED(organic light-emitting diode,有机发光二极管),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode的,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,终端100可以包括1个或N个显示屏,N为大于1的正整数。
终端100可以通过ISP,摄像头193,视频编解码器,GPU,显示屏以及应用处理器等实现拍摄功能。
ISP用于处理摄像头反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头193中。
摄像头193用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。 感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,终端100可以包括1个或N个摄像头,N为大于1的正整数。
示例性的,终端100可以包括多个FOV不相同的摄像头。例如,如图2所示,可在终端100的后壳上设置摄像头A、摄像头B以及摄像头C。摄像头A、摄像头B以及摄像头C可呈横向、纵向或呈三角形排布。并且,摄像头A、摄像头B以及摄像头C具有不同的FOV。例如,摄像头A的FOV小于摄像头B的FOV,摄像头B的FOV小于摄像头C的FOV。
当FOV越大时摄像头能够捕捉到的拍摄画面的视野范围越大。并且,由于摄像头A、摄像头B以及摄像头C之间的距离较近,因此,可以认为这三个摄像头的拍摄画面的中心是重合的。那么,仍如图3所示,摄像头C捕捉到的拍摄画面3最大,摄像头B捕捉到的拍摄画面2小于拍摄画面3,摄像头A捕捉到的拍摄画面1最小,即拍摄画面3可包括拍摄画面2和拍摄画面1。
在拍摄过程中,可能会出现拍摄目标在上述拍摄画面1至拍摄画面3中动态变化的情况,在本申请实施例中,终端100可以根据拍摄目标在各个拍摄画面中的大小和位置,确定拍摄效果最佳的目标摄像头。这样,如果当前使用的摄像头不是该目标摄像头,则终端100可自动切换至目标摄像头进行拍摄,从而提高拍摄目标的拍摄效果。
另外,数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当终端100在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
视频编解码器用于对数字视频压缩或解压缩。终端100可以支持一种或多种视频编解码器。这样,终端100可以播放或录制多种编码格式的视频,例如:MPEG1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现终端100的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
外部存储器接口120可以用于连接外部存储卡,例如Micro SD卡,实现扩展终端100的存储能力。外部存储卡通过外部存储器接口与处理器通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器121可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器110通过运行存储在内部存储器121的指令,从而执行终端100的各种功能应用以及数据处理。存储器121可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储终端100使用过程中所创建的数据(比如音频数据,电话本等)等。此外,存储器121可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁 盘存储器件,闪存器件,其他易失性固态存储器件,通用闪存存储器(universal flash storage,UFS)等。
终端100可以通过音频模块170,扬声器170A,受话器170B,麦克风170C,耳机接口170D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
音频模块用于将数字音频信息转换成模拟音频信号输出,也用于将模拟音频输入转换为数字音频信号。音频模块还可以用于对音频信号编码和解码。在一些实施例中,音频模块可以设置于处理器110中,或将音频模块的部分功能模块设置于处理器110中。
扬声器170A,也称“喇叭”,用于将音频电信号转换为声音信号。终端100可以通过扬声器收听音乐,或收听免提通话。
受话器170B,也称“听筒”,用于将音频电信号转换成声音信号。当终端100接听电话或语音信息时,可以通过将受话器靠近人耳接听语音。
麦克风170C,也称“话筒”,“传声器”,用于将声音信号转换为电信号。当拨打电话或发送语音信息时,用户可以通过人嘴靠近麦克风发声,将声音信号输入到麦克风。终端100可以设置至少一个麦克风。在一些实施例中,终端100可以设置两个麦克风,除了采集声音信号,还可以实现降噪功能。在一些实施例中,终端100还可以设置三个,四个或更多麦克风,实现采集声音信号,降噪,还可以识别声音来源,实现定向录音功能等。
耳机接口170D用于连接有线耳机。耳机接口可以是USB接口,也可以是3.5mm的开放移动终端平台(open mobile terminal platform,OMTP)标准接口,美国蜂窝电信工业协会(cellular telecommunications industry association of the USA,CTIA)标准接口。
压力传感器180A用于感受压力信号,可以将压力信号转换成电信号。在一些实施例中,压力传感器可以设置于显示屏。压力传感器的种类很多,如电阻式压力传感器,电感式压力传感器,电容式压力传感器等。电容式压力传感器可以是包括至少两个具有导电材料的平行板。当有力作用于压力传感器,电极之间的电容改变。终端100根据电容的变化确定压力的强度。当有触摸操作作用于显示屏,终端100根据压力传感器检测所述触摸操作强度。终端100也可以根据压力传感器的检测信号计算触摸的位置。在一些实施例中,作用于相同触摸位置,但不同触摸操作强度的触摸操作,可以对应不同的操作指令。例如:当有触摸操作强度小于第一压力阈值的触摸操作作用于短消息应用图标时,执行查看短消息的指令。当有触摸操作强度大于或等于第一压力阈值的触摸操作作用于短消息应用图标时,执行新建短消息的指令。
陀螺仪传感器180B可以用于确定终端100的运动姿态。在一些实施例中,可以通过陀螺仪传感器确定终端100围绕三个轴(即,x,y和z轴)的角速度。陀螺仪传感器可以用于拍摄防抖。示例性的,当按下快门,陀螺仪传感器检测终端100抖动的角度,根据角度计算出镜头模组需要补偿的距离,让镜头通过反向运动抵消终端100的抖动,实现防抖。陀螺仪传感器还可以用于导航,体感游戏场景。
气压传感器180C用于测量气压。在一些实施例中,终端100通过气压传感器测得的气压值计算海拔高度,辅助定位和导航。
磁传感器180D包括霍尔传感器。终端100可以利用磁传感器检测翻盖皮套的开合。在一些实施例中,当终端100是翻盖机时,终端100可以根据磁传感器检测翻盖的开合。 进而根据检测到的皮套的开合状态或翻盖的开合状态,设置翻盖自动解锁等特性。
加速度传感器180E可检测终端100在各个方向上(一般为三轴)加速度的大小。当终端100静止时可检测出重力的大小及方向。还可以用于识别终端姿态,应用于横竖屏切换,计步器等应用。
距离传感器180F,用于测量距离。终端100可以通过红外或激光测量距离。在一些实施例中,拍摄场景,终端100可以利用距离传感器测距以实现快速对焦。
接近光传感器180G可以包括例如发光二极管(LED)和光检测器,例如光电二极管。发光二极管可以是红外发光二极管。通过发光二极管向外发射红外光。使用光电二极管检测来自附近物体的红外反射光。当检测到充分的反射光时,可以确定终端100附近有物体。当检测到不充分的反射光时,可以确定终端100附近没有物体。终端100可以利用接近光传感器检测用户手持终端100贴近耳朵通话,以便自动熄灭屏幕达到省电的目的。接近光传感器也可用于皮套模式,口袋模式自动解锁与锁屏。
环境光传感器180L用于感知环境光亮度。终端100可以根据感知的环境光亮度自适应调节显示屏亮度。环境光传感器也可用于拍照时自动调节白平衡。环境光传感器还可以与接近光传感器配合,检测终端100是否在口袋里,以防误触。
指纹传感器180H用于采集指纹。终端100可以利用采集的指纹特性实现指纹解锁,访问应用锁,指纹拍照,指纹接听来电等。
温度传感器180J用于检测温度。在一些实施例中,终端100利用温度传感器检测的温度,执行温度处理策略。例如,当温度传感器上报的温度超过阈值,终端100执行降低位于温度传感器附近的处理器的性能,以便降低功耗实施热保护。
触摸传感器180K,也称“触控面板”。可设置于显示屏。用于检测作用于其上或附近的触摸操作。可以将检测到的触摸操作传递给应用处理器,以确定触摸事件类型,并通过显示屏提供相应的视觉输出。
骨传导传感器180M可以获取振动信号。在一些实施例中,骨传导传感器可以获取人体声部振动骨块的振动信号。骨传导传感器也可以接触人体脉搏,接收血压跳动信号。在一些实施例中,骨传导传感器也可以设置于耳机中。音频模块170可以基于所述骨传导传感器获取的声部振动骨块的振动信号,解析出语音信号,实现语音功能。应用处理器可以基于所述骨传导传感器获取的血压跳动信号解析心率信息,实现心率检测功能。
按键190包括开机键,音量键等。按键可以是机械按键。也可以是触摸式按键。终端100接收按键输入,产生与终端100的用户设置以及功能控制有关的键信号输入。
马达191可以产生振动提示。马达可以用于来电振动提示,也可以用于触摸振动反馈。例如,作用于不同应用(例如拍照,音频播放等)的触摸操作,可以对应不同的振动反馈效果。作用于显示屏不同区域的触摸操作,也可对应不同的振动反馈效果。不同的应用场景(例如:时间提醒,接收信息,闹钟,游戏等)也可以对应不同的振动反馈效果。触摸振动反馈效果还可以支持自定义。
指示器192可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。
SIM卡接口195用于连接用户标识模块(subscriber identity module,SIM)。SIM卡可以 通过插入SIM卡接口,或从SIM卡接口拔出,实现和终端100的接触和分离。终端100可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口也可以兼容不同类型的SIM卡。SIM卡接口也可以兼容外部存储卡。终端100通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,终端100采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在终端100中,不能和终端100分离。
终端100的软件系统可以采用分层架构,事件驱动架构,微核架构,微服务架构,或云架构。本发明实施例以分层架构的Android系统为例,示例性说明终端100的软件结构。
图4是本发明实施例的终端100的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图4所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
这些应用程序可以是操作系统的系统级应用(例如,桌面、短信、通话、日历、联系人等),也可以是普通级别应用(例如,微信、淘宝等)。系统级应用一般指的是:该应用具有系统级权限,可以获取各种系统资源。普通级别应用一般指的是:该应用具有普通权限,可能无法获取某些系统资源,或者需要用户授权,才能获取一些系统资源。系统级应用可以为手机中预装的应用。普通级别应用可以为手机中预装的应用,也可以为后续用户自行安装的应用。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图4所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成的。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供终端100的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提 醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,终端振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库OpenGL ES,2D图形引擎SGL等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
OpenGL ES用于实现三维图形绘图,图像渲染,合成,和图层处理等。
SGL是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合捕获拍照场景,示例性说明终端100软件以及硬件的工作流程。
当触摸传感器接收到触摸操作,相应的硬件中断被发给内核层。内核层可将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头捕获每一帧拍摄画面。
如果终端100包括多个摄像头(例如上述摄像头A、摄像头B以及摄像头C),则终端100可先打开其中一个摄像头(例如摄像头A)。打开摄像头A后,终端可采集到摄像头A捕捉到的每一帧拍摄画面(后续实施例中称为拍摄画面1),并将拍摄画面1显示在显示屏194中。此时,终端可确定出拍摄画面1中的拍摄目标。例如,如图5所示,可以将拍摄画面1中识别出的人物作为拍摄目标501。
终端100确定拍摄目标501后,终端100可实时追踪拍摄目标501在拍摄画面1中的变化情况,例如,拍摄目标501的位置和大小等参数。此时,终端100虽然没有开启摄像头B和摄像头C,但由于每个摄像头的位置和FOV等参数都是一定的,因此,终端100除了可以确定出拍摄目标501在拍摄画面1中的位置和大小之外,还可以确定出拍摄目标501在摄像头B的拍摄画面(即拍摄画面2)中的位置和大小,以及拍摄目标501在摄像头C的拍摄画面(即拍摄画面3)中的位置和大小。
这样,终端100可以根据拍摄目标501在各个拍摄画面中的位置和大小,确定出能够 保证拍摄目标501拍摄效果的摄像头。例如,如果拍摄目标501已经溢出了摄像头A的拍摄画面1,则终端100可以将当前开启的摄像头A切换为FOV更大的摄像头B或摄像头C,从而在拍摄过程中自动、智能的帮助用户切换合适的摄像头拍摄拍摄目标,提高拍摄目标的拍摄效果。
需要说明的是,上述拍摄画面1是摄像头A在默认的标准焦距下通过摄像头A中所有感光元件生成的拍摄画面,同样,拍摄画面2是摄像头B在默认的标准焦距下通过摄像头B中所有感光元件生成的拍摄画面,拍摄画面3是摄像头C在默认的标准焦距下通过摄像头C中所有感光元件生成的拍摄画面。摄像头的标准焦距通常是该摄像头所支持的最小焦距。以拍摄画面3举例,用户可以手动增加摄像头C的焦距,此时终端100可通过数字变焦的方式截取拍摄画面3中的部分内容作为新的拍摄画面(例如拍摄画面3’),并将拍摄画面3’显示在预览界面中。由于拍摄画面3’是拍摄画面3中的部分内容,因此,拍摄画面3’所呈现的视野范围虽然变小,但拍摄画面3’的分辨率也随之下降。
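The digital zoom described above can be sketched as a center crop: picture 3' keeps only the central 1/zoom portion of picture 3, so its field of view shrinks and its effective resolution drops once the crop is upscaled for display. Pixel sizes below are hypothetical:

```python
def digital_zoom_crop(width, height, zoom):
    """Crop rectangle for digital zoom: picture 3' is the central 1/zoom
    portion of picture 3 (a smaller field of view, fewer source pixels).
    Returns (left, top, right, bottom) in picture 3's pixel coordinates.
    """
    assert zoom >= 1.0
    cw, ch = int(width / zoom), int(height / zoom)
    left = (width - cw) // 2
    top = (height - ch) // 2
    return left, top, left + cw, top + ch


# A 4000x3000 picture 3 at 2x digital zoom keeps only the central 2000x1500.
print(digital_zoom_crop(4000, 3000, 2.0))  # -> (1000, 750, 3000, 2250)
```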
为了便于理解,以下结合附图对本申请实施例提供的一种切换摄像头的方法进行具体介绍。以下实施例中均以手机作为终端举例说明。
图6A为本申请实施例提供的一种切换摄像头的方法的流程示意图。如图6A所示,该切换摄像头的方法可以包括:
S601、手机接收用户针对摄像头的第一操作,手机安装有三个FOV不相同的摄像头。
其中,上述第一操作可以是用户打开摄像头的任意操作。例如,第一操作具体可以是点击相机APP的图标,或者从其他APP中打开相机APP的操作。或者,第一操作也可以是用户开始录制视频的任意操作。例如,第一操作具体可以是用户打开相机APP后点击录制按钮的操作,本申请实施例对此不做任何限制。
需要说明的是,本申请实施例中手机可以安装有N(N为大于1的整数)个摄像头。这N个摄像头的FOV不相同。即手机使用不同摄像头拍摄出的拍摄画面的视野范围不同。这N个摄像头可以设置在手机的后壳上作为后置摄像头,也可以设置在手机的前面板上作为前置摄像头。为方便阐述本申请实施例提供的切换摄像头的方法,后续实施例中均以三个FOV不相同的摄像头A、B、C举例说明。
其中,摄像头A的FOV最小,摄像头C的FOV最大,摄像头B的FOV介于摄像头A的FOV和摄像头C的FOV之间。那么,如图3所示,手机使用摄像头A拍摄时得到的拍摄画面1的视野范围最小,手机使用摄像头C拍摄时得到的拍摄画面3的视野范围最大,手机使用摄像头B拍摄时得到的拍摄画面2的视野最大范围介于拍摄画面1的视野范围和拍摄画面3的视野范围之间。
S602、手机使用第一摄像头捕捉第一拍摄画面。
手机接收到上述第一操作后,可使用默认的摄像头或随机选择一个摄像头(即第一摄像头)进行拍摄。第一摄像头开始拍摄后可将捕捉到的拍摄画面(本申请中将第一摄像头捕捉到的画面称为第一拍摄画面)发送给手机,由手机将第一拍摄画面显示在显示屏中。
示例性的,如图7中的(a)所示,用户点击相机APP的图标701后,手机可检测出用户执行了用于打开摄像头的第一操作。进而,手机可从摄像头A、摄像头B以及摄像头C中选择一个摄像头作为第一摄像头开始工作。例如,手机可将FOV处于中间值的摄像头 B作为第一摄像头,进而通过调用摄像头驱动,启动摄像头B并使用摄像头B捕获第一拍摄画面。此时,如图7中的(b)所示,手机可以将摄像头B捕获到的每一帧第一拍摄画面702显示在相机APP的预览界面中。
另外,如图7中的(b)所示,手机也可以在相机APP的预览界面中设置一个开关按钮703。当检测到用户打开该开关按钮703后,手机可按照下述步骤S603-S606的方法自动确定并切换拍摄效果更优的摄像头,否则,手机可继续使用上述第一摄像头捕捉第一拍摄画面。
S603、手机确定第一拍摄画面中的拍摄目标。
由于第一摄像头捕捉到的第一拍摄画面702可实时显示在手机的显示屏中,因此,用户可手动在第一拍摄画面702中选择所需的拍摄目标。例如,如图8所示,可以在相机APP的预览界面中设置一个圈选按钮801。用户点击该圈选按钮801后,手机可提供一个位置和大小可变化的圈选框802给用户,用户可以拖动该圈选框802选择第一拍摄画面702中的拍摄目标。例如,如果检测到用户使用圈选框802圈选了第一拍摄画面702中的小汽车,则手机可提取圈选框802内的图像进行图像识别,得到圈选框802内的图像特征,从而确定第一拍摄画面702中的拍摄目标为小汽车。
当然,用户也可以通过点击、双击、长按、重压等方式在第一拍摄画面702中选中用户所希望的拍摄目标,本申请实施例对此不做任何限制。
又或者,手机在获取到上述第一拍摄画面后,也可以自动对第一拍摄画面进行图像识别,进而根据识别结果确定第一拍摄画面中的拍摄目标。例如,手机可将第一拍摄画面中识别出的人脸作为拍摄目标,或者,将位于第一拍摄画面中心的人或物作为拍摄目标,或者,将占据第一拍摄画面一定面积比例的人或物作为拍摄目标,本申请实施例对此不做任何限制。
手机确定出第一拍摄画面702中的拍摄目标后,可通过增加边框、高亮、语音或文字等提示方法提示用户第一拍摄画面702中的拍摄目标。另外,如果用户对手机自动确定出的拍摄目标不满意,用户也可以对拍摄目标进行修改。例如,用户可以拖动手机在拍摄目标周围显示的边框至用户所希望的位置,拖动后该边框内的图像可作为新的拍摄目标。
另外,手机确定出第一拍摄画面中的拍摄目标后,可实时追踪拍摄目标在第一拍摄画面中的位置和大小。例如,手机在确定上述第一拍摄画面702中的拍摄目标为小汽车时,可以得到该小汽车的图像特征,当手机使用摄像头B不断刷新第一拍摄画面702时,手机可根据该小汽车的图像特征在刷新后的第一拍摄画面702中确定该小汽车的位置和大小等参数。
S604、手机根据拍摄目标在第一拍摄画面中的位置和大小确定目标摄像头。
在步骤S604中,手机可以先根据拍摄目标在第一拍摄画面中的位置确定是否需要切换摄像头。如果需要切换摄像头,手机可进一步确定将哪个摄像头作为准备切换的目标摄像头。
示例性的,如图6B所示,手机可以先根据拍摄目标在第一拍摄画面中的位置确定拍摄目标能否完整显示在第一拍摄画面中。以第一拍摄画面为摄像头B拍摄的拍摄画面2举例,如图9中的(a)所示,手机确定出拍摄目标1001后,可进一步确定拍摄画面2的各 条边界线是否与拍摄目标1001发生重合。如果发生重合,说明拍摄目标无法完整显示在第一拍摄画面中,此时手机可继续执行图6B中的步骤S611。例如,图9中拍摄画面2的一条边界线903与拍摄目标1001发生重合,且重合部分占边界线903的30%以上,则说明拍摄目标1001无法完整显示在拍摄画面2中,导致拍摄目标1001的拍摄效果不佳。因此,如图9中的(b)所示,手机可将FOV更大的摄像头(例如摄像头C)确定为准备切换的目标摄像头,使得拍摄目标1001能够更加完整的显示在摄像头C拍摄的拍摄画面3中。
或者,由于拍摄目标1001在拍摄画面中一般是一个不规则的图形,因此,如图10中的(a)-(b)所示,手机可以为拍摄目标1001设置一个形状规则的占位区1002。占位区1002一般是一个矩形区域,可以容纳下拍摄目标1001。此时,手机可通过比较占位区1002与拍摄画面2各条边界的重合度,确定是否需要更换摄像头。例如,仍如图10中的(a)所示,当占位区1002的一条边界线与拍摄画面2的边界线903完全重合,说明拍摄目标1001无法完整显示在拍摄画面2中。那么,如图10中的(b)所示,手机可将FOV更大的摄像头(例如摄像头C)确定为上述目标摄像头,使得拍摄目标1001能够更加靠近拍摄画面3的中心显示。
当然,如果当前使用的第一摄像头(例如摄像头B)已经是FOV最大的摄像头了,说明在手机的最大视角范围内也无法完整显示目标摄像头,则手机无需再切换到其他摄像头,此时目标摄像头仍为正在使用的第一摄像头。
在本申请的另一些实施例中,仍如图6B所示,如果上述拍摄目标1001(或拍摄目标1001的占位区1002)与拍摄画面2的各条边界没有发生重合,则说明当前拍摄画面2能够完整显示出拍摄目标1001,则手机可继续执行步骤S612-S615中的任一步骤。也就是说,当拍摄目标能够完整显示在第一拍摄画面时,本申请实施例提供了四种方式确定准备切换的目标摄像头。
方式一(即步骤S612)
如果上述拍摄目标1001(或拍摄目标1001的占位区1002)与拍摄画面2的各条边界没有发生重合,则说明当前拍摄画面2能够完整显示出拍摄目标1001,不会出现拍摄出的拍摄目标1001显示不完整的情况,因此,手机可以将正在使用的第一摄像头(即摄像头B)继续作为目标摄像头。
方式二(即步骤S613)
在方式二中,由于摄像头拍摄出的拍摄画面边缘的图像容易产生畸变,因此,可以将每个拍摄画面中位于中心的区域视为最佳拍摄区。那么,如果拍摄画面2能够完整显示出拍摄目标1001,则手机还可以进一步根据拍摄目标1001的位置确定拍摄目标1001是否位于拍摄画面2的最佳拍摄内。如果拍摄目标1001位于拍摄画面2的最佳拍摄区内,则说明拍摄目标1001在拍摄画面2中的拍摄效果较好,因此,手机无需再切换到其他摄像头,此时目标摄像头仍为正在使用的摄像头B。
相应的,如果拍摄目标1001已经超出了拍摄画面2的最佳拍摄区,则继续使用摄像头B拍摄会使得拍摄目标1001的边缘产生畸变。因此,手机可将FOV更大的摄像头(例如摄像头C)确定为准备切换的目标摄像头,使得拍摄目标1001能够显示在最佳拍摄区更 大的拍摄画面3中。
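A sketch of the "best shooting region" test from method 2 above. Taking the region as the frame shrunk by a fixed margin on each side is an assumption; the text only says it is a central area chosen because images distort toward the picture's edges:

```python
def in_best_region(target_box, frame_box, margin=0.15):
    """Whether the target sits inside the central 'best shooting region',
    modeled here as the frame shrunk by `margin` (fraction) on each side.
    A target outside this region would trigger a switch to a larger-FOV
    camera so it can sit nearer the center of the new picture.
    """
    fl, ft, fr, fb = frame_box
    w, h = fr - fl, fb - ft
    bl, bt = fl + w * margin, ft + h * margin
    br, bb = fr - w * margin, fb - h * margin
    tl, tt, tr, tb = target_box
    return tl >= bl and tt >= bt and tr <= br and tb <= bb


print(in_best_region((300, 200, 1600, 900), (0, 0, 1920, 1080)))  # True
print(in_best_region((100, 200, 1600, 900), (0, 0, 1920, 1080)))  # False
```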
方式三(即步骤S614)
在方式三中,如果拍摄画面2能够完整显示出拍摄目标1001,则手机还可以进一步根据拍摄目标1001的大小计算拍摄目标1001在拍摄画面2中的目标占比。其中,该目标占比可以是拍摄目标1001与拍摄画面2之间的大小比例;或者,该目标占比可以是拍摄目标1001与拍摄画面2中最佳拍摄区之间的大小比例;或者,该目标占比可以是拍摄目标1001的占位区1002与拍摄画面2之间的大小比例;或者,该目标占比也可以是拍摄目标1001的占位区与拍摄画面2中最佳拍摄区之间的大小比例。
并且,手机还可以预先设置一个拍摄目标的目标占比范围,例如,当拍摄目标与拍摄画面的比例在50%-80%之间时,拍摄目标在拍摄画面中的拍摄效果最符合人眼视觉效果。因此,可将上述目标占比范围设置为50%-80%。这样,如果拍摄目标1001在拍摄画面2中的目标占比落入上述目标占比范围,则说明拍摄目标1001在拍摄画面2中的拍摄效果较好,因此,手机无需再切换到其他摄像头,此时目标摄像头仍为正在使用的摄像头B。
相应的,如果拍摄目标1001在拍摄画面2中的目标占比大于目标占比范围的上限值,则说明当前使用的摄像头B的FOV过小,手机可将FOV更大的摄像头(例如摄像头C)确定为上述目标摄像头。或者,如果拍摄目标1001在拍摄画面2中的目标占小于目标占比范围的下限值,则说明当前使用的摄像头B的FOV过大,手机可将FOV更小的摄像头(例如摄像头A)确定为上述目标摄像头。
方式四(即步骤S615)
在方式四中,虽然手机在执行上述步骤S601-S603的过程中只开启了第一摄像头(即摄像头B),但手机可以根据摄像头A、摄像头B以及摄像头C的FOV,确定出摄像头A拍摄出的拍摄画面1、摄像头B拍摄出的拍摄画面2以及摄像头C拍摄出的拍摄画面3之间的相对位置关系。例如,仍如图3所示,拍摄画面1位于拍摄画面2中心,且大小为拍摄画面2的70%,拍摄画面2位于拍摄画面3中心,且大小为拍摄画面3的70%。
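The nested-frame geometry above (each smaller-FOV picture centered in the next, with the 70% example size) can be computed directly. Treating the 70% as a linear scale on width and height, rather than on area, is an assumption, and results are rounded to whole pixels:

```python
def inner_frame(outer, scale=0.7):
    """Rectangle of a centered inner picture whose width and height are
    `scale` of the outer picture, per the 70% example in the text.
    Rectangles are (left, top, right, bottom) in the outer frame's pixels.
    """
    l, t, r, b = outer
    w, h = r - l, b - t
    dx, dy = (w - w * scale) / 2, (h - h * scale) / 2
    return (round(l + dx), round(t + dy), round(r - dx), round(b - dy))


picture3 = (0, 0, 1000, 1000)
picture2 = inner_frame(picture3)   # camera B's picture inside picture 3
picture1 = inner_frame(picture2)   # camera A's picture inside picture 2
print(picture2, picture1)          # -> (150, 150, 850, 850) (255, 255, 745, 745)
```

With these rectangles the phone can estimate, without opening cameras A or C, where a target seen in one picture would fall in the others.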
因此,如果上述拍摄目标1001(或拍摄目标1001的占位区1002)与第一拍摄画面(即拍摄画面2)的各条边界没有发生重合,即第一拍摄画面能够完整显示出拍摄目标1001,则手机也可以结合拍摄目标1001在其他摄像头的拍摄画面(例如上述拍摄画面1和拍摄画面3)中的位置和大小确定准备切换的目标摄像头。
示例性的,步骤S615具体可包括下述步骤S901-S902。
S901、手机根据拍摄目标在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的位置确定候选摄像头。
其中,第二拍摄画面是指使用第二摄像头拍摄时的拍摄画面,第三拍摄画面是指使用第三摄像头拍摄时的拍摄画面。
示例性的,手机可以根据拍摄目标在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的位置关系,将能够完整拍摄到上述拍摄目标的摄像头作为候选摄像头。由于手机已经确定出拍摄目标能够完整显示在第一拍摄画面中,因此当前正在使用的第一摄像头(例如摄像头B)为候选摄像头之一。
同时,手机还可以按照计算拍摄目标是否能够完整显示在第一拍摄画面的方法,计算 拍摄目标是否能够完整显示在第二拍摄画面和第三拍摄画面中。例如,手机可以计算第二拍摄画面的边界是否与拍摄目标(或拍摄目标的占位区)重合,如果重合,则说明拍摄目标无法完整显示在第二拍摄画面中,如果不重合,则说明拍摄目标能够完整显示在第二拍摄画面中,与第二拍摄画面对应的第二摄像头可作为候选摄像头之一。类似的,手机还可以计算第三拍摄画面的边界是否与拍摄目标(或拍摄目标的占位区)重合,如果重合,则说明拍摄目标无法完整显示在第三拍摄画面中,如果不重合,则说明拍摄目标能够完整显示在第三拍摄画面中,与第三拍摄画面对应的第三摄像头可作为候选摄像头之一。
需要说明的是,拍摄目标在不同的拍摄画面中的占位区可能不同,那么,手机可以通过计算拍摄目标在第一拍摄画面中的占位区与第二拍摄画面(或第三拍摄画面)的边界线是否重合,确定拍摄目标是否能够完整显示在第二拍摄画面(或第三拍摄画面)。又或者,手机也可以通过计算拍摄目标在第二拍摄画面中的占位区与第二拍摄画面的边界线是否重合,确定拍摄目标是否能够完整显示在第二拍摄画面中,并且,通过计算拍摄目标在第三拍摄画面中的占位区与第三拍摄画面的边界线是否重合,确定拍摄目标是否能够完整显示在第三拍摄画面中,本申请实施例对此不做任何限制。
可以理解地,当存在大于拍摄第一拍摄画面的摄像头(例如摄像头B)的FOV的摄像头(例如摄像头C),那么该摄像头(例如摄像头C)也可以作为候选摄像头之一;当存于小于拍摄第一拍摄画面的摄像头(例如摄像头B)的FOV的摄像头(例如摄像头A),那么该摄像头(例如摄像头A)是否可以作为候选摄像头之一,则可以采用上述方式进行判断。
示例性的,如图11中的(a)所示,假设手机当前使用的第一摄像头为摄像头C,此时摄像头C拍摄出的第一拍摄画面为拍摄画面3。那么,手机通过执行上述步骤S611可确定出拍摄目标1001能够完全显示在拍摄画面3中,则摄像头C为候选摄像头之一。进一步地,手机还可以计算拍摄目标1001能否完整显示在摄像头B拍摄的拍摄画面2以及摄像头A拍摄的拍摄画面1中。如果只有拍摄画面3能够完整显示出拍摄目标1001,也就是说使用摄像头C可以完整拍摄到拍摄目标1001,但使用摄像头A和摄像头B无法完整拍摄到拍摄目标1001,则手机可将摄像头C确定为唯一的候选摄像头。又或者,如图11中的(b)所示,如果手机确定出拍摄画面1和拍摄画面2均能够完整显示出拍摄目标1001,则手机可将摄像头A、摄像头B以及摄像头C均确定为候选摄像头。
又或者,仍以第一拍摄画面为拍摄画面3举例,如图12中的(a)-(b)所示,手机确定出拍摄目标1001后,可进一步确定出拍摄目标1001的占位区1002。那么,手机通过执行步骤S611比较占位区1002与拍摄画面3之间的位置关系。如图12中的(a)所示,如果拍摄画面3能够完整显示出拍摄目标1001的占位区1002,则手机可将摄像头C确定为候选摄像头之一。并且,手机可进一步比较占位区1002与拍摄画面1和拍摄画面2之间的位置关系。如果拍摄画面1和拍摄画面2均无法完整显示拍摄目标1001的占位区1002,则手机可将摄像头C确定为唯一的候选摄像头。又例如,如图12中的(b)所示,如果拍摄画面2能够完整显示出拍摄目标1001的占位区1002,但拍摄画面1无法完整显示出拍摄目标1001的占位区1002,则手机可将摄像头C和摄像头B确定为候选摄像头。
又或者,由于上述拍摄画面1、拍摄画面2以及拍摄画面3各自均具有一个最佳拍摄 区。因此,手机在确定候选摄像头时,可以将能够在最佳拍摄区内完整显示拍摄目标1001的摄像头作为候选摄像头。从而避免后续将拍摄目标1001显示在拍摄画面的边缘而导致畸变的问题,提高拍摄目标1001的拍摄效果。
如果手机确定出的候选摄像头只有一个,例如,根据图11(a)中拍摄目标1001的位置确定出的候选摄像头仅包括与拍摄画面3对应的摄像头C,那么,为了能够完整显示拍摄目标1001,以提高拍摄目标1001的拍摄效果,手机可将摄像头C确定为目标摄像头。
如果手机确定出的候选摄像头有多个,则说明当前有多个摄像头都可完整的拍摄出拍摄目标,此时手机可继续执行下述步骤S902,从这多个候选摄像头中确定一个拍摄效果最优的目标摄像头。
S902、当候选摄像头有多个时,手机根据拍摄目标在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的大小,从多个候选摄像头中确定一个目标摄像头。
当候选摄像头有多个时,手机可以将FOV最小的候选摄像头确定为目标摄像头,以提高后续拍摄出的拍摄目标在拍摄画面中的占比。
又或者,当候选摄像头有多个时,手机可计算拍摄目标分别在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的目标占比。例如,该目标占比可以是拍摄目标分别与第一拍摄画面、第二拍摄画面以及第三拍摄画面之间的大小比例;或者,该目标占比可以是拍摄目标分别与第一拍摄画面、第二拍摄画面以及第三拍摄画面中最佳拍摄区之间的大小比例;或者,该目标占比可以是拍摄目标的占位区分别与第一拍摄画面、第二拍摄画面以及第三拍摄画面之间的大小比例;或者,该目标占比也可以是拍摄目标的占位区分别与第一拍摄画面、第二拍摄画面以及第三拍摄画面中最佳拍摄区之间的大小比例。
由于拍摄目标的占比较大时能够更加突出拍摄目标在拍摄画面中的显示效果,因此,手机可以将拍摄目标的占比最大的拍摄画面所对应的摄像头确定为目标摄像头。
又或者,手机还可以预先设置一个拍摄目标的最佳占比范围,例如,当拍摄目标与拍摄画面的比例在50%-80%之间时,拍摄目标在拍摄画面中的拍摄效果最符合人眼视觉效果。因此,可将上述最佳占比范围设置为50%-80%。这样,当候选摄像头有多个时,手机可计算拍摄目标分别在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的目标占比,进而将目标占比落入上述最佳占比范围内的拍摄画面所对应的摄像头确定为目标摄像头。
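步骤S902中“按目标占比挑选目标摄像头”的逻辑可以粗略示意如下(占比按边界框面积之比计算,50%-80% 的最佳占比范围沿用上文示例;函数名与阈值均为假设,并非本申请实施例的实际实现):

```python
def occupancy(target_box, frame_box):
    """计算拍摄目标边界框与拍摄画面之间的面积占比。"""
    ta = (target_box[2] - target_box[0]) * (target_box[3] - target_box[1])
    fa = (frame_box[2] - frame_box[0]) * (frame_box[3] - frame_box[1])
    return ta / fa

def pick_target_camera(target_box, frames, lo=0.5, hi=0.8):
    """从候选摄像头 frames 中选出目标摄像头。

    优先选择目标占比落入 [lo, hi] 最佳占比范围的画面对应的摄像头;
    若均不满足,则退而选择占比最大(即 FOV 最小)的候选摄像头。
    """
    in_range = [n for n, b in frames.items()
                if lo <= occupancy(target_box, b) <= hi]
    if in_range:
        return in_range[0]
    return max(frames, key=lambda n: occupancy(target_box, frames[n]))
```

这与上文“占比较大时更能突出拍摄目标”的取舍一致:最佳占比范围优先,范围外再按占比最大兜底。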
至此,通过步骤S901-S902,手机根据拍摄目标在第一拍摄画面、第二拍摄画面以及第三拍摄画面中的位置和大小确定出目标摄像头,后续手机使用该目标摄像头拍摄上述拍摄目标的拍摄效果最好。
另外,如果手机通过步骤S604确定出的目标摄像头与当前正在使用的第一摄像头一致,则说明拍摄目标在当前手机显示出的第一拍摄画面中已经呈现了最佳的拍摄效果,则手机无需切换第一摄像头,可继续使用第一摄像头捕捉下一帧的第一拍摄画面,并循环执行上述步骤S603-S604。
S605、若目标摄像头为第二摄像头,则手机将第一摄像头切换为第二摄像头。
以第一摄像头为上述摄像头B举例,如果手机在步骤S604中确定出的目标摄像头为FOV更小的摄像头A(即第二摄像头),则如图13所示,手机在使用摄像头B显示拍摄画面2的同时可先在后台开启摄像头A。进而,手机可将摄像头A捕捉到的拍摄画面1在前台显示在相机APP的预览界面中,并关闭摄像头B,完成从摄像头B到摄像头A的切换过程。这样可以避免手机直接关闭摄像头B再打开摄像头A时引起的拍摄画面中断等问题,提高手机切换摄像头时的用户体验。
又或者,由于拍摄画面2与拍摄画面1的视野不一致,因此,手机在从拍摄画面2切换至拍摄画面1的过程中还可以进行平滑过渡,避免从拍摄画面2直接跳转至拍摄画面1带来的视觉突变,以提高用户的使用体验。
例如,由于拍摄画面2中包括了拍摄画面1的内容,因此,在手机开启摄像头A之后,并且在切换到拍摄画面1之前,如图14所示,手机可以通过数字变焦逐渐放大拍摄画面2的画面内容。当手机显示出的拍摄画面2的内容与拍摄画面1的内容一致时,手机可将摄像头A捕捉到的拍摄画面1显示在相机APP的预览界面中,完成从摄像头B到摄像头A的切换过程。
又例如,相比于拍摄画面2中与拍摄画面1相同部分的分辨率,使用摄像头A拍摄出的拍摄画面1的分辨率更高。因此,在上述切换过程中,手机在后台开启摄像头A后,还可以将摄像头A捕捉到的拍摄画面1通过图像融合技术融合至当前显示的拍摄画面2中,进而将融合后的图像逐渐放大,直至当融合后的图像与拍摄画面1中的内容相同时,手机可将摄像头A捕捉到的拍摄画面1显示在相机APP的预览界面中,完成从摄像头B到摄像头A的切换过程。
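上述“数字变焦逐渐放大”的平滑过渡,可以抽象为一串逐渐增大的变焦系数:以两个摄像头 FOV 的比值近似等效变焦倍率,当变焦系数达到该比值时,画面内容与窄 FOV 画面一致,即可完成切换。以下片段仅为示意(步数、线性插值方式等均为假设):

```python
def zoom_steps(fov_wide, fov_narrow, steps=8):
    """生成从宽 FOV 画面平滑过渡到窄 FOV 画面的数字变焦系数序列。

    变焦系数从 1.0 线性增大到 fov_wide / fov_narrow;
    序列走完时,放大后的宽 FOV 画面内容与窄 FOV 画面一致。
    """
    ratio = fov_wide / fov_narrow
    return [1.0 + (ratio - 1.0) * i / steps for i in range(steps + 1)]
```

例如从 FOV 约 78° 的摄像头过渡到 FOV 约 39° 的摄像头时,变焦系数从 1.0 平滑增大到约 2.0,逐帧应用这些系数即可避免画面跳变。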
S606、若目标摄像头为第三摄像头,则手机将第一摄像头切换为第三摄像头。
以第一摄像头为上述摄像头B举例,如果手机在步骤S604中确定出的目标摄像头为FOV更大的摄像头C(即第三摄像头),则如图15所示,手机在使用摄像头B显示拍摄画面2的同时可先在后台开启摄像头C。进而,手机可将摄像头C捕捉到的拍摄画面3在前台显示在相机APP的预览界面中,并关闭摄像头B,完成从摄像头B到摄像头C的切换过程。这样可以避免手机直接关闭摄像头B再打开摄像头C时引起的拍摄画面中断等问题,提高手机切换摄像头时的用户体验。
又或者,由于拍摄画面2与拍摄画面3的视野不一致,因此,手机在从拍摄画面2切换至拍摄画面3的过程中还可以进行平滑过渡,避免从拍摄画面2直接跳转至拍摄画面3带来的视觉突变,以提高用户的使用体验。
例如,由于拍摄画面3中包括了拍摄画面2的内容,因此,如图16所示,在手机开启摄像头C之后可通过数字变焦的方式将摄像头C捕捉到的拍摄画面3的焦距放大,从而在拍摄画面3中得到与拍摄画面2的内容相同的画面2’。此时,手机可关闭正在使用的摄像头B。进而,手机可逐渐恢复画面2’的标准焦距,即将拍摄画面3的焦距逐渐缩小,直至将拍摄画面3完全显示在相机APP的预览界面中,完成从摄像头B到摄像头C的切换过程。
又例如,相比于拍摄画面3中与拍摄画面2相同部分的分辨率,使用摄像头B拍摄出的拍摄画面2的分辨率更高。因此,在上述切换过程中,手机开启摄像头C并捕捉到拍摄画面3后,还可以将拍摄画面3与拍摄画面2进行图像融合,进而从融合后的图像中取出与拍摄画面2相同的部分,并逐渐过渡为拍摄画面3的内容,完成从摄像头B到摄像头C的切换过程。
手机通过上述步骤S605或S606将第一摄像头切换为新的摄像头后,可将新的摄像头作为上述第一摄像头继续循环执行上述步骤S602-S606,从而根据拍摄画面中的拍摄目标动态地切换至合适的摄像头进行拍摄。
示例性的,如图17所示,手机在运行相机应用时可先打开摄像头B,并显示摄像头B捕捉到的拍摄画面2。进而,手机可识别拍摄画面2中的拍摄目标。以拍摄目标为小汽车1701举例,手机确定出拍摄目标为小汽车1701后,可实时地检测每一帧拍摄画面2中小汽车1701的位置和大小。那么,在小汽车1701移动的过程中,当手机检测出小汽车1701与拍摄画面2的边界线发生重合时,说明拍摄画面2已经无法完整显示出小汽车1701,因此,手机可将当前使用的摄像头B切换为FOV更大的摄像头C。此时,手机可显示出摄像头C捕捉到的拍摄画面3。由于摄像头C的FOV大于摄像头B的FOV,因此,小汽车1701在拍摄画面3中显示得更加完整,从而提高拍摄目标的拍摄效果。
又或者,如图18所示,手机在运行相机应用时可先打开摄像头B,并显示摄像头B捕捉到的拍摄画面2。进而,手机可识别拍摄画面2中的拍摄目标。以拍摄目标为用户1801举例,手机确定出拍摄目标为用户1801后,可实时地检测每一帧拍摄画面2中用户1801的位置和大小。那么,在用户1801移动的过程中,如果手机检测出用户1801在拍摄画面2中的目标占比小于目标占比范围的下限值,说明用户1801在拍摄画面2中的显示面积过小,因此,手机可将当前使用的摄像头B切换为FOV更小的摄像头A。此时,手机可显示出摄像头A捕捉到的拍摄画面1。由于摄像头A的FOV小于摄像头B的FOV,因此,用户1801在拍摄画面1中的目标占比增加。相应的,如果手机检测出用户1801在拍摄画面2中的目标占比大于目标占比范围的上限值,说明用户1801在拍摄画面2中的显示面积过大,因此,手机可将当前使用的摄像头B切换为FOV更大的摄像头C。由于摄像头C的FOV大于摄像头B的FOV,因此,用户1801在摄像头C捕捉的拍摄画面3中的目标占比降低,从而提高拍摄目标的拍摄效果。
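图17、图18所示的逐帧决策可以概括为一个简单的状态转移函数(摄像头按 FOV 从小到大排列;0.5/0.8 的阈值沿用上文假设的目标占比范围,函数名亦为示意,并非本申请实施例的实际实现):

```python
def next_camera(current, occ, touches_border, order=('A', 'B', 'C')):
    """根据当前帧中目标的占比 occ 与是否触及画面边界 touches_border,
    决定下一帧应使用的摄像头。order 中的摄像头按 FOV 从小到大排列。"""
    i = order.index(current)
    if touches_border or occ > 0.8:
        # 目标显示不全或占比过大:切换到 FOV 更大的摄像头(若已最大则维持)
        return order[min(i + 1, len(order) - 1)]
    if occ < 0.5:
        # 目标占比过小:切换到 FOV 更小的摄像头(若已最小则维持)
        return order[max(i - 1, 0)]
    # 占比合适且完整显示:无需切换
    return current
```

对每一帧调用该函数,即可得到图17中“摄像头B切换到摄像头C”(触边)与图18中“摄像头B切换到摄像头A”(占比过小)这两类行为。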
需要说明的是,上述实施例中是以第一摄像头、第二摄像头以及第三摄像头这三个摄像头的切换过程举例说明的,可以理解的是,本申请实施例提供的切换摄像头的方法也可以应用于两个摄像头进行切换的场景,或者应用于三个以上的摄像头进行切换的场景,本申请实施例对此不做任何限制。
至此,通过执行上述步骤S601-S606,手机可在拍摄场景中确定出当前的拍摄目标,进而根据拍摄目标的位置和大小等参数确定出拍摄效果更好的目标摄像头,然后从当前的拍摄画面平滑过渡至目标摄像头捕捉到的拍摄画面,从而在拍摄过程中以拍摄目标为导向实现摄像头的自动切换功能,提高了拍摄目标的拍摄效果。
在本申请的一些实施例中,本申请实施例公开了一种终端,如图19所示,该终端用于实现以上各个方法实施例中记载的方法,其包括:显示单元1901、接收单元1902、确定单元1903以及切换单元1904。其中,显示单元1901用于支持终端执行图6A中的过程S602;接收单元1902支持终端执行图6A中的过程S601;确定单元1903用于支持终端执行图6A中的过程S603-S604,以及图6B中的过程S611-S615;切换单元1904用于支持终端执行图6A中的过程S605或S606。其中,上述方法实施例涉及的各步骤的所有相关内容均可以援引到对应功能模块的功能描述,在此不再赘述。
在本申请的另一些实施例中,本申请实施例公开了一种终端,包括处理器,以及与处理器相连的存储器、输入设备和输出设备。其中,输入设备和输出设备可集成为一个设备,例如,可将触敏表面作为输入设备,将显示屏作为输出设备,并将触敏表面和显示屏集成为触摸屏。此时,如图20所示,上述终端可以包括:至少两个摄像头2000,触摸屏2001,所述触摸屏2001包括触敏表面2006和显示屏2007;一个或多个处理器2002;存储器2003;一个或多个应用程序(未示出);以及一个或多个计算机程序2004,上述各器件可以通过一个或多个通信总线2005连接。其中该一个或多个计算机程序2004被存储在上述存储器2003中并被配置为被该一个或多个处理器2002执行,该一个或多个计算机程序2004包括指令,上述指令可以用于执行如图6A、图6B及相应实施例中的各个步骤。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (28)

  1. 一种终端切换摄像头的方法,其特征在于,所述终端包括第一摄像头和第二摄像头,所述方法包括:
    响应于用户打开拍摄功能的第一操作,终端显示所述第一摄像头捕捉到的第一拍摄画面,所述第一拍摄画面中包括拍摄目标;
    若所述拍摄目标无法完整显示在所述第一拍摄画面中,则所述终端将所述第一拍摄画面切换为第二摄像头捕捉到的第二拍摄画面,关闭所述第一摄像头,所述第二摄像头的视场角FOV大于所述第一摄像头的FOV。
  2. 根据权利要求1所述的终端切换摄像头的方法,其特征在于,在所述终端确定第一拍摄画面中的拍摄目标之后,还包括:
    所述终端确定所述拍摄目标或所述拍摄目标的占位区是否与所述第一拍摄画面的边界线发生重合,所述拍摄目标位于所述占位区中;
    若发生重合,则所述终端确定所述拍摄目标无法完整显示在所述第一拍摄画面中;若未发生重合,则所述终端确定所述拍摄目标能够完整显示在所述第一拍摄画面中。
  3. 根据权利要求1或2所述的终端切换摄像头的方法,其特征在于,在终端显示第一摄像头捕捉到的第一拍摄画面之后,还包括:
    所述终端确定所述第一拍摄画面中的拍摄目标。
  4. 根据权利要求3所述的终端切换摄像头的方法,其特征在于,在所述终端确定所述第一拍摄画面中的拍摄目标之前,还包括:
    所述终端接收用户向所述第一拍摄画面中输入的第二操作,所述第二操作用于选中所述第一拍摄画面中的拍摄目标;
    其中,所述终端确定第一拍摄画面中的拍摄目标,包括:
    响应于所述第二操作,所述终端提取所述第一拍摄画面中用户选中位置处的图像特征,并根据所述图像特征确定所述拍摄目标。
  5. 根据权利要求1-4中任一项所述的终端切换摄像头的方法,其特征在于,所述终端将所述第一拍摄画面切换为第二摄像头捕捉到的第二拍摄画面,包括:
    所述终端在后台打开所述第二摄像头以捕捉第二拍摄画面;
    所述终端通过数字变焦逐渐放大所述第二拍摄画面中的内容;
    当所述第二拍摄画面被放大后的内容与所述第一拍摄画面中的内容一致时,所述终端关闭所述第一摄像头,并在前台显示被放大的所述第二拍摄画面;
    所述终端逐渐恢复所述第二拍摄画面的标准焦距,直至所述终端完整的显示出所述第二拍摄画面。
  6. 根据权利要求1-5中任一项所述的终端切换摄像头的方法,其特征在于,若所述拍摄目标能够完整显示在所述第一拍摄画面中,则所述方法还包括:
    所述终端根据所述拍摄目标在所述第一拍摄画面中的大小和/或位置,确定准备切换的目标摄像头;
    所述终端将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面。
  7. 一种终端切换摄像头的方法,其特征在于,所述终端包括至少两个摄像头,所述至少两个摄像头包括第一摄像头,所述方法包括:
    响应于用户打开拍摄功能的第一操作,终端显示所述第一摄像头捕捉到的第一拍摄画面,所述第一拍摄画面中包括拍摄目标;
    若所述拍摄目标能够完整显示在所述第一拍摄画面中,则所述终端根据所述拍摄目标在所述第一拍摄画面中的大小和/或位置,确定准备切换的目标摄像头;
    所述终端将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面,关闭所述第一摄像头。
  8. 根据权利要求7所述的终端切换摄像头的方法,其特征在于,所述至少两个摄像头还包括第二摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV,所述终端根据所述拍摄目标在所述第一拍摄画面中的位置,确定准备切换的目标摄像头,包括:
    所述终端计算所述拍摄目标是否完整显示在所述第一拍摄画面中预设的最佳拍摄区中,所述最佳拍摄区位于所述第一拍摄画面的中心;
    若所述拍摄目标无法完整显示在所述最佳拍摄区中,则所述终端将所述第二摄像头确定为目标摄像头。
  9. 根据权利要求7所述的终端切换摄像头的方法,其特征在于,所述至少两个摄像头还包括第二摄像头和第三摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV,所述第三摄像头的FOV小于所述第一摄像头的FOV,所述终端根据所述拍摄目标在所述第一拍摄画面中的大小,确定准备切换的目标摄像头,包括:
    所述终端计算所述拍摄目标在所述第一拍摄画面中的目标占比;
    若所述目标占比大于预设的目标占比范围的上限,则所述终端将所述第二摄像头确定为目标摄像头;若所述目标占比小于预设的目标占比范围的下限,则所述终端将所述第三摄像头确定为目标摄像头。
  10. 根据权利要求7所述的终端切换摄像头的方法,其特征在于,所述至少两个摄像头还包括第二摄像头和第三摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV,所述第三摄像头的FOV小于所述第一摄像头的FOV,所述终端根据所述拍摄目标在所述第一拍摄画面中的大小和位置,确定准备切换的目标摄像头,包括:
    所述终端确定所述第一拍摄画面、第二拍摄画面以及第三拍摄画面之间的位置关系,所述第二拍摄画面为根据所述第一拍摄画面计算的所述第二摄像头打开时拍摄的拍摄画面,所述第三拍摄画面为根据所述第一拍摄画面计算的所述第三摄像头打开时拍摄的拍摄画面;
    所述终端根据所述拍摄目标在所述第一拍摄画面、所述第二拍摄画面以及所述第三拍摄画面中的大小和位置,确定准备切换的目标摄像头。
  11. 根据权利要求10所述的终端切换摄像头的方法,其特征在于,所述终端根据所述拍摄目标在所述第一拍摄画面、所述第二拍摄画面以及所述第三拍摄画面中的大小和位置,确定准备切换的目标摄像头,包括:
    所述终端从所述第一摄像头、所述第二摄像头以及所述第三摄像头中确定至少一个候选摄像头,所述候选摄像头的拍摄画面中能够完整显示所述拍摄目标;
    所述终端从所述候选摄像头中确定目标摄像头,所述拍摄目标在所述目标摄像头的拍摄画面中的目标占比满足预设条件。
  12. 根据权利要求7-11中任一项所述的终端切换摄像头的方法,其特征在于,所述目标摄像头为第二摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV;
    其中,所述终端将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面,包括:
    所述终端在后台打开所述第二摄像头以捕捉第二拍摄画面;
    所述终端通过数字变焦逐渐放大所述第二拍摄画面中的内容;
    当所述第二拍摄画面被放大后的内容与所述第一拍摄画面中的内容一致时,所述终端关闭所述第一摄像头,并在前台显示被放大的所述第二拍摄画面;
    所述终端逐渐恢复所述第二拍摄画面的标准焦距,直至所述终端完整的显示出所述第二拍摄画面。
  13. 根据权利要求7-11中任一项所述的终端切换摄像头的方法,其特征在于,所述目标摄像头为第三摄像头,所述第三摄像头的FOV小于所述第一摄像头的FOV;
    其中,所述终端将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面,包括:
    所述终端在后台打开所述第三摄像头以捕捉第三拍摄画面;
    所述终端通过数字变焦逐渐放大所述第一拍摄画面中的内容;
    当所述第一拍摄画面被放大后的内容与所述第三拍摄画面中的内容一致时,所述终端关闭所述第一摄像头,并在前台显示所述第三拍摄画面。
  14. 一种终端,其特征在于,包括:
    触摸屏,其中,所述触摸屏包括触敏表面和显示器;
    一个或多个处理器;
    一个或多个存储器;
    第一摄像头和第二摄像头;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述一个或多个存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述终端执行时,使得所述终端执行以下步骤:
    响应于用户打开拍摄功能的第一操作,显示所述第一摄像头捕捉到的第一拍摄画面,所述第一拍摄画面中包括拍摄目标;
    若所述拍摄目标无法完整显示在所述第一拍摄画面中,则将所述第一拍摄画面切换为所述第二摄像头捕捉到的第二拍摄画面,关闭所述第一摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV。
  15. 根据权利要求14所述的终端,其特征在于,在所述终端确定第一拍摄画面中的拍摄目标之后,所述终端还用于执行:
    确定所述拍摄目标或所述拍摄目标的占位区是否与所述第一拍摄画面的边界线发生重合,所述拍摄目标位于所述占位区中;
    若发生重合,则确定所述拍摄目标无法完整显示在所述第一拍摄画面中;若未发生重合,则确定所述拍摄目标能够完整显示在所述第一拍摄画面中。
  16. 根据权利要求14或15所述的终端,其特征在于,在终端显示第一摄像头捕捉到的第一拍摄画面之后,所述终端还用于执行:
    确定所述第一拍摄画面中的拍摄目标。
  17. 根据权利要求16所述的终端,其特征在于,在所述终端确定所述第一拍摄画面中的拍摄目标之前,所述终端还用于执行:
    接收用户向所述第一拍摄画面中输入的第二操作,所述第二操作用于选中所述第一拍摄画面中的拍摄目标;
    其中,所述终端确定第一拍摄画面中的拍摄目标,具体包括:
    响应于所述第二操作,提取所述第一拍摄画面中用户选中位置处的图像特征,并根据所述图像特征确定所述拍摄目标。
  18. 根据权利要求14-17中任一项所述的终端,其特征在于,所述终端将所述第一拍摄画面切换为第二摄像头捕捉到的第二拍摄画面,具体包括:
    在后台打开所述第二摄像头以捕捉第二拍摄画面;
    通过数字变焦逐渐放大所述第二拍摄画面中的内容;
    当所述第二拍摄画面被放大后的内容与所述第一拍摄画面中的内容一致时,关闭所述第一摄像头,并在前台显示被放大的所述第二拍摄画面;
    逐渐恢复所述第二拍摄画面的标准焦距,直至完整的显示出所述第二拍摄画面。
  19. 根据权利要求14-18中任一项所述的终端,其特征在于,若所述拍摄目标能够完整显示在所述第一拍摄画面中,则所述终端还用于执行:
    根据所述拍摄目标在所述第一拍摄画面中的大小和/或位置,确定准备切换的目标摄像头;
    将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面。
  20. 一种终端,其特征在于,包括:
    触摸屏,其中,所述触摸屏包括触敏表面和显示器;
    一个或多个处理器;
    一个或多个存储器;
    至少两个摄像头,所述至少两个摄像头包括第一摄像头;
    以及一个或多个计算机程序,其中所述一个或多个计算机程序被存储在所述一个或多个存储器中,所述一个或多个计算机程序包括指令,当所述指令被所述终端执行时,使得所述终端执行以下步骤:
    响应于用户打开拍摄功能的第一操作,显示所述第一摄像头捕捉到的第一拍摄画面,所述第一拍摄画面中包括拍摄目标;
    若所述拍摄目标能够完整显示在所述第一拍摄画面中,则根据所述拍摄目标在所述第一拍摄画面中的大小和/或位置,确定准备切换的目标摄像头;
    将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面,关闭所述第一摄像头。
  21. 根据权利要求20所述的终端,其特征在于,所述至少两个摄像头还包括第二摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV,所述终端根据所述拍摄目标在所述第一拍摄画面中的位置,确定准备切换的目标摄像头,具体包括:
    计算所述拍摄目标是否完整显示在所述第一拍摄画面中预设的最佳拍摄区中,所述最佳拍摄区位于所述第一拍摄画面的中心;
    若所述拍摄目标无法完整显示在所述最佳拍摄区中,则将第二摄像头确定为目标摄像头。
  22. 根据权利要求20所述的终端,其特征在于,所述至少两个摄像头还包括第二摄像头和第三摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV,所述第三摄像头的FOV小于所述第一摄像头的FOV,所述终端根据所述拍摄目标在所述第一拍摄画面中的大小,确定准备切换的目标摄像头,具体包括:
    计算所述拍摄目标在所述第一拍摄画面中的目标占比;
    若所述目标占比大于预设的目标占比范围的上限,则将所述第二摄像头确定为目标摄像头;若所述目标占比小于预设的目标占比范围的下限,则将所述第三摄像头确定为目标摄像头。
  23. 根据权利要求20所述的终端,其特征在于,所述至少两个摄像头还包括第二摄像头和第三摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV,所述第三摄像头的FOV小于所述第一摄像头的FOV,所述终端根据所述拍摄目标在所述第一拍摄画面中的大小和位置,确定准备切换的目标摄像头,具体包括:
    确定所述第一拍摄画面、第二拍摄画面以及第三拍摄画面之间的位置关系,所述第二拍摄画面为根据所述第一拍摄画面计算的所述第二摄像头打开时拍摄的拍摄画面,所述第三拍摄画面为根据所述第一拍摄画面计算的所述第三摄像头打开时拍摄的拍摄画面;
    根据所述拍摄目标在所述第一拍摄画面、所述第二拍摄画面以及所述第三拍摄画面中的大小和位置,确定准备切换的目标摄像头。
  24. 根据权利要求23所述的终端,其特征在于,所述终端根据所述拍摄目标在所述第一拍摄画面、所述第二拍摄画面以及所述第三拍摄画面中的大小和位置,确定准备切换的目标摄像头,具体包括:
    从所述第一摄像头、所述第二摄像头以及所述第三摄像头中确定至少一个候选摄像头,所述候选摄像头的拍摄画面中能够完整显示所述拍摄目标;
    从所述候选摄像头中确定目标摄像头,所述拍摄目标在所述目标摄像头的拍摄画面中的目标占比满足预设条件。
  25. 根据权利要求20-24中任一项所述的终端,其特征在于,所述目标摄像头为第二摄像头,所述第二摄像头的FOV大于所述第一摄像头的FOV;
    其中,所述终端将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面,具体包括:
    在后台打开所述第二摄像头以捕捉第二拍摄画面;
    通过数字变焦逐渐放大所述第二拍摄画面中的内容;
    当所述第二拍摄画面被放大后的内容与所述第一拍摄画面中的内容一致时,关闭所述第一摄像头,并在前台显示被放大的所述第二拍摄画面;
    逐渐恢复所述第二拍摄画面的标准焦距,直至完整的显示出所述第二拍摄画面。
  26. 根据权利要求20-25中任一项所述的终端,其特征在于,所述目标摄像头为第三摄像头,所述第三摄像头的FOV小于所述第一摄像头的FOV;
    其中,所述终端将所述第一拍摄画面切换为所述目标摄像头捕捉到的拍摄画面,具体包括:
    在后台打开所述第三摄像头以捕捉第三拍摄画面;
    通过数字变焦逐渐放大所述第一拍摄画面中的内容;
    当所述第一拍摄画面被放大后的内容与所述第三拍摄画面中的内容一致时,关闭所述第一摄像头,并在前台显示所述第三拍摄画面。
  27. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在终端上运行时,使得所述终端执行如权利要求1-6或权利要求7-13中任一项所述的一种终端切换摄像头的方法。
  28. 一种包含指令的计算机程序产品,其特征在于,当所述计算机程序产品在终端上运行时,使得所述终端执行如权利要求1-6或权利要求7-13中任一项所述的一种终端切换摄像头的方法。
PCT/CN2018/097676 2018-07-27 2018-07-27 一种终端切换摄像头的方法及终端 WO2020019356A1 (zh)

Priority Applications (7)

Application Number Priority Date Filing Date Title
CN202111196803.5A CN113905179B (zh) 2018-07-27 2018-07-27 一种终端切换摄像头的方法及终端
CN201880023140.7A CN110506416B (zh) 2018-07-27 2018-07-27 一种终端切换摄像头的方法及终端
EP18928001.9A EP3800876B1 (en) 2018-07-27 2018-07-27 Method for terminal to switch cameras, and terminal
US17/262,742 US11412132B2 (en) 2018-07-27 2018-07-27 Camera switching method for terminal, and terminal
PCT/CN2018/097676 WO2020019356A1 (zh) 2018-07-27 2018-07-27 一种终端切换摄像头的方法及终端
US17/854,324 US11595566B2 (en) 2018-07-27 2022-06-30 Camera switching method for terminal, and terminal
US18/162,761 US11785329B2 (en) 2018-07-27 2023-02-01 Camera switching method for terminal, and terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/097676 WO2020019356A1 (zh) 2018-07-27 2018-07-27 一种终端切换摄像头的方法及终端

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US17/262,742 A-371-Of-International US11412132B2 (en) 2018-07-27 2018-07-27 Camera switching method for terminal, and terminal
US17/854,324 Continuation US11595566B2 (en) 2018-07-27 2022-06-30 Camera switching method for terminal, and terminal

Publications (1)

Publication Number Publication Date
WO2020019356A1 (zh)

Family

ID=68585559

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/097676 WO2020019356A1 (zh) 2018-07-27 2018-07-27 一种终端切换摄像头的方法及终端

Country Status (4)

Country Link
US (3) US11412132B2 (zh)
EP (1) EP3800876B1 (zh)
CN (2) CN110506416B (zh)
WO (1) WO2020019356A1 (zh)


Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11412132B2 (en) 2018-07-27 2022-08-09 Huawei Technologies Co., Ltd. Camera switching method for terminal, and terminal
CN111372003A (zh) * 2020-04-20 2020-07-03 惠州Tcl移动通信有限公司 一种摄像头切换方法、装置及终端
CN113596316B (zh) * 2020-04-30 2023-12-08 华为技术有限公司 拍照方法及电子设备
CN111654631B (zh) * 2020-06-19 2021-11-12 厦门紫光展锐科技有限公司 变焦控制方法、系统、设备及介质
CN111669505B (zh) * 2020-06-30 2022-09-20 维沃移动通信(杭州)有限公司 相机启动方法及装置
CN115812308A (zh) * 2020-07-14 2023-03-17 深圳传音控股股份有限公司 拍摄控制方法、装置、智能设备及计算机可读存储介质
CN111866388B (zh) * 2020-07-29 2022-07-12 努比亚技术有限公司 一种多重曝光拍摄方法、设备及计算机可读存储介质
CN111866389B (zh) * 2020-07-29 2022-07-12 努比亚技术有限公司 一种视频跟踪拍摄方法、设备及计算机可读存储介质
CN111770277A (zh) * 2020-07-31 2020-10-13 RealMe重庆移动通信有限公司 一种辅助拍摄方法及终端、存储介质
CN112055156B (zh) * 2020-09-15 2022-08-09 Oppo(重庆)智能科技有限公司 预览图像更新方法、装置、移动终端及存储介质
JP2022051312A (ja) * 2020-09-18 2022-03-31 キヤノン株式会社 撮影制御装置、撮影制御方法、及びプログラム
CN114285984B (zh) * 2020-09-27 2024-04-16 宇龙计算机通信科技(深圳)有限公司 基于ar眼镜的摄像头切换方法、装置、存储介质以及终端
CN116782024A (zh) * 2020-09-30 2023-09-19 华为技术有限公司 一种拍摄方法和电子设备
CN112887602A (zh) * 2021-01-26 2021-06-01 Oppo广东移动通信有限公司 摄像头切换方法、装置、存储介质及电子设备
CN113364975B (zh) * 2021-05-10 2022-05-20 荣耀终端有限公司 一种图像的融合方法及电子设备
CN113473005B (zh) * 2021-06-16 2022-08-09 荣耀终端有限公司 拍摄中转场动效插入方法、设备、存储介质
CN113747065B (zh) * 2021-09-03 2023-12-26 维沃移动通信(杭州)有限公司 拍摄方法、装置及电子设备
CN115037879A (zh) * 2022-06-29 2022-09-09 维沃移动通信有限公司 拍摄方法及其装置
CN117479008B (zh) * 2023-12-27 2024-05-03 荣耀终端有限公司 一种视频处理方法、电子设备及芯片系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120242809A1 (en) * 2011-03-22 2012-09-27 Ben White Video surveillance apparatus using dual camera and method thereof
US20160050370A1 (en) * 2012-07-03 2016-02-18 Gopro, Inc. Rolling Shutter Synchronization
CN205545654U (zh) * 2016-01-26 2016-08-31 北京数字家圆科技有限公司 一种双摄像头装置及其应用的终端设备
CN106341611A (zh) * 2016-11-29 2017-01-18 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN106454132A (zh) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 控制方法、控制装置及电子装置
CN108174085A (zh) * 2017-12-19 2018-06-15 信利光电股份有限公司 一种多摄像头的拍摄方法、拍摄装置、移动终端和可读存储介质

Family Cites Families (40)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110040468A (ko) 2009-10-14 2011-04-20 삼성전자주식회사 휴대 단말기의 카메라 운용 방법 및 장치
US9204026B2 (en) * 2010-11-01 2015-12-01 Lg Electronics Inc. Mobile terminal and method of controlling an image photographing therein
US9259645B2 (en) * 2011-06-03 2016-02-16 Nintendo Co., Ltd. Storage medium having stored therein an image generation program, image generation method, image generation apparatus and image generation system
US8866943B2 (en) * 2012-03-09 2014-10-21 Apple Inc. Video camera providing a composite video sequence
KR102022444B1 (ko) * 2013-02-21 2019-09-18 삼성전자주식회사 복수의 카메라를 구비한 휴대 단말에서 유효한 영상들을 합성하기 위한 방법 및 이를 위한 휴대 단말
JP6273685B2 (ja) * 2013-03-27 2018-02-07 パナソニックIpマネジメント株式会社 追尾処理装置及びこれを備えた追尾処理システム並びに追尾処理方法
JP6100089B2 (ja) * 2013-05-17 2017-03-22 キヤノン株式会社 画像処理装置、画像処理方法およびプログラム
EP3008890A4 (en) * 2013-06-13 2016-05-04 Corephotonics Ltd ZOOM OF A DIGITAL CAMERA WITH DUAL IRIS
KR102102646B1 (ko) * 2014-01-17 2020-04-21 엘지전자 주식회사 이동단말기 및 그 제어방법
US9363426B2 (en) * 2014-05-29 2016-06-07 International Business Machines Corporation Automatic camera selection based on device orientation
US10110794B2 (en) 2014-07-09 2018-10-23 Light Labs Inc. Camera device including multiple optical chains and related methods
KR102225947B1 (ko) * 2014-10-24 2021-03-10 엘지전자 주식회사 이동단말기 및 그 제어방법
JP6452386B2 (ja) * 2014-10-29 2019-01-16 キヤノン株式会社 撮像装置、撮像システム、撮像装置の制御方法
US10061486B2 (en) * 2014-11-05 2018-08-28 Northrop Grumman Systems Corporation Area monitoring system implementing a virtual environment
CN104333702A (zh) * 2014-11-28 2015-02-04 广东欧珀移动通信有限公司 一种自动对焦的方法、装置及终端
US10291842B2 (en) 2015-06-23 2019-05-14 Samsung Electronics Co., Ltd. Digital photographing apparatus and method of operating the same
JP6293706B2 (ja) * 2015-06-26 2018-03-14 京セラ株式会社 電子機器及び電子機器の動作方法
CN105049711B (zh) * 2015-06-30 2018-09-04 广东欧珀移动通信有限公司 一种拍照方法及用户终端
KR20170006559A (ko) 2015-07-08 2017-01-18 엘지전자 주식회사 이동단말기 및 그 제어방법
US9769419B2 (en) * 2015-09-30 2017-09-19 Cisco Technology, Inc. Camera system for video conference endpoints
JP6724982B2 (ja) * 2016-04-13 2020-07-15 ソニー株式会社 信号処理装置および撮像装置
US9794495B1 (en) * 2016-06-27 2017-10-17 Adtile Technologies Inc. Multiple streaming camera navigation interface system
US10706518B2 (en) 2016-07-07 2020-07-07 Corephotonics Ltd. Dual camera system with improved video smooth transition by image blending
CN106161941B (zh) 2016-07-29 2022-03-11 南昌黑鲨科技有限公司 双摄像头自动追焦方法、装置及终端
CN106454121B (zh) * 2016-11-11 2020-02-07 努比亚技术有限公司 双摄像头拍照方法及装置
CN106506957A (zh) * 2016-11-17 2017-03-15 维沃移动通信有限公司 一种拍照方法及移动终端
CN106454130A (zh) * 2016-11-29 2017-02-22 广东欧珀移动通信有限公司 控制方法、控制装置和电子装置
US10389948B2 (en) * 2016-12-06 2019-08-20 Qualcomm Incorporated Depth-based zoom function using multiple cameras
CN106454139B (zh) * 2016-12-22 2020-01-10 Oppo广东移动通信有限公司 拍照方法及移动终端
KR102426728B1 (ko) * 2017-04-10 2022-07-29 삼성전자주식회사 포커스 제어 방법 및 이를 지원하는 전자 장치
KR102351542B1 (ko) * 2017-06-23 2022-01-17 삼성전자주식회사 시차 보상 기능을 갖는 애플리케이션 프로세서, 및 이를 구비하는 디지털 촬영 장치
KR101983725B1 (ko) * 2017-08-03 2019-09-03 엘지전자 주식회사 전자 기기 및 전자 기기의 제어 방법
CN107333067A (zh) 2017-08-04 2017-11-07 维沃移动通信有限公司 一种摄像头的控制方法和终端
US10474988B2 (en) * 2017-08-07 2019-11-12 Standard Cognition, Corp. Predicting inventory events using foreground/background processing
US10645272B2 (en) * 2017-12-04 2020-05-05 Qualcomm Incorporated Camera zoom level and image frame capture control
US10594925B2 (en) * 2017-12-04 2020-03-17 Qualcomm Incorporated Camera zoom level and image frame capture control
US20190191079A1 (en) * 2017-12-20 2019-06-20 Qualcomm Incorporated Camera initialization for a multiple camera module
US11412132B2 (en) 2018-07-27 2022-08-09 Huawei Technologies Co., Ltd. Camera switching method for terminal, and terminal
EP4161086A1 (en) * 2019-02-19 2023-04-05 Samsung Electronics Co., Ltd. Electronic device and method for changing magnification of image using multiple cameras
US11057572B1 (en) * 2020-03-26 2021-07-06 Qualcomm Incorporated Apparatus and methods for image capture control

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021251689A1 (en) 2020-06-10 2021-12-16 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device
EP4133715A4 (en) * 2020-06-10 2023-09-13 Samsung Electronics Co., Ltd. ELECTRONIC DEVICE AND METHOD FOR CONTROLLING ELECTRONIC DEVICE
US11831980B2 (en) 2020-06-10 2023-11-28 Samsung Electronics Co., Ltd. Electronic device and controlling method of electronic device

Also Published As

Publication number Publication date
US20230188824A1 (en) 2023-06-15
CN113905179B (zh) 2023-05-02
US11595566B2 (en) 2023-02-28
EP3800876B1 (en) 2023-09-06
US11785329B2 (en) 2023-10-10
EP3800876A4 (en) 2021-07-07
CN110506416A (zh) 2019-11-26
US11412132B2 (en) 2022-08-09
EP3800876A1 (en) 2021-04-07
US20210203836A1 (en) 2021-07-01
CN113905179A (zh) 2022-01-07
US20220337742A1 (en) 2022-10-20
CN110506416B (zh) 2021-10-22

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928001

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2018928001

Country of ref document: EP

Effective date: 20210111

NENP Non-entry into the national phase

Ref country code: DE