WO2022262550A1 - Method for shooting video and electronic device - Google Patents

Method for shooting video and electronic device

Info

Publication number
WO2022262550A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
picture
area
opacity
Prior art date
Application number
PCT/CN2022/095357
Other languages
English (en)
French (fr)
Inventor
熊棉
Original Assignee
Honor Device Co., Ltd. (荣耀终端有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN202111673018.4A (published as CN115811656A)
Application filed by Honor Device Co., Ltd. (荣耀终端有限公司)
Priority to US18/247,247 (published as US20230377306A1)
Priority to EP22824030.5A (published as EP4207744A4)
Publication of WO2022262550A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/60 - Extraction of image or video features relating to illumination properties, e.g. using a reflectance or lighting model
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2624 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects for obtaining an image which is composed of whole input images, e.g. splitscreen
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/14 - Digital output to display device; Cooperation and interconnection of the display device with other functional units
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/70 - Denoising; Smoothing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/62 - Control of parameters via user interfaces
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60 - Control of cameras or camera modules
    • H04N23/63 - Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632 - Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 - Camera processing pipelines; Components thereof
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2622 - Signal amplitude transition in the zone between image portions, e.g. soft edges
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 - Mixing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 - Details of television systems
    • H04N5/222 - Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 - Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/272 - Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04M - TELEPHONIC COMMUNICATION
    • H04M2250/00 - Details of telephonic subscriber devices
    • H04M2250/52 - Details of telephonic subscriber devices including functional features of a camera
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/90 - Arrangement of cameras or camera modules, e.g. multiple cameras in TV studios or sports stadiums

Definitions

  • the present application relates to the technical field of terminals, and in particular to a method for shooting video and electronic equipment.
  • electronic devices such as mobile phones and tablet computers are generally equipped with multiple cameras, such as front cameras, rear cameras, wide-angle cameras, and the like.
  • more and more electronic devices support shooting with multiple cameras at the same time, and users can choose a shooting mode according to their needs, for example, the front shooting mode, rear shooting mode, front-and-rear shooting mode, picture-in-picture shooting mode, and the like.
  • the user may need to switch the shooting mode, for example, from the front-and-rear shooting mode to the picture-in-picture shooting mode.
  • there is a certain delay before the picture data collected by the different cameras of the electronic device is displayed on the shooting preview interface.
  • as a result, the picture switch in the shooting preview interface can look stiff, or even appear to freeze, which degrades the user experience.
  • Embodiments of the present application provide a method for shooting a video and an electronic device.
  • the method for shooting video supports the electronic device in applying animation processing when the display picture in the shooting preview interface is switched along with the shooting mode, so that the switch looks smoother and more vivid, improving the user experience.
  • the embodiment of the present application provides a method for shooting video. The method is applied to an electronic device that includes a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen. The method includes:
  • the first area of the display screen displays the first image collected in real time by the first camera;
  • the second area of the display screen displays the second image collected in real time by the second camera, wherein the second area is the entire display area of the display screen and the first area is smaller than the second area;
  • the opacity of the first image displayed in the first area gradually decreases from the second opacity to the first opacity
  • the third area of the display screen displays a third image, wherein the third image is the second image captured by the second camera in real time, and the third area is the entire display area of the display screen.
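The opacity change recited in the steps above can be pictured as a simple interpolation over the animation's duration. The sketch below is illustrative only: the patent fixes no numeric values, so it assumes the second opacity is 1.0 (opaque), the first opacity is 0.0 (transparent), a linear ramp, and an invented `duration` parameter.

```python
def opacity_at(t, duration, start=1.0, end=0.0):
    """Opacity of the small-window image t seconds into the transition.

    `start` plays the role of the second opacity and `end` the first
    opacity; times outside [0, duration] are clamped.
    """
    if duration <= 0:
        return end
    p = min(max(t / duration, 0.0), 1.0)
    return start + (end - start) * p
```

Driving this function once per frame and applying the result to the first area's layer reproduces the described fade-out of the small window.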
  • in the shooting method provided by the embodiment of the present application, before the user operation is detected, the first image and the second image are displayed in the first area and the second area of the display screen; this is one display picture. Afterwards, the second image is displayed in the third area of the display screen, which is another display picture. That is, before and after the user operation is detected, the content on the display screen switches from one display picture to another.
  • after the user operation is detected, this application changes the opacity of the first image displayed in the first area, and this part of the displayed content serves as a transition animation between the two display pictures. In other words, a transition animation is added when the display screen switches from one display picture to another; the transition animation is a dynamic picture used to carry the transition from one display picture to the other.
  • the embodiment of the present application shows a specific display screen switching, that is, switching from a picture-in-picture display mode to a full-screen display mode.
  • after responding to the detected user operation, the method further includes:
  • the first area displays the first image captured by the first camera when the user operation is detected, and the second area displays the second image captured by the second camera when the user operation is detected.
  • the first image is superimposed on the second image, and the opacity of the first image displayed in the first area gradually decreases from the second opacity to the first opacity;
  • the third area displays the second image and the third image captured by the second camera when the user operation is detected, wherein the second image is superimposed and displayed on the third image.
  • the process of switching the display screen from one display picture to another is divided into two stages: in the first time period, the first image and the second image are displayed in the display mode used before the switch (the picture-in-picture display mode), and a transition animation is displayed for the transition;
  • in the second time period, the second image is displayed in the display mode used after the switch (the full-screen display mode), and at this time the second image is superimposed on the third image, so that the transition animation transitions smoothly into the switched display mode, improving the smoothness of the picture switch.
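The two time periods above can be modeled as a tiny state machine keyed on elapsed time. The stage names and period lengths below are illustrative; the text only states that the switch is split into two periods.

```python
def transition_stage(t, first_period=0.15, second_period=0.15):
    """Map seconds elapsed since the user operation to a transition stage."""
    if t < first_period:
        # first period: picture-in-picture layout, transition animation playing
        return "pip_transition"
    if t < first_period + second_period:
        # second period: full-screen layout, frozen frame over the live preview
        return "fullscreen_transition"
    # animation finished: plain full-screen live preview
    return "done"
```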
  • the Gaussian blur value of the first image gradually increases according to the first curve, and the Gaussian blur value of the second image also gradually increases according to the first curve.
  • this specific animation processing makes the display picture look more natural when real-time image data is subsequently loaded in the display area where the second image is located.
  • the opacity of the first image displayed in the first area gradually decreases from the second opacity to the first opacity according to the second curve.
  • in this way, the change of picture transparency in the transition animation is more natural, which improves the user's visual experience.
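The "first curve" and "second curve" are not specified numerically in this text. Animation frameworks commonly express such curves as cubic Bezier easing functions with endpoints (0,0) and (1,1) and two control points; the sketch below evaluates one by bisection on the x component. The ease-in-out control points are an assumption for illustration.

```python
def _bezier(a, b, s):
    # one scalar component of a cubic Bezier with endpoints 0 and 1
    return 3 * a * s * (1 - s) ** 2 + 3 * b * s * s * (1 - s) + s ** 3

def ease(t, p1x=0.42, p1y=0.0, p2x=0.58, p2y=1.0):
    """Map linear progress t in [0, 1] through a cubic Bezier easing curve."""
    lo, hi = 0.0, 1.0
    for _ in range(50):  # bisection: find s with x(s) == t
        mid = (lo + hi) / 2
        if _bezier(p1x, p2x, mid) < t:
            lo = mid
        else:
            hi = mid
    return _bezier(p1y, p2y, (lo + hi) / 2)
```

Feeding `ease(t)` into the opacity or Gaussian-blur parameter instead of raw `t` gives the slow-start, slow-end change that makes the transition feel natural.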
  • before the third image is displayed in the third area of the display screen, the method further includes:
  • the opacity of the second image gradually decreases from the second opacity to the first opacity according to the second curve.
  • the second image gradually becomes transparent, so that the third image covered by the second image is gradually revealed and clearly displayed in its display area; the transition animation then completes and the switched display picture is shown, so that the third image is displayed in the switched display mode.
  • the embodiment of the present application provides a method for shooting video. The method is applied to an electronic device that includes a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen. The method includes:
  • the first area of the display screen displays the first image collected in real time from the first camera, or displays the second image collected in real time from the second camera, wherein the first area is the entire display area of the display screen;
  • a prefabricated picture is displayed in the second area of the display screen, and the opacity of the prefabricated picture gradually increases from the first opacity to the second opacity;
  • the second area of the display screen displays the first image captured by the first camera in real time
  • the third area of the display screen displays the second image collected by the second camera in real time.
  • the third area is the entire display area of the display screen, and the second area is smaller than the third area.
  • the embodiment of the present application shows another specific switching of the display screen, that is, switching from the full-screen display mode to the picture-in-picture display mode, and the display mode of the transition animation may be the picture-in-picture display mode.
  • this embodiment mirrors the embodiment provided in the first aspect.
  • for the technical effects of this embodiment and its possible designs, refer to the beneficial effects introduced for the first aspect and its possible designs; details are not repeated here.
  • the first area displays a third image
  • the third image is the first image captured by the first camera when the user operation is detected, or the third image is the second image captured by the second camera when the user operation is detected
  • a prefabricated picture is displayed in the second area of the display screen, and the opacity of the prefabricated picture gradually increases from the first opacity to the second opacity;
  • the second area displays the prefabricated picture and the first image captured by the first camera in real time, wherein the prefabricated picture is superimposed on the first image
  • the third area displays the third image and the second image captured by the second camera in real time, wherein the third image is superimposed on the second image.
  • the Gaussian blur value of the third image gradually increases according to the first curve.
  • the opacity of the prefabricated picture gradually increases from the first opacity to the second opacity according to the second curve.
  • the second area of the display screen displays the first image captured by the first camera in real time
  • the third area of the display screen displays the second image captured by the second camera in real time
  • the opacity of the prefabricated picture gradually decreases from the second opacity to the first opacity according to the second curve;
  • the opacity of the third image gradually decreases from the second opacity to the first opacity according to the second curve.
  • an embodiment of the present application provides an electronic device, including a memory for storing computer program instructions and a processor for executing the program instructions, wherein, when the computer program instructions are executed by the processor, the electronic device is triggered to execute the method described in the first aspect/second aspect and any possible design thereof.
  • the embodiment of the present application provides a computer-readable storage medium. The computer-readable storage medium includes a stored program, wherein, when the program runs, the device where the computer-readable storage medium is located is controlled to execute the method described in the first aspect/second aspect and any possible design thereof.
  • the embodiment of the present application provides a computer program product. The computer program product includes executable instructions, and when the executable instructions are executed on a computer, the computer executes the method described in the first aspect/second aspect and any possible design thereof.
  • FIG. 1A is a hardware architecture diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1B is a software architecture diagram of an electronic device provided by an embodiment of the present application.
  • FIGS. 2A-2C are schematic diagrams of a set of user interfaces provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a shooting scene provided by an embodiment of the present application.
  • FIGS. 4A-4E are schematic diagrams of another set of interfaces provided by an embodiment of the present application.
  • FIGS. 5A-5K are schematic diagrams of another set of interfaces provided by an embodiment of the present application.
  • FIGS. 6A-6F are schematic diagrams of another set of interfaces provided by an embodiment of the present application.
  • FIGS. 7A-7B are schematic diagrams of another set of interfaces provided by an embodiment of the present application.
  • FIG. 8A is a flow chart of animation processing in the method provided by an embodiment of the present application.
  • FIGS. 8B-8F are schematic diagrams of another set of interfaces provided by an embodiment of the present application.
  • FIGS. 9A and 9B are schematic diagrams of a set of curves provided by an embodiment of the present application.
  • FIG. 10A is another flow chart of animation processing in the method provided by an embodiment of the present application.
  • FIGS. 10B-10F are schematic diagrams of another set of interfaces provided by an embodiment of the present application.
  • the terms "first" and "second" in the following embodiments of the present application are used for descriptive purposes only, and should not be understood as indicating or implying relative importance or implicitly specifying the number of the indicated technical features. Therefore, a feature defined as "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments of the present application, unless otherwise specified, "multiple" means two or more.
  • the term "user interface" (UI) in the following embodiments of this application is a medium interface for interaction and information exchange between an application program or an operating system and a user; it realizes the conversion between the internal form of information and a form acceptable to the user.
  • the user interface is defined by source code written in a specific computer language, such as Java or the extensible markup language (XML).
  • the source code of the interface is parsed and rendered on the electronic device, and finally presented as content that can be recognized by the user.
  • the commonly used form of user interface is the graphical user interface (GUI), which refers to a user interface, displayed in a graphical way, that relates to computer operations. It may include text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars, widgets, and other visible interface elements displayed on the display screen of the electronic device.
  • the shooting preview interface refers to the interface displayed by the electronic device when shooting, and can be used to display images collected by the camera and multiple controls.
  • the plurality of controls may include a flash control for turning on/off the flash, a beauty control for turning on/off the beauty function, a shutter control for starting/stopping shooting, and the like.
  • Single-lens shooting refers to the mode in which an electronic device only uses one camera to shoot.
  • the single-lens shooting may include a front shooting mode, a rear shooting mode, and the like.
  • the front-facing shooting mode refers to a mode in which the electronic device shoots through a front-facing camera.
  • the images captured by the front camera can be displayed in real time on the shooting preview interface.
  • the rear shooting mode refers to a mode in which an electronic device shoots through a rear camera.
  • the image captured by the rear camera can be displayed in real time on the shooting preview interface.
  • Multi-camera shooting refers to a mode in which an electronic device shoots with two or more cameras.
  • multi-camera shooting may include a front-and-rear shooting mode, a front-front shooting mode, a rear-rear shooting mode, a picture-in-picture shooting mode, and the like.
  • the front and rear shooting mode refers to the mode in which the electronic device can simultaneously shoot through the front camera and the rear camera.
  • the electronic device can simultaneously display the images (for example, the first image and the second image) captured by the front camera and the rear camera in the shooting preview interface, and the first image and the second image are spliced for display.
  • when the electronic device is placed vertically, the first image and the second image can be spliced top and bottom; when the electronic device is placed horizontally, the first image and the second image can be spliced left and right.
  • the display area of the first image is the same as the display area of the second image.
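The orientation-dependent splicing described above can be sketched as a layout computation. The equal split matches the statement that the two display areas are the same; the `(x, y, w, h)` rect convention is an assumption of the example.

```python
def splice_rects(width, height, portrait):
    """Return the two preview regions for the spliced display:
    top/bottom when the device is vertical, left/right when horizontal."""
    if portrait:
        half = height // 2
        return (0, 0, width, half), (0, half, width, height - half)
    half = width // 2
    return (0, 0, half, height), (half, 0, width - half, height)
```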
  • the front-front shooting mode is similar to the front-and-rear shooting mode; the difference is that the front-front shooting mode uses two front-facing cameras to shoot at the same time.
  • Rear-rear shooting mode refers to a mode in which an electronic device can simultaneously shoot through two rear-facing cameras.
  • the electronic device can simultaneously display the images captured by the two rear cameras in the shooting preview interface (for example, the first image and the second image), and the first image and the second image are spliced for display.
  • when the electronic device is placed vertically, the first image and the second image can be spliced top and bottom; when the electronic device is placed horizontally, the first image and the second image can be spliced left and right.
  • the picture-in-picture shooting mode refers to a mode in which an electronic device can simultaneously shoot through two cameras.
  • the electronic device can simultaneously display the images captured by the two cameras (for example, the first image and the second image) in the shooting preview interface.
  • the second image is displayed in the entire display area of the shooting preview interface, the first image is superimposed on the second image, and the display area of the first image is smaller than that of the second image.
  • the first image can be positioned to the lower left of the second image.
  • the above two cameras may include two front cameras, two rear cameras, or one front camera and one rear camera.
  • the split-screen display mode means that the display screen of an electronic device can display images captured by two cameras (for example, a first image and a second image), and the first image and the second image are spliced and displayed on the display screen.
  • the picture-in-picture display mode means that the display screen of the electronic device can display images captured by two cameras (for example, the first image and the second image): the second image is displayed over the entire display area of the display screen, and the first image is displayed in a small window whose display area is smaller than that of the display screen; that is, the first image is superimposed on the second image, and the display area of the first image is smaller than the display area of the second image.
  • Full-screen mode means that the display screen of an electronic device can display an image captured by any camera in a full screen, and the display area of the image is the display area of the display screen.
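The picture-in-picture layout above can be made concrete with a small window-placement sketch. The scale factor, margin, and lower-left anchor are illustrative assumptions; the text only says the small window is smaller than the screen and may sit at the lower left of the full-screen image.

```python
def pip_window_rect(screen_w, screen_h, scale=0.3, margin=24):
    """Rect (x, y, w, h) of the small picture-in-picture window,
    anchored at the lower left of the full-screen image."""
    w, h = int(screen_w * scale), int(screen_h * scale)
    return margin, screen_h - h - margin, w, h
```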
  • User operation refers to the operation performed by the user when switching display screens. For example, it may be switching display screens by touching a switching button on the display screen, or switching display screens by using air gestures.
  • the direction corresponding to the user operation may refer to the direction to which the switching button on the display screen points, or may refer to the movement direction of the gesture when the user performs the air gesture. For example, if the air gesture is "moving the palm from left to right", the direction corresponding to the user operation is from left to right. For another example, if the pointing or indication of the switching button means switching from left to right, then after the button is clicked, the direction corresponding to the user operation is also from left to right.
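Classifying the movement direction of an air gesture from tracked palm positions can be sketched as below. The pixel threshold and the four-way classification are assumptions of the example, not details from the patent.

```python
def gesture_direction(start, end, threshold=50):
    """Dominant direction of a palm movement from start to end (x, y) points;
    returns None when the movement is too small to count as a gesture."""
    dx, dy = end[0] - start[0], end[1] - start[1]
    if max(abs(dx), abs(dy)) < threshold:
        return None
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"
    return "down" if dy > 0 else "up"
```

A "moving the palm from left to right" gesture, for instance, maps to "right", which the device would interpret as a left-to-right switch direction.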
  • the first camera may refer to a front camera or a rear camera; in this embodiment of the application, the first camera generally refers to the front camera.
  • the second camera may refer to a front camera or a rear camera; in this embodiment of the present application, the second camera generally refers to the rear camera.
  • the first image: the image captured by the front camera.
  • the second image: the image captured by the rear camera.
  • the third image: in the embodiment of this application, it generally refers to the image captured by the front camera after blurring, transparency processing, cropping or magnification, or a prefabricated picture (including the first prefabricated picture and the second prefabricated picture).
  • electronic devices can provide users with multiple shooting modes and can switch among them, for example: switching from the front shooting mode to the rear shooting mode; switching from the rear shooting mode to the front shooting mode; switching from the front/rear shooting mode to the front-and-rear shooting mode; or switching from the front-and-rear shooting mode to the front shooting mode, rear shooting mode, rear-rear shooting mode or picture-in-picture shooting mode.
  • in different shooting modes, the electronic device uses different cameras, and the images captured by the different cameras differ. Therefore, when the electronic device switches the shooting mode, the camera it uses (called the preview camera) also changes, and the display picture in the shooting preview interface changes accordingly. However, it takes a certain amount of time for the electronic device to start a different camera, and it also takes a certain amount of time for the pictures captured by different cameras to be displayed in different modes on the shooting preview interface.
  • the embodiment of the present application provides a method for shooting video, which supports the electronic device in applying animation processing when the display picture in the shooting preview interface is switched along with the shooting mode, so that the switch looks smoother and more vivid, improving the user experience.
  • the method for shooting video provided in the embodiment of the present application may be applied to an electronic device including multiple cameras.
  • the electronic device can recognize a user's preset air gesture through any one of the front cameras.
  • "air gesture" is merely the name used in the embodiments of this application; such gestures may also be called mid-air gestures, hover gestures, and so on, and specifically refer to gestures input without touching the electronic device. The name does not constitute any limitation on this embodiment.
  • Electronic devices can be cell phones, tablet computers, desktop computers, laptop computers, handheld computers, notebook computers, ultra-mobile personal computers (UMPC), netbooks, cellular phones, personal digital assistants (PDA), augmented reality (AR) devices, virtual reality (VR) devices, artificial intelligence (AI) devices, wearable devices, vehicle-mounted devices, smart home devices and/or smart city devices; the embodiment of the present application does not specifically limit the type of the electronic device.
  • FIG. 1A shows a schematic diagram of a hardware structure of an electronic device provided by an embodiment of the present application.
  • the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, an earphone jack 270D, a sensor module 280, a button 290, a motor 291, an indicator 292, multiple cameras 293, a display screen 294, a subscriber identification module (SIM) card interface 295, and the like.
  • the above-mentioned sensor module 280 may include sensors such as a pressure sensor, a gyroscope sensor, an air pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor and a bone conduction sensor.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device 200 .
  • the electronic device 200 may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
• the processor 210 may include one or more processing units, for example: an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent devices, or may be integrated in one or more processors.
  • the controller may be the nerve center and command center of the electronic device 200 .
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 210 for storing instructions and data.
  • the memory in processor 210 is a cache memory.
• the memory may hold instructions or data that the processor 210 has just used or reused. If the processor 210 needs the instructions or data again, it can fetch them directly from this memory, which avoids repeated access, reduces the waiting time of the processor 210, and thereby improves system efficiency.
  • processor 210 may include one or more interfaces.
• the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship among the modules shown in this embodiment is only a schematic illustration, and does not constitute a structural limitation of the electronic device 200 .
  • the electronic device 200 may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
• the processor 210 may receive multiple consecutive images captured by the camera 293 that correspond to a specific air gesture input by the user, such as a "palm"; the processor 210 may then comparatively analyze the consecutive images, determine that the air gesture corresponding to them is a "palm", determine that the operation corresponding to the air gesture is, for example, starting or stopping recording, and then control the camera application to execute the corresponding operation.
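The comparative analysis over consecutive frames described above can be sketched as a simple majority vote over per-frame classifier results. This is a minimal illustration, not the patent's actual recognition algorithm; the gesture labels, the agreement threshold, and the `GESTURE_ACTIONS` mapping are all hypothetical names introduced for the example.

```python
from collections import Counter

# Hypothetical mapping from a recognized air gesture to a camera operation;
# the action names are illustrative, not the patent's actual command set.
GESTURE_ACTIONS = {"palm": "start_or_stop_recording"}

def recognize_air_gesture(frame_labels, min_agreement=0.8):
    """Decide the air gesture from per-frame classification labels.

    frame_labels: labels produced by some per-frame classifier for a run of
    consecutive images, e.g. ["palm", "palm", "fist", "palm"].
    Returns the gesture only if a large enough fraction of frames agree.
    """
    if not frame_labels:
        return None
    label, count = Counter(frame_labels).most_common(1)[0]
    return label if count / len(frame_labels) >= min_agreement else None

def action_for(frame_labels):
    """Map the recognized gesture (if any) to a camera operation."""
    gesture = recognize_air_gesture(frame_labels)
    return GESTURE_ACTIONS.get(gesture)
```

Requiring agreement across many consecutive frames is one way to avoid triggering recording on a single misclassified frame.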
• the corresponding operation may include, for example: invoking multiple cameras to capture images at the same time, using the GPU to synthesize the images captured by the multiple cameras by splicing or picture-in-picture (partial superimposition), and calling the display screen 294 to display the synthesized image on the shooting preview interface of the electronic device.
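The two composition modes mentioned above, splicing and picture-in-picture (partial superimposition), can be sketched on toy 2D pixel arrays. This is an assumption-laden illustration: a real implementation would operate on RGB frame buffers on the GPU, not Python lists.

```python
def picture_in_picture(main, inset, top, left):
    """Overlay `inset` onto a copy of `main` with its top-left corner at
    row `top`, column `left` (partial superimposition). Images here are
    2D lists of pixel values standing in for camera frame buffers."""
    out = [row[:] for row in main]          # copy so `main` is untouched
    for r, row in enumerate(inset):
        for c, px in enumerate(row):
            out[top + r][left + c] = px
    return out

def splice_left_right(left_img, right_img):
    """Place two equally tall images side by side (left-right splicing)."""
    return [l + r for l, r in zip(left_img, right_img)]
```

Up-down splicing would analogously concatenate the row lists of the two images.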
  • the external memory interface 220 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200.
  • the external memory card communicates with the processor 210 through the external memory interface 220 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 221 may be used to store computer-executable program codes including instructions.
  • the processor 210 executes various functional applications and data processing of the electronic device 200 by executing instructions stored in the internal memory 221 .
  • the processor 210 may execute instructions stored in the internal memory 221, and the internal memory 221 may include a program storage area and a data storage area.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data created during the use of the electronic device 200 (such as audio data, phonebook, etc.) and the like.
  • the internal memory 221 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the internal memory 221 may store picture files or recorded video files taken by the electronic device in different shooting modes.
  • the charging management module 240 is configured to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger. While the charging management module 240 is charging the battery 242 , it can also supply power to the terminal device through the power management module 241 .
  • the power management module 241 is used for connecting the battery 242 , the charging management module 240 and the processor 210 .
  • the power management module 241 receives the input from the battery 242 and/or the charging management module 240 to provide power for the processor 210 , internal memory 221 , external memory, display screen 294 , camera 293 , and wireless communication module 260 .
  • the power management module 241 and the charging management module 240 can also be set in the same device.
  • the wireless communication function of the electronic device 200 can be realized by the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor and the baseband processor.
  • the antenna 1 of the electronic device 200 is coupled to the mobile communication module 250, and the antenna 2 is coupled to the wireless communication module 260, so that the electronic device 200 can communicate with the network and other devices through wireless communication technology.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Each antenna in electronic device 200 may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • Antenna 1 can be multiplexed as a diversity antenna of a wireless local area network.
  • the antenna may be used in conjunction with a tuning switch.
  • the mobile communication module 250 can provide wireless communication solutions including 2G/3G/4G/5G applied on the electronic device 200 .
  • the mobile communication module 250 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 250 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the mobile communication module 250 can also amplify the signal modulated by the modem processor, convert it into electromagnetic wave and radiate it through the antenna 1 .
  • at least part of the functional modules of the mobile communication module 250 may be set in the processor 210 .
  • at least part of the functional modules of the mobile communication module 250 and at least part of the modules of the processor 210 may be set in the same device.
• the wireless communication module 260 can provide wireless communication solutions applied on the electronic device 200, including wireless local area network (WLAN, such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on.
  • the wireless communication module 260 may be one or more devices integrating at least one communication processing module.
  • the wireless communication module 260 receives electromagnetic waves via the antenna 2 , frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210 .
  • the wireless communication module 260 can also receive the signal to be sent from the processor 210 , frequency-modulate it, amplify it, and convert it into electromagnetic waves through the antenna 2 to radiate out.
  • the electronic device 200 realizes the display function through the GPU, the display screen 294 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 294 is used to display images, videos and the like.
  • the display screen 294 includes a display panel.
  • the electronic device 200 can realize the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294 and the application processor.
  • the ISP is used for processing the data fed back by the camera 293 .
  • Camera 293 is used to capture still images or video.
  • the electronic device 200 may include N cameras 293 , where N is a positive integer greater than 2.
  • the electronic device 200 can realize the audio function through the audio module 270 , the speaker 270A, the receiver 270B, the microphone 270C, the earphone interface 270D, and the application processor. Such as music playback, recording, etc.
  • the keys 290 include a power key, a volume key and the like.
  • the key 290 may be a mechanical key. It can also be a touch button.
  • the motor 291 can generate a vibrating prompt.
  • the motor 291 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • the indicator 292 can be an indicator light, which can be used to indicate the charging status, the change of the battery capacity, and can also be used to indicate messages, missed calls, notifications and so on.
• the number of cameras 293 may be M, where M ≥ 2 and M is a positive integer.
• the number of cameras enabled by the electronic device in multi-camera shooting may be N, where 2 ≤ N ≤ M and N is a positive integer.
  • the types of cameras 293 may be distinguished according to hardware configurations and physical locations.
• the multiple cameras included in the camera 293 can be placed on the front and back of the electronic device: the camera set on the side of the display screen 294 of the electronic device can be called a front camera, and the camera set on the side of the back cover of the electronic device can be called a rear camera.
• for another example, the multiple cameras included in the camera 293 may have different focal lengths and viewing angles: a camera with a short focal length and a larger viewing angle can be called a wide-angle camera, and a camera with a longer focal length and a smaller viewing angle can be called an ordinary camera.
• the content of the images captured by different cameras differs in that the front camera captures the scene facing the front of the electronic device, while the rear camera captures the scene facing the back of the electronic device; within the same shooting range, the wide-angle camera can photograph a larger area of the scene, and a scene shot at the same shooting distance appears smaller in its picture than in the picture of an ordinary lens.
  • the length of the focal length and the size of the viewing angle are relative concepts, and there are no specific parameter limitations. Therefore, wide-angle cameras and ordinary cameras are also a relative concept, which can be distinguished according to physical parameters such as focal length and viewing angle.
• the camera 293 includes at least one camera capable of acquiring 3D data of objects in the captured image, so that the processor 210 can recognize the operation instruction corresponding to the user's air gesture based on that 3D data.
• the camera used to obtain object 3D data can be an independent low-power camera, or an ordinary front-facing or rear-facing camera.
• the ordinary front-facing or rear-facing camera supports a low-power-consumption mode.
• when the low-power camera works, or when an ordinary front or rear camera runs in low-power-consumption mode, the frame rate of the camera is lower than that of an ordinary camera in non-low-power mode, and the output image is in black-and-white format.
• for example, an ordinary camera can output 30, 60, 90, or 240 frames of images in 1 second, while the low-power camera, or an ordinary front or rear camera running in low-power-consumption mode, may output, for example, 2.5 frames of images in 1 second; when the camera captures a first image representing an air gesture, it can be switched to output 10 frames of images in 1 second, so that the operation instruction corresponding to the air gesture can be accurately identified through recognition of multiple consecutive images.
• the pixel resolution of the image captured by the low-power camera is lower than that of an image captured by an ordinary camera, which further reduces power consumption compared with an ordinary camera working in low-power-consumption mode.
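The frame-rate switch described above, idling at a low rate and ramping up once a candidate gesture frame is seen, can be modeled as a tiny state machine. The class and its numbers follow the example rates in the description (2.5 fps idle, 10 fps active); everything else is an illustrative sketch, not the patent's implementation.

```python
LOW_FPS = 2.5     # idle low-power sampling rate (example rate from the text)
ACTIVE_FPS = 10   # rate after a candidate gesture frame is seen

class LowPowerGestureCamera:
    """Toy model of the low-power frame-rate switch: the camera idles at a
    low frame rate, and ramps up once one frame looks like a gesture so the
    gesture can then be confirmed from multiple consecutive images."""
    def __init__(self):
        self.fps = LOW_FPS

    def on_frame(self, looks_like_gesture: bool):
        """Process one frame; raise the rate if it resembles a gesture."""
        if looks_like_gesture:
            self.fps = ACTIVE_FPS
        return self.fps

    def reset(self):
        """Return to the idle low-power rate, e.g. after recognition ends."""
        self.fps = LOW_FPS
```

The ramp-up is one-way until `reset()`, mirroring the idea that once a gesture is suspected, enough consecutive frames must be captured to confirm it.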
  • the output ratios of the camera 293 may be different or the same.
  • the output ratio of the camera refers to the ratio of the length to the width of the image captured by the camera. Both the length and width of the image can be measured in pixels.
• the image output ratio of the camera can also be called the image output size, image dimensions, pixel size or image resolution.
  • the output ratio of common cameras can include: 4:3, 16:9 or 3:2, etc.
  • the image output ratio refers to the approximate ratio of the number of pixels in the length and width of the image captured by the camera.
• in the UI embodiment, when the electronic device is in the multi-lens shooting mode and the images captured by multiple cameras are displayed in the form of left-right or up-down splicing, the sizes of the images captured by different cameras displayed in the preview box may be the same; when the images captured by multiple cameras are displayed in the form of picture-in-picture, the sizes of the images captured by different cameras displayed in the preview frame may be different, and specifically the size of the image captured by the front camera is smaller than the size of the image captured by the rear camera.
  • camera 293 may be used to collect depth data.
• the camera 293 may have a time-of-flight (TOF) 3D sensing module or a structured-light 3D sensing module for acquiring depth information.
  • the camera used to collect depth data may be a front camera or a rear camera.
  • the ISP is used for processing the data fed back by the camera 293 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 293 .
• digital signal processors are used to process digital signals; in addition to digital image signals, they can also process other digital signals. For example, when the electronic device selects a frequency point, the digital signal processor is used to perform a Fourier transform on the frequency-point energy.
  • Video codecs are used to compress or decompress digital video.
  • An electronic device may support one or more video codecs.
  • the electronic device can play or record video in multiple encoding formats, for example: moving picture experts group (moving picture experts group, MPEG) 1, MPEG2, MPEG3, MPEG4, etc.
  • the NPU is a neural-network (NN) computing processor.
  • Applications such as intelligent cognition of electronic devices can be realized through NPU, such as: image recognition, face recognition, speech recognition, text understanding, etc.
  • the electronic device realizes the display function through the GPU, the display screen 294, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 294 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
  • the display screen 294 is used to display images, videos and the like.
  • Display 294 includes a display panel.
• the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light-emitting diodes (QLED), etc.
  • the electronic device may include 1 or N display screens 294, where N is a positive integer greater than 1.
  • the display screen 294 can be used to display images taken by any one camera 293, for example, in the preview box, multiple frames of images taken by one camera are displayed, or in a saved video file, images from one camera 293 are displayed. Multi-frame images, or display a photo from a camera 293 in a saved picture file.
• in the multi-camera shooting mode, the display screen 294 can display multiple frames of images from multiple cameras in the preview frame; when the electronic device saves the video files or picture files taken by the multiple cameras, the display screen can display multiple frames of images from the multiple cameras 293 in the saved video files, or display in the saved picture files a photo synthesized from multiple photos from the multiple cameras 293.
  • the display screen 294 can display multiple images from multiple cameras 293 by splicing or picture-in-picture, so that the multiple images from the multiple cameras 293 can be presented to the user at the same time.
  • the processor 210 in the multi-camera shooting mode, can synthesize multiple frames of images from multiple cameras 293 .
  • multiple video streams from multiple cameras 293 are combined into one video stream, and the video encoder in the processor 210 can encode the combined data of one video stream to generate a video file.
  • each frame of image in the video file may contain multiple images from multiple cameras 293 .
  • the display screen 294 can display multiple images from multiple cameras 293, so as to show the user multiple images of different ranges, different resolutions, or different detail information at the same moment or in the same scene. image screen.
• the processor 210 can associate the image frames from different cameras 293, so that when the captured pictures or videos are played, the display screen 294 can display the associated image frames in the viewfinder at the same time.
  • videos simultaneously recorded by different cameras 293 may be stored as different videos, and pictures simultaneously recorded by different cameras 293 may be stored as different pictures respectively.
  • the multiple cameras 293 can respectively capture images at the same frame rate, that is, the multiple cameras 293 capture the same number of image frames at the same time.
  • Videos from different cameras 293 may be stored as different video files, and the different video files are related to each other.
  • the image frames are stored in the video file according to the order in which the image frames are collected, and the different video files include the same number of image frames.
• the display screen 294 can display the image frames contained in the associated video files according to a preset or user-instructed layout, so that the multiple frames of images in the same sequence position in different video files are displayed on the same interface.
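Because the associated video files store frames in capture order and contain the same number of frames, pairing the frames that belong on the same interface reduces to grouping by sequence index. A minimal sketch under those assumptions:

```python
def frames_for_display(video_files):
    """Group the i-th frame of each associated video file, so frames in the
    same sequence position across files can be shown on the same interface.

    video_files: a list of frame sequences, one per camera; the description
    assumes they hold the same number of frames, stored in capture order.
    """
    return list(zip(*video_files))
```

For example, two associated two-frame files yield two display groups, each containing one frame per camera.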
  • the multiple cameras 293 can respectively capture images at the same frame rate, that is, the multiple cameras 293 capture the same number of image frames at the same time.
• the processor 210 can stamp a time stamp on each frame of image from the different cameras 293, so that when the recorded video is played, the display screen 294 can simultaneously display multiple frames of images from the multiple cameras 293 on the same interface according to the time stamps.
• the display screen 294 can simultaneously display different images from multiple cameras 293 by splicing left and right, splicing up and down, or picture-in-picture, so that different images from the multiple cameras 293 can be presented to the user at the same time.
  • the processor 210 such as a controller or a GPU can synthesize different images from multiple cameras 293 .
  • multiple video streams from multiple cameras 293 are combined into one video stream, and the video encoder in the processor 210 can encode the combined data of one video stream to generate a video file.
  • each frame of image in the video file may contain multiple images from multiple cameras 293 .
• the display screen 294 can display multiple images from multiple cameras 293 to show the user multiple image frames with different content, different depth of field or different pixels at the same moment or in the same scene.
  • multiple photos from multiple cameras 293 are merged into one, and the video encoder in the processor 210 can encode the combined photo data to generate a picture file.
  • a photo in the picture file can contain multiple photos from multiple cameras 293 .
  • the display screen 294 can display multiple photos from multiple cameras 293 to show the user multiple image frames with different contents, different depths of field or different pixels at the same moment or in the same scene.
• the processor 210 can associate the image frames from different cameras 293, so that when the captured pictures or videos are played, the display screen 294 can display the associated image frames in the preview box at the same time.
  • the videos simultaneously recorded by different cameras 293 can be stored as different video files, and the photos taken by different cameras 293 can be stored as different picture files respectively.
  • the multiple cameras 293 may separately capture images at the same frame rate, that is, the multiple cameras 293 capture the same number of image frames at the same time.
  • Videos from different cameras 293 may be stored as different video files, and the different video files are related to each other.
  • the image frames are stored in the video file according to the order in which the image frames are collected, and the different video files include the same number of image frames.
• the display screen 294 can display the image frames contained in the associated video files according to a preset or user-instructed layout, so that the multiple frames of images in the same sequence position in different video files are displayed on the same interface.
  • the multiple cameras 293 may separately capture images at the same frame rate, that is, the multiple cameras 293 capture the same number of image frames at the same time.
• the processor 210 can stamp a time stamp on each frame of image from the different cameras 293, so that when the recorded video is played, the display screen 294 can simultaneously display multiple frames of images from the multiple cameras 293 on the same interface according to the time stamps.
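The timestamp-based pairing above can be sketched as nearest-neighbor matching on frame timestamps. This is a simplified illustration: the millisecond tolerance is an assumed parameter, not a value from the patent.

```python
def align_by_timestamp(streams, tolerance_ms=20):
    """For each frame of the first stream, pick the closest-in-time frame
    from every other stream, so frames stamped at (almost) the same moment
    are displayed together on the same interface.

    streams: a list of frame streams, each a list of (timestamp_ms, frame)
    pairs. `tolerance_ms` is an illustrative bound on acceptable skew."""
    aligned = []
    for ts, frame in streams[0]:
        group = [frame]
        for other in streams[1:]:
            best = min(other, key=lambda tf: abs(tf[0] - ts))
            if abs(best[0] - ts) <= tolerance_ms:
                group.append(best[1])
        aligned.append((ts, tuple(group)))
    return aligned
```

Timestamp matching tolerates small start-time and frame-rate skew between cameras, unlike pure index-based pairing.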
• the electronic device is usually held in the user's hand while shooting, and hand-held shooting usually makes the captured picture shake.
• the processor 210 may perform anti-shake processing on the image frames collected by the different cameras 293 respectively; the display screen 294 then displays the images after anti-shake processing.
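One common form of such anti-shake processing is smoothing the estimated per-frame camera motion and compensating for the difference. The sketch below is a stand-in for whatever anti-shake algorithm the device actually uses; the moving-average approach and window size are assumptions for illustration.

```python
def stabilize_offsets(raw_offsets, window=3):
    """Smooth per-frame camera motion estimates with a moving average.

    raw_offsets: list of estimated (dx, dy) shifts, one per frame.
    Subtracting the smoothed path from the raw path gives the correction
    that would be applied to each frame before display."""
    smoothed = []
    for i in range(len(raw_offsets)):
        lo = max(0, i - window + 1)
        win = raw_offsets[lo:i + 1]        # trailing window of estimates
        smoothed.append((sum(dx for dx, _ in win) / len(win),
                         sum(dy for _, dy in win) / len(win)))
    return smoothed
```

A trailing window is used so each frame can be corrected as soon as it arrives, which suits a live preview better than a centered window.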
  • the SIM card interface 295 is used for connecting a SIM card.
  • the SIM card can be inserted into the SIM card interface 295 or pulled out from the SIM card interface 295 to realize contact and separation with the electronic device.
  • the electronic device can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 295 can support Nano SIM card, Micro SIM card, SIM card etc. Multiple cards can be inserted into the same SIM card interface 295 at the same time. The types of the multiple cards may be the same or different.
  • the SIM card interface 295 is also compatible with different types of SIM cards.
  • the SIM card interface 295 is also compatible with external memory cards.
  • the electronic device interacts with the network through the SIM card to realize functions such as calling and data communication.
  • the electronic device adopts an eSIM, that is, an embedded SIM card.
  • the eSIM card can be embedded in the electronic device and cannot be separated from the electronic device.
  • the display screen 294 is used to display images, videos and the like.
  • the electronic device may include 1 or N display screens 294, where N is a positive integer greater than 1.
  • the display screen 294 can be used to display images captured by any one or more cameras 293, for example, displaying multiple frames of images captured by a camera in the shooting preview interface, or displaying images in saved video files. Multi-frame images from a camera 293, or display a photo from a camera 293 in a saved picture file.
  • the SIM card interface 295 is used for connecting a SIM card.
  • the SIM card can be connected and separated from the electronic device 200 by inserting it into the SIM card interface 295 or pulling it out from the SIM card interface 295 .
  • the electronic device 200 may support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • SIM card interface 295 can support Nano SIM card, Micro SIM card, SIM card etc.
  • Fig. 1B is a block diagram of the software structure of the electronic device according to the embodiment of the present invention.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
• the Android system is divided into four layers, which are, from top to bottom, the application layer, the application framework layer, the Android runtime and system libraries, and the kernel layer.
  • the application layer can consist of a series of application packages.
  • the application package may include application programs such as camera, gallery, calendar, call, map, navigation, WLAN, Bluetooth, music, video, and short message.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include window manager, content provider, view system, phone manager, resource manager, notification manager, etc.
  • a window manager is used to manage window programs.
  • the window manager can get the size of the display screen, determine whether there is a status bar, lock the screen, capture the screen, etc.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Said data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
  • the phone manager is used to provide communication functions of electronic devices. For example, the management of call status (including connected, hung up, etc.).
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
• the notification manager enables applications to display notification information in the status bar; it can be used to convey notification-type messages, which can disappear automatically after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
  • the notification manager can also present a notification on the top status bar of the system in the form of a graph or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window.
  • for example: prompting text information in the status bar, issuing a prompt sound, vibrating the electronic device, flashing the indicator light, and so on.
  • the Android Runtime includes core library and virtual machine. The Android runtime is responsible for the scheduling and management of the Android system.
  • the core library consists of two parts: one part is the functions that the Java language needs to call, and the other part is the core library of Android.
  • the application layer and the application framework layer run in virtual machines.
  • the virtual machine executes the java files of the application program layer and the application program framework layer as binary files.
  • the virtual machine is used to perform functions such as object life cycle management, stack management, thread management, security and exception management, and garbage collection.
  • a system library can include multiple function modules. For example: surface manager (surface manager), media library (Media Libraries), 3D graphics processing library (eg: OpenGL ES), 2D graphics engine (eg: SGL), etc.
  • the surface manager is used to manage the display subsystem and provides the fusion of 2D and 3D layers for multiple applications.
  • the media library supports playback and recording of various commonly used audio and video formats, as well as still image files, etc.
  • the media library can support a variety of audio and video encoding formats, such as: MPEG4, H.264, MP3, AAC, AMR, JPG, PNG, etc.
  • the 3D graphics processing library is used to implement 3D graphics drawing, image rendering, compositing, and layer processing, etc.
  • 2D graphics engine is a drawing engine for 2D drawing.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
  • the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes touch operations into original input events (including touch coordinates, time stamps of touch operations, and other information). Raw input events are stored at the kernel level.
  • the application framework layer obtains the original input event from the kernel layer, and identifies the control corresponding to the input event. Take, as an example, the touch operation being a tap operation and the corresponding control being the icon of the camera application.
  • the camera application calls the interface of the application framework layer to start the camera application, and then starts the camera driver by calling the kernel layer.
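The event flow described above — a hardware interrupt becomes a raw input event at the kernel layer, and the application framework layer resolves the event's touch coordinates to a control and launches the corresponding application — can be sketched roughly as follows. This is an illustrative Python sketch, not the actual Android implementation; all class, field, and function names here are assumptions:

```python
import time

class RawInputEvent:
    """Raw input event as produced by the kernel layer from a hardware interrupt."""
    def __init__(self, x, y, timestamp):
        self.x = x                  # touch coordinates
        self.y = y
        self.timestamp = timestamp  # time stamp of the touch operation

class FrameworkLayer:
    """Framework layer: maps a raw event's coordinates to a registered control."""
    def __init__(self):
        self.controls = []          # (name, (left, top, right, bottom), callback)

    def register_control(self, name, rect, callback):
        self.controls.append((name, rect, callback))

    def dispatch(self, event):
        # Identify the control whose bounds contain the touch coordinates.
        for name, (l, t, r, b), callback in self.controls:
            if l <= event.x < r and t <= event.y < b:
                callback()          # e.g. start the camera driver via the kernel layer
                return name
        return None                 # no control hit

# Example: a tap on the camera application icon starts the camera application.
started = []
fw = FrameworkLayer()
fw.register_control("camera_icon", (0, 0, 100, 100),
                    lambda: started.append("camera"))
hit = fw.dispatch(RawInputEvent(x=50, y=50, timestamp=time.time()))
```

Here `dispatch` stands in for the framework identifying "the control of the camera application icon" and the callback stands in for starting the camera driver through the kernel layer.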
  • Camera 293 captures still images or video.
  • the touch operation received by the touch sensor may be replaced by the operation of the camera 293 collecting the gesture input by the user.
  • the corresponding hardware interrupt is sent to the kernel layer.
  • the kernel layer processes the air gesture operation into the original input event (including the image of the air gesture, the time stamp of the air gesture operation and other information).
  • Raw input events are stored at the kernel level.
  • the application framework layer obtains the original input event from the kernel layer, and identifies the operation corresponding to the input event. Taking the air gesture operation as an operation of switching the shooting mode as an example, the camera application calls the interface of the application framework layer, and then starts other camera drivers by calling the kernel layer, so as to switch to other cameras 293 to capture still images or videos.
  • the mobile phone may display a main interface 301 .
  • the main interface 301 may include an icon 302 of the camera application.
  • the mobile phone may receive an operation of clicking the icon 302 by the user, and in response to the operation, the mobile phone may start the camera application and display the shooting preview interface 303 of the camera application.
  • the camera application is an image capturing application on electronic devices such as smartphones and tablet computers, which may be a system application or a third-party application, and this application does not limit the name of the application. That is to say, the user can click the icon 302 of the camera application to open the shooting preview interface 303 of the camera application.
  • the user can also invoke the camera application in other applications to open the shooting preview interface 303 , for example, the user clicks on a shooting control in a social application to open the shooting preview interface 303 .
  • This social application can support users to share captured pictures or videos with others.
  • the shooting preview interface 303 may be a user interface of the default shooting mode of the camera application, for example, it may be a user interface provided when the camera application is in the front shooting mode. It can be understood that the default shooting mode can also be other, such as a rear shooting mode, a front and rear shooting mode, and the like. Alternatively, the shooting preview interface 303 may be a user interface of the shooting mode that the camera application was in when the camera application was exited last time.
  • FIG. 2B is illustrated by taking the shooting preview interface 303 as an example corresponding to the shooting preview interface when the camera application is in the front shooting mode.
  • the shooting preview interface 303 may include a preview image 304 , shooting mode options 305 , flash control, shutter control, and the like.
  • the preview image 304 is an image collected by the camera 293 in real time.
  • the electronic device can refresh the image displayed on the shooting preview interface 303 (ie, the preview image 304 ) in real time, so that the user can preview the image currently captured by the camera 293 .
  • the shooting mode option 305 is used to provide multiple shooting modes for the user to choose.
  • the various shooting modes may include: photo taking 305a, video recording 305b, multi-lens video recording 305c, real-time blurring, panorama and so on.
  • the electronic device may receive an operation of the user sliding left/right on the shooting mode option 305, and in response to the operation, the electronic device may enable the shooting mode selected by the user. It should be noted that, not limited to what is shown in FIG. 2B , more or fewer options than those shown in FIG. 2B may be displayed in the shooting mode option 305 .
  • the shooting mode corresponding to the photo taking 305a is the commonly used single-lens shooting, which may include a front shooting mode, a rear shooting mode, and the like. That is, when the photo taking 305a is selected, the electronic device can take a photo through the front camera or a rear camera.
  • for the front shooting mode and the rear shooting mode, refer to the foregoing description; details are not repeated here.
  • the shooting modes corresponding to the multi-lens video recording 305c may include multiple shooting modes, for example, multiple shooting modes under multi-lens shooting, multiple shooting modes under single-lens shooting, and the like. That is, when the multi-lens video recording 305c is selected, the electronic device can perform single-lens shooting through one camera, or perform multi-lens shooting through multiple cameras.
  • multiple shooting modes under multi-lens shooting reference may be made to the specific description above, and details will not be repeated here.
  • the photo taking 305a is in a selected state, that is, the electronic device is currently in the photo mode. If the user wishes to enable the multi-lens video recording mode, the user can slide the shooting mode option 305 to the left and select the multi-lens video recording 305c. When detecting that the user slides the shooting mode option 305 to the left and selects the multi-lens video recording 305c, the electronic device can start the multi-lens video recording mode and display the shooting preview interface 303 shown in FIG. 2C.
  • after entering the multi-lens video recording mode, the electronic device can turn on the front camera and the rear camera, and the shooting preview interface 303 simultaneously displays an image 306a collected by the front camera and an image 306b collected by the rear camera, with the image 306a and the image 306b spliced together. Since the electronic device is placed vertically, the image 306a and the image 306b are spliced up and down.
  • the front camera and the rear camera can be enabled by default, and the image captured by the front camera and the image captured by the rear camera can be spliced and displayed on the shooting preview interface.
  • for example, in the display manner shown in FIG. 2C.
  • the cameras enabled by default are not limited to the front camera and the rear camera, and may also be two rear cameras, a single front camera, a single rear camera, and the like.
  • the image display mode is not limited to the splicing mode, but may also be a picture-in-picture mode, etc., which is not specifically limited here.
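The splicing rule described above (vertical placement gives an up-down splice, horizontal placement a left-right splice) can be sketched as a simple layout computation. This is an illustrative sketch; the function name and the resolution values in the example are assumptions, not taken from the source:

```python
def splice_layout(width, height, orientation):
    """Split a preview of size (width, height) into two equal regions for the
    two camera images: top/bottom when the device is placed vertically
    (portrait), left/right when placed horizontally (landscape).

    Returns two (left, top, w, h) rectangles.
    """
    if orientation == "portrait":
        half = height // 2
        return (0, 0, width, half), (0, half, width, height - half)
    if orientation == "landscape":
        half = width // 2
        return (0, 0, half, height), (half, 0, width - half, height)
    raise ValueError("orientation must be 'portrait' or 'landscape'")

# Example: a vertically placed device splices the two images up and down.
front_area, rear_area = splice_layout(1080, 2400, "portrait")
```

The same function covers the FIG. 2C case (vertical, up-down splice) and the FIG. 4A case (horizontal, left-right splice).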
  • FIG. 3 shows a schematic diagram of a scene in which a user holds an electronic device with a selfie stick and shoots in landscape orientation, with the electronic device placed horizontally in the selfie stick. It can be seen that in the shooting scene shown in FIG. 3, or in other scenes where it is inconvenient for the user to directly touch the display screen, it is not convenient for the user to control the electronic device, for example, to start or stop recording or to switch the shooting mode. In this case, the user can control the electronic device to start or stop recording, and switch its shooting mode, through air gestures.
  • the electronic device may display a shooting preview interface 401 .
  • the shooting preview interface 401 includes a preview image 401a, a preview image 401b, a mirror control 402, a recording control 403, and the like.
  • the preview image 401a is an image collected by a rear camera
  • the preview image 401b is an image collected by a front camera.
  • the preview image 401a and preview image 401b are spliced left and right, because the electronic device is placed horizontally in the selfie stick; when the electronic device is vertically placed in the selfie stick, the preview image 401a and preview image 401b can be spliced up and down.
  • the air-gesture lens-switching control 402 allows the user to quickly turn the air-gesture lens-switching function on or off. After this function is turned on, the user can control the electronic device through specific air gestures.
  • in FIG. 4A, the air-gesture lens-switching control 402 indicates that the function is in an on state (also referred to as a first state).
  • Recording control 403 is available for the user to quickly start/stop recording video. In FIG. 4A, the recording control 403 indicates that the electronic device is in a non-recording state.
  • when the user wants to start recording, the user can first input an air gesture facing the electronic device, for example, the "raise hand" air gesture (it can be understood that the user faces the display screen and maintains the raised-hand state; also referred to as the first air gesture).
  • the front-facing camera of the electronic device can collect an air gesture input by the user (that is, the preview image 401 b ), and display it on the shooting preview interface 401 .
  • the electronic device can also analyze and process the collected preview image 401b, and when the gesture of “raising hand” is recognized, it can display the shooting preview interface 401 as shown in FIG. 4B .
  • the shooting preview interface 401 shown in FIG. 4B is similar to the shooting preview interface 401 shown in FIG. 4A, the difference being that a prompt message 404 is displayed in the shooting preview interface 401 shown in FIG. 4B.
  • the prompt message 404 is used to indicate that the electronic device has entered the "ready" state (which can be understood as the state of being ready to further recognize the user's air gestures), and the user can input an air gesture as needed.
  • the prompt information 404 may be an icon of an air gesture.
  • the prompt information 404 may also include text information prompting the user to complete the gesture operation within a first preset time, for example, "the gesture operation needs to be completed within 3 seconds".
  • the prompt information 404 may also include a time progress bar, which may be used to indicate the time when the electronic device enters the "ready” state.
  • the electronic device starts timing from the moment it enters the "ready" state (for example, a first moment), at which point the time progress bar is blank; the electronic device stops timing when the first preset time has elapsed since the first moment, at which point the time progress bar is fully filled.
  • the user needs to input an air gesture before the time progress bar is filled (which can be understood as within the first preset time), so as to control the electronic device.
  • the user can continue to input the air gesture of "raise hand” (also called the second air gesture) until the time progress bar is filled to more than two-thirds (or any other ratio, such as one-half, two-fifths, etc.).
  • the electronic device may enter the recording preparation state and display the shooting preview interface 401 as shown in FIG. 4D .
  • for example, when the first preset time is 3 seconds, the user can keep the "raise hand" air gesture for at least 2 seconds, and the electronic device then enters the ready-to-record state.
  • the shooting preview interface 401 may only display a preview image 401 a , a preview image 401 b and a countdown reminder 405 .
  • the preview image 401b shows that the user has put down his hand (that is, the air gesture of "raising hand” is no longer input).
  • the countdown reminder 405 is used to remind the user that the electronic device will enter the recording state after the third preset time, for example, enter the recording state after 2 seconds.
  • the user can be reminded that the electronic device is about to start recording, so that the user is ready for recording. Understandably, after the electronic device enters the state of preparing for recording, the user does not need to continue to input air gestures, and can pose in any pose to prepare for shooting.
  • FIG. 4E shows a shooting preview interface 401 when the electronic device starts recording.
  • the shooting preview interface 401 shown in FIG. 4E may include a recording time 406, a recording control 407, a screenshot control 408, and the like.
  • the recording time 406 is used to indicate the recording duration of the video, such as "00:01".
  • when the electronic device detects a touch operation on the recording control 407, it can stop or pause recording the video.
  • when the electronic device detects a touch operation on the screenshot control 408, it can capture the image currently displayed on the shooting preview interface 401 (including the preview image 401a and the preview image 401b).
  • when the electronic device detects the "raise hand" air gesture input by the user (that is, the first air gesture) for the first time, it may enter the "ready" state.
  • if the electronic device then detects a further air gesture input by the user within the first preset time, the electronic device may perform the operation corresponding to that air gesture (for example, the "raise hand" air gesture may correspond to the operation of starting recording).
  • if the electronic device does not detect any further air gesture input by the user within the first preset time, the electronic device returns to the original state (that is, the state the electronic device was in before entering the "ready" state). At this time, if the user wants to control the electronic device again, the user needs to re-input the "raise hand" air gesture to make the electronic device enter the "ready" state again.
  • first air gesture and second air gesture may be the same (for example, both are “raise hand") or different, which are not specifically limited here.
  • the above air gesture of "raise hand” can also be replaced with other air gestures, such as “like” air gesture, "victory” air gesture, etc.
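The two-step recognition described above — a first "raise hand" gesture arms the "ready" state, and a further gesture must arrive within the first preset time or the device reverts to its original state — can be sketched as a small state machine. This is an illustrative sketch using the example value of a 3-second first preset time; the class, method, and gesture names are assumptions:

```python
READY_TIMEOUT = 3.0   # first preset time, in seconds (example value from the text)

class AirGestureController:
    """Two-step air-gesture recognition: a first gesture arms the 'ready'
    state; a second gesture within the timeout triggers its operation,
    otherwise the device falls back to its original state."""
    def __init__(self):
        self.state = "idle"
        self.armed_at = None

    def on_gesture(self, gesture, now):
        if self.state == "idle":
            if gesture == "raise_hand":          # first air gesture
                self.state = "ready"
                self.armed_at = now
            return None
        if self.state == "ready":
            if now - self.armed_at > READY_TIMEOUT:
                self.state = "idle"              # timed out: back to original state
                return None
            self.state = "idle"
            return gesture                        # second air gesture -> its operation

ctrl = AirGestureController()
ctrl.on_gesture("raise_hand", now=0.0)            # enter the "ready" state
action = ctrl.on_gesture("raise_hand", now=2.0)   # within 3 s -> e.g. start recording
```

If no further gesture arrives in time, the controller simply drops back to `"idle"`, matching the described requirement that the user re-input the first gesture to control the device again.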
  • the electronic device may display a shooting preview interface 401 .
  • the shooting preview interface 401 shown in FIG. 5A includes the preview image 401b collected by the front camera.
  • the user wants to switch the shooting mode of the electronic device, he may first input an air gesture facing the electronic device, for example, input an air gesture of "raise hand".
  • the front-facing camera of the electronic device can collect an air gesture input by the user (that is, the preview image 401 b ), and display it on the shooting preview interface 401 .
  • the electronic device can also analyze and process the captured preview image 401b, and when the air gesture of "raise hand" is recognized, display the shooting preview interface 401 as shown in FIG. 5B.
  • the shooting preview interface 401 shown in FIG. 5B is similar to the shooting preview interface 401 shown in FIG. 5A , the difference is that the shooting preview interface 401 shown in FIG. 5B includes prompt information 404 . Wherein, for the relevant introduction of the prompt information 404, refer to the foregoing, and details are not repeated here.
  • the electronic device when the electronic device detects the air gesture of "raise hand” input by the user (that is, the first air gesture) for the first time, it can enter the "ready” state. After the electronic device enters the "ready” state, the electronic device can determine the operation to be performed according to further detected gesture operations. That is, on the shooting preview interface 401 shown in FIG. 5B , the user can input different air gestures to switch the shooting mode of the electronic device, such as controlling the electronic device to switch the shooting mode from front shooting to other shooting modes, such as Rear shooting mode, picture-in-picture shooting mode, etc.
  • the electronic device can be switched from a front-facing shooting mode to a rear-facing shooting mode or a picture-in-picture shooting mode.
  • the following will introduce the interfaces involved in the process of switching the electronic device from the front shooting mode to the rear shooting mode or the picture-in-picture shooting mode based on the shooting scenes shown in FIGS. 5A-5B.
  • the user may input a "flip palm" air gesture (also referred to as a third air gesture) facing the display screen when the electronic device displays the shooting preview interface 401 shown in FIG. 5B.
  • the electronic device may switch the shooting mode from the front shooting mode to the rear shooting mode and display a shooting preview interface 401 as shown in FIG. 5D .
  • the image 401 a captured by the rear camera may be displayed on the shooting preview interface 401 .
  • the "flip palm" air gesture can also switch the electronic device from the rear shooting mode to the front shooting mode; the specific process is similar to the process of switching from the front shooting mode to the rear shooting mode and is not repeated here.
  • the electronic device can also switch from the front-facing shooting mode to the picture-in-picture shooting mode. Based on the shooting scenes shown in FIGS. 5A-5B , the interfaces involved in the process of switching the electronic device from the front shooting mode to the picture-in-picture shooting mode will be introduced below.
  • in response to detecting a "make a fist" air gesture (also referred to as a fourth air gesture) input by the user, the electronic device may switch the shooting mode from the front shooting mode to the picture-in-picture shooting mode and display the shooting preview interface 401 as shown in FIG. 5F.
  • the image 401 a captured by the rear camera and the image 401 b captured by the front camera can be simultaneously displayed on the shooting preview interface 401 .
  • the image 401b is superimposed and displayed on the image 401a, and the display position of the image 401a is the position where the entire shooting preview interface 401 is located.
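The gesture-to-mode switching described above ("flip palm" toggles between front and rear shooting; "make a fist" enters picture-in-picture) can be summarized as a small dispatch function. This is an illustrative sketch; the gesture and mode identifiers are assumptions:

```python
def next_mode(current_mode, gesture):
    """Map an air gesture recognized in the 'ready' state to the next
    shooting mode, per the switching behavior described above."""
    if gesture == "flip_palm":                   # third air gesture
        if current_mode == "front":
            return "rear"
        if current_mode == "rear":
            return "front"
    if gesture == "make_fist":                   # fourth air gesture
        if current_mode in ("front", "rear"):
            return "picture_in_picture"
    return current_mode                          # unrecognized: keep current mode

mode = next_mode("front", "flip_palm")           # front shooting -> rear shooting
```

A table-driven dispatch like this keeps the mapping between gestures and mode transitions in one place, which is convenient when further gestures (for example "extend the palm and then make a fist") are added.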
  • the electronic device can also switch from the rear shooting mode to the picture-in-picture shooting mode.
  • the interface involved in the process of switching the electronic device from the rear shooting mode to the picture-in-picture shooting mode will be introduced below based on the shooting scenes shown in FIGS. 5A-5B .
  • FIG. 5H is similar to FIG. 5B, the difference being that the image displayed in the shooting preview interface shown in FIG. 5H is the preview image 401a captured by the rear camera, while the image displayed in the shooting preview interface shown in FIG. 5B is the preview image 401b captured by the front camera.
  • the electronic device may switch the shooting mode from the rear shooting mode to the picture-in-picture shooting mode and display the shooting preview interface 401 as shown in FIG. 5I.
  • the image 401a captured by the rear camera and the image 401b captured by the front camera can be simultaneously displayed on the shooting preview interface 401 .
  • the image 401b is superimposed and displayed on the image 401a, and the display position of the image 401a is the position where the entire shooting preview interface 401 is located.
  • the image shown in Figure 5I is the same as the image shown in Figure 5F.
  • the picture-in-picture interface in the shooting preview interface of the picture-in-picture mode displays the preview image 401b collected by the front camera, and the rear viewfinder interface displays the preview image 401a collected by the rear camera.
  • the electronic device can also switch the display positions of the two images in the picture-in-picture mode.
  • the interface involved in the process of the electronic device changing the positions of the two images in the picture-in-picture mode will be introduced below.
  • the user may input a "flip palm" air gesture facing the display screen when the electronic device displays the prompt message 404.
  • the electronic device can switch the display position of the image and display the shooting preview interface 401 as shown in FIG. 5K .
  • the electronic device can also switch from the picture-in-picture shooting mode to other shooting modes, and this process will be described in detail below with reference to the accompanying drawings.
  • FIG. 6A shows a shooting preview interface 401 when the electronic device is in a picture-in-picture shooting mode.
  • the shooting preview interface 401 is similar to the shooting preview interface 401 shown in FIG. 5F , so details will not be described here.
  • the user wants to switch the shooting mode of the electronic device, he may first input an air gesture facing the electronic device, for example, input an air gesture of "raise hand".
  • the front-facing camera of the electronic device can collect an air gesture input by the user (that is, the preview image 401 b ), and display it on the shooting preview interface 401 .
  • the electronic device can also analyze and process the captured preview image 401b, and when recognizing the “hands up” gesture in the air, display the shooting preview interface 401 as shown in FIG. 6B .
  • the shooting preview interface 401 shown in FIG. 6B is similar to the shooting preview interface 401 shown in FIG. 6A , the difference is that the shooting preview interface 401 shown in FIG. 6B includes prompt information 404 . Wherein, for the relevant introduction of the prompt information 404, refer to the foregoing, and details are not repeated here.
  • the user can input different air gestures to switch the shooting mode of the electronic device, for example, control the electronic device to switch the shooting mode from the picture-in-picture shooting mode to the rear shooting mode, front Set the shooting mode, etc.
  • the electronic device can switch from the picture-in-picture shooting mode to the rear shooting mode.
  • the interface involved in the process of switching the electronic device from the picture-in-picture shooting mode to the rear shooting mode will be introduced below based on the shooting scenes shown in FIGS. 6A-6B .
  • the user may input an air gesture of "extend the palm and then make a fist" when the electronic device displays the shooting preview interface 401 shown in FIG. 6B.
  • the electronic device can switch the shooting mode from the picture-in-picture shooting mode to the rear shooting mode and display the shooting preview interface 401 as shown in FIG. 6D .
  • the image 401b collected by the front camera may no longer be displayed, and only the image 401a collected by the rear camera may be displayed.
  • the electronic device can be switched from a picture-in-picture shooting mode to a front-facing shooting mode.
  • the shooting preview interface 401 of the picture-in-picture shooting mode is shown in FIG. 6E, and FIG. 6E is similar to FIG.
  • the electronic device can switch the shooting mode from the picture-in-picture shooting mode to the front shooting mode and display the shooting preview interface 401 as shown in FIG. 6F .
  • the image 401a captured by the rear-facing camera may no longer be displayed, and only the image 401b captured by the front-facing camera may be displayed.
  • the interfaces shown in FIG. 5A to FIG. 6F are all interfaces related to switching the shooting mode of the electronic device during recording.
  • when the electronic device has not yet started recording, it can also recognize the user's air gestures and perform corresponding operations.
  • the principle is similar to the principle by which the electronic device recognizes air gestures and performs corresponding operations during recording, and details are not repeated here.
  • when the electronic device is placed vertically, it can also switch from the front/rear shooting mode to the picture-in-picture shooting mode, or switch from the picture-in-picture shooting mode to the front/rear shooting mode.
  • the principle is similar to that when the electronic device is placed horizontally.
  • the relevant interfaces for switching from the front/rear shooting mode to the picture-in-picture shooting mode when the electronic device is placed vertically are described below.
  • the electronic device may display a shooting preview interface 401 when the electronic device is placed upright and in a front-facing shooting mode.
  • the shooting preview interface 401 includes an image 401b captured by the front camera.
  • the front-facing camera of the electronic device can collect an air gesture input by the user (that is, the preview image 401 b ), and display it on the shooting preview interface 401 .
  • the electronic device can also analyze and process the captured preview image 401b, and when it recognizes the "hands up" gesture in the air, display the shooting preview interface 401 as shown in FIG. 7B.
  • the shooting preview interface 401 shown in FIG. 7B is similar to the shooting preview interface 401 shown in FIG. 7A , the difference is that the shooting preview interface 401 shown in FIG. 7B includes prompt information 404 . Wherein, for the relevant introduction of the prompt information 404, refer to the foregoing, and details are not repeated here.
  • the electronic device can switch from the front-facing shooting mode to the picture-in-picture shooting mode when it detects the air gesture of "stretch out the palm and make a fist" input by the user.
  • the electronic device can switch from the picture-in-picture shooting mode to the front/rear shooting mode when detecting the air gesture of "extend the palm and then make a fist" input by the user.
  • the processing flow and principle by which the electronic device processes the shooting preview interface 401 are the same in either case. The following describes, with reference to FIG. 8A to FIG. 8G, the process in which the electronic device processes the shooting preview interface 401 when the user switches from the picture-in-picture shooting mode to the rear shooting mode while the electronic device is in a horizontal state, and a series of interfaces in that process.
  • the processing flow of the shooting preview interface 401 and the series of interfaces in the processing flow are described by taking the air gesture input by the user being "extend the palm and then make a fist" as an example.
  • in response to detecting the air gesture of "extend the palm and then make a fist" input by the user, the electronic device gradually switches the shooting preview interface 401 from the picture-in-picture shooting mode to the rear shooting mode.
  • referring to FIG. 8A, the switching time of the whole process of switching from the picture-in-picture shooting mode to the rear shooting mode is a first switching period T1 (for example, the first switching period T1 is 600 ms), and the switching period can be divided into two processing time periods, which are described in detail below with reference to the figures.
  • the picture in the shooting preview interface 401 is the picture at 0 ms (the first moment) in the first switching period T1 of the picture-in-picture shooting mode switched to the rear shooting mode.
  • the shooting preview interface 401 includes a picture-in-picture interface 401c and a rear viewfinder interface 401d.
  • the display area where the picture-in-picture interface 401c is located can be the first area, and the image displayed in the picture-in-picture interface 401c can be the first image;
  • the display area where the viewfinder interface 401d is located may be the second area, and the image displayed on the rear viewfinder interface 401d may be the second image.
  • the display area of the rear viewfinder interface 401d is the area where the entire shooting preview interface 401 is located.
  • the picture-in-picture interface 401c is superimposed and displayed on the rear viewfinder interface 401d.
  • the picture-in-picture interface 401c is located at the lower left corner of the rear viewfinder interface 401d .
  • the picture-in-picture interface 401c is an image collected by the front camera
  • the rear viewfinder interface 401d is an image collected by the rear camera.
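The layout described above — the picture-in-picture interface 401c superimposed at the lower-left corner of the full-screen rear viewfinder interface 401d — can be sketched as a rectangle computation. This is an illustrative sketch; the `scale` and `margin` values are assumptions, not taken from the source:

```python
def pip_rect(screen_w, screen_h, scale=0.3, margin=24):
    """Rectangle (left, top, w, h) of the picture-in-picture window,
    superimposed at the lower-left corner of the full-screen viewfinder.

    `scale` (window size relative to the screen) and `margin` (inset from
    the screen edges, in pixels) are illustrative values.
    """
    w = int(screen_w * scale)
    h = int(screen_h * scale)
    left = margin
    top = screen_h - h - margin      # anchored to the bottom edge
    return (left, top, w, h)

# Example: a horizontally placed device with an assumed 2400x1080 screen.
rect = pip_rect(2400, 1080)
```

The full-screen rectangle of the rear viewfinder interface 401d is simply `(0, 0, screen_w, screen_h)`; the picture-in-picture rectangle is drawn on top of it.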
  • the shooting preview interface 401 shown in FIG. 8C is the picture at 300ms (the second moment) in the first switching cycle.
  • the shooting preview interface 401 includes the rear viewfinder interface 401d (at this time, the display area where the rear viewfinder interface 401d is located may be the third area); the picture-in-picture interface 401c disappears, and the rear viewfinder interface 401d in FIG. 8C is more blurred than the rear viewfinder interface 401d in FIG. 8B.
  • the shooting preview interface 401 shown in FIG. 8D is the picture at 150 ms in the first switching period.
  • the shooting preview interface 401 includes a picture-in-picture interface 401c and a rear viewfinder interface 401d, as shown in FIG. 8D
  • the shooting preview interface 401 is similar to the shooting preview interface 401 shown in FIG. 8B, the difference being that the picture-in-picture interface 401c in FIG. 8D is more blurred than the picture-in-picture interface 401c in FIG. 8B,
  • the picture-in-picture interface 401c becomes more transparent (its opacity is reduced), and the rear viewfinder interface 401d in FIG. 8D becomes more blurred than the rear viewfinder interface 401d in FIG. 8B,
  • where the blurring degree of the rear viewfinder interface 401d in FIG. 8D is between the blurring degree of the rear viewfinder interface 401d in FIG. 8B and that of the rear viewfinder interface 401d in FIG. 8C.
  • the time period from 0ms (first moment) to 300ms (second moment) is called the first time period.
  • the electronic device performs transparency processing on the picture-in-picture interface 401c.
  • the opacity (opacity) of the picture-in-picture interface 401c at 0ms (first moment) is 100% (second opacity)
  • the opacity of the picture-in-picture interface 401c at 300 ms (second moment) is 0 (first opacity).
  • In the first time period, the opacity of the picture-in-picture interface 401c changes from 100% to 0 (from the second opacity to the first opacity). To give the change process of the opacity of the picture-in-picture interface 401c a better visual effect, the change of the opacity is a process of gradual change over time, and the change trend of the opacity of the picture-in-picture interface 401c can refer to the second curve shown in FIG. 9B.
  • For example, the second curve may be the 06-Sharp curve.
  • the x-axis represents time
  • the y-axis represents transparency (a concept opposite to opacity).
  • the curvature of the second curve first increases and then decreases with time,
  • so the second curve can represent a transformation process from slow to fast, and then from fast to slow. That is,
  • the transparency of the picture-in-picture interface 401c, within the time period of 0 ms-300 ms, changes slowly at the beginning of the first time period, changes quickly in the middle of the first time period, and changes slowly again at the end of the first time period.
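The slow-fast-slow opacity fade described above can be sketched as a simple easing function. This is only a sketch: the exact shape of the 06-Sharp curve is not given in the text, so a smoothstep polynomial is used as a stand-in, and the function names are illustrative.

```python
def smoothstep(t: float) -> float:
    """Slow-fast-slow easing, used here as a stand-in for the '06-Sharp' curve."""
    t = max(0.0, min(1.0, t))
    return t * t * (3.0 - 2.0 * t)

def pip_opacity(elapsed_ms: float, duration_ms: float = 300.0) -> float:
    """Opacity of the picture-in-picture interface 401c during the first time
    period: fades from 100% (second opacity) to 0 (first opacity)."""
    return 1.0 - smoothstep(elapsed_ms / duration_ms)
```

With this stand-in, the opacity changes little near 0 ms and 300 ms and fastest around 150 ms, matching the slow-fast-slow trend described for the second curve.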
  • the electronic device also performs Gaussian blur processing on the picture-in-picture interface 401c, wherein the Gaussian blur value of the picture-in-picture interface 401c at 0 ms (first moment) is 0 (first blur value), The Gaussian blur value of the picture-in-picture interface 401c at 300 ms (second moment) is 10 (second blur value).
  • In the first time period, the Gaussian blur value of the picture-in-picture interface 401c changes from 0 to 10 (from the first blur value to the second blur value), and the change trend of the Gaussian blur value of the picture-in-picture interface 401c in the first time period
  • can refer to the first curve shown in FIG. 9A.
  • For example, the first curve may be the 05-Extreme Deceleration curve.
  • the x-axis represents time
  • the y-axis represents the Gaussian blur value. From the change trend of the first curve, it can be seen that the curvature of the first curve becomes smaller and smaller with time, and the first curve can represent a transformation process from fast to slow; that is, within the period of 0 ms-300 ms, the Gaussian blur value of the picture-in-picture interface 401c increases with time in a trend that is also from fast to slow.
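The fast-then-slow blur ramp can likewise be sketched with a decelerating easing function. The exact shape of the 05-Extreme Deceleration curve is not given in the text, so a quartic ease-out is an assumed stand-in; only the endpoints (blur 0 at 0 ms, blur 10 at 300 ms) come from the description above.

```python
def extreme_deceleration(t: float) -> float:
    """Fast-then-slow easing, a stand-in for the '05-Extreme Deceleration' curve."""
    t = max(0.0, min(1.0, t))
    return 1.0 - (1.0 - t) ** 4

def pip_blur(elapsed_ms: float, duration_ms: float = 300.0,
             start: float = 0.0, end: float = 10.0) -> float:
    """Gaussian blur value of the picture-in-picture interface 401c in the first
    time period: rises from 0 (first blur value) to 10 (second blur value)."""
    return start + (end - start) * extreme_deceleration(elapsed_ms / duration_ms)
```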
  • FIG. 8E shows the picture of the shooting preview interface 401 at 450 ms in the first switching period.
  • At this time, the rear viewfinder interface 401d displays both the real-time video picture captured by the rear camera and the last frame of image; the real-time video picture is blocked by the last frame of image displayed in the rear viewfinder interface 401d, but because the opacity of
  • the last frame of image in FIG. 8E is reduced compared with that in FIG. 8D, the picture seen at this time is the picture formed by superimposing the real-time video picture captured by the rear camera and the last frame of image.
  • FIG. 8F shows the screen of the shooting preview interface 401 at 600ms (the third moment) in the first switching period.
  • At this time, the shooting preview interface 401 includes only the rear viewfinder interface 401d (the display area where the rear viewfinder interface 401d is located may be the third area, and the image in the rear viewfinder interface 401d is the third image).
  • the display interface of the rear viewfinder interface 401d is the interface where the entire shooting preview interface 401 is located, and the images displayed on the rear viewfinder interface 401d are real-time video images collected by the rear camera.
  • The time period from 300 ms (the second moment) to 600 ms (the third moment) is called the second time period. The shooting preview interface 401 at 600 ms (the third moment) is different from the shooting preview interface 401 at 300 ms (the second moment) because the electronic device performs transparency processing on the rear viewfinder interface 401d within the second time period.
  • the opacity of the last frame of image in the rear viewfinder interface 401d is 100% (second opacity) at 300ms (second moment).
  • the opacity of the last frame of image in the rear viewfinder interface 401d is 0 (first opacity) at 600ms (third moment).
  • the opacity of the rear viewfinder interface 401d changes from 100% to 0 (from the second opacity to the first opacity).
  • FIG. 8C, FIG. 8E, and FIG. 8F sequentially show the process of the last frame of image in the rear viewfinder interface 401d changing from completely opaque to completely transparent within the second time period. After 300 ms,
  • the video stream captured by the rear camera is rendered at the display position where the rear viewfinder interface 401d is located.
  • As shown in FIG. 8C, the video stream rendered at the display position where the rear viewfinder interface 401d is located is blocked by the completely opaque last frame of image;
  • as shown in FIG. 8E, as the opacity of the last frame of image decreases, the picture in which the video stream overlaps with the last frame of image can be seen; and as shown in FIG. 8F, as the last frame of image gradually becomes completely transparent, the video picture captured by the rear camera is gradually and clearly presented in the rear viewfinder interface 401d in the shooting preview interface 401.
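The second time period is, in effect, a per-pixel cross-fade: the retained last frame sits on top of the live rear-camera stream and its opacity falls to zero. A minimal source-over blend can be sketched as follows; the pixel representation as (R, G, B) tuples is an illustrative assumption.

```python
def blend_pixel(live, last_frame, last_frame_opacity):
    """Source-over blend of the retained last frame on top of the live
    rear-camera picture; as the last frame's opacity drops from 1.0 to 0.0,
    the live picture gradually shows through. Pixels are (R, G, B) tuples."""
    a = max(0.0, min(1.0, last_frame_opacity))
    return tuple(round(top * a + bottom * (1.0 - a))
                 for top, bottom in zip(last_frame, live))
```

At opacity 1.0 only the last frame is visible (FIG. 8C); at intermediate opacities the two pictures are superimposed (FIG. 8E); at 0.0 only the live stream remains (FIG. 8F).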
  • the electronic device can also switch from a picture-in-picture shooting mode to a front-facing shooting mode.
  • The interface of the picture-in-picture shooting mode can be switched from the picture shown in FIG. 5J to the interface shown in FIG. 5K by inputting the "flip palm" air gesture, and then, according to the method introduced in the above embodiment, the picture-in-picture shooting mode is switched to the front shooting mode. Since the principle of switching from the picture-in-picture shooting mode to the front shooting mode is the same as that of switching from the picture-in-picture shooting mode to the rear shooting mode, details are not described here.
  • the electronic device can also switch from the front/rear shooting mode to the picture-in-picture shooting mode, the processing flow and principle of switching from the front shooting mode to the picture-in-picture shooting mode and from the rear shooting mode to the picture-in-picture shooting mode same.
  • The following takes switching from the rear shooting mode to the picture-in-picture shooting mode as an example.
  • With reference to FIG. 10A-10G, the following introduces the processing flow performed by the electronic device on the shooting preview interface 401 when the user switches to the picture-in-picture shooting mode, together with a series of screens in that processing flow.
  • Take as an example that the shooting preview interface before switching is the interface shown in FIG. 5F,
  • the command input by the user is "stretch out the palm and then make a fist",
  • and the shooting preview interface of the picture-in-picture shooting mode after switching is the interface
  • shown in FIG. 6A; on this basis, the processing flow of the shooting preview interface 401 and a series of interfaces in the processing flow are described.
  • the shooting preview interface 401 will gradually switch from the rear shooting mode to the picture-in-picture shooting mode.
  • The switching time of the entire process of switching from the rear shooting mode to the picture-in-picture shooting mode is the second switching period T2 (for example, the second switching period T2 is 600 ms), and the second switching period T2 can be divided into two processing time periods; the different processing time periods will be described in detail below in conjunction with the accompanying drawings.
  • the picture in the shooting preview interface 401 shown in FIG. 10B is the picture at 0 ms (the first moment) in the second switching period T2 when the rear shooting mode is switched to the picture-in-picture shooting mode.
  • The picture may be the last frame of image in the video captured in the rear shooting mode.
  • the shooting preview interface 401 includes a rear viewfinder interface 401d, and the display interface of the rear viewfinder interface 401d is the entire display interface where the shooting preview interface 401 is located.
  • the shooting preview interface 401 shown in FIG. 10C is the picture at 300ms (the second moment) in the second switching cycle T2.
  • the shooting preview interface 401 includes a picture-in-picture interface 401c and a rear viewfinder interface 401d, wherein the picture-in-picture interface 401c is superimposed and displayed on the rear viewfinder interface 401d, and the picture displayed in the picture-in-picture interface 401c is a prefabricated picture.
  • the picture-in-picture interface 401c may be displayed at the lower left corner of the rear viewfinder interface 401d.
  • the rear viewfinder interface 401d in FIG. 10C becomes more blurred than the rear viewfinder interface 401d in FIG. 10B .
  • FIG. 10D shows the picture of the shooting preview interface 401 at 150ms in the second switching period T2.
  • The shooting preview interface 401 shown in FIG. 10D is similar to the shooting preview interface 401 shown in FIG. 10C, the difference being that the opacity of the picture-in-picture interface 401c in FIG. 10D is higher than that in FIG. 10B and lower than that in FIG. 10C; FIG. 10B, FIG. 10D, and FIG. 10C sequentially show the process of the picture-in-picture interface 401c changing from transparent to opaque.
  • the blur degree of the rear viewfinder interface 401d in FIG. 10D is between that of the rear viewfinder interface 401d in FIG. 10B and FIG. 10C.
  • FIG. 10B, FIG. 10D and FIG. 10C sequentially show the process of the rear viewfinder interface 401d becoming more and more blurred .
  • the time period from 0ms (first moment) to 300ms (second moment) in the second switching cycle T2 is called the first time period.
  • The electronic device performs Gaussian blur processing on the rear viewfinder interface 401d, wherein the Gaussian blur value of the rear viewfinder interface 401d at 0 ms (the first moment) is 0 (the first blur value), and the Gaussian blur value of the rear viewfinder interface 401d at 300 ms (the second moment) is 100 (the third blur value).
  • the Gaussian blur value of the rear viewfinder interface 401d is getting higher and higher, and the change trend of the Gaussian blur value of the rear viewfinder interface 401d within the first period of time can be referred to the first curve.
  • the electronic device also performs transparency processing on the picture-in-picture interface 401c, wherein the opacity of the picture-in-picture interface 401c is 0 at 0 ms (the first moment), and the picture-in-picture interface 401c is At 300ms (the second moment) the opacity is 100%.
  • In the first time period, the opacity of the picture-in-picture interface 401c becomes higher and higher (its transparency becomes lower and lower), so that the picture-in-picture interface 401c is gradually and clearly superimposed on the rear viewfinder interface 401d.
  • The change trend of the opacity can refer to the second curve.
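The two animations running in parallel during this first time period can be combined in one per-frame state function. The end blur value of 100 and the curve assignments follow the text; the polynomial stand-ins for the first and second curves are illustrative assumptions, as the exact curve shapes are not specified.

```python
def first_period_state(elapsed_ms: float, duration_ms: float = 300.0):
    """First time period of the rear -> picture-in-picture switch: the rear
    viewfinder's Gaussian blur rises along the first curve (fast-then-slow)
    while the prefabricated picture's opacity rises along the second curve
    (slow-fast-slow). Returns (rear_blur, pip_opacity)."""
    t = max(0.0, min(1.0, elapsed_ms / duration_ms))
    rear_blur = 100.0 * (1.0 - (1.0 - t) ** 4)   # first-curve stand-in
    pip_opacity = t * t * (3.0 - 2.0 * t)        # second-curve stand-in
    return rear_blur, pip_opacity
```

A renderer would call this once per preview frame with the elapsed time since the switch began and apply the returned blur and opacity to the two layers.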
  • FIG. 10F shows the picture of the shooting preview interface 401 at 600ms (the third moment) in the second switching period T2.
  • the shooting preview interface 401 includes a picture-in-picture interface 401c and a rear viewfinder interface 401d.
  • the picture-in-picture interface 401c is superimposed and displayed on the rear viewfinder interface 401d.
  • the picture-in-picture interface 401c is preset at the lower left corner of the shooting preview interface 401 .
  • the picture displayed in the picture-in-picture interface 401c is the real-time video picture taken by the front camera
  • the picture displayed in the rear viewfinder interface 401d is the real-time video picture taken by the rear camera.
  • FIG. 10E shows the picture of the shooting preview interface 401 at 450 ms in the second switching period.
  • the shooting preview interface 401 includes a picture-in-picture interface 401c and a rear viewfinder interface 401d, and the picture-in-picture interface 401c is superimposed and displayed on the rear viewfinder interface 401d.
  • the picture displayed in the picture-in-picture interface 401c consists of the real-time video picture captured by the front camera and the prefabricated picture,
  • and the picture displayed in the rear viewfinder interface 401d consists of the real-time video picture captured by the rear camera and the last frame of image.
  • The real-time video picture in the picture-in-picture interface 401c is blocked by the prefabricated picture displayed in the picture-in-picture interface 401c, but since the prefabricated picture in FIG. 10E has a lower opacity than the prefabricated picture in FIG. 10C,
  • the picture seen in the picture-in-picture interface 401c is a picture formed by superimposing the real-time video picture captured by the front camera and the prefabricated picture.
  • Similarly, the real-time video picture in the rear viewfinder interface 401d is blocked by the last frame of image displayed in the rear viewfinder interface 401d, but since the last frame of image in FIG. 10E has a lower opacity than that in FIG. 10C, the picture seen in the rear viewfinder interface 401d at this time is the picture formed by superimposing the real-time video picture captured by the rear camera and the last frame of image.
  • The time period from 300 ms (the second moment) to 600 ms (the third moment) is called the second time period. The shooting preview interface 401 at 600 ms (the third moment) is different from the shooting preview interface 401 at 300 ms (the second moment) because the electronic device performs transparency processing on the picture-in-picture interface 401c and the rear viewfinder interface 401d within the second time period.
  • The opacity of the prefabricated picture in the picture-in-picture interface 401c is 100% (the second opacity) at 300 ms (the second moment),
  • and the opacity of the last frame of image in the rear viewfinder interface 401d
  • at 300 ms (the second moment) is also 100% (the second opacity).
  • The opacity of the prefabricated picture in the picture-in-picture interface 401c is 0 (the first opacity) at 600 ms (the third moment), and the opacity of the last frame of image in the rear viewfinder interface 401d is also 0 (the first opacity) at 600 ms (the third moment).
  • For the opacity change trend of the picture-in-picture interface 401c and the rear viewfinder interface 401d, reference may be made to the second curve.
  • FIG. 10C, FIG. 10E, and FIG. 10F sequentially show the process of the prefabricated picture in the picture-in-picture interface 401c changing from completely opaque to completely transparent in the second time period, and the process of
  • the last frame of image in the rear viewfinder interface 401d changing from completely opaque to completely transparent within the second time period. Since, after 300 ms, the video stream captured by the front camera is rendered at the display position where the picture-in-picture interface 401c is located, and the video stream captured by the rear camera is rendered at the display position where the rear viewfinder interface 401d is located, the real-time video pictures are gradually and clearly presented as the two overlaid images become transparent, as shown in FIG. 10F.
  • Each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may physically exist separately, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium.
  • Based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage
  • medium and includes several instructions to enable a computer device (which may be a personal computer, a server, a network device, etc.) or a processor to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • The aforementioned storage medium includes: a flash memory, a removable hard disk, a read-only memory, a random access memory, a magnetic disk, an optical disk, and various other media capable of storing program code.

Abstract

Embodiments of this application provide a method for shooting video and an electronic device, relating to the field of terminal technology. The method is applied to an electronic device including a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen, and the first camera is on the same side of the electronic device as the display screen. The method includes: a first area of the display screen displays a first image collected in real time by the first camera, and a second area of the display screen displays a second image collected in real time by the second camera, where the second area is the entire display area of the display screen and the first area is smaller than the second area; in response to a detected user operation, the opacity of the first image displayed in the first area gradually decreases from a second opacity to a first opacity; and a third area of the display screen displays a third image, the third image being the second image collected in real time by the second camera, and the third area being the entire display area of the display screen.

Description

A method for shooting video and an electronic device
This application claims priority to the Chinese patent application No. 202110676709.3, filed with the China National Intellectual Property Administration on June 16, 2021 and entitled "A storyline-mode-based user video creation method and electronic device"; to the Chinese patent application No. 202111437971.9, filed with the China National Intellectual Property Administration on November 29, 2021 and entitled "A method for shooting video and electronic device"; and to the Chinese patent application No. 202111673018.4, filed with the China National Intellectual Property Administration on December 31, 2021 and entitled "A method for shooting video and electronic device", all of which are incorporated herein by reference in their entireties.
Technical Field
This application relates to the field of terminal technology, and in particular, to a method for shooting video and an electronic device.
Background
With the development of electronic technology, electronic devices such as mobile phones and tablet computers are generally equipped with multiple cameras, such as a front camera, a rear camera, a wide-angle camera and so on. To provide a richer shooting and creation experience, more and more electronic devices support simultaneous shooting with multiple cameras, and users can choose a corresponding shooting mode according to their needs, for example a front shooting mode, a rear shooting mode, a front-and-rear shooting mode, a picture-in-picture shooting mode, etc., so as to record wonderful moments, moving scenes and other beautiful pictures.
During video shooting, the user may need to switch shooting modes, for example from the front-and-rear shooting mode to the picture-in-picture shooting mode. However, when switching between different shooting modes, there is a certain delay before the picture data collected by different cameras of the electronic device is displayed on the shooting preview interface. If the picture data collected by the cameras corresponding to each mode is displayed directly during a mode switch, the picture switch in the shooting preview interface appears stiff, and may even produce a "stutter" visual effect, degrading the user experience.
Summary
Embodiments of this application provide a method for shooting video and an electronic device. The method can support the electronic device in applying animation processing to the switching of the display picture in the shooting preview interface when the shooting mode is switched, so that the display picture in the shooting preview interface looks smoother and more vivid during switching, improving the user experience.
In a first aspect, an embodiment of this application provides a method for shooting video, applied to an electronic device including a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen. The method includes:
displaying, in a first area of the display screen, a first image collected in real time by the first camera, and displaying, in a second area of the display screen, a second image collected in real time by the second camera, where the second area is the entire display area of the display screen and the first area is smaller than the second area;
in response to a detected user operation, gradually decreasing the opacity of the first image displayed in the first area from a second opacity to a first opacity; and
displaying a third image in a third area of the display screen, where the third image is the second image collected in real time by the second camera, and the third area is the entire display area of the display screen.
On this basis, when displaying images, the shooting method provided by the embodiments of this application displays, before the user operation is detected, the first image and the second image in the first area and the second area of the display screen, which constitutes one display picture. After the user operation is detected, the final picture displayed on the display screen is the second image displayed in the third area of the display screen, which constitutes another display picture. That is, before and after the user operation is detected, the display picture on the display screen switches from one display picture to another. In this application, after the user operation is detected, the opacity of the first image displayed in the first area is changed, and this displayed content serves as a transition animation during the switching of the display picture on the display screen; in other words, a transition animation is added when the display picture switches from one display picture to another, and this transition animation is a dynamic display picture used to transition from the former display picture to the latter. Adding a transition animation during picture switching helps improve the smoothness of the switch. This embodiment of the application illustrates one specific display-picture switch, namely switching from the picture-in-picture display mode to the full-screen display mode.
In a possible design of the first aspect, after responding to the detected user operation, the method further includes:
in a first time period, displaying, in the first area, the first image collected by the first camera when the user operation was detected, and displaying, in the second area, the second image collected by the second camera when the user operation was detected, where the first image is superimposed on the second image, and the opacity of the first image displayed in the first area gradually decreases from the second opacity to the first opacity; and
in a second time period, displaying, in the third area, the second image collected by the second camera when the user operation was detected together with the third image, where the second image is superimposed on the third image.
On this basis, the process of switching the picture on the display screen from one display picture to another is divided into two stages: in the first time period, the first image and the second image are displayed in the display mode before the switch (the picture-in-picture display mode) while the transition animation is displayed; in the second time period, the second image is displayed in the display mode after the switch (the full-screen display mode), with the second image superimposed on the third image at this time, so that the transition animation can smoothly transition to the display mode after the switch, improving the smoothness of the picture switch.
In a possible design of the first aspect, in the first time period, the Gaussian blur value of the first image gradually increases according to a first curve, and in the first time period and the second time period, the Gaussian blur value of the second image gradually increases according to the first curve.
On this basis, by setting the Gaussian blur value of the first image to increase gradually in the first time period, and the Gaussian blur value of the second image to increase gradually in the first time period and the second time period, which is one specific animation treatment within the transition animation, the display picture can appear more natural when real-time image data is subsequently loaded in the display area where the second image is located.
In a possible design of the first aspect, the opacity of the first image displayed in the first area gradually decreases from the second opacity to the first opacity according to a second curve.
On this basis, by setting the second curve as the reference for the transparency change of the first image and selecting a suitable second curve, the change of picture transparency in the transition animation can be made more natural, improving the user's visual experience.
In a possible design of the first aspect, before displaying the third image in the third area of the display screen, the method further includes:
in the second time period, gradually decreasing the opacity of the second image from the second opacity to the first opacity according to the second curve.
On this basis, by setting the opacity of the second image to decrease gradually, the second image gradually becomes transparent, so that the third image covered by the second image is gradually displayed on the screen and is clearly shown in the third area, completing the transition from the transition animation to the display picture after the switch, so that the third image is displayed in the display mode after the switch.
In a second aspect, an embodiment of this application provides a method for shooting video, applied to an electronic device including a display screen, a first camera and a second camera, where the first camera and the second camera are located on different sides of the display screen. The method includes:
displaying, in a first area of the display screen, a first image collected in real time by the first camera, or a second image collected in real time by the second camera, where the first area is the entire display area of the display screen;
in response to a detected user operation, displaying a prefabricated picture in a second area of the display screen, where the opacity of the prefabricated picture gradually increases from a first opacity to a second opacity; and
displaying, in the second area of the display screen, the first image collected in real time by the first camera, and displaying, in a third area of the display screen, the second image collected in real time by the second camera, where the third area is the entire display area of the display screen and the second area is smaller than the third area.
On this basis, this embodiment of the application illustrates another specific display-picture switch, namely switching from the full-screen display mode to the picture-in-picture display mode, where the display mode of the transition animation may be the picture-in-picture display mode. This embodiment mirrors the change process of the embodiment provided in the first aspect; for the technical effects of this embodiment and its possible designs, reference may be made to the beneficial effects described in the first aspect and its possible designs, which are not repeated here.
In a possible design of the second aspect, after responding to the detected user operation, the method further includes:
in a first time period, displaying a third image in the first area, where the third image is the first image collected by the first camera when the user operation was detected, or the third image is the second image collected by the second camera when the user operation was detected, and displaying the prefabricated picture in the second area of the display screen, where the opacity of the prefabricated picture gradually increases from the first opacity to the second opacity; and
in a second time period, displaying, in the second area, the prefabricated picture and the first image collected in real time by the first camera, where the prefabricated picture is superimposed on the first image, and displaying, in the third area, the third image and the second image collected by the second camera, where the third image is superimposed on the second image.
In a possible design of the second aspect, in the first time period, the Gaussian blur value of the third image gradually increases according to a first curve.
In a possible design of the second aspect, the opacity of the prefabricated picture gradually increases from the first opacity to the second opacity according to a second curve.
In a possible design of the second aspect, before displaying, in the second area of the display screen, the first image collected in real time by the first camera and displaying, in the third area of the display screen, the second image collected in real time by the second camera, the method further includes:
in the second time period, gradually decreasing the opacity of the prefabricated picture from the second opacity to the first opacity according to the second curve; and
in the second time period, gradually decreasing the opacity of the third image from the second opacity to the first opacity according to the second curve.
In a third aspect, an embodiment of this application provides an electronic device, including a memory for storing computer program instructions and a processor for executing the program instructions, where, when the computer program instructions are executed by the processor, the electronic device is triggered to perform the method described in the first aspect/second aspect or any possible design thereof.
In a fourth aspect, an embodiment of this application provides a computer-readable storage medium including a stored program, where, when the program runs, the device where the computer-readable storage medium is located is controlled to perform the method described in the first aspect/second aspect or any possible design thereof.
In a fifth aspect, an embodiment of this application provides a computer program product containing executable instructions that, when executed on a computer, cause the computer to perform the method described in the first aspect/second aspect or any possible design thereof.
It can be understood that, for the beneficial effects achievable by the electronic device of the third aspect, the computer-readable storage medium of the fourth aspect, and the computer program product of the fifth aspect provided above, reference may be made to the beneficial effects in the first aspect/second aspect and any possible design thereof, which are not repeated here.
Brief Description of the Drawings
FIG. 1A is a hardware architecture diagram of an electronic device provided by an embodiment of this application;
FIG. 1B is a software architecture diagram of an electronic device provided by an embodiment of this application;
FIG. 2A-2C are schematic diagrams of a set of user interfaces provided by an embodiment of this application;
FIG. 3 is a schematic diagram of a shooting scene provided by an embodiment of this application;
FIG. 4A-4E are schematic diagrams of another set of interfaces provided by an embodiment of this application;
FIG. 5A-5K are schematic diagrams of another set of interfaces provided by an embodiment of this application;
FIG. 6A-6F are schematic diagrams of another set of interfaces provided by an embodiment of this application;
FIG. 7A-7B are schematic diagrams of another set of interfaces provided by an embodiment of this application;
FIG. 8A is a flowchart of animation processing in the method provided by an embodiment of this application;
FIG. 8B-8F are schematic diagrams of another set of interfaces provided by an embodiment of this application;
FIG. 9A and FIG. 9B are schematic diagrams of a set of curves provided by an embodiment of this application;
FIG. 10A is another flowchart of animation processing in the method provided by an embodiment of this application;
FIG. 10B-10F are schematic diagrams of another set of interfaces provided by an embodiment of this application.
Detailed Description
The terms used in the following embodiments of this application are only for the purpose of describing specific embodiments, and are not intended to limit this application. As used in the specification and the appended claims of this application, the singular expressions "a", "an", "the", "the above", "said" and "this" are intended to also include plural expressions, unless the context clearly indicates otherwise. It should also be understood that "/" means "or"; for example, A/B may mean A or B. "And/or" in the text merely describes an association relationship of associated objects, indicating that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone.
The technical solutions in the embodiments of this application will be described clearly and thoroughly below with reference to the accompanying drawings.
Reference to "an embodiment" in this application means that a specific feature, structure or characteristic described in connection with the embodiment may be included in at least one embodiment of this application. The appearance of this phrase in various places in the specification does not necessarily refer to the same embodiment, nor does it denote an independent or alternative embodiment mutually exclusive with other embodiments. Those skilled in the art understand, explicitly and implicitly, that the embodiments described in this application may be combined with other embodiments.
The terms "first" and "second" in the following embodiments of this application are used for descriptive purposes only, and shall not be understood as implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments of this application, unless otherwise stated, "multiple" means two or more.
The term "user interface (UI)" in the following embodiments of this application is a medium interface for interaction and information exchange between an application or the operating system and the user; it realizes the conversion between the internal form of information and a form acceptable to the user. A user interface is source code written in a specific computer language such as Java or extensible markup language (XML); the interface source code is parsed and rendered on the electronic device and is finally presented as content the user can recognize. The common form of a user interface is a graphical user interface (GUI), which refers to a user interface related to computer operations displayed graphically. It may include visible interface elements such as text, icons, buttons, menus, tabs, text boxes, dialog boxes, status bars, navigation bars and widgets displayed on the display screen of the electronic device.
For clarity and conciseness of the following embodiments and for ease of understanding by those skilled in the art, a brief introduction of related concepts or technologies is given first.
The shooting preview interface refers to the interface displayed when the electronic device is shooting; it can be used to display images collected by the cameras as well as multiple controls. The multiple controls may include a flash control for turning the flash on/off, a beauty control for turning the beauty function on/off, a shutter control for starting/stopping shooting, and so on.
Single-lens shooting refers to a mode in which the electronic device shoots through only one camera. Single-lens shooting may include a front shooting mode, a rear shooting mode, etc.
Specifically, the front shooting mode refers to a mode in which the electronic device shoots through the front camera. When in the front shooting mode, the electronic device can display the images collected by the front camera in the shooting preview interface in real time.
The rear shooting mode refers to a mode in which the electronic device shoots through the rear camera. When in the rear shooting mode, the electronic device can display the images collected by the rear camera in the shooting preview interface in real time.
Multi-lens shooting refers to a mode in which the electronic device can shoot through two or more cameras. Multi-lens shooting may include a front-and-rear shooting mode, a front-front shooting mode, a rear-rear shooting mode, a picture-in-picture shooting mode, etc.
The front-and-rear shooting mode refers to a mode in which the electronic device can shoot simultaneously through the front camera and the rear camera. When in the front-and-rear shooting mode, the electronic device can simultaneously display the images shot by the front camera and the rear camera (for example, a first image and a second image) in the shooting preview interface, with the first image and the second image displayed spliced together. When the electronic device is held vertically, the first image and the second image may be spliced top-and-bottom; when the electronic device is held horizontally, they may be spliced left-and-right. By default, the display area of the first image is the same as that of the second image.
The front-front shooting mode is similar to the front-and-rear shooting mode, except that the front-front shooting mode uses two front cameras to shoot simultaneously.
The rear-rear shooting mode refers to a mode in which the electronic device can shoot simultaneously through two rear cameras. When in the rear-rear shooting mode, the electronic device can simultaneously display the images shot by the two rear cameras (for example, a first image and a second image) in the shooting preview interface, with the first image and the second image displayed spliced together. When the electronic device is held vertically, the first image and the second image may be spliced top-and-bottom; when the electronic device is held horizontally, they may be spliced left-and-right.
The picture-in-picture shooting mode refers to a mode in which the electronic device can shoot simultaneously through two cameras. When in the picture-in-picture shooting mode, the electronic device can simultaneously display the images shot by the two cameras (for example, a first image and a second image) in the shooting preview interface, where the second image is displayed in the entire display area of the shooting preview interface, the first image is superimposed on the second image, and the display area of the first image is smaller than that of the second image. By default, the first image may be located at the lower left of the second image. The two cameras may include two front cameras, two rear cameras, or one front camera and one rear camera.
The split-screen display mode (spliced display) means that the display screen of the electronic device can display the images shot by two cameras (for example, a first image and a second image), with the first image and the second image displayed spliced together on the display screen.
The picture-in-picture display mode means that the display screen of the electronic device can display the images shot by two cameras (for example, a first image and a second image), where the second image is displayed in the entire display area of the display screen and the first image is displayed in a small window whose display area is smaller than that of the display screen; that is, the first image is superimposed on the second image, and the display area of the first image is smaller than that of the second image.
Full-screen mode: the display screen of the electronic device can display the image shot by any one camera in full screen, the display area of that image being the display area of the display screen.
User operation: the operation performed by the user to switch the display picture; for example, it may be touching a switch button on the display screen to switch the display picture, or using an air gesture to switch the display picture. The direction corresponding to a user operation may refer to the direction a switch button on the display screen points to, or the direction in which the gesture moves when the user performs an air gesture. For example, if the air gesture is "palm moving from left to right", the direction corresponding to the user operation is from left to right. As another example, if the direction or meaning of a switch button is to switch "from left to right", then after the button is tapped the direction corresponding to the user operation is also from left to right.
First camera: may refer to a front camera or a rear camera; in the embodiments of this application the first camera generally refers to the front camera.
Second camera: may refer to a front camera or a rear camera; in the embodiments of this application the second camera generally refers to the rear camera.
First image: an image shot by the front camera. Second image: an image shot by the rear camera.
Third image: in the embodiments of this application, generally refers to an image obtained after blurring, transparency processing, cropping or enlarging the image shot by the front camera, or to a prefabricated picture (including a first prefabricated picture and a second prefabricated picture). Fourth image: in the embodiments of this application, generally refers to an image obtained after blurring, transparency processing, cropping or enlarging the image shot by the rear camera.
It should be noted that "single-lens shooting", "multi-lens shooting", "front shooting mode", "rear shooting mode", "front-and-rear shooting mode", "front-front shooting mode", "rear-rear shooting mode" and "picture-in-picture shooting mode" are merely names used in the embodiments of this application; their meanings have been recorded in the embodiments of this application, and the names do not constitute any limitation on the embodiments.
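The picture-in-picture geometry described above (a small first-image window overlaid at the lower left of the full-screen second image) can be sketched as a simple layout computation. The window scale and margin values are illustrative assumptions, not values from this application.

```python
def pip_rect(screen_w: int, screen_h: int, scale: float = 0.3, margin: int = 24):
    """Rectangle (x, y, width, height) of the small picture-in-picture window
    anchored at the lower left of the full-screen picture; y is measured from
    the top of the screen. `scale` and `margin` are illustrative."""
    w = int(screen_w * scale)
    h = int(screen_h * scale)
    return (margin, screen_h - h - margin, w, h)
```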
At present, electronic devices can provide users with multiple shooting modes and can switch between them, for example from the front shooting mode to the rear shooting mode, from the rear shooting mode to the front shooting mode, from the front/rear shooting mode to the front-and-rear shooting mode, or from the front-and-rear shooting mode to any one of the front shooting mode, the rear shooting mode, the rear-rear shooting mode or the picture-in-picture shooting mode.
The electronic device uses different cameras in different shooting modes, and different cameras collect different pictures. Therefore, when the electronic device switches the shooting mode, the camera used by the electronic device (called the preview camera) also changes, and the display picture in the shooting preview interface changes accordingly. However, it takes a certain amount of time for the electronic device to start a different camera, and it also takes a certain amount of time for the pictures shot by different cameras to be displayed on the shooting preview interface in different modes.
Thus, when switching between different shooting modes, there is a certain delay before the picture data collected by different cameras of the electronic device is displayed on the shooting preview interface. If the picture data collected by the cameras corresponding to each mode is displayed directly during a mode switch, the picture switch in the shooting preview interface appears stiff, and may even produce a "stutter" visual effect, degrading the user experience.
To solve the above problem, an embodiment of this application provides a method for shooting video that can support the electronic device in applying animation processing to the switching of the display picture in the shooting preview interface when switching shooting modes, so that the display picture in the shooting preview interface looks smoother and more vivid during switching, improving the user experience.
The method for shooting video provided by the embodiments of this application can be applied to an electronic device including multiple cameras. The electronic device can recognize the user's preset air gesture through any one of the front cameras. The embodiments of this application do not limit the preset air gesture used for switching shooting modes. The above "air gesture" is merely a name used in the embodiments of this application; it may also be called a hover gesture, floating gesture, etc., and specifically refers to a gesture input without touching the electronic device. Its meaning has been recorded in the embodiments of this application, and its name does not constitute any limitation on the embodiments.
To introduce the method for shooting video provided by the embodiments of this application more clearly and in more detail, the electronic device involved in implementing the method is introduced first.
The electronic device may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) device, a virtual reality (VR) device, an artificial intelligence (AI) device, a wearable device, an in-vehicle device, a smart home device and/or a smart city device; the embodiments of this application place no special restriction on the specific type of the electronic device.
Referring to FIG. 1A, FIG. 1A shows a schematic diagram of the hardware structure of the electronic device provided by an embodiment of this application.
As shown in FIG. 1A, the electronic device 200 may include a processor 210, an external memory interface 220, an internal memory 221, a universal serial bus (USB) interface 230, a charging management module 240, a power management module 241, a battery 242, an antenna 1, an antenna 2, a mobile communication module 250, a wireless communication module 260, an audio module 270, a speaker 270A, a receiver 270B, a microphone 270C, a headphone jack 270D, a sensor module 280, buttons 290, a motor 291, an indicator 292, multiple cameras 293, a display screen 294, a subscriber identification module (SIM) card interface 295, and so on.
The sensor module 280 may include sensors such as a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor and a bone conduction sensor.
It can be understood that the structure illustrated in this embodiment does not constitute a specific limitation on the electronic device 200. In other embodiments, the electronic device 200 may include more or fewer components than shown, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 210 may include one or more processing units. For example, the processor 210 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. Different processing units may be independent components or may be integrated into one or more processors.
The controller may be the nerve center and command center of the electronic device 200. The controller can generate operation control signals according to instruction operation codes and timing signals, completing the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 210 for storing instructions and data. In some embodiments, the memory in the processor 210 is a cache. This memory can store instructions or data that the processor 210 has just used or uses cyclically. If the processor 210 needs to use the instruction or data again, it can be called directly from this memory, avoiding repeated access and reducing the waiting time of the processor 210, thereby improving system efficiency.
In some embodiments, the processor 210 may include one or more interfaces. The interfaces may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc. It can be understood that the interface connection relationships between the modules illustrated in this embodiment are merely illustrative and do not constitute a structural limitation on the electronic device 200. In other embodiments, the electronic device 200 may also adopt interface connection methods different from those in the above embodiment, or a combination of multiple interface connection methods.
In an embodiment of this application, the processor 210 may receive multiple consecutive images, shot by the camera 293, corresponding to a specific air gesture input by the user, such as "palm". The processor 210 can then compare and analyze the multiple consecutive images, determine that the air gesture corresponding to them is "palm", and determine that the operation corresponding to the air gesture is, for example, starting or stopping recording; the processor 210 can then control the camera application to perform the corresponding operation. The corresponding operation may include, for example: mobilizing multiple cameras to collect images simultaneously, then synthesizing, via the GPU, the images collected by the multiple cameras by splicing or picture-in-picture (partial superposition) or the like, and calling the display screen 294 to display the synthesized image in the shooting preview interface of the electronic device.
The external memory interface 220 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device 200. The external memory card communicates with the processor 210 through the external memory interface 220 to realize the data storage function, for example saving music, video and other files in the external memory card.
The internal memory 221 may be used to store computer-executable program code, which includes instructions. The processor 210 executes various functional applications and data processing of the electronic device 200 by running the instructions stored in the internal memory 221. The internal memory 221 may include a program storage area and a data storage area. The program storage area may store the operating system and the applications required for at least one function (such as a sound playing function, an image playing function, etc.). The data storage area may store data created during use of the electronic device 200 (such as audio data, a phone book, etc.). In addition, the internal memory 221 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, a universal flash storage (UFS), etc.
In an embodiment of this application, the internal memory 221 may store picture files shot or video files recorded by the electronic device in different shooting modes.
The charging management module 240 is used to receive charging input from a charger. The charger may be a wireless charger or a wired charger. While charging the battery 242, the charging management module 240 can also supply power to the terminal device through the power management module 241.
The power management module 241 is used to connect the battery 242, the charging management module 240 and the processor 210. The power management module 241 receives input from the battery 242 and/or the charging management module 240 and supplies power to the processor 210, the internal memory 221, the external memory, the display screen 294, the camera 293, the wireless communication module 260, and so on. In some embodiments, the power management module 241 and the charging management module 240 may also be provided in the same component.
The wireless communication function of the electronic device 200 can be realized through the antenna 1, the antenna 2, the mobile communication module 250, the wireless communication module 260, the modem processor, the baseband processor, etc. In some embodiments, the antenna 1 of the electronic device 200 is coupled with the mobile communication module 250 and the antenna 2 is coupled with the wireless communication module 260, so that the electronic device 200 can communicate with networks and other devices through wireless communication technologies.
The antenna 1 and the antenna 2 are used to transmit and receive electromagnetic wave signals. Each antenna in the electronic device 200 can be used to cover a single or multiple communication frequency bands. Different antennas can also be multiplexed to improve antenna utilization; for example, the antenna 1 can be multiplexed as a diversity antenna for a wireless local area network. In other embodiments, the antennas may be used in combination with tuning switches.
The mobile communication module 250 can provide wireless communication solutions applied to the electronic device 200, including 2G/3G/4G/5G. The mobile communication module 250 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), etc. The mobile communication module 250 can receive electromagnetic waves via the antenna 1, filter and amplify the received electromagnetic waves, and transmit them to the modem processor for demodulation.
The mobile communication module 250 can also amplify signals modulated by the modem processor and convert them into electromagnetic waves radiated out via the antenna 1. In some embodiments, at least some functional modules of the mobile communication module 250 may be provided in the processor 210. In some embodiments, at least some functional modules of the mobile communication module 250 and at least some modules of the processor 210 may be provided in the same component.
The wireless communication module 260 can provide wireless communication solutions applied to the electronic device 200, including WLAN (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR), and so on.
The wireless communication module 260 may be one or more components integrating at least one communication processing module. The wireless communication module 260 receives electromagnetic waves via the antenna 2, frequency-modulates and filters the electromagnetic wave signals, and sends the processed signals to the processor 210. The wireless communication module 260 can also receive signals to be sent from the processor 210, frequency-modulate and amplify them, and convert them into electromagnetic waves radiated out via the antenna 2.
The electronic device 200 realizes the display function through the GPU, the display screen 294, the application processor, etc. The GPU is a microprocessor for image processing and connects the display screen 294 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. The processor 210 may include one or more GPUs that execute program instructions to generate or change display information.
The display screen 294 is used to display images, videos, etc. The display screen 294 includes a display panel.
The electronic device 200 can realize the shooting function through the ISP, the camera 293, the video codec, the GPU, the display screen 294, the application processor, etc. The ISP is used to process the data fed back by the camera 293. The camera 293 is used to capture static images or videos. In some embodiments, the electronic device 200 may include N cameras 293, where N is a positive integer greater than 2.
The electronic device 200 can realize audio functions, such as music playing and recording, through the audio module 270, the speaker 270A, the receiver 270B, the microphone 270C, the headphone jack 270D, the application processor, etc.
The buttons 290 include a power button, volume buttons, etc. The buttons 290 may be mechanical buttons or touch buttons. The motor 291 can generate vibration prompts; it can be used for incoming-call vibration prompts as well as for touch vibration feedback. The indicator 292 may be an indicator light, which can be used to indicate the charging status and battery changes, and can also be used to indicate messages, missed calls, notifications, etc.
The multiple cameras 293 are used to collect images. In an embodiment of this application, the number of cameras 293 may be M, where M ≥ 2 and M is a positive integer. The number of cameras that the electronic device turns on in multi-lens shooting may be N, where 2 ≤ N ≤ M and N is a positive integer.
In the embodiments of this application, the types of the cameras 293 can be distinguished by hardware configuration and physical position. For example, the multiple cameras included in the cameras 293 may be placed on the front and back sides of the electronic device respectively: a camera on the side of the display screen 294 of the electronic device may be called a front camera, and a camera on the side of the back cover of the electronic device may be called a rear camera. As another example, the multiple cameras included in the cameras 293 have different focal lengths and fields of view: a camera with a short focal length and a larger field of view may be called a wide-angle camera, and a camera with a long focal length and a small field of view may be called an ordinary camera. The content of the images collected by different cameras differs in that the front camera is used to collect the scenery facing the front of the electronic device, while the rear camera is used to collect the scenery facing the back of the electronic device; within a shorter shooting-distance range, the wide-angle camera can capture a larger area of scenery, and, at the same shooting distance, the scenery shot appears smaller in the picture than the scenery shot with an ordinary lens. The length of the focal length and the size of the field of view are relative concepts without specific parameter limits, so "wide-angle camera" and "ordinary camera" are also relative concepts, which can be distinguished specifically by physical parameters such as focal length and field of view.
In particular, in the embodiments of this application, the cameras 293 include at least one camera capable of acquiring the 3D data of objects in the shot image, so that the processor 210 can recognize the operation instruction corresponding to the user's air gesture based on the 3D data of the objects.
The camera used to acquire object 3D data may be an independent low-power camera, or another ordinary front or rear camera that supports a low-power mode. When the low-power camera is working, or the ordinary front or rear camera is working in low-power mode, the camera's frame rate is lower than that of an ordinary camera working in non-low-power mode, and the output image is in black-and-white format. Typically, an ordinary camera can output 30, 60, 90 or 240 frames of images per second, but the low-power camera, or an ordinary front or rear camera running in low-power mode, can output, for example, 2.5 frames of images per second; when the camera captures the first image representing a given air gesture, it can switch to outputting 10 frames of images per second, so as to accurately recognize the operation instruction corresponding to the air gesture through multiple consecutive images. In addition, the pixels of the image collected by the low-power camera are lower than those of the image collected by an ordinary camera, and the power consumption is reduced compared with an ordinary camera working in non-low-power mode.
摄像头293的出图比例可以不同,也可以相同。摄像头的出图比例是指该摄像头采集到的图像的长度与宽度的比值。该图像的长度和宽度均可以用像素数来衡量。摄像头的出图比例也可以被叫做出图尺寸、图像大小、图像尺寸、像素尺寸或图像分辨 率。常见的摄像头的出图比例可包括:4:3、16:9或3:2等等。出图比例是指摄像头所采集图像在长度上和宽度上的像素数的大致比例。在本申请实施例中,当电子设备处于多镜拍摄模式下,多个摄像头分别采集的图像以左右拼接或者上下拼接的形式来显示时,预览框中显示的不同摄像头拍摄的图像尺寸可以相同,而当多个摄像头分别采集的图像以画中画的形式来显示时,预览框中显示的不同摄像头拍摄的图像尺寸可以不同,具体的前置摄像头拍摄的图像尺寸小于后置摄像头拍摄的尺寸,具体可以参考后文的UI实施例的相关描述,在此暂不赘述。
在一些实施例中,摄像头293可以用于采集深度数据。例如,摄像头293可以具有(time of flight,TOF)3D感测模块或结构光(structured light)3D感测模块,用于获取深度信息。用于采集深度数据的摄像头可以为前置摄像头,也可为后置摄像头。
ISP用于处理摄像头293反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头293中。
数字信号处理器用于处理数字信号,除了可以处理数字图像信号,还可以处理其他数字信号。例如,当电子设备在频点选择时,数字信号处理器用于对频点能量进行傅里叶变换等。
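"对频点能量进行傅里叶变换"的含义,可以用一个朴素的离散傅里叶变换(DFT)频点能量计算来示意(仅为说明性草图,与设备内数字信号处理器的实际实现无关,信号与频点均为虚构示例):

```python
import cmath
import math


def bin_energy(samples, k):
    """计算采样序列在第 k 个 DFT 频点上的能量(朴素实现,仅作示意)。"""
    n = len(samples)
    coef = sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
    return abs(coef) ** 2


# 构造一个只含单一频率分量的信号:能量应集中在 k=3 的频点上
n = 64
signal = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
print(bin_energy(signal, 3) > bin_energy(signal, 5))  # True
```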
视频编解码器用于对数字视频压缩或解压缩。电子设备可以支持一种或多种视频编解码器。这样,电子设备可以播放或录制多种编码格式的视频,例如:动态图像专家组(moving picture experts group,MPEG)1,MPEG2,MPEG3,MPEG4等。
NPU为神经网络(neural-network,NN)计算处理器,通过借鉴生物神经网络结构,例如借鉴人脑神经元之间传递模式,对输入信息快速处理,还可以不断的自学习。通过NPU可以实现电子设备的智能认知等应用,例如:图像识别,人脸识别,语音识别,文本理解等。
显示屏294用于显示图像,视频等。显示屏294包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。在一些实施例中,电子设备可以包括1个或N个显示屏294,N为大于1的正整数。
在本申请实施例中,显示屏294可用于显示来自任意一个摄像头293拍摄的图像,例如在预览框中显示来自一个摄像头拍摄的多帧图像,或者在已保存的视频文件中显示来自一个摄像头293的多帧图像,或者在已保存的图片文件中显示来自一个摄像头293的一张照片。
当电子设备在相机预览框中显示来自一个摄像头拍摄的多帧图像时,接收到用户输入的特定隔空手势例如"手掌",则显示屏294可用于在预览框中显示来自多个摄像头拍摄的多帧图像。当电子设备保存该多个摄像头拍摄的视频文件或者图片文件后,显示屏可以在已保存的视频文件中显示来自多个摄像头293的多帧图像,或者在已保存的图片文件中显示来自多个摄像头293的多张照片合成的一张照片。
在一些实施例中,在多镜拍摄模式下,显示屏294可以通过拼接或画中画等方式对来自多个摄像头293多路图像进行显示,以使得来自该多个摄像头293的多路图像可以同时呈现给用户。
在一些实施例中,在多镜拍摄模式下,处理器210(例如控制器或GPU)可以对来自多个摄像头293的多帧图像进行合成。例如,将来自多个摄像头293的多路视频流合并为一路视频流,处理器210中的视频编码器可以对合成的一路视频流数据进行编码,从而生成一个视频文件。这样,该视频文件中的每一帧图像可以包含来自多个摄像头293的多个图像。在播放该视频文件的某一帧图像时,显示屏294可以显示来自多个摄像头293的多路图像,以为用户展示同一时刻或同一场景下,不同范围、不同清晰度或不同细节信息的多个图像画面。
在一些实施例中,在多镜拍摄模式下,处理器210可以分别对来自不同摄像头293的图像帧进行关联,以便在播放已拍摄的图片或视频时,显示屏294可以将相关联的图像帧同时显示在取景框中。该种情况下,不同摄像头293同时录制的视频可以分别存储为不同的视频,不同摄像头293同时录制的图片可以分别存储为不同的图片。
在一些实施例中,在多路录像模式下,多个摄像头293可以采用相同的帧率分别采集图像,即多个摄像头293在相同时间内采集到的图像帧的数量相同。来自不同摄像头293的视频可以分别存储为不同的视频文件,该不同视频文件之间相互关联。该视频文件中按照采集图像帧的先后顺序来存储图像帧,该不同视频文件中包括相同数量的图像帧。在播放已录制的视频时,显示屏294可以根据预设的或用户指示的布局方式,按照相关联的视频文件中包括的图像帧的先后顺序进行显示,从而将不同视频文件中同一顺序对应的多帧图像显示在同一界面上。
在一些实施例中,在多路录像模式下,多个摄像头293可以采用相同的帧率分别采集图像,即多个摄像头293在相同时间内采集到的图像帧的数量相同。处理器210可以分别为来自不同摄像头293的每一帧图像打上时间戳,以便在播放已录制的视频时,显示屏294可以根据时间戳,同时将来自多个摄像头293的多帧图像显示在同一界面上。
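按时间戳把来自多个摄像头的图像帧对齐到同一界面,可以概括为如下匹配逻辑的草图(假设性示例,时间戳以毫秒整数表示,容差阈值为虚构值):

```python
def align_frames(front_frames, rear_frames, tolerance_ms=5):
    """front_frames/rear_frames: [(timestamp_ms, frame_id), ...],按时间递增。
    返回时间戳相差不超过 tolerance_ms 的帧对,供同屏显示。"""
    pairs, j = [], 0
    for ts_f, fid in front_frames:
        # 跳过明显早于当前前置帧的后置帧
        while j < len(rear_frames) and rear_frames[j][0] < ts_f - tolerance_ms:
            j += 1
        if j < len(rear_frames) and abs(rear_frames[j][0] - ts_f) <= tolerance_ms:
            pairs.append((fid, rear_frames[j][1]))
    return pairs


front = [(0, "F0"), (33, "F1"), (66, "F2")]
rear = [(1, "R0"), (34, "R1"), (70, "R2")]
print(align_frames(front, rear))  # [('F0', 'R0'), ('F1', 'R1'), ('F2', 'R2')]
```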
在一些实施例中,在多镜拍摄场景下,显示屏294可以通过左右拼接、上下拼接或画中画等方式对来自多个摄像头293的不同图像进行同时显示,以使得来自该多个摄像头293的不同图像可以同时呈现给用户。具体的可以参考后文的UI实施例的相关描述,在此暂不赘述。
在一些实施例中,在多镜拍摄模式下,处理器210例如控制器或GPU可以对来自多个摄像头293的不同图像进行合成。例如,将来自多个摄像头293的多路视频流合并为一路视频流,处理器210中的视频编码器可以对合成的一路视频流数据进行编码, 从而生成一个视频文件。这样,该视频文件中的每一帧图像可以包含来自多个摄像头293的多个图像。在播放该视频文件的某一帧图像时,显示屏294可以显示来自多个摄像头293的多个图像,以为用户展示同一时刻或同一场景下,不同内容、不同景深或不同像素的多个图像画面。又例如,将来自多个摄像头293的多张照片合并为一张,处理器210中的视频编码器可以对合成的一张照片数据进行编码,从而生成一个图片文件。这样,该图片文件中的一张照片可以包含来自多个摄像头293的多个照片。在观看该照片时,显示屏294可以显示来自多个摄像头293的多个照片,以为用户展示同一时刻或同一场景下,不同内容、不同景深或不同像素的多个图像画面。
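"将来自多个摄像头的多路画面合并为一帧"(如左右拼接)的思路,可以用如下极简草图表示(以行优先的嵌套列表模拟灰度像素,仅作示意,与实际编码器实现无关):

```python
def hstack(left, right):
    """把两幅同高的"图像"(行优先的二维列表)左右拼接为一帧。"""
    assert len(left) == len(right), "两幅图像高度需一致"
    return [row_l + row_r for row_l, row_r in zip(left, right)]


front = [[1, 1], [1, 1]]   # 2x2 前置画面
rear = [[9, 9], [9, 9]]    # 2x2 后置画面
merged = hstack(front, rear)
print(merged)  # [[1, 1, 9, 9], [1, 1, 9, 9]]
```

合并后的每一帧即同时包含两路画面,再交由视频编码器按单路视频流编码。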
在一些实施例中,在多镜拍摄模式下,处理器210可以分别对来自不同摄像头293的图像帧进行关联,以便在播放已拍摄的图片或视频时,显示屏294可以将相关联的图像帧同时显示在预览框中。该种情况下,不同摄像头293同时录制的视频可以分别存储为不同的视频文件,不同摄像头293同时拍摄的照片可以分别存储为不同的图片文件。
在一些实施例中,在多镜录像模式下,多个摄像头293可以采用相同的帧率分别采集图像,即多个摄像头293在相同时间内采集到的图像帧的数量相同。来自不同摄像头293的视频可以分别存储为不同的视频文件,该不同视频文件之间相互关联。该视频文件中按照采集图像帧的先后顺序来存储图像帧,该不同视频文件中包括相同数量的图像帧。在播放已录制的视频时,显示屏294可以根据预设的或用户指示的布局方式,按照相关联的视频文件中包括的图像帧的先后顺序进行显示,从而将不同视频文件中同一顺序对应的多帧图像显示在同一界面上。
在一些实施例中,在多镜录像模式下,多个摄像头293可以采用相同的帧率分别采集图像,即多个摄像头293在相同时间内采集到的图像帧的数量相同。处理器210可以分别为来自不同摄像头293的每一帧图像打上时间戳,以便在播放已录制的视频时,显示屏294可以根据时间戳,同时将来自多个摄像头293的多帧图像显示在同一界面上。
为使用方便,电子设备通常在用户的手持模式下进行拍摄,而用户手持模式下通常会使得拍摄获得的画面发生抖动。在一些实施例中,在多镜拍摄模式下,处理器210可以对不同摄像头293采集到的图像帧分别进行防抖处理。而后,显示屏294再根据防抖处理后的图像进行显示。
SIM卡接口295用于连接SIM卡。SIM卡可以通过插入SIM卡接口295,或从SIM卡接口295拔出,实现和电子设备的接触和分离。电子设备可以支持1个或N个SIM卡接口,N为大于1的正整数。SIM卡接口295可以支持Nano SIM卡,Micro SIM卡,SIM卡等。同一个SIM卡接口295可以同时插入多张卡。所述多张卡的类型可以相同,也可以不同。SIM卡接口295也可以兼容不同类型的SIM卡。SIM卡接口295也可以兼容外部存储卡。电子设备通过SIM卡和网络交互,实现通话以及数据通信等功能。在一些实施例中,电子设备采用eSIM,即:嵌入式SIM卡。eSIM卡可以嵌在电子设备中,不能和电子设备分离。
图1B是本发明实施例的电子设备的软件结构框图。
分层架构将软件分成若干个层,每一层都有清晰的角色和分工。层与层之间通过软件接口通信。在一些实施例中,将Android系统分为四层,从上至下分别为应用程序层,应用程序框架层,安卓运行时(Android runtime)和系统库,以及内核层。
应用程序层可以包括一系列应用程序包。
如图1B所示,应用程序包可以包括相机,图库,日历,通话,地图,导航,WLAN,蓝牙,音乐,视频,短信息等应用程序。
应用程序框架层为应用程序层的应用程序提供应用编程接口(application programming interface,API)和编程框架。应用程序框架层包括一些预先定义的函数。
如图1B所示,应用程序框架层可以包括窗口管理器,内容提供器,视图系统,电话管理器,资源管理器,通知管理器等。
窗口管理器用于管理窗口程序。窗口管理器可以获取显示屏大小,判断是否有状态栏,锁定屏幕,截取屏幕等。
内容提供器用来存放和获取数据,并使这些数据可以被应用程序访问。所述数据可以包括视频,图像,音频,拨打和接听的电话,浏览历史和书签,电话簿等。
视图系统包括可视控件,例如显示文字的控件,显示图片的控件等。视图系统可用于构建应用程序。显示界面可以由一个或多个视图组成。例如,包括短信通知图标的显示界面,可以包括显示文字的视图以及显示图片的视图。
电话管理器用于提供电子设备的通信功能。例如通话状态的管理(包括接通,挂断等)。
资源管理器为应用程序提供各种资源,比如本地化字符串,图标,图片,布局文件,视频文件等等。
通知管理器使应用程序可以在状态栏中显示通知信息,可以用于传达告知类型的消息,可以短暂停留后自动消失,无需用户交互。比如通知管理器被用于告知下载完成,消息提醒等。通知管理器还可以是以图表或者滚动条文本形式出现在系统顶部状态栏的通知,例如后台运行的应用程序的通知,还可以是以对话窗口形式出现在屏幕上的通知。例如在状态栏提示文本信息,发出提示音,电子设备振动,指示灯闪烁等。
Android Runtime包括核心库和虚拟机。Android runtime负责安卓系统的调度和管理。
核心库包含两部分:一部分是java语言需要调用的功能函数,另一部分是安卓的核心库。
应用程序层和应用程序框架层运行在虚拟机中。虚拟机将应用程序层和应用程序框架层的java文件执行为二进制文件。虚拟机用于执行对象生命周期的管理,堆栈管理,线程管理,安全和异常的管理,以及垃圾回收等功能。
系统库可以包括多个功能模块。例如:表面管理器(surface manager),媒体库(Media Libraries),三维图形处理库(例如:OpenGL ES),2D图形引擎(例如:SGL)等。
表面管理器用于对显示子系统进行管理,并且为多个应用程序提供了2D和3D图层的融合。
媒体库支持多种常用的音频,视频格式回放和录制,以及静态图像文件等。媒体库可以支持多种音视频编码格式,例如:MPEG4,H.264,MP3,AAC,AMR,JPG,PNG等。
三维图形处理库用于实现三维图形绘图,图像渲染,合成,和图层处理等。
2D图形引擎是2D绘图的绘图引擎。
内核层是硬件和软件之间的层。内核层至少包含显示驱动,摄像头驱动,音频驱动,传感器驱动。
下面结合捕获拍照场景,示例性说明电子设备软件以及硬件的工作流程。
当触摸传感器接收到触摸操作,相应的硬件中断被发给内核层。内核层将触摸操作加工成原始输入事件(包括触摸坐标,触摸操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的控件。以该触摸操作是触摸单击操作,该单击操作所对应的控件为相机应用图标的控件为例,相机应用调用应用框架层的接口,启动相机应用,进而通过调用内核层启动摄像头驱动,通过摄像头293捕获静态图像或视频。在本申请实施例中,上述触摸传感器接收到触摸操作可以由摄像头293采集到用户输入的隔空手势的操作来代替。具体的,当摄像头293采集到隔空手势的操作,相应的硬件中断被发给内核层。内核层将隔空手势操作加工成原始输入事件(包括隔空手势的图像,隔空手势操作的时间戳等信息)。原始输入事件被存储在内核层。应用程序框架层从内核层获取原始输入事件,识别该输入事件所对应的操作。以该隔空手势操作是切换拍摄模式的操作为例,相机应用调用应用框架层的接口,进而通过调用内核层启动其他摄像头驱动,从而切换为其他摄像头293来捕获静态图像或视频。
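上述"隔空手势经内核层上报为原始输入事件,再由框架层识别并分发为对应操作"的流程,可以用一个简化的分发表草图示意(其中的手势名称与操作名称均为本文虚构的示例,真实系统由框架层完成识别与分发):

```python
# 隔空手势到操作指令的简化分发表(示意)
GESTURE_ACTIONS = {
    "raise_hand": "enter_ready_state",   # "举手":进入预备状态
    "flip_palm": "switch_front_rear",    # "翻转手掌":前后摄切换
    "palm_to_fist": "toggle_pip",        # "伸出手掌然后握拳":画中画切换
}


def dispatch(raw_event):
    """raw_event 类比内核层上报的原始输入事件,含手势图像识别结果与时间戳。"""
    return GESTURE_ACTIONS.get(raw_event["gesture"], "ignore")


print(dispatch({"gesture": "flip_palm", "timestamp": 1234}))  # switch_front_rear
print(dispatch({"gesture": "unknown", "timestamp": 1235}))    # ignore
```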
接下来,将结合附图对本申请提供的拍摄视频的方法所应用的一些拍摄模式的切换进行说明。
如图2A所示,手机可显示主界面301。主界面301可包括相机应用的图标302。手机可接收用户点击该图标302的操作,响应于该操作,手机可开启相机应用并显示相机应用的拍摄预览界面303。可以理解的是,相机应用是智能手机、平板电脑等电子设备上的一款图像拍摄的应用,其可以是系统应用也可以是第三方应用,本申请对该应用的名称不做限制。也即是说,用户可以点击相机应用的图标302来打开相机应用的拍摄预览界面303。不限于此,用户还可以在其他应用中调用相机应用以打开拍摄预览界面303,例如用户在社交类应用中点击拍摄控件来打开拍摄预览界面303。该社交类应用,可支持用户向他人分享所拍摄的图片或视频等。
需要说明的是,拍摄预览界面303可以是相机应用的默认拍摄模式的用户界面,例如可以是相机应用处于前置拍摄模式时提供的用户界面。可以理解的是,该默认拍摄模式也可以为其他,例如后置拍摄模式、前后拍摄模式等。又或者,该拍摄预览界面303可以为相机应用上一次退出时,相机应用所处的拍摄模式的用户界面。
图2B以拍摄预览界面303为相机应用处于前置拍摄模式时对应的拍摄预览界面为例进行说明。如图2B所示,拍摄预览界面303可包括预览图像304、拍摄模式选项305、闪光灯控件、快门控件等。其中,预览图像304即为摄像头293实时采集到的图像。需要说明的是,电子设备可实时刷新拍摄预览界面303所显示的图像(即预览图像304),以便于用户预览摄像头293当前采集到的图像。拍摄模式选项305用于提供多种拍摄模式供用户选择。多种拍摄模式可包括:拍照305a、录像305b、多镜录像305c、实时虚化、全景等。电子设备可接收用户左滑/右滑该拍摄模式选项305的操作,响应于该操作,电子设备可开启用户选择的拍摄模式。需要说明的是,不限于图2B所示,拍摄模式选项305中可以显示比图2B所示更多或者更少的选项。
其中,拍照305a所对应的拍摄模式即常用的单镜拍摄,可包括前置拍摄模式、后置拍摄模式等。也即,当拍照305a被选中时,电子设备可通过前置摄像头或后置摄像头进行拍照。其中关于前置拍摄模式、后置拍摄模式的具体介绍请参照前文,在此暂不赘述。
多镜录像305c所对应的拍摄模式可包括多种,例如多镜拍摄下的多种拍摄模式、单镜拍摄下的多种拍摄模式等。也即,当多镜录像305c被选中时,电子设备既可以通过一个摄像头进行单镜拍摄,也可以通过多个摄像头进行多镜拍摄。其中,关于多镜拍摄下的多种拍摄模式的介绍可以参考前文的具体描述,在此暂不赘述。
如图2B所示,拍照305a处于被选中状态。也即,电子设备当前处于拍摄模式。若用户希望开启多镜录像模式,则可左滑拍摄模式选项305,并选择多镜录像305c。当检测到用户左滑拍摄模式选项305并选中多镜录像305c的操作时,电子设备可开启多镜录像模式并显示如图2C所示的拍摄预览界面303。如图2C所示,进入多镜录像模式后,电子设备可开启前置摄像头和后置摄像头,拍摄预览界面303同时显示由前置摄像头采集的图像306a,及由后置摄像头采集的图像306b,且图像306a与图像306b拼接显示。其中,由于电子设备竖置,该图像306a与图像306b上下拼接。
在一种可选的实施方式中,电子设备开启多镜录像后,可默认启用前置摄像头和后置摄像头,并在拍摄预览界面上拼接显示前置摄像头所采集的图像和后置摄像头所采集的图像(例如如图2C所示显示方法)。可以理解的是,默认开启的摄像头不限于为前置摄像头和后置摄像头,也可以为两个后置摄像头、单个前置摄像头或者单个后置摄像头等。此外,图像的显示方式不仅限于拼接方式,还可以为画中画方式等,在此不做具体限制。
电子设备开启多镜录像后,便可进行多镜录像。参阅图3,图3示出了用户通过自拍杆手持电子设备进行横屏拍摄的场景示意图,其中,电子设备被横置于自拍杆中。可以看出,当电子设备处于图3所示的拍摄场景或是其他用户不便于直接触摸显示屏的场景时,不便于用户控制电子设备,例如不便于用户启动或停止录制及切换拍摄模式。这种情况下,用户可通过隔空手势控制电子设备开始或停止录制,以及切换电子设备的拍摄模式。
接下来,将结合附图说明在图3所示的拍摄场景中,用户利用隔空手势控制电子 设备的流程。
下面将结合图4A-图4E,介绍用户通过隔空手势控制电子设备开始录制的一系列界面。
如图4A所示,电子设备可显示拍摄预览界面401。其中,拍摄预览界面401包括预览图像401a、预览图像401b、隔空换镜控件402、录制控件403等。其中,预览图像401a为后置摄像头所采集的图像,预览图像401b为前置摄像头所采集的图像。此外,预览图像401a及预览图像401b左右拼接,这是因为电子设备被横置于自拍杆中;当电子设备被竖置于自拍杆中时,预览图像401a及预览图像401b可以上下拼接。隔空换镜控件402可供用户快速开启/关闭隔空换镜功能。其中,隔空换镜功能开启后,用户可通过特定的隔空手势控制电子设备。在图4A中,隔空换镜控件402指示隔空换镜功能处于开启状态(又可以称为第一状态)。录制控件403可供用户快速开始/停止录制视频。在图4A中,录制控件403指示电子设备处于未录制状态。
如图4A所示,当用户希望开始录制时,可先面对电子设备输入隔空手势,例如输入"举手"的隔空手势(可以理解为用户面向显示屏并保持"举手"的状态,又可以称为第一隔空手势)。电子设备的前置摄像头可采集用户输入的隔空手势(即预览图像401b),并在拍摄预览界面401中显示。此外,电子设备还可对采集到的预览图像401b进行分析处理,并在识别到该"举手"的隔空手势时,显示如图4B所示的拍摄预览界面401。图4B所示的拍摄预览界面401与图4A所示的拍摄预览界面401类似,区别在于:图4B所示的拍摄预览界面401中显示有提示信息404,用于提示用户电子设备已进入"预备"状态(可以理解为准备好进一步识别用户的隔空手势的状态),用户可按需输入隔空手势。示例性的,如图4B所示,该提示信息404可以为隔空手势的图标。在一种可能的设计中,该提示信息404还可包括提示用户需要在第一预设时间内完成手势操作的文字信息,例如"需要在3秒内完成手势操作"。
同时,提示信息404还可包括时间进度条,该时间进度条可用于指示电子设备进入“预备”状态的时间。其中,电子设备从进入“预备”状态的时刻(例如,第一时刻)开始计时,此时该时间进度条为空白;电子设备在第一时刻的第一预设时间后停止计时,此时该时间进度条被填满。电子设备进入“预备”状态后,用户需要在该时间进度条被填满前(可以理解为在第一预设时间内)输入隔空手势,以实现对电子设备的控制。
如图4C所示,基于图4B所示的拍摄场景,用户可持续输入"举手"的隔空手势(又可以称为第二隔空手势)直至时间进度条被填充超过三分之二(或者其他任意比例,例如二分之一、五分之二等)。响应于电子设备检测到用户持续输入"举手"的隔空手势直至时间进度条被填充超过三分之二,电子设备可进入准备录制状态,并显示如图4D所示的拍摄预览界面401。示例性的,若第一预设时间为3秒,则在电子设备进入"预备"状态后,用户可通过保持"举手"的姿势至少2秒,电子设备可进入准备录制状态。
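时间进度条的填充比例以及"填充超过三分之二"的判断,可以用如下时间比例计算来示意(第一预设时间取上文示例的3秒,阈值比例为上文示例值,函数名为虚构):

```python
def progress(elapsed_ms, preset_ms=3000):
    """返回进度条填充比例,范围 [0, 1]。"""
    return min(max(elapsed_ms / preset_ms, 0.0), 1.0)


def should_start_recording(elapsed_ms, preset_ms=3000, threshold=2 / 3):
    """持续手势使进度条填充超过阈值(如三分之二)时,进入准备录制状态。"""
    return progress(elapsed_ms, preset_ms) > threshold


print(should_start_recording(1000))  # False: 仅填充约 1/3
print(should_start_recording(2100))  # True: 已超过 2/3
```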
如图4D所示,电子设备进入准备录制状态后,拍摄预览界面401可仅显示预览图像401a、预览图像401b及倒计时提醒405。其中,预览图像401b显示用户已经把手放下(即不再输入"举手"的隔空手势)。该倒计时提醒405用于提醒用户电子设备将在第三预设时间后进入录制状态,例如在2秒后进入录制状态。通过在拍摄预览界面401上显示该倒计时提醒405,可提醒用户电子设备即将开始录制,便于用户做好录制准备。可以理解地,在电子设备进入准备录制状态后,用户则可无需继续输入隔空手势,可摆出任意姿势准备拍摄。
倒计时结束后,电子设备可开始录制。参阅图4E,图4E示出了电子设备开始录制时的拍摄预览界面401。图4E所示的拍摄预览界面401可包括录制时间406、录制控件407、截图控件408等。其中,录制时间406用于指示视频的录制时长,例如"00:01"。电子设备检测到作用于该录制控件407的触摸操作时,可停止或暂停录制视频。电子设备检测到作用于该截图控件408的触摸操作时,可截取拍摄预览界面401当前显示的图像(包括预览图像401a及预览图像401b)。
根据前文可知,电子设备第一次检测到用户输入的“举手”的隔空手势(即第一隔空手势)时,可进入“预备”状态。这种情况下,若电子设备在第一预设时间内检测到用户进一步输入的隔空手势(例如,“举手”的隔空手势),电子设备可执行该隔空手势对应的操作(例如,“举手”的隔空手势可对应于开始录制的操作)。若电子设备在第一预设时间内未检测到用户进一步输入的隔空手势,则电子设备恢复原始状态(即电子设备进入“预备”状态前所处的状态)。此时,若用户再次希望控制电子设备,则需重新输入“举手”的隔空手势使电子设备重新进入“预备”状态。
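本段描述的""预备"状态 + 第一预设时间内未检测到后续手势则恢复原状"的逻辑,可以整理为一个小状态机草图(状态名、时间单位均为示意性假设):

```python
class GestureSession:
    """"预备"状态的超时状态机(假设性草图,时间单位为毫秒)。"""

    def __init__(self, preset_ms=3000):
        self.state = "idle"
        self.preset_ms = preset_ms
        self.ready_since = None

    def on_first_gesture(self, now_ms):
        # 第一次检测到"举手"等第一隔空手势:进入"预备"状态
        self.state, self.ready_since = "ready", now_ms

    def on_second_gesture(self, now_ms):
        # 第一预设时间内检测到进一步的隔空手势:执行对应操作
        if self.state == "ready" and now_ms - self.ready_since <= self.preset_ms:
            self.state = "executing"

    def tick(self, now_ms):
        # 超时未检测到后续手势:恢复原始状态
        if self.state == "ready" and now_ms - self.ready_since > self.preset_ms:
            self.state = "idle"


s = GestureSession()
s.on_first_gesture(0)
s.tick(3500)
print(s.state)  # idle: 超时未输入第二个隔空手势,恢复原状
```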
需要说明的是,上述的第一隔空手势和第二隔空手势可以相同(例如,均为“举手”),也可以不同,在此不做具体限制。另外,上述“举手”的隔空手势也可替换为其他隔空手势,例如“点赞”的隔空手势、“胜利”的隔空手势等。
下面将结合附图5A-图5F,介绍用户通过隔空手势控制电子设备在录制过程中由前置拍摄模式切换为其他拍摄模式的一系列界面。
如图5A所示,电子设备可显示拍摄预览界面401。图5A所示的拍摄预览界面401中包括前置摄像头所采集的预览图像401b,需要说明的是,当前置摄像头有多个时,此时工作的前置摄像头可为电子设备默认的前置主摄像头。
如图5A所示,若用户希望切换电子设备的拍摄模式,可先面对电子设备输入隔空手势,例如输入“举手”的隔空手势。电子设备的前置摄像头可采集用户输入的隔空手势(即预览图像401b),并在拍摄预览界面401中显示。此外,电子设备还可对采集到的预览图像401b进行分析处理,并在识别到该“举手”的隔空手势时,显示如图5B所示的拍摄预览界面401。图5B所示的拍摄预览界面401与图5A所示的拍摄预览界面401类似,区别在于:图5B所示的拍摄预览界面401包括提示信息404。其中,关于提示信息404的相关介绍参见前文,此处不再具体赘述。
前文已经说明,在电子设备第一次检测到用户输入的"举手"的隔空手势(即第一隔空手势)时,可进入"预备"状态。电子设备进入"预备"状态后,电子设备可根据进一步检测到的手势操作确定所需执行的操作。也即,基于图5B所示的拍摄预览界面401,用户可以输入不同的隔空手势以切换电子设备的拍摄模式,例如控制电子设备将拍摄模式由前置拍摄切换为其他拍摄模式,例如后置拍摄模式、画中画拍摄模式等。
电子设备可由前置拍摄模式切换为后置拍摄模式或者画中画拍摄模式。下面将基于图5A-图5B所示的拍摄场景,介绍电子设备由前置拍摄模式切换为后置拍摄模式或者画中画拍摄模式过程中涉及到的界面。
如图5C所示,若用户希望电子设备由前置拍摄模式切换为后置拍摄模式,则可在电子设备显示图5B所示的拍摄预览界面401时,面对显示屏输入“翻转手掌”的隔空手势(也可称为第三隔空手势)。响应于检测到用户的“翻转手掌”的隔空手势,电子设备可将拍摄模式由前置拍摄模式切换为后置拍摄模式并显示如图5D所示的拍摄预览界面401。如图5D所示,电子设备切换为后置拍摄模式后,可在拍摄预览界面401显示后置摄像头所采集的图像401a。
可以理解的是,上述“翻转手掌”的隔空手势同样可使电子设备由后置拍摄模式切换为前置拍摄模式,其具体过程与电子设备由前置拍摄模式切换为后置拍摄模式的过程相似,此处不再赘述。
电子设备还可由前置拍摄模式切换为画中画拍摄模式。下面将基于图5A-图5B所示的拍摄场景,介绍电子设备由前置拍摄模式切换为画中画拍摄模式过程中涉及到的界面。
如图5E所示,若用户希望电子设备由前置拍摄模式切换为画中画拍摄模式,则可在电子设备显示图5B所示的拍摄预览界面401时,面对显示屏输入“伸出手掌然后握拳”的隔空手势(也可称为第四隔空手势)。响应于检测到用户输入“伸出手掌然后握拳”的隔空手势,电子设备可将拍摄模式由前置拍摄模式切换为画中画拍摄模式并显示如图5F所示的拍摄预览界面401。如图5F所示,电子设备切换为画中画拍摄模式后可在拍摄预览界面401中同时显示后置摄像头所采集的图像401a和前置摄像头所采集的图像401b。其中,图像401b叠加显示在图像401a上,图像401a的显示位置为整个拍摄预览界面401所在的位置。
电子设备还可由后置拍摄模式切换为画中画拍摄模式。下面将基于图5A-图5B所示的拍摄场景,介绍电子设备由后置拍摄模式切换为画中画拍摄模式过程中涉及到的界面。
如图5G所示,若用户希望电子设备由后置拍摄模式切换为画中画拍摄模式,则可在电子设备显示图5H所示的拍摄预览界面401时,面对显示屏输入"伸出手掌然后握拳"的隔空手势(也可称为第四隔空手势),其中图5H与图5B类似,区别在于:图5H中所示的拍摄预览界面中显示的图像是后置摄像头所采集的预览图像401a,图5B中所示的拍摄预览界面中显示的图像是前置摄像头所采集的预览图像401b。响应于检测到用户输入"伸出手掌然后握拳"的隔空手势,电子设备可将拍摄模式由后置拍摄模式切换为画中画拍摄模式并显示如图5I所示的拍摄预览界面401。如图5I所示,电子设备切换为画中画拍摄模式后可在拍摄预览界面401中同时显示后置摄像头所采集的图像401a和前置摄像头所采集的图像401b。其中,图像401b叠加显示在图像401a上,图像401a的显示位置为整个拍摄预览界面401所在的位置。图5I所示的图像与图5F所示的图像相同。
需要说明的是,在默认模式下,前置拍摄模式切换为画中画拍摄模式以及后置拍摄模式切换为画中画拍摄模式时,画中画模式的拍摄预览界面中的画中画界面显示的是前置摄像头所采集的预览图像401b,画中画模式的拍摄预览界面中的后取景界面显示的是后置摄像头所采集的预览图像401a。
电子设备还可切换画中画模式下,两幅图像的显示位置。下面将介绍电子设备更换画中画模式下两幅图像的位置过程中涉及到的界面。
如图5J所示,若用户希望切换图像的显示位置,则可在电子设备显示提示信息404时,面对显示屏输入“翻转手掌”的隔空手势。响应于检测到用户输入的“翻转手掌”的隔空手势,电子设备可切换图像的显示位置,并显示如图5K所示的拍摄预览界面401。如图5K所示,电子设备切换图像的显示位置后,图像401a与图像401b的位置互换,即图像401a由原本显示于拍摄预览界面401的整个区域变为叠加于图像401b上,图像401b由原本叠加于图像401a上变为显示于拍摄预览界面401的整个区域。
电子设备还可从画中画拍摄模式切换为其他拍摄模式,下面将结合附图具体说明该过程。
图6A示出了电子设备处于画中画拍摄模式时的拍摄预览界面401。该拍摄预览界面401与图5F所示的拍摄预览界面401类似,在此暂不赘述。
如图6A所示,若用户希望切换电子设备的拍摄模式,可先面对电子设备输入隔空手势,例如输入“举手”的隔空手势。电子设备的前置摄像头可采集用户输入的隔空手势(即预览图像401b),并在拍摄预览界面401中显示。此外,电子设备还可对采集到的预览图像401b进行分析处理,并在识别到该“举手”的隔空手势时,显示如图6B所示的拍摄预览界面401。图6B所示的拍摄预览界面401与图6A所示的拍摄预览界面401类似,区别在于:图6B所示的拍摄预览界面401包括提示信息404。其中,关于提示信息404的相关介绍参见前文,此处不再具体赘述。
基于图6B所示的拍摄预览界面401,用户可以输入不同的隔空手势以切换电子设备的拍摄模式,例如控制电子设备将拍摄模式由画中画拍摄模式切换为后置拍摄模式、前置拍摄模式等。
电子设备可由画中画拍摄模式切换为后置拍摄模式。下面将基于图6A-图6B所示的拍摄场景,介绍电子设备由画中画拍摄模式切换为后置拍摄模式过程中涉及到的界面。
如图6C所示,若用户希望电子设备由画中画拍摄模式切换为后置拍摄模式,则可在电子设备显示图6B所示的拍摄预览界面401时,面对显示屏输入“伸出手掌然后握拳”的隔空手势。响应于检测到用户的“伸出手掌然后握拳”的隔空手势,电子设备可将拍摄模式由画中画拍摄模式切换为后置拍摄模式并显示如图6D所示的拍摄预览界面401。如图6D所示,电子设备进入后置拍摄模式后,可不再显示前置摄像头采集的图像401b,仅显示后置摄像头采集的图像401a。
电子设备可由画中画拍摄模式切换为前置拍摄模式。此时的画中画拍摄模式的拍摄预览界面401如图6E所示,图6E与图6C类似,区别在于:图像401a与图像401b的位置互换,即图像401a由原本显示于拍摄预览界面401的整个区域变为叠加于图像401b上,图像401b由原本叠加于图像401a上变为显示于拍摄预览界面401的整个区域。
如图6E所示,若用户希望电子设备由画中画拍摄模式切换为前置拍摄模式,则可在电子设备显示提示信息404时,面对显示屏输入"伸出手掌然后握拳"的隔空手势。响应于检测到用户的"伸出手掌然后握拳"的隔空手势,电子设备可将拍摄模式由画中画拍摄模式切换为前置拍摄模式并显示如图6F所示的拍摄预览界面401。如图6F所示,电子设备进入前置拍摄模式后,可不再显示后置摄像头采集的图像401a,仅显示前置摄像头采集的图像401b。
需要说明的是,图5A-图6F所示的界面,均为基于电子设备处于录制中切换拍摄模式的相关界面。实际上,在电子设备还未录制时,也可识别用户的隔空手势并执行相应的操作,其原理与电子设备在录制中识别隔空手势并执行相应操作的原理类似,在此暂不赘述。
可以理解地,上述内容均为基于电子设备被横置的拍摄场景的相关介绍,下面将结合附图介绍电子设备被竖置时,用户利用隔空手势控制电子设备的方法。
电子设备被竖置时,同样可以从前置/后置拍摄模式切换为画中画拍摄模式,或者从画中画拍摄模式切换为前置/后置拍摄模式,其原理与电子设备被横置时的原理类似。
下面以电子设备处于前置拍摄模式为例,说明电子设备被竖置时由前置拍摄模式切换为画中画拍摄模式的相关界面。
如图7A所示,电子设备可显示电子设备被竖置且处于前置拍摄模式时的拍摄预览界面401。该拍摄预览界面401包括前置摄像头所拍摄的图像401b。
若用户希望切换电子设备的拍摄模式,先面对电子设备输入隔空手势,例如输入“举手”的隔空手势。电子设备的前置摄像头可采集用户输入的隔空手势(即预览图像401b),并在拍摄预览界面401中显示。此外,电子设备还可对采集到的预览图像401b进行分析处理,并在识别到该“举手”的隔空手势时,显示如图7B所示的拍摄预览界面401。图7B所示的拍摄预览界面401与图7A所示的拍摄预览界面401类似,区别在于:图7B所示的拍摄预览界面401包括提示信息404。其中,关于提示信息404的相关介绍参见前文,此处不再具体赘述。
基于图7B所示的拍摄预览界面401,电子设备可在检测到用户输入的“伸出手掌然后握拳”的隔空手势时,由前置拍摄模式切换为画中画拍摄模式。
当拍摄预览界面为画中画拍摄模式的显示画面时,电子设备可在检测到用户输入的“伸出手掌然后握拳”的隔空手势时,由画中画拍摄模式切换为前置/后置拍摄模式。
需要说明的是,电子设备被竖置时其他拍摄模式间的切换,其原理及内容与电子设备被横置时其他拍摄模式间的切换的原理及内容类似,在此暂不赘述。
上述实施例对本申请提供的拍摄视频的方法所应用的一些拍摄模式的切换进行了说明,接下来,将结合附图对拍摄模式切换中拍摄预览界面的变化进行说明。
用户在从画中画拍摄模式切换到前置拍摄模式或者切换到后置拍摄模式时,两种模式的切换过程中,电子设备对拍摄预览界面401的处理流程和原理相同。下面将结合图8A-图8G,以电子设备处于横置状态为例,介绍用户从画中画拍摄模式切换到后置拍摄模式的过程中,电子设备对拍摄预览界面401的处理流程以及该处理流程中的一系列界面。
本实施例中,以用户输入的命令为"伸出手掌然后握拳"为例,对拍摄预览界面401的处理流程以及该处理流程中的一系列界面进行说明。电子设备在检测到用户输入的"伸出手掌然后握拳"的隔空手势(也即第三隔空手势)后,响应于检测到用户输入"伸出手掌然后握拳"的隔空手势,电子设备的拍摄预览界面401将逐渐由画中画拍摄模式切换为后置拍摄模式,参阅图8A,图8A示出了电子设备对拍摄预览界面401进行动画处理的一种流程图,拍摄预览界面从画中画拍摄模式切换到后置拍摄模式的整个过程的切换时间为第一切换周期T1(例如,该第一切换周期T1为600ms),该切换周期可以分为两个处理时间段,下面结合附图对不同的处理时间段进行详细的介绍。
如图8B所示,拍摄预览界面401中的画面为画中画拍摄模式切换到后置拍摄模式的第一切换周期T1中0ms(第一时刻)时的画面,此时的画面可以为画中画拍摄模式所拍摄到的视频中最后一帧的图像。拍摄预览界面401包括画中画界面401c和后取景界面401d,此时,画中画界面401c所在的显示区域可以为第一区域,画中画界面401c中显示的图像可以为第一图像;后取景界面401d所在的显示区域可以为第二区域,后取景界面401d中显示的图像可以为第二图像。后取景界面401d的显示界面为整个拍摄预览界面401所在的界面,画中画界面401c叠加显示在后取景界面401d上,在预设模式中,画中画界面401c位于后取景界面401d的左下角。其中,画中画界面401c为前置摄像头所采集的图像,后取景界面401d为后置摄像头所采集的图像。
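画中画界面叠加显示在后取景界面左下角的布局,可以用如下坐标计算来示意(小窗缩放比例与边距均为本文假设的示例值,非实际参数):

```python
def pip_rect(screen_w, screen_h, scale=0.3, margin=16):
    """返回画中画小窗 (x, y, w, h),默认置于屏幕左下角(坐标原点在左上角)。"""
    w, h = int(screen_w * scale), int(screen_h * scale)
    return (margin, screen_h - h - margin, w, h)


print(pip_rect(1920, 1080))  # (16, 740, 576, 324)
```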
参阅图8C,图8C所示的拍摄预览界面401为第一切换周期中300ms(第二时刻)时的画面,如图8C所示,拍摄预览界面401包括后取景界面401d(此时,后取景界面401d所在的显示区域可以为第三区域),画中画界面401c消失,其中,图8C中的后取景界面401d相比于图8B中的后取景界面401d变得更模糊。
参阅图8D,图8D所示的拍摄预览界面401为第一切换周期中150ms时的画面,如图8D所示,拍摄预览界面401包括画中画界面401c和后取景界面401d,图8D所示的拍摄预览界面401与图8B所示的拍摄预览界面401类似,区别在于图8D中的画中画界面401c相比于图8B中的画中画界面401c变得更为模糊,且图8D中的画中画界面401c相比于图8B中的画中画界面401c变得更透明(不透明度降低了),图8D中的后取景界面401d相比于图8B中的后取景界面401d变得更模糊,其中,图8D中的后取景界面401d的模糊程度,是介于图8B中的后取景界面401d的模糊程度与图8C中的后取景界面401d的模糊程度之间。
在0ms(第一时刻)到300ms(第二时刻)这一时间段称为第一时间段,参考图8A,在第一时间段内,电子设备对画中画界面401c进行了透明度处理。其中,画中画界面401c在0ms(第一时刻)时的不透明度(opacity)为100%(第二不透明度),画中画界面401c在300ms(第二时刻)时的不透明度为0(第一不透明度)。即,在第一时间段内的0ms(第一时刻)到300ms(第二时刻)内,画中画界面401c的不透明度从100%变为了0(从第二不透明度变为了第一不透明度),为了使得画中画界面401c的不透明度的变化过程视觉效果更好,画中画界面401c的不透明度变化是一个随着时间渐变的过程,画中画界面401c的不透明度变化趋势可以参考第二曲线。参阅图9B,图9B示出了第二曲线的变化趋势,第二曲线可以为06-Sharp curve(06-锐利曲线)。其中,图9B中的坐标系,x轴代表时间,y轴表示透明度(与不透明度为相反的概念)。从第二曲线的变化趋势可以看出,第二曲线的曲率随着时间先增大后减小,第二曲线可以表示一个由缓到急,再由急到缓的变换过程,即画中画界面401c的不透明度变化过程在0ms-300ms这一时间段内,在第一时间段的开头时间段内变化得较慢,在第一时间段的中间时间段内变化得较快,在第一时间段的末尾时间段内变化得较慢。
在第一时间段内,电子设备还对画中画界面401c进行了高斯模糊处理,其中,画中画界面401c在0ms(第一时刻)时的高斯模糊值为0(第一模糊值),画中画界面401c在300ms(第二时刻)时的高斯模糊值为10(第二模糊值)。在第一时间段内,画中画界面401c的高斯模糊值从0变为了10(从第一模糊值变为了第二模糊值),在第一时间段内画中画界面401c高斯模糊值的变化趋势可以参考第一曲线。参阅图9A,图9A示出了第一曲线的变化趋势,第一曲线可以为05-Extreme Deceleration Curve(05-急缓曲线)。其中,图9A中的坐标系,x轴代表时间,y轴表示高斯模糊值。从第一曲线的变化趋势可以看出,第一曲线的曲率随着时间越来越小,第一曲线可以表示一个由急到缓的变换过程,即画中画界面401c的高斯模糊值在0ms-300ms这一时间段内,高斯模糊值随着时间增加而增加的趋势也是由急到缓。
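第一曲线("由急到缓")与第二曲线("由缓到急再由缓")的插值效果,可以分别用减速型(ease-out)与平滑型(smoothstep 类)缓动函数来近似示意(具体曲线形状为近似假设,并非原系统所用的 05/06 曲线的精确定义;300ms 与 0→10 等数值取自上文示例):

```python
def ease_out(t):
    """第一曲线的近似:由急到缓,t ∈ [0, 1]。"""
    return 1 - (1 - t) ** 3


def smooth(t):
    """第二曲线的近似:先缓后急再缓(smoothstep),t ∈ [0, 1]。"""
    return t * t * (3 - 2 * t)


def blur_at(ms, duration=300, max_blur=10):
    """第一时间段内画中画界面的高斯模糊值:0 → 10,按第一曲线变化。"""
    return max_blur * ease_out(min(ms / duration, 1.0))


def opacity_at(ms, duration=300):
    """第一时间段内画中画界面的不透明度:100% → 0,按第二曲线变化。"""
    return 1.0 - smooth(min(ms / duration, 1.0))


print(blur_at(0), blur_at(300))        # 起点 0,终点 10
print(opacity_at(0), opacity_at(300))  # 起点 1.0,终点 0.0
```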
参阅图8E,图8E示出了拍摄预览界面401在第一切换周期中450ms的画面,如图8E所示,拍摄预览界面401包括后取景界面401d,后取景界面401d中显示的画面为后置摄像头所拍摄到的实时视频画面与最后一帧图像,该实时视频画面被后取景界面401d中所显示的最后一帧图像遮挡,但由于图8E中的最后一帧图像相比于图8D中的最后一帧图像,其不透明度降低,因此此时看到的画面为后置摄像头所拍摄到的实时视频画面与最后一帧图像叠加后所形成的画面。
参考图8F,图8F示出了拍摄预览界面401在第一切换周期中600ms(第三时刻)时的画面,如图8F所示,拍摄预览界面401包括后取景界面401d(此时,后取景界面401d所在的显示区域可以为第三区域,后取景界面401d中的图像为第三图像)。其中,后取景界面401d的显示界面为整个拍摄预览界面401所在的界面,后取景界面401d上显示的画面为后置摄像头所采集到的实时视频画面。
将300ms(第二时刻)到600ms(第三时刻)这一时间段称为第二时间段,600ms(第三时刻)时的拍摄预览界面401不同于300ms(第二时刻)时的拍摄预览界面401,这是由于在第二时间段内,电子设备对后取景界面401d进行了透明度处理。参阅图8A和图8C,后取景界面401d中的最后一帧图像在300ms(第二时刻)时的不透明度为100%(第二不透明度)。参阅图8A和图8F,后取景界面401d中的最后一帧图像在600ms(第三时刻)时的不透明度为0(第一不透明度)。即,在第二时间段内,后取景界面401d的不透明度从100%变为了0(从第二不透明度变为了第一不透明度)。后取景界面401d的不透明度变化趋势可以参考第二曲线。
参阅图8C、图8E、图8F,这三幅图依次示出了在第二时间段内,后取景界面401d中的最后一帧图像由完全不透明变为了完全透明的过程,而在300ms之后,后置摄像头所拍摄视频流在后取景界面401d所在的显示位置进行上流,如图8C所示,在后取景界面401d所在的显示位置进行上流的视频流被完全不透明的最后一帧图像所遮挡;如图8E所示,随着最后一帧图像的不透明度降低,在图8E中可以看到视频流与最后一帧图像重叠的画面;如图8F所示,随着后取景界面401d中的最后一帧图像的透明度逐渐发生变化至完全透明,后置摄像头所拍摄到的视频画面逐渐清晰地呈现在拍摄预览界面401中的后取景界面401d中。
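"最后一帧图像随不透明度降低而与实时视频画面叠加"的效果即标准的 alpha 混合,可用单个像素的计算来示意(像素取值为虚构示例):

```python
def blend(last_frame_px, live_px, opacity):
    """alpha 混合:opacity 为最后一帧图像的不透明度,取值 [0, 1]。
    opacity=1 时完全遮挡实时画面,opacity=0 时完全透出实时画面。"""
    return last_frame_px * opacity + live_px * (1 - opacity)


print(blend(200, 100, 1.0))  # 200.0: 完全不透明,只见最后一帧
print(blend(200, 100, 0.5))  # 150.0: 两者叠加
print(blend(200, 100, 0.0))  # 100.0: 完全透明,只见实时画面
```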
此外,电子设备还可以从画中画拍摄模式切换到前置拍摄模式。具体的,可以通过输入“翻转手掌”的隔空手势,先将画中画拍摄模式的界面从图5J所示的画面切换为图5K所示的界面,然后按照上述实施例所介绍的方法,将画中画拍摄模式切换到前置拍摄模式。由于将画中画拍摄模式切换到前置拍摄模式的原理与画中画拍摄模式切换到后置拍摄模式的原理相同,在此不作赘述。
电子设备还可以从前置/后置拍摄模式切换到画中画拍摄模式,从前置拍摄模式切换到画中画拍摄模式与从后置拍摄模式切换到画中画拍摄模式的处理流程和原理相同。下面将结合图10A-图10G,以后置拍摄模式切换到画中画拍摄模式为例,介绍该切换过程中,电子设备对拍摄预览界面401的处理流程以及该处理流程中的一系列界面。
本实施例中,以切换前的拍摄预览界面为图5F所示的界面,用户输入的命令为"伸出手掌然后握拳",切换到的画中画拍摄模式的拍摄预览界面为图6A所示的界面为例,对拍摄预览界面401的处理流程以及该处理流程中的一系列界面进行说明。
电子设备在检测到用户输入的“伸出手掌然后握拳”的隔空手势(也即第三隔空手势)后,响应于检测到用户输入“伸出手掌然后握拳”的隔空手势,电子设备的拍摄预览界面401将逐渐由后置拍摄模式切换为画中画拍摄模式,参阅图10A,图10A示出了电子设备对拍摄预览界面401进行动画处理的另一种流程图,拍摄预览界面从后置拍摄模式切换到画中画拍摄模式的整个过程的切换时间为第二切换周期T2(例如,该第二切换周期T2为600ms),该第二切换周期T2可以分为两个处理时间段,下面结合附图对不同的处理时间段进行详细的介绍。
参阅图10B,图10B所示的拍摄预览界面401中的画面为后置拍摄模式切换到画中画拍摄模式的第二切换周期T2中0ms(第一时刻)时的画面,此时所显示的画面可以为后置拍摄模式所拍摄到的视频中最后一帧的图像。拍摄预览界面401包括后取景界面401d,后取景界面401d的显示界面为拍摄预览界面401所在的整个显示界面。
参阅图10C,图10C所示的拍摄预览界面401为第二切换周期T2中300ms(第二时刻)时的画面,如图10C所示,拍摄预览界面401包括画中画界面401c和后取景界面401d,其中,画中画界面401c叠加显示于后取景界面401d上,画中画界面401c中所显示的画面为一张预制图片。在默认模式中,画中画界面401c可以显示在后取景界面401d的左下角。此外,图10C中的后取景界面401d相比于图10B中的后取景界面401d变得更为模糊。
参考图10D,图10D示出了拍摄预览界面401在第二切换周期T2中150ms时的画面,如图10D所示,拍摄预览界面401包括画中画界面401c和后取景界面401d,图10D所示的拍摄预览界面401与图10C所示的拍摄预览界面401类似,区别点在于:图10D中画中画界面401c的不透明度低于图10C中画中画界面401c的不透明度,图10B、图10D和图10C依次示出了画中画界面401c由透明到不透明的过程。图10D中后取景界面401d的模糊程度介于图10B和图10C中后取景界面401d的模糊程度之间,图10B、图10D和图10C依次示出了后取景界面401d越来越模糊的过程。
在第二切换周期T2中的0ms(第一时刻)到300ms(第二时刻)这一时间段称为第一时间段,参考图10A,在第一时间段内,电子设备对后取景界面401d进行了高斯模糊处理,其中,后取景界面401d在0ms(第一时刻)时的高斯模糊值为0(第一模糊值),后取景界面401d在300ms(第二时刻)时的高斯模糊值为100(第三模糊值)。在第一时间段内,后取景界面401d的高斯模糊值越来越高,在第一时间段内后取景界面401d高斯模糊值的变化趋势可以参考第一曲线。
此外,在第一时间段内,电子设备还对画中画界面401c进行了透明度处理,其中,画中画界面401c在0ms(第一时刻)时的不透明度为0,画中画界面401c在300ms(第二时刻)时的不透明度为100%。在第一时间段内,画中画界面401c的不透明度越来越高,使得画中画界面401c逐渐清晰地叠加显示于后取景界面401d上,在第一时间段内画中画界面401c的不透明度的变化趋势可以参考第二曲线。
参阅图10F,图10F示出了拍摄预览界面401在第二切换周期T2中600ms(第三时刻)时的画面,如图10F所示,拍摄预览界面401包括画中画界面401c和后取景界面401d。其中,画中画界面401c叠加显示于后取景界面401d上,在预设模式中,画中画界面401c预设于拍摄预览界面401的左下角处。画中画界面401c中显示的画面为前置摄像头所拍摄到的实时视频画面,后取景界面401d中显示的画面为后置摄像头所拍摄到的实时视频画面。
参阅图10E,图10E示出了拍摄预览界面401在第二切换周期中450ms的画面,如图10E所示,拍摄预览界面401包括画中画界面401c和后取景界面401d,画中画界面401c叠加显示于后取景界面401d上。其中,画中画界面401c中显示的画面为前置摄像头所拍摄到的实时视频画面与预制图片,后取景界面401d中显示的画面为后置摄像头所拍摄到的实时视频画面与最后一帧图像。
画中画界面401c中的实时视频画面被画中画界面401c中所显示的预制图片遮挡,但由于图10E中的预制图片相比于图10C中的预制图片,其不透明度降低,因此此时看到的画中画界面401c中的画面为前置摄像头所拍摄到的实时视频画面与预制图片叠加后所形成的画面。
后取景界面401d中的实时视频画面被后取景界面401d中所显示的最后一帧图像所遮挡,但由于图10E中的最后一帧图像相比于图10C中的最后一帧图像,其不透明度降低,因此此时看到的后取景界面401d中的画面为后置摄像头所拍摄到的实时视频画面与最后一帧图像叠加后所形成的画面。
将300ms(第二时刻)到600ms(第三时刻)这一时间段称为第二时间段,600ms(第三时刻)时的拍摄预览界面401不同于300ms(第二时刻)时的拍摄预览界面401,这是由于在第二时间段内,电子设备对画中画界面401c和后取景界面401d进行了透明度处理。参阅图10A和图10C,画中画界面401c中的预制图片在300ms(第二时刻)时的不透明度为100%(第二不透明度),后取景界面401d中的最后一帧图像在300ms(第二时刻)时的不透明度为100%(第二不透明度)。参阅图10A和图10F,画中画界面401c中的预制图片在600ms(第三时刻)时的不透明度为0(第一不透明度),后取景界面401d中的最后一帧图像在600ms(第三时刻)时的不透明度为0(第一不透明度)。即,在第二时间段内,画中画界面401c的不透明度从100%变为了0(从第二不透明度变为了第一不透明度),后取景界面401d的不透明度也从100%变为了0(从第二不透明度变为了第一不透明度)。画中画界面401c和后取景界面401d的不透明度变化趋势可以参考第二曲线。
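预制图片在整个第二切换周期内"第一时间段淡入(0→100%)、第二时间段淡出(100%→0)"的不透明度时间线,可以整理为一个分段函数草图(600ms 周期与 300ms 分界取自上文示例,曲线以线性近似代替实际的第二曲线渐变):

```python
def prebuilt_opacity(ms, t1=300, t2=600):
    """预制图片的不透明度:第一时间段 [0, t1] 由 0 渐变到 1,
    第二时间段 (t1, t2] 由 1 渐变回 0(线性近似,实际按曲线渐变)。"""
    if ms <= t1:
        return ms / t1
    if ms <= t2:
        return 1.0 - (ms - t1) / (t2 - t1)
    return 0.0


print(prebuilt_opacity(0))    # 0.0: 周期开始,完全透明
print(prebuilt_opacity(300))  # 1.0: 第二时刻,完全不透明
print(prebuilt_opacity(600))  # 0.0: 第三时刻,再次完全透明
```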
参阅图10C、图10E、图10F,这三幅图依次示出了在第二时间段内,画中画界面401c中的预制图片由完全不透明变为了完全透明的过程,以及后取景界面401d中的最后一帧图像由完全不透明变为了完全透明的过程。由于在300ms之后,前置摄像头所拍摄视频流在画中画界面401c所在的显示位置进行上流,后置摄像头所拍摄视频流在后取景界面401d所在的显示位置进行上流,如图10C所示,在画中画界面401c所在的显示位置进行上流的视频流被完全不透明的预制图片所遮挡,在后取景界面401d所在的显示位置进行上流的视频流被完全不透明的最后一帧图像所遮挡;如图10E所示,随着预制图片和最后一帧图像的不透明度降低,在图10E中可以看到视频流与预制图片重叠的画面,以及视频流与最后一帧图像重叠的画面。如图10F所示,随着画中画界面401c中的预制图片的透明度逐渐发生变化至完全透明,前置摄像头所拍摄到的视频画面逐渐清晰地呈现在拍摄预览界面401中的画中画界面401c中;随着后取景界面401d中的最后一帧图像的透明度逐渐发生变化至完全透明,后置摄像头所拍摄到的视频画面逐渐清晰地呈现在拍摄预览界面401中的后取景界面401d中。
通过以上的实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。
在本申请实施例各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个计算机可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)或处理器执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:快闪存储器、移动硬盘、只读存储器、随机存取存储器、磁碟或者光盘等各种可以存储程序代码的介质。
以上所述,仅为本申请实施例的具体实施方式,但本申请实施例的保护范围并不局限于此,任何在本申请实施例揭露的技术范围内的变化或替换,都应涵盖在本申请实施例的保护范围之内。因此,本申请实施例的保护范围应以所述权利要求的保护范围为准。

Claims (13)

  1. 一种拍摄视频的方法,其特征在于,所述方法应用于包括显示屏、第一摄像头和第二摄像头的电子设备,所述第一摄像头和所述第二摄像头位于所述显示屏的不同侧,所述方法包括:
    所述显示屏的第一区域显示来自所述第一摄像头实时采集的第一图像,所述显示屏的第二区域显示来自所述第二摄像头实时采集的第二图像,其中,所述第二区域为显示屏的整个显示区域,所述第一区域小于所述第二区域;
    响应于检测到的用户操作,所述第一区域中所显示的所述第一图像的不透明度由第二不透明度逐渐变小至第一不透明度;
    所述显示屏的第三区域显示第三图像,所述第三图像为所述第二摄像头实时采集的第二图像,所述显示屏的第三区域为所述显示屏的整个显示区域。
  2. 根据权利要求1所述的方法,其特征在于,在所述响应于检测到的用户操作之后,还包括:
    在第一时间段内,所述第一区域显示在检测到所述用户操作时所述第一摄像头所采集的第一图像,所述第二区域显示在检测到所述用户操作时所述第二摄像头所采集的第二图像,所述第一图像叠加显示在所述第二图像上,所述第一区域中所显示的所述第一图像的不透明度由第二不透明度逐渐变小至第一不透明度;
    在第二时间段内,所述第三区域显示在检测到所述用户操作时所述第二摄像头所采集的第二图像与所述第三图像,其中,所述第二图像叠加显示在所述第三图像上。
  3. 根据权利要求2所述的方法,其特征在于,在所述第一时间段,所述第一图像的高斯模糊值根据第一曲线逐渐变大,在所述第一时间段内和所述第二时间段内,所述第二图像的高斯模糊值根据所述第一曲线逐渐变大。
  4. 根据权利要求2或3所述的方法,其特征在于,所述第一区域中所显示的所述第一图像的不透明度根据第二曲线由第二不透明度逐渐变小至第一不透明度。
  5. 根据权利要求2至4任意一项所述的方法,其特征在于,在所述显示屏的第三区域显示所述第三图像之前,还包括:
    在所述第二时间段内,所述第二图像的不透明度根据第二曲线由第二不透明度到第一不透明度逐渐变小。
  6. 一种拍摄视频的方法,其特征在于,所述方法应用于包括显示屏、第一摄像头和第二摄像头的电子设备,所述第一摄像头和所述第二摄像头位于所述显示屏的不同侧,所述方法包括:
    所述显示屏的第一区域显示来自所述第一摄像头实时采集的第一图像,或者显示来自所述第二摄像头实时采集的第二图像,其中,所述第一区域为显示屏的整个显示区域;
    响应于检测到的用户操作,所述显示屏的第二区域显示预制图片,所述预制图片的不透明度由第一不透明度逐渐变大至第二不透明度;
    所述显示屏的第二区域显示所述第一摄像头实时采集的第一图像,所述显示屏的第三区域显示所述第二摄像头实时采集的第二图像,所述第三区域为所述显示屏的整个显示区域,所述第二区域小于所述第三区域。
  7. 根据权利要求6所述的方法,其特征在于,在所述响应于检测到的用户操作之后,还包括:
    在第一时间段内,所述第一区域显示第三图像,所述第三图像为在检测到所述用户操作时所述第一摄像头所采集的第一图像,或者,所述第三图像为在检测到所述用户操作时所述第二摄像头所采集的第二图像,所述显示屏的第二区域显示预制图片,所述预制图片的不透明度由第一不透明度逐渐变大至第二不透明度;
    在第二时间段内,所述第二区域显示所述预制图片与所述第一摄像头实时采集的第一图像,其中,所述预制图片叠加显示在所述第一图像上,所述第三区域显示所述第三图像与所述第二摄像头所采集的第二图像,其中,所述第三图像叠加显示在所述第二图像上。
  8. 根据权利要求7所述的方法,其特征在于,在所述第一时间段,所述第三图像的高斯模糊值根据第一曲线逐渐变大。
  9. 根据权利要求7或8所述的方法,其特征在于,所述预制图片的不透明度根据第二曲线由第一不透明度逐渐变大至第二不透明度。
  10. 根据权利要求7至9任意一项所述的方法,其特征在于,在所述显示屏的第二区域显示所述第一摄像头实时采集的第一图像,所述显示屏的第三区域显示所述第二摄像头实时采集的第二图像之前,还包括:
    在所述第二时间段内,所述预制图片的不透明度根据第二曲线由第二不透明度到第一不透明度逐渐变小;
    在所述第二时间段内,所述第三图像的不透明度根据第二曲线由第二不透明度到第一不透明度逐渐变小。
  11. 一种电子设备,其特征在于,包括用于存储计算机程序指令的存储器和用于执行程序指令的处理器,其中,当该计算机程序指令被所述处理器执行时,触发所述电子设备执行权利要求1-10任一项所述的方法。
  12. 一种计算机可读存储介质,其特征在于,所述计算机可读存储介质包括存储的程序,其中,在所述程序运行时控制所述计算机可读存储介质所在设备执行权利要求1-10中任意一项所述的方法。
  13. 一种计算机程序产品,其特征在于,所述计算机程序产品包含可执行指令,当所述可执行指令在计算机上执行时,使得计算机执行权利要求1-10中任意一项所述的方法。
PCT/CN2022/095357 2021-06-16 2022-05-26 一种拍摄视频的方法及电子设备 WO2022262550A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US18/247,247 US20230377306A1 (en) 2021-06-16 2022-05-26 Video Shooting Method and Electronic Device
EP22824030.5A EP4207744A4 (en) 2021-06-16 2022-05-26 VIDEO PHOTOGRAPHY METHOD AND ELECTRONIC DEVICE

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
CN202110676709.3 2021-06-16
CN202110676709 2021-06-16
CN202111437971.9 2021-11-29
CN202111437971 2021-11-29
CN202111673018.4 2021-12-31
CN202111673018.4A CN115811656A (zh) 2021-06-16 2021-12-31 一种拍摄视频的方法及电子设备

Publications (1)

Publication Number Publication Date
WO2022262550A1 true WO2022262550A1 (zh) 2022-12-22

Family

ID=84526919

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/095357 WO2022262550A1 (zh) 2021-06-16 2022-05-26 一种拍摄视频的方法及电子设备

Country Status (3)

Country Link
US (1) US20230377306A1 (zh)
EP (1) EP4207744A4 (zh)
WO (1) WO2022262550A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USD992593S1 (en) * 2021-01-08 2023-07-18 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface
USD992592S1 (en) * 2021-01-08 2023-07-18 Samsung Electronics Co., Ltd. Display screen or portion thereof with graphical user interface

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006279968A (ja) * 2006-04-05 2006-10-12 Hitachi Ltd 映像アクセス装置及び映像アクセスプログラムを記録した記録媒体
JP2015219817A (ja) * 2014-05-20 2015-12-07 オリンパス株式会社 表示装置、表示方法、およびプログラム
CN105868246A (zh) * 2015-12-15 2016-08-17 乐视网信息技术(北京)股份有限公司 一种图片显示方法及装置
CN105915673A (zh) * 2016-05-31 2016-08-31 努比亚技术有限公司 一种视频特效切换的方法和移动终端
US20180275833A1 (en) * 2017-03-22 2018-09-27 Mz Ip Holdings, Llc System and method for managing and displaying graphical elements
CN110069313A (zh) * 2019-04-29 2019-07-30 珠海豹好玩科技有限公司 图像切换方法、装置、电子设备及存储介质
CN112135049A (zh) * 2020-09-24 2020-12-25 维沃移动通信有限公司 图像处理方法、装置及电子设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102010955B1 (ko) * 2013-01-07 2019-08-14 삼성전자 주식회사 프리뷰 제어 방법 및 이를 구현하는 휴대 단말
KR20140114501A (ko) * 2013-03-14 2014-09-29 삼성전자주식회사 영상 데이터 처리 방법 및 이를 지원하는 전자 장치
KR102187236B1 (ko) * 2013-12-17 2020-12-04 삼성전자 주식회사 프리뷰 방법 및 이를 구현하는 전자 장치

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4207744A4

Also Published As

Publication number Publication date
EP4207744A4 (en) 2024-05-15
US20230377306A1 (en) 2023-11-23
EP4207744A1 (en) 2023-07-05


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22824030

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022824030

Country of ref document: EP

Effective date: 20230329

NENP Non-entry into the national phase

Ref country code: DE