WO2022252660A1 - Video shooting method and electronic device - Google Patents

Video shooting method and electronic device

Info

Publication number
WO2022252660A1
Authority
WO
WIPO (PCT)
Prior art keywords
interface
template
electronic device
segment
movie
Prior art date
Application number
PCT/CN2022/074128
Other languages
English (en)
French (fr)
Inventor
王龙
易婕
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority to US17/911,215 (published as US20240129620A1)
Priority to EP22757810.1A (published as EP4124019A4)
Publication of WO2022252660A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/45: Generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces
    • H04N23/63: Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631: Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632: GUIs for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • H04N23/633: Electronic viewfinders displaying additional information relating to control or operation of the camera
    • H04N23/64: Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • H04N23/667: Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/2621: Cameras specially adapted for the electronic generation of special effects during image pickup, e.g. digital cameras, camcorders, video cameras having integrated special effects capability
    • H04N5/265: Mixing
    • H04N5/272: Means for inserting a foreground image in a background image, i.e. inlay, outlay
    • H04N5/76: Television signal recording

Definitions

  • the present application relates to the technical field of photographing, and in particular to a video shooting method and an electronic device.
  • electronic devices such as mobile phones can not only provide a picture shooting function, but also provide a video recording function.
  • the video recording function is usually limited by the edges of the viewfinder frame, and a dynamic video can only be recorded by the user moving the electronic device.
  • ordinary users lack the professional skill to control the viewfinder of a mobile phone while recording a dynamic video.
  • the present application provides a video shooting method and an electronic device.
  • a micro movie can be recorded without the user moving the electronic device to control the viewfinder, thereby reducing the difficulty of shooting micro movies in a dual-lens video scene.
  • an embodiment of the present application provides a video shooting method, and the method is applied to an electronic device including multiple cameras.
  • the electronic device displays a first interface; wherein, the first interface is a viewfinder interface before the electronic device starts recording, and the first interface includes real-time images collected by two cameras among the plurality of cameras.
  • the electronic device displays multiple template options on the first interface.
  • the first operation is used to trigger the electronic device to record a micro movie. Each template option corresponds to a dynamic effect template for image processing, and the dynamic effect template is used to process preview images collected by at least two cameras among the multiple cameras to obtain corresponding animation effects.
  • the electronic device displays a second interface in response to the user's second operation on the first template option among the multiple template options; wherein, the second interface is used to play the animation effect of the first dynamic effect template corresponding to the first template option.
  • the electronic device uses the first motion template to process the first real-time image captured by the first camera and the second real-time image captured by the second camera, so as to record a micro movie.
  • the first camera is a camera in the plurality of cameras
  • the second camera is a camera in the plurality of cameras except the first camera.
  • the real-time images collected by the two cameras can be dynamically processed to obtain an animated micro movie. In this way, micro-movie recording in a dual-lens recording scene can be realized, so that rich dual-lens video content can be recorded.
  • the user does not need to perform complex operations such as framing, which can reduce the difficulty of recording micro movies.
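The first-aspect flow above (template options on the pre-recording viewfinder, a preview of the chosen dynamic effect, then dual-camera recording) can be sketched as a small state model. This is only an illustrative reading of the claim language; every class, method, and template name below is invented, not from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class MotionEffectTemplate:
    """One dynamic-effect template; each template option maps to one of these."""
    name: str
    # One animation sub-template per movie clip (one-to-one, per the method).
    sub_templates: list = field(default_factory=list)

@dataclass
class MicroMovieRecorder:
    """Tracks which viewfinder interface the device is currently showing."""
    interface: str = "first"   # pre-recording viewfinder with two live previews
    selected: MotionEffectTemplate = None

    def preview(self, template):
        # Second operation: play the template's animation on the second interface.
        self.selected = template
        self.interface = "second"

    def start_recording(self):
        # Third operation: process both cameras' frames with the chosen template.
        assert self.selected is not None, "a template must be previewed first"
        self.interface = "recording"

recorder = MicroMovieRecorder()
recorder.preview(MotionEffectTemplate("travel", sub_templates=["s1", "s2", "s3"]))
recorder.start_recording()
print(recorder.interface)  # recording
```

The point of the sketch is the ordering constraint the claims describe: recording only starts after a template option has been selected and its animation previewed.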
  • the processing of the real-time images collected by the first camera and the real-time images collected by the second camera by using the first dynamic effect template includes: displaying the third interface by the electronic device.
  • the third interface is a viewfinder interface before the electronic device starts recording, so recording preparations can be performed in the third interface.
  • the third interface includes the third real-time image collected by the first camera and the fourth real-time image collected by the second camera.
  • the micro-movie recorded by using the first motion effect template is composed of a plurality of movie clips, the first motion effect template includes a plurality of motion effect sub-templates, and the plurality of movie clips are in one-to-one correspondence with the plurality of motion effect sub-templates.
  • the electronic device receives a fourth user operation on the third interface, and the fourth operation is used to trigger the electronic device to record a first movie segment, where the first movie segment is any one of the plurality of movie segments.
  • the first segment corresponds to the first animation sub-template.
  • the electronic device displays a fourth interface.
  • the fourth interface is a viewfinder interface where the electronic device is recording, and the fourth interface includes a first preview image and a second preview image.
  • the first preview image is obtained by the electronic device using the first animation sub-template to perform animation processing on the first real-time image
  • the second preview image is obtained by the electronic device using the first animation sub-template to perform animation processing on the second real-time image.
  • the motion effect template includes multiple motion effect sub-templates, which can be used for motion effect processing in multiple movie clips.
  • micro-movies with richer animation effects can be processed.
  • the mobile phone can perform dynamic effect processing on the real-time images captured by the two cameras in real time according to the dynamic effect template selected by the user, and display the processed preview image on the viewfinder interface during recording. In this way, the difficulty of recording a video with a dynamic effect can be reduced.
  • the result after the dynamic effect processing can be presented to the user in real time, which is conducive to previewing the recorded result in real time.
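One way to read the per-clip processing described above: each animation sub-template carries one frame transform per camera, applied in real time so the viewfinder shows the processed previews while recording. A minimal sketch; the effect names and function signatures are invented for illustration.

```python
def make_sub_template(first_effect, second_effect):
    """Build a per-clip sub-template: one frame transform per camera stream."""
    def process(first_frame, second_frame):
        # Applied per frame pair during recording, so the fourth interface can
        # display the processed first and second preview images live.
        return first_effect(first_frame), second_effect(second_frame)
    return process

# Invented example effects; a real template would transform pixel buffers.
zoom_in = lambda frame: f"zoom_in({frame})"
pan_left = lambda frame: f"pan_left({frame})"

clip1 = make_sub_template(zoom_in, pan_left)
preview_a, preview_b = clip1("rear_frame_0", "front_frame_0")
print(preview_a, preview_b)  # zoom_in(rear_frame_0) pan_left(front_frame_0)
```

Because the two camera streams get different transforms, the same moment can be rendered with different animation effects in the two preview panes, matching the first/second sub-template split described later.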
  • after the electronic device displays the fourth interface, the method further includes: the electronic device displays the third interface in response to a first event, where the first event is an event that recording of the first movie segment is completed.
  • the jump to the third interface can be triggered again, so that recording preparations can be performed before each movie segment is recorded. In this way, it is beneficial to further improve the effect of micro-movie recording.
  • the third interface further includes a first window, and the first window is used to play the animation effect corresponding to the first animation sub-template.
  • the animation effect of the corresponding animation sub-template can be viewed during the recording preparation stage, so the framing can be adjusted more accurately according to the animation effect, thereby improving the recording effect.
  • each animation sub-template includes a first sub-template and a second sub-template; wherein the first sub-template is used for the electronic device to perform animation processing on the first real-time image, and the second sub-template is used for the electronic device to perform motion effect processing on the second real-time image.
  • different cameras may be processed using corresponding sub-templates. Therefore, the preview images collected by the two cameras at the same time can be processed to obtain different animation effects, which can further improve the processing effect.
  • after the electronic device processes the first real-time image collected by the first camera and the second real-time image collected by the second camera by using the first motion effect template, the method further includes: the electronic device saves a first video file in response to a second event; wherein the second event is used to trigger the electronic device to save the processed video. The first video file includes multiple segments of the first video stream and multiple segments of the second video stream; the segments of the first video stream correspond one-to-one to the movie clips, and the segments of the second video stream correspond one-to-one to the movie clips. Each segment of the first video stream includes multiple frames of first preview images processed in the corresponding movie clip, and each segment of the second video stream includes multiple frames of second preview images processed in the corresponding movie clip.
  • the video file of the micro-movie can be generated after the recording of the micro-movie is completed. In this way, the micro-movie recorded this time can be played later.
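The saved file layout described above, with one first-stream/second-stream pair per movie clip, might be modeled as follows. The field names are illustrative assumptions, not the patent's on-disk format.

```python
def build_video_file(clips):
    """clips: per movie clip, a (first_preview_frames, second_preview_frames) pair."""
    video_file = {
        "first_streams": [first for first, _ in clips],
        "second_streams": [second for _, second in clips],
    }
    # One-to-one correspondence between each stream list and the movie clips.
    assert len(video_file["first_streams"]) == len(clips)
    assert len(video_file["second_streams"]) == len(clips)
    return video_file

# Two clips, each holding its processed preview frames from both cameras.
f = build_video_file([
    (["f1_0", "f1_1"], ["s1_0", "s1_1"]),
    (["f2_0"], ["s2_0"]),
])
print(len(f["first_streams"]))  # 2
```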
  • the third interface includes p first segment options; wherein p ≥ 0, p is a natural number, and p is the number of recorded movie segments; each first segment option corresponds to a recorded movie segment; the third interface also includes a first control. The method further includes: the electronic device receives a user's selection operation on a second segment option, where the second segment option is one of the p first segment options and corresponds to a second movie segment. In response to the user's selection operation on the second segment option, the electronic device plays, on the third interface, the multiple frames of first preview images and the multiple frames of second preview images processed in the second movie segment.
  • in response to the user's click operation on the first control, the electronic device displays first prompt information on the second interface; the first prompt information is used to prompt whether to retake the second movie segment. In response to the user's fifth operation on the first prompt information, the electronic device displays a fourth interface for reshooting the second movie segment.
  • re-recording can be performed on the recorded movie segment.
  • the quality of each movie segment in the recorded micro-movie can be ensured.
  • the user will be prompted whether to retake, so as to avoid user misoperation.
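The retake flow above can be sketched as a guarded update: replaying a recorded segment is free, but re-recording it requires the user to confirm the prompt. All names here are invented for illustration.

```python
def retake_segment(recorded, index, user_confirms):
    """Re-record one of the p recorded movie segments.

    `user_confirms` models the user's answer to the first prompt information,
    which guards against accidental retakes.
    """
    assert 0 <= index < len(recorded), "only recorded segments can be retaken"
    if not user_confirms:
        return recorded                   # misoperation avoided, keep as-is
    updated = list(recorded)
    updated[index] = "reshoot of " + recorded[index]
    return updated

kept = retake_segment(["clip0", "clip1"], 1, user_confirms=False)
redone = retake_segment(["clip0", "clip1"], 1, user_confirms=True)
print(kept[1], "/", redone[1])  # clip1 / reshoot of clip1
```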
  • the third interface includes q third segment options; wherein q ≥ 0, q is a natural number, and q is the number of unrecorded movie segments; each third segment option corresponds to an unrecorded movie segment; the third interface also includes the first control.
  • the method further includes: the electronic device selects a fourth segment option in response to a third event; the fourth segment option is one of the q third segment options and corresponds to the first movie segment.
  • the fourth operation is the user's click operation on the first control while the fourth segment option is selected.
  • the movie segments to be recorded can be determined according to the user's selection, without being limited to the sequence of the movie segments.
  • micro-movie recording can thus be made more flexible.
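The out-of-order selection described above reduces to choosing any member of the unrecorded set rather than the next clip in sequence. A minimal sketch with invented names:

```python
def choose_next_clip(total_clips, recorded_indices, chosen_index):
    """Pick any of the q unrecorded movie clips; no fixed order is enforced."""
    unrecorded = [i for i in range(total_clips) if i not in recorded_indices]
    assert chosen_index in unrecorded, "must choose an unrecorded movie clip"
    return chosen_index

# With clips 0 and 2 already recorded, the user may record clip 3 before clip 1.
nxt = choose_next_clip(total_clips=4, recorded_indices={0, 2}, chosen_index=3)
print(nxt)  # 3
```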
  • after displaying the fourth interface, the method further includes: the electronic device does not respond to a sixth operation of the user on the fourth interface, where the sixth operation is used to trigger the electronic device to exchange the viewfinder frame of the first camera and the viewfinder frame of the second camera in the fourth interface.
  • the sixth operation includes a long press operation or a drag operation.
  • different motion templates are suitable for different cameras; the method further includes: the electronic device starts the first camera and the second camera in response to the user's third operation on the second interface; The first camera and the second camera are cameras applicable to the first motion effect template.
  • the degree of matching between the camera used and the camera applicable to the motion effect template can be improved.
  • the effect of motion processing can be improved.
  • the first interface includes a second control, and the second control is used to trigger the electronic device to display the multiple template options; the first operation is a click operation or a long-press operation on the second control.
  • the first interface includes a third control; wherein the electronic device displaying the second interface in response to the user's second operation on the first template option among the multiple template options includes: the electronic device selects the first template option in response to the user's selection operation on the first template option among the multiple template options, and, when the first template option is selected, the electronic device displays the second interface in response to the user's click operation on the third control.
  • the playback of the dynamic effect template can be triggered further according to the user's click operation on the third control. In this way, it can be guaranteed to play the dynamic effect template that the user really needs to view.
  • an embodiment of the present application provides an electronic device, and the electronic device includes multiple cameras, a display screen, a memory, and one or more processors.
  • the display screen, the memory and the processor are coupled.
  • the memory is used to store computer program codes, and the computer program codes include computer instructions.
  • the electronic device executes the following steps: displaying a first interface; wherein the first interface is a viewfinder interface before video recording is started, and the first interface includes real-time images collected by two cameras among the multiple cameras.
  • multiple template options are displayed on the first interface; the first operation is used to trigger the recording of a micro movie, each template option corresponds to a dynamic effect template for image processing, and the dynamic effect template is used to process preview images collected by at least two cameras among the multiple cameras to obtain corresponding animation effects.
  • a second interface is displayed; wherein, the second interface is used to play the animation effect of the first dynamic effect template corresponding to the first template option.
  • the first dynamic effect template is used to process the first real-time image collected by the first camera and the second real-time image collected by the second camera to record a micro movie; wherein the first camera is one of the multiple cameras, and the second camera is one of the multiple cameras other than the first camera.
  • when the computer instructions are executed by the processor, the electronic device further executes the following steps: displaying a third interface, where the third interface is a viewfinder interface before recording starts and includes the third real-time image collected by the first camera and the fourth real-time image collected by the second camera; the micro movie recorded using the first motion effect template is composed of multiple movie clips, the first motion effect template includes multiple animation sub-templates, and the movie clips correspond one-to-one to the animation sub-templates; receiving the user's fourth operation on the third interface, where the fourth operation is used to trigger the recording of a first movie clip, the first movie clip is any one of the multiple movie clips, and the first movie clip corresponds to a first animation sub-template; and, in response to the fourth operation, displaying a fourth interface, where the fourth interface is a viewfinder interface during recording and includes a first preview image and a second preview image; the first preview image is obtained by performing animation processing on the first real-time image using the first animation sub-template, and the second preview image is obtained by performing animation processing on the second real-time image using the first animation sub-template.
  • when the computer instructions are executed by the processor, the electronic device further executes the following step: displaying the third interface in response to a first event; wherein the first event is an event that recording of the first movie clip is completed.
  • the third interface further includes a first window, and the first window is used to play the animation effect corresponding to the first animation sub-template.
  • each animation sub-template includes a first sub-template and a second sub-template; wherein the first sub-template is used to perform animation processing on the first real-time image, and the second sub-template is used to perform dynamic-effect processing on the second real-time image.
  • when the computer instructions are executed by the processor, the electronic device further executes the following step: saving a first video file in response to a second event; wherein the second event is used to trigger saving the processed video; the first video file includes multiple first video streams and multiple second video streams; the first video streams correspond one-to-one to the movie clips, and the second video streams correspond one-to-one to the movie clips; each first video stream includes multiple frames of first preview images processed in the corresponding movie clip, and each second video stream includes multiple frames of second preview images processed in the corresponding movie clip.
  • the third interface includes p first segment options; wherein p ≥ 0, p is a natural number, and p is the number of recorded movie segments; each first segment option corresponds to a recorded movie segment; the third interface also includes the first control;
  • when the computer instructions are executed by the processor, the electronic device further executes the following steps: receiving a user's selection operation on a second segment option, where the second segment option is one of the p first segment options and corresponds to a second movie segment; in response to the user's selection operation on the second segment option, playing, on the third interface, the multiple frames of first preview images and the multiple frames of second preview images processed in the second movie segment; in response to the user's click operation on the first control, displaying first prompt information on the second interface, where the first prompt information is used to prompt whether to retake the second movie segment; and, in response to the user's fifth operation on the first prompt information, displaying a fourth interface for reshooting the second movie segment.
  • the third interface includes q third segment options; wherein q ≥ 0, q is a natural number, and q is the number of unrecorded movie segments; each third segment option corresponds to an unrecorded movie segment; the third interface also includes the first control;
  • when the computer instructions are executed by the processor, the electronic device further executes the following steps: selecting a fourth segment option in response to a third event, where the fourth segment option is one of the q third segment options and corresponds to the first movie clip; wherein the fourth operation is the user's click operation on the first control while the fourth segment option is selected.
  • when the computer instructions are executed by the processor, the electronic device further executes the following step: not responding to a sixth operation of the user on the fourth interface, where the sixth operation is used to trigger the exchange of the viewfinder frame of the first camera and the viewfinder frame of the second camera in the fourth interface.
  • the sixth operation includes a long press operation or a drag operation.
  • when the computer instructions are executed by the processor, the electronic device further executes the following steps: in response to the user's third operation on the second interface, starting the first camera and the second camera; the first camera and the second camera are cameras to which the first motion effect template applies.
  • the first interface includes a second control, and the second control is used to trigger the display of multiple template options; the first operation is a click operation or a long press operation on the second control .
  • the first interface includes a third control
  • when the computer instructions are executed by the processor, the electronic device further executes the following steps: in response to the user's selection operation on the first template option among the multiple template options, selecting the first template option; and, when the first template option is selected, displaying the second interface in response to the user's click operation on the third control.
  • an embodiment of the present application provides a chip system, which is applied to an electronic device including multiple cameras, a display screen, and a memory; the chip system includes one or more interface circuits and one or more processors; the interface circuit and the processor are interconnected through a line; the interface circuit is used to receive a signal from the memory of the electronic device and send the signal to the processor, where the signal includes the computer instructions stored in the memory; when the processor executes the computer instructions, the electronic device executes the method described in the first aspect and any of its possible designs.
  • the present application provides a computer storage medium; the computer storage medium includes computer instructions, and when the computer instructions run on the electronic device, the electronic device executes the method described in the first aspect and any of its possible designs.
  • the present application provides a computer program product.
  • when the computer program product runs on a computer, the computer executes the method described in the first aspect and any of its possible designs.
  • FIG. 1a is a schematic diagram of a dual-lens video recording interface in a vertical screen form provided by an embodiment of the present application;
  • FIG. 1b is a schematic diagram of a dual-lens video recording interface in a landscape form provided by an embodiment of the present application;
  • FIG. 2 is a schematic diagram of an entry interface for dual-lens video recording provided by an embodiment of the present application;
  • FIG. 3a is a schematic diagram of another dual-lens video recording entry interface provided by an embodiment of the present application;
  • FIG. 3b is a schematic diagram of another dual-lens video recording entry interface provided by an embodiment of the present application;
  • FIG. 4 is a schematic diagram of a hardware structure of a mobile phone provided by an embodiment of the present application;
  • FIG. 5 is a flowchart of a video shooting method provided by an embodiment of the present application;
  • FIG. 6 is a schematic diagram of a mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 7 is a flowchart of another video shooting method provided by an embodiment of the present application;
  • FIG. 8 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 9 is a flowchart of another video shooting method provided by an embodiment of the present application;
  • FIG. 10 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 11 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 12 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 13 is a flowchart of another video shooting method provided by an embodiment of the present application;
  • FIG. 14a is a schematic diagram of the composition of a video file provided by an embodiment of the present application;
  • FIG. 14b is a flowchart of another video shooting method provided by an embodiment of the present application;
  • FIG. 15 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 16 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 17 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 18 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 19 is a schematic diagram of a video saving interface provided by an embodiment of the present application;
  • FIG. 20 is a schematic diagram of the composition of another video file provided by an embodiment of the present application;
  • FIG. 21 is a flowchart of a video re-recording method provided by an embodiment of the present application;
  • FIG. 22 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 23 is a schematic diagram of another mobile phone video recording interface provided by an embodiment of the present application;
  • FIG. 24 is a schematic structural diagram of a chip system provided by an embodiment of the present application.
  • first and second are used for descriptive purposes only, and cannot be understood as indicating or implying relative importance or implicitly specifying the quantity of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of that feature. In the description of the embodiments, unless otherwise specified, "plurality" means two or more.
  • Dual-camera recording refers to the method of turning on two cameras to record video at the same time.
  • the viewfinder interface displayed by the mobile phone has two images collected by two cameras.
  • taking a mobile phone as an example of the electronic device, the scene of dual-lens video recording is described with reference to FIG. 1a and FIG. 1b.
  • the mobile phone can display a viewfinder interface 101 shown in (a) in FIG. live image 103 .
  • the mobile phone can display the viewfinder interface 104 shown in (b) in FIG. 1a, which includes real-time images collected by two cameras (such as a real-time image 106 collected by one camera).
  • the mobile phone can display the viewfinder interface 107 shown in (c) in FIG. 1a, which includes real-time images collected by two cameras (such as the acquired real-time image 109).
  • the viewfinder interface 101 shown in (a) in FIG. 1a, the viewfinder interface 104 shown in (b) in FIG. 1a and the viewfinder interface 107 shown in (c) in FIG. 1a are all viewfinder interfaces in the portrait (vertical screen) form.
  • the mobile phone can also realize dual-lens video recording in a landscape mode.
  • the mobile phone can display a viewfinder interface 113 shown in (a) in FIG. 1b, which includes real-time images collected by two cameras (such as the real-time image 115 collected by the front camera).
  • the mobile phone can display the viewfinder interface 116 shown in (b) in FIG. 1b, which includes real-time images collected by two cameras (such as the real-time image 118 collected by one camera).
  • the mobile phone may display a viewfinder interface 119 shown in (c) in FIG. 1b, which includes real-time images collected by two cameras (such as the captured real-time image 121).
  • the viewfinder interface can also include a mode identification, which is used to indicate the currently adopted preview cameras (such as a front camera and a rear camera) and to indicate the display layout in which the real-time images captured by the preview cameras are displayed in the viewfinder interface.
  • the viewfinder interface 101 shown in (a) of FIG. 1a includes a mode identifier 110, which is used to indicate that the currently used preview cameras are a rear camera and a front camera, and that the real-time image collected by the rear camera and the real-time image captured by the front camera are displayed in a top-bottom layout in the viewfinder interface.
  • the viewfinder interface 107 shown in (c) of FIG. 1a includes a mode identification 112, which is used to indicate that the currently adopted preview cameras are a rear camera and a front camera, and that the real-time images collected by the front camera and the real-time images captured by the rear camera are displayed in a picture-in-picture layout on the viewfinder interface.
  • a control a is provided in the additional function menu interface (also called "more" menu interface) of the camera application, and the control a is used to trigger the mobile phone to start the dual-lens video recording function.
  • the mobile phone may receive the user's click operation on the control a.
  • the mobile phone may display interface a in response to the user's click operation on the control a.
  • the interface a is the viewfinder interface before starting dual-lens video recording.
  • the additional function menu interface 201 shown in FIG. 2 includes a control a 202.
  • the mobile phone can receive the user's click operation on the control a 202.
  • the mobile phone can display the viewfinder interface 101 shown in (a) in Figure 1a, which is the viewfinder interface before starting the dual-lens video recording.
  • a control b is included in the viewfinder interface of ordinary video recording, and the control b is used to trigger the mobile phone to display multiple mode options.
  • Each mode option corresponds to a display layout (eg, a picture-in-picture layout).
  • the mobile phone may receive the user's click operation on the control b.
  • the mobile phone can display a mode selection window on the common video viewfinder interface.
  • the mobile phone may receive a user's selection operation on mode option a among the multiple mode options.
  • the mobile phone may display interface a in response to the user's selection operation (such as a click operation) on the mode option a.
  • the interface a is the viewfinder interface before starting dual-lens video recording.
  • the real-time images captured by the first camera and the real-time images captured by the second camera are displayed in the display layout a corresponding to the mode option a. That is to say, in this embodiment, dual-lens video recording is integrated into normal video recording, and the user can switch to dual-camera video recording during normal video recording.
  • the common video viewfinder interface 301 shown in (a) of FIG. 3a includes a control b 302.
  • the mobile phone can receive the user's click operation on the control b 302.
  • the mobile phone can display the common video viewfinder interface 303 shown in (b) in Figure 3a.
  • the viewfinder interface 303 includes a plurality of mode options, namely a mode option 304 , a mode option 305 and a mode option 306 .
  • the mode option 304 corresponds to a display layout in which the real-time images collected by the front camera and the real-time images collected by the rear camera are arranged one above the other; the mode option 305 corresponds to a display layout in which the real-time images collected by two rear cameras are arranged vertically; the mode option 306 corresponds to a display layout in which the real-time images collected by the front camera and the real-time images collected by the rear camera are arranged in a picture-in-picture manner.
  • the mobile phone can receive the user's click operation on the mode option 304 , that is, the mode option a is the mode option 304 .
  • the mobile phone can display the interface a 307 shown in (c) in Figure 3a in response to the user's click operation on the mode option 304. In the interface a 307, the real-time image 308 collected by the rear camera and the real-time image 309 collected by the front camera are displayed in a display layout arranged up and down.
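The display layouts named above (top-bottom and picture-in-picture) amount to simple region arithmetic over the viewfinder frame. The following is an illustrative sketch, not part of the patent; the frame size, inset ratio, and margin are assumed values.

```python
# Illustrative sketch (assumed values, not from the patent): computing the
# sub-regions in which two camera streams could be composited for the
# display layouts described above.

def layout_regions(width, height, mode):
    """Return (x, y, w, h) regions for the two streams under a given layout."""
    if mode == "top_bottom":          # e.g. mode option 304: one image above the other
        half = height // 2
        return [(0, 0, width, half), (0, half, width, height - half)]
    if mode == "picture_in_picture":  # e.g. mode option 306: small inset over a full frame
        inset_w, inset_h = width // 3, height // 3
        margin = 16                   # assumed inset margin in pixels
        return [(0, 0, width, height),
                (width - inset_w - margin, margin, inset_w, inset_h)]
    raise ValueError(f"unknown layout mode: {mode}")

regions = layout_regions(1080, 2400, "top_bottom")
print(regions)  # two stacked regions covering the full frame
```

Under this model, switching a mode option only changes the target regions; the two camera streams themselves keep running unchanged.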
  • the tab bar of the camera application includes a multi-camera recording tab.
  • By triggering the multi-camera recording tab, the user can directly enter dual-camera recording.
  • the camera application provides a multi-camera video tag, and the mobile phone can receive a user's trigger operation (such as a click operation, a long-press operation, etc.) on the multi-camera video tag.
  • the mobile phone may display interface a in response to the user's trigger operation on the multi-camera video label.
  • the interface a is the viewfinder interface before starting dual-lens video recording. In this way, the dual-camera recording can be triggered through an independent label in the camera application, avoiding functional compatibility problems with other labels.
  • the camera application provides a multi-camera video tag 310 shown in (a) of FIG. 3b.
  • the mobile phone can receive the user's click operation on the multi-camera video label 310 .
  • the mobile phone can display the interface a 311 shown in (b) in FIG. 3b in response to this operation.
  • the real-time image 312 collected by the rear camera and the real-time image 313 collected by the front camera are displayed in a display layout arranged up and down. At this time, the dual-lens video recording is entered.
  • An embodiment of the present application provides a video shooting method, which can be applied to an electronic device.
  • the electronic device can provide a video recording function, specifically a dual-lens video recording function.
  • in the process of dual-lens video recording, the electronic device can perform dynamic effect processing on the images collected by the two cameras according to the dynamic effect template selected by the user, so as to record a micro-movie with animation effects.
  • micro-movies can be recorded without relying on complex operations such as user's mobile control of electronic devices, which reduces the difficulty of recording dynamic videos.
  • the electronic equipment in the embodiment of the present application can be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR)/virtual reality (VR) device, etc. No special restrictions are made on its specific form.
  • the electronic device may include a processor 410, an external memory interface 420, an internal memory 421, a universal serial bus (USB) interface 430, a charging management module 440, a power management module 441, a battery 442, an antenna 1, an antenna 2, a mobile communication module 450, a wireless communication module 460, an audio module 470, a speaker 470A, a receiver 470B, a microphone 470C, an earphone jack 470D, a sensor module 480, a button 490, a motor 491, an indicator 492, a camera 493, a display screen 494, a subscriber identification module (SIM) card interface 495, etc.
  • the structure shown in this embodiment does not constitute a specific limitation on the electronic device.
  • the electronic device may include more or fewer components than shown, or combine certain components, or separate certain components, or arrange different components.
  • the illustrated components can be realized in hardware, software or a combination of software and hardware.
  • the processor 410 may include one or more processing units. For example, the processor 410 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a memory, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc. The different processing units may be independent devices, or may be integrated in one or more processors.
  • a controller can be the nerve center and command center of an electronic device.
  • the controller can generate an operation control signal according to the instruction opcode and timing signal, and complete the control of fetching and executing the instruction.
  • a memory may also be provided in the processor 410 for storing instructions and data.
  • the memory in processor 410 is a cache memory.
  • the memory may hold instructions or data that the processor 410 has just used or used cyclically. If the processor 410 needs to use the instruction or data again, it can call it directly from the memory. This avoids repeated access and reduces the waiting time of the processor 410, thus improving the efficiency of the system.
  • processor 410 may include one or more interfaces.
  • the interface may include an inter-integrated circuit (I2C) interface, an inter-integrated circuit sound (I2S) interface, a pulse code modulation (PCM) interface, a universal asynchronous receiver/transmitter (UART) interface, a mobile industry processor interface (MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (SIM) interface, and/or a universal serial bus (USB) interface, etc.
  • the interface connection relationship among the modules shown in this embodiment is only a schematic illustration, and does not constitute a structural limitation of the electronic device.
  • the electronic device may also adopt different interface connection methods in the above embodiments, or a combination of multiple interface connection methods.
  • the charging management module 440 is configured to receive charging input from the charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 441 is used for connecting the battery 442 , the charging management module 440 and the processor 410 .
  • the power management module 441 receives the input from the battery 442 and/or the charging management module 440 to provide power for the processor 410 , internal memory 421 , external memory, display screen 494 , camera 493 , and wireless communication module 460 .
  • the wireless communication function of the electronic device can be realized by the antenna 1, the antenna 2, the mobile communication module 450, the wireless communication module 460, the modem processor and the baseband processor.
  • the mobile communication module 450 can provide wireless communication solutions including 2G/3G/4G/5G applied to electronic devices.
  • the mobile communication module 450 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the wireless communication module 460 can provide wireless communication solutions applied to the electronic device, including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), near field communication (NFC), infrared (IR) technology and the like.
  • the wireless communication module 460 may include an NFC chip, and the NFC chip may include an NFC controller (NFCC).
  • the NFC chip can perform processing such as signal amplification, analog-to-digital conversion, digital-to-analog conversion, and storage.
  • NFCC is used to take care of the physical transmission of data through the antenna.
  • NFCC can be included in the NFC chip of the electronic device.
  • the device host (DH) is responsible for the management of NFCC, such as initialization, configuration and power management. Wherein, the DH may be included in the main chip of the electronic device, and may also be integrated with the processor of the electronic device.
  • the antenna 1 of the electronic device is coupled to the mobile communication module 450, and the antenna 2 is coupled to the wireless communication module 460, so that the electronic device can communicate with the network and other devices through wireless communication technology.
  • the wireless communication technology may include global system for mobile communications (GSM), general packet radio service (GPRS), code division multiple access (CDMA), wideband code division multiple access (WCDMA), time-division code division multiple access (TD-SCDMA), long term evolution (LTE), BT, GNSS, WLAN, NFC, FM, and/or IR technologies, etc.
  • the electronic device realizes the display function through the GPU, the display screen 494 , and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 494 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • Processor 410 may include one or more GPUs that execute program instructions to generate or alter display information.
  • the display screen 494 is used to display images, videos and the like.
  • Display 494 includes a display panel.
  • the display panel can be a liquid crystal display (LCD), an organic light-emitting diode (OLED), an active-matrix organic light-emitting diode (AMOLED), a flexible light-emitting diode (FLED), a MiniLED, a MicroLED, a Micro-OLED, quantum dot light emitting diodes (QLED), etc.
  • the electronic device can realize the shooting function through ISP, camera 493 , video codec, GPU, display screen 494 and application processor.
  • the ISP is used to process data fed back by the camera 493. For example, when taking a picture, the shutter is opened, light is transmitted through the lens to the photosensitive element of the camera, the light signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, which converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color. ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 493 .
  • Camera 493 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
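The YUV-to-RGB step the DSP performs can be illustrated with the standard BT.601 full-range conversion matrix. This is an illustrative sketch only; actual hardware pipelines may use different coefficients and fixed-point arithmetic.

```python
# Illustrative sketch: a per-pixel YUV -> RGB conversion of the kind a DSP
# performs when producing standard image signals. Uses the BT.601 full-range
# coefficients; real camera pipelines vary.

def yuv_to_rgb(y, u, v):
    """Convert one full-range BT.601 YUV pixel to 8-bit RGB."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda c: max(0, min(255, round(c)))  # keep each channel in [0, 255]
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))  # neutral grey -> (128, 128, 128)
```

A neutral-chroma pixel (U = V = 128) maps to a grey RGB value, which is a quick sanity check for any such matrix.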
  • the electronic device may include 1 or N cameras 493, where N is a positive integer greater than 1.
  • the external memory interface 420 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the electronic device.
  • the external memory card communicates with the processor 410 through the external memory interface 420 to implement a data storage function. Such as saving music, video and other files in the external memory card.
  • the internal memory 421 may be used to store computer-executable program code, which includes instructions.
  • the processor 410 executes various functional applications and data processing of the electronic device by executing instructions stored in the internal memory 421 .
  • the processor 410 may display different content on the display screen 494 in response to the user's operation of expanding the display screen 494 by executing instructions stored in the internal memory 421.
  • the internal memory 421 may include an area for storing programs and an area for storing data.
  • the stored program area can store an operating system, at least one application program required by a function (such as a sound playing function, an image playing function, etc.) and the like.
  • the storage data area can store data (such as audio data, phone book, etc.) created during the use of the electronic device.
  • the internal memory 421 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, flash memory device, universal flash storage (universal flash storage, UFS) and the like.
  • the electronic device can realize the audio function through the audio module 470, the speaker 470A, the receiver 470B, the microphone 470C, the earphone interface 470D, and the application processor. Such as music playback, recording, etc.
  • the keys 490 include a power key, a volume key and the like.
  • the key 490 may be a mechanical key. It can also be a touch button.
  • the electronic device can receive key input and generate key signal input related to user settings and function control of the electronic device.
  • the motor 491 can generate a vibrating prompt.
  • the motor 491 can be used for incoming call vibration prompts, and can also be used for touch vibration feedback.
  • the indicator 492 can be an indicator light, which can be used to indicate the charging status, the change of the battery capacity, and can also be used to indicate messages, missed calls, notifications and the like.
  • the SIM card interface 495 is used for connecting a SIM card. The SIM card can be inserted into the SIM card interface 495 or pulled out from the SIM card interface 495 to realize contact and separation with the electronic device.
  • the electronic device can support 1 or N SIM card interfaces, where N is a positive integer greater than 1.
  • An embodiment of the present application provides a video shooting method, which can be applied to a mobile phone.
  • the mobile phone includes at least a plurality of cameras, and the mobile phone can provide the function of dual-lens video recording.
  • the method includes S501-S504.
  • the mobile phone displays interface a, where the interface a is a viewfinder interface before the mobile phone starts video recording, and the interface a includes real-time images collected by camera a and camera b among the multiple cameras.
  • the interface a may also be referred to as the first interface, which is the same hereinafter.
  • Camera a and camera b are two cameras among the plurality of cameras.
  • the mobile phone may display interface a in response to the user's click operation on control a.
  • the mobile phone may also display interface a in response to the user's selection operation on mode option a among the multiple mode options.
  • the mobile phone may also display the interface a in response to the user's click operation on the control b.
  • interface a includes real-time images collected by two cameras.
  • the interface a 307 shown in (c) in FIG. 3 a includes a real-time image 308 collected by camera a (such as a rear main camera) and a real-time image 309 collected by camera b (such as a front camera).
  • the mobile phone displays multiple template options on the interface a in response to the user's operation a on the interface a.
  • the operation a is used to trigger the mobile phone to record a micro-movie, and each template option corresponds to a motion template for image processing.
  • the dynamic effect template is used to process preview images collected by at least two cameras among the plurality of cameras and obtain corresponding animation effects.
  • operation a may also be referred to as the first operation, which is the same hereinafter.
  • the mobile phone may receive the user's operation a on the interface a.
  • the operation a may be a preset gesture a (such as a sliding gesture, a long press gesture) performed by the user in the interface a.
  • the interface a includes a control c, and the control c is used to trigger the mobile phone to display multiple template options.
  • Operation a is a user's trigger operation on control c (such as a click operation, a long press operation, etc.).
  • the control c may also be called the second control.
  • the interface a 601 shown in (a) of FIG. 6 includes the control c 602.
  • the mobile phone can receive the user's click operation on the control c 602.
  • control c may also be displayed at the right or left edge of interface a.
  • the control c can also be a circle or other shapes.
  • the dynamic effect template is a template for simulating the dynamic effect of the video captured by the camera in various motion states.
  • the motion state includes states such as pushing, pulling, shaking, moving, following and/or throwing.
  • Each dynamic effect template includes at least one dynamic effect in a motion state.
  • the shooting effect of the camera in the motion state of pushing, pulling, shaking, moving, following or throwing is explained below:
  • Push refers to the shooting method in which the picture transitions continuously from a wide view of the scene toward the subject. Pushing the lens separates the subject from the environment on the one hand, and on the other hand reminds the viewer to pay special attention to the subject or a certain detail of the subject.
  • Pull is just the opposite of push. It shows the subject from near to far and from part to whole in the picture, making the subject or its details gradually smaller. Pulling the lens emphasizes the relationship between the subject and the environment.
  • Shake means that the position of the camera does not move, and only the angle changes. Its direction can be left and right or up and down, and it can also be tilted or rotated. Its purpose is to show the various parts of the subject one by one, or to show the scale, or to survey the environment. The most common of these is the side-to-side pan, which is often used in television shows.
  • Move means that the camera moves in various directions along a horizontal plane while shooting at the same time. Moving shots are demanding, and special equipment is needed in actual shooting. Mobile shooting can produce the visual effect of patrolling or displaying; if the subject is in motion, moving shooting produces a following effect on the screen.
  • Throw is actually a kind of pan. The specific operation is to turn the camera abruptly to another direction when the previous picture ends. During the throw, the image becomes very blurred, and a new image appears only when the lens stabilizes. Its function is to express rapid changes of things, time and space, and to create a psychological sense of urgency in the viewer.
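As a rough illustration of how a dynamic effect template could simulate one of these camera motions in software (an assumption for illustration, not the patent's stated method), a "push" can be approximated on a fixed framing by shrinking a centered crop window frame by frame and scaling each crop back to full resolution; playing the same sequence in reverse approximates a "pull".

```python
# Illustrative sketch (assumed, not from the patent): simulating a "push"
# dynamic effect by interpolating crop rectangles toward the frame center.
# Each crop would then be scaled back to full output size by the renderer.

def push_crops(width, height, zoom_end, frames):
    """Yield centered crop rectangles narrowing from full frame to 1/zoom_end."""
    crops = []
    for i in range(frames):
        t = i / (frames - 1)             # progress 0 -> 1 over the clip
        zoom = 1 + (zoom_end - 1) * t    # linear zoom ramp
        w, h = round(width / zoom), round(height / zoom)
        x, y = (width - w) // 2, (height - h) // 2
        crops.append((x, y, w, h))
    return crops

crops = push_crops(1920, 1080, zoom_end=2.0, frames=5)
print(crops[0], crops[-1])  # full frame first, half-size centered crop last
```

An eased (non-linear) ramp would give a smoother start and stop; the linear ramp here keeps the sketch minimal.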
  • in response to the user's click operation on control c 602 in interface a 601 shown in (a) in Figure 6, the mobile phone may display interface a 603 shown in (b) in Figure 6.
  • the interface a 603 includes 4 template options, which are respectively: template option 604, template option 605, template option 606 and template option 607.
  • the template option 604 corresponds to the dynamic effect template of "friends gathering”
  • the template option 605 corresponds to the dynamic effect template of "warm moment”
  • the template option 606 corresponds to the dynamic effect template of "intimate time”
  • the template option 606 corresponds to the dynamic effect template of "happy moment”.
  • Motion template corresponds to the dynamic effect template of "friends gathering”
  • the template option 605 corresponds to the dynamic effect template of "warm moment”
  • the template option 606 corresponds to the dynamic effect template of "intimate time”
  • Motion template corresponds to the dynamic effect template of "happy moment”.
  • each template option shown in (b) in FIG. 6 is exemplary, and are not limited thereto in actual implementation.
  • multiple template options can also be displayed in the middle of the interface a.
  • multiple template options can also be arranged vertically.
  • each template option has a different option cover.
  • a plurality of template options includes a plurality of template options a and a plurality of template options b, wherein each template option a corresponds to a dynamic effect template for image processing in single-shot video recording, and each template option b corresponds to a dynamic effect template for image processing in dual-camera video recording. It should be noted that after entering the dual-lens recording interface through the methods shown in (a) and (b) in FIG. 3b, the multiple template options displayed usually only include the dynamic effect templates for dual-lens recording.
  • the method further includes S701:
  • each template option b corresponds to a motion effect template for motion effect processing of a multi-camera video.
  • the mobile phone may locate multiple template options b according to scene attributes of each template option.
  • the scene attribute is used to indicate the scene to which the template option applies.
  • the applicable scene includes a single-lens recording scene or a double-lens recording scene.
  • if the scene attribute of a template option is the first attribute, the template option is a template option a;
  • if the scene attribute of a template option is the second attribute, the template option is a template option b.
  • for example, if the scene attribute of a template option is 1, the located template option is a template option b; that is, the second attribute is 1.
  • S502 further includes S702:
  • the mobile phone displays multiple template options in interface a in response to operation a.
  • the operation a is used to trigger the display of template options.
  • the plurality of template options is a plurality of template options b.
  • multiple template options b are continuously displayed.
  • the 4 template options included in the interface a 603 shown in (b) in FIG. 6 are all template options b applicable to the dual-lens recording scene.
  • the template option 604 is the first template option b located, the template option 605 is the second template option b located, the template option 606 is the third template option b located, and the template option 607 is the fourth template option b located.
  • the template option b is displayed for the user to choose.
  • the displayed template options can be adapted to the scene, which is conducive to the subsequent rapid selection of matching template options.
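The locating step S701 described above can be modeled as a simple filter over the template options' scene attributes. The template names and the attribute encoding below are assumptions for illustration; the text only states that attribute value 1 (the second attribute) marks a template option b for the dual-lens recording scene.

```python
# Illustrative model (names assumed) of S701: locating the template options b
# whose scene attribute marks them as applicable to dual-lens recording.
# Per the text, attribute value 1 denotes the second attribute (dual-lens).

TEMPLATES = [
    {"name": "friends gathering", "scene_attribute": 1},
    {"name": "warm moment",       "scene_attribute": 1},
    {"name": "single-shot demo",  "scene_attribute": 0},  # hypothetical template option a
    {"name": "intimate time",     "scene_attribute": 1},
]

def locate_options_b(templates, second_attribute=1):
    """Keep only the template options applicable to the dual-lens recording scene."""
    return [t for t in templates if t["scene_attribute"] == second_attribute]

options_b = locate_options_b(TEMPLATES)
print([t["name"] for t in options_b])  # only the dual-lens templates remain
```

Filtering before display is what lets the interface show only scene-matched options, which is the adaptation benefit the text describes.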
  • the mobile phone displays the scene identification at the preset position of the template options.
  • the scene identifier is used to indicate the scene to which the template option applies.
  • the mobile phone displays multiple template options in interface a in response to operation a.
  • multiple function icons and controls in interface a will be hidden to simplify the elements in the interface and facilitate the selection of template options. For example, compared with the interface a 601 shown in (a) in FIG. 6, the interface a 603 shown in (b) in FIG. 6 hides the icon 611, the filter icon 612, the setting icon 613, and the lens switching control 614.
  • the mobile phone displays an interface b in response to the user's operation b on the template option c among the multiple template options.
  • the operation b is used to trigger the mobile phone to play the animation effect.
  • Interface b is used to play the animation effect of animation template a corresponding to template option c.
  • the template option c can also be called the first template option
  • the operation b can also be called the second operation
  • the interface b can also be called the second interface
  • the dynamic template a can also be called the first dynamic template
  • the mobile phone may receive the user's operation b on the interface a.
  • operation b may be a user's selection operation on template option c (such as a click operation, a long press operation, etc.). The following takes operation b being the user's click operation on template option c as an example.
  • the mobile phone can receive the user's click operation on the template option c 802 in the interface a 801 shown in FIG. 8 .
  • the user's selection operation on the template option c only triggers the mobile phone to select the dynamic effect template a. The mobile phone then displays the interface b only in response to the user's preset operation a on the interface a.
  • the preset operation a may be a preset gesture b (such as a sliding gesture) on the interface a.
  • the preset operation a is a long press operation on an area in the interface a where no controls or icons are displayed.
  • the interface a includes a control d, and the control d is used to trigger the mobile phone to play an animation effect, and the preset operation a is a trigger operation (such as a click operation, a long press operation, etc.) on the control d by the user.
  • the control d may also be called the third control.
  • S503 includes S901-S904:
  • the mobile phone receives a user's selection operation on template option c.
  • the template option c is one of a plurality of template options. Template option c corresponds to dynamic template a.
  • the mobile phone selects template option c in response to the selection operation, and highlights template option c in interface a.
  • the selection operation is the user's click operation on template option c.
  • the mobile phone can receive the user's click operation on the template option c 1002 in the interface a 1001 shown in (a) in FIG. 10 .
  • the click operation is used to trigger the mobile phone to select the dynamic effect template of "warm moment", that is, the dynamic effect template of "warm moment” is the dynamic effect template a.
  • the dynamic effect template of the "warm moment” is the dynamic effect template corresponding to the template option c 1002.
  • the mobile phone can display an interface a 1003 shown in (b) in FIG. 10, in which the template option c 1004 is highlighted.
  • by default, the template option highlighted in interface a is the one ranked first or located in the middle position in interface a.
  • the interface a 603 shown in (b) in FIG. 6 includes 4 template options. Among them, the first template option 604 is highlighted by default. Therefore, in a specific implementation manner, if the dynamic effect template a is the dynamic effect template corresponding to the template option in the first or middle position, S901 and S902 may be omitted.
  • the mobile phone receives the user's click operation on the control d.
  • the mobile phone can receive the user's click operation on the control d 1005 in the interface a 1003 shown in (b) in FIG. 10 .
  • the click operation is used to trigger the mobile phone to play the dynamic effect template of "warm moment", that is, the dynamic effect template of "warm moment” is the dynamic effect template a.
  • the dynamic effect template of the "warm moment” is the dynamic effect template corresponding to the template option c 1004.
  • the form and position of the control d 1005 shown in (b) in FIG. 10 are exemplary, and are not limited thereto during actual implementation.
  • the shape of the control d may also be a rounded rectangle including a camera icon.
  • the control d can also be set at the lower right corner of the interface a.
  • the mobile phone displays interface b in response to the user's click operation on control d.
  • the mobile phone selects the dynamic effect template a corresponding to the template option c in response to the user's selection operation on the template option c. Then, the mobile phone can trigger the playback of the dynamic effect template a according to the user's further click operation on the control d in the interface a. In this way, the playback of the dynamic effect template a is triggered only after the user accurately selects the corresponding template option, which improves the orderliness of triggering playback.
  • the operation b includes the user's selection operation on the template option c and the click operation on the control d.
  • the mobile phone can display the interface b 1101 shown in (a) in FIG. 11 in response to the user's click operation on the control d 1005 shown in (b) in FIG. 10 .
  • the interface b 1101 includes window a 1102.
  • the window a 1102 is used to play the dynamic effect template a.
  • the interface b shown in (a) in FIG. 11 is only exemplary.
  • the interface b is obtained by adding a mask layer (such as a gray mask layer) on the real-time image in the interface a, displaying a plurality of template options, the control d and other interface elements on the mask layer, and displaying the window a on the mask layer.
  • actual implementation is not limited to this.
  • interface b may be a viewfinder interface before starting dual-lens video recording. Different from interface a, interface b includes window a.
  • interface b may be interface b 1103 shown in (b) in FIG. 11 .
  • the interface b 1103 includes a window a 1104, and the window a 1104 is used to play the dynamic effect template a.
  • the interface b may be an interface with a preset background.
  • the preset background can be a solid color background.
  • the interface b 1105 shown in (c) in FIG. 11 is an interface with a pure black background.
  • the interface b 1105 includes a window a 1106, and the window a 1106 is used to play the dynamic effect template a.
  • window a 1102 shown in (a) in FIG. 11, window a 1104 shown in (b) in FIG. 11, and window a 1106 shown in (c) in FIG. 11 are only examples. In actual implementation, it is not limited to this.
  • window a may be a full screen window. In this way, the animation effect of the dynamic template can be reproduced one-to-one during playback.
  • the shape of the window a is adapted to the screen orientation to which the dynamic effect template a is applicable. In this way, it is convenient to instruct the user to adjust the screen orientation.
  • the animation template a is suitable for performing animation processing on a video recorded in a landscape mode, and the shape of the window a is a rectangle whose width value is greater than the height value.
  • the animation template a is suitable for performing animation processing on the video recorded in the vertical screen mode, and the shape of the window a is a rectangle whose width value is smaller than the height value.
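The rule stated in the last two points (a landscape-oriented template gives window a a width greater than its height; a portrait-oriented template gives it a width smaller than its height) amounts to a simple check. The sketch below is illustrative only; the function name, orientation strings, and dimension values are assumptions, not part of the disclosure.

```python
def window_a_shape(template_orientation, long_side=320, short_side=180):
    """Return (width, height) of window a so that its aspect ratio matches
    the screen orientation the dynamic effect template a applies to."""
    if template_orientation == "landscape":
        return (long_side, short_side)   # width > height
    if template_orientation == "portrait":
        return (short_side, long_side)   # width < height
    raise ValueError("unknown orientation")

w, h = window_a_shape("landscape")
assert w > h   # landscape template: wide window
w, h = window_a_shape("portrait")
assert w < h   # portrait template: tall window
```

Sizing the window this way lets the playback window itself hint to the user which screen orientation the selected template expects.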
  • window a includes a first sub-window and a second sub-window, wherein the first sub-window is used to play sub-template a, and the second sub-window is used to play sub-template b.
  • the display layouts of the first sub-window and the second sub-window match the applicable display layout of the dynamic effect template a. In this way, it is convenient to specify the applicable display layout of the dynamic effect template a.
  • sub-template a may also be called a first sub-template
  • sub-template b may also be called a second sub-template.
  • before displaying the interface b, the mobile phone detects the display layout b applicable to the dynamic effect template a, wherein the display layout b includes a top-bottom display layout, a left-right display layout, a horizontal picture-in-picture display layout, or a vertical picture-in-picture display layout.
  • S503 further includes: the mobile phone displays interface b in response to operation b.
  • the interface b includes window a, and the window a includes a first sub-window and a second sub-window, and the first sub-window and the second sub-window are displayed in a first display layout.
  • the mobile phone may display an interface b 1101 shown in (a) in FIG. 11, and the interface b 1101 includes window a 1102.
  • the window a 1102 includes a first sub-window 1107 and a second sub-window 1108, wherein the first sub-window 1107 and the second sub-window 1108 are displayed in a left-right layout. This indicates that the dynamic effect template corresponding to the template option c 1004 in (b) of FIG. 10 is applicable to the scene where the preview streams corresponding to the two cameras are displayed in a left-right layout in the landscape mode.
  • the interface b also includes a plurality of template options, so that the user can reselect the dynamic effect template a in the interface b. In this way, there is no need to return to interface a, and the dynamic effect template can be continuously switched in interface b.
  • the interface b 1101 shown in (a) in FIG. 11 includes a plurality of template options, which are respectively: template option 1109, template option 1110, template option 1111 and template option 1112.
  • the mobile phone uses the dynamic template a to process the real-time image a collected by the camera c and the real-time image b collected by the camera d to record a micro movie.
  • the camera c is a camera among the plurality of cameras
  • the camera d is a camera except the camera c among the plurality of cameras.
  • the operation c can also be called the third operation
  • the camera c can also be called the first camera
  • the real-time image a can also be called the first real-time image
  • the camera d can also be called the second camera
  • the real-time image b can also be called is the second real-time image.
  • the mobile phone may receive the user's operation c on the interface b, and the operation c may be a preset gesture c of the user on the interface b.
  • the preset gesture c is a sliding gesture from right to left in the interface b.
  • the interface b includes a control e, which is used to trigger the mobile phone to start micro-movie recording.
  • Operation c may be a trigger operation (such as a click operation, a long press operation, etc.) on the control e.
  • operation c may be a user's click operation on control e 1202 in interface b 1201 shown in (a) in FIG. 12 .
  • the mobile phone can use the dynamic effect template a to perform dynamic effect processing to complete the recording of the micro movie. For example, dynamic effect processing is performed on the real-time image a collected by the camera c and the real-time image b collected by the camera d, so as to achieve the animation effect of the dynamic template a.
  • different motion effect templates are suitable for performing motion effect processing on preview images captured by different cameras.
  • different animation templates are suitable for different cameras.
  • Camera c and camera d are two cameras applicable to animation template a.
  • S504 further includes S1301 and S1302:
  • in response to the user's operation c on the interface b, the mobile phone starts the camera c and the camera d according to the cameras applicable to the dynamic effect template a.
  • the camera c and the camera d applicable to the dynamic effect template a may be a combination of any two cameras among the front camera, the rear main camera, the rear wide-angle camera, the rear ultra-wide-angle camera and the rear telephoto camera.
  • the mobile phone can query the attribute information of the dynamic effect template a to obtain its applicable camera.
  • after starting the camera c and the camera d, the mobile phone uses the dynamic template a to process the real-time image a collected by the camera c and the real-time image b collected by the camera d to record a micro movie.
  • the camera c may be the same as the camera a or the camera b, or may be different from both the camera a and the camera b; similarly, the camera d may be the same as the camera a or the camera b, or may be different from both the camera a and the camera b.
  • the camera c and the camera d are cameras that have been turned on before recording preparations are started.
  • camera c is camera a that is turned on
  • camera d is camera b that is turned on.
  • the two cameras that have been turned on can be directly used as the camera c and the camera d respectively. In this way, the process of determining the camera is reduced, and the dynamic processing can be quickly entered.
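The two camera-selection paths just described (reusing the two cameras already opened for dual-lens preview, versus querying the attribute information of dynamic effect template a for its applicable cameras) can be sketched as follows. The function name, dictionary shape, and camera identifiers are illustrative assumptions, not the patent's implementation.

```python
def select_cameras(template_attrs, open_cameras=None):
    """Pick camera c and camera d for micro-movie recording.

    If two cameras are already turned on (dual-lens preview before the
    micro-movie mode), reuse them directly, which skips the lookup and
    lets processing start quickly; otherwise query the dynamic effect
    template's attribute information for its applicable cameras.
    """
    if open_cameras and len(open_cameras) == 2:
        return tuple(open_cameras)   # camera c = camera a, camera d = camera b
    return tuple(template_attrs["applicable_cameras"])

attrs = {"applicable_cameras": ["rear_main", "front"]}
print(select_cameras(attrs))                          # no open cameras: use template's
print(select_cameras(attrs, ["front", "rear_wide"]))  # reuse already-open pair
```

The applicable pair could be any combination of two cameras among the front, rear main, rear wide-angle, rear ultra-wide-angle, and rear telephoto cameras, as the text above notes.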
  • the real-time images collected by the two cameras can be dynamically processed to obtain a micro movie with animation effects. In this way, micro-movie recording in a dual-lens recording scene can be realized, so that rich dual-lens video content can be recorded.
  • moreover, the user does not need to perform complex operations such as framing, which can reduce the difficulty of recording micro movies.
  • a micro-movie includes multiple movie segments.
  • the dynamic effect template a includes a plurality of dynamic effect sub-templates, and a plurality of movie clips corresponds to a plurality of dynamic effect sub-templates one by one.
  • Each animation sub-template is used for animation processing of the real-time images collected in the corresponding movie segment.
  • each animation sub-template further includes sub-template a and sub-template b.
  • the sub-template a is used for the mobile phone to perform dynamic effect processing on the real-time image a
  • the sub-template b is used for the mobile phone to perform dynamic effect processing on the real-time image b.
  • corresponding sub-templates can be used for processing for different cameras. Therefore, different animation effects can be obtained by processing the preview image at the same time, which can further improve the processing effect.
  • sub-template a may also be called a first sub-template
  • sub-template b may also be called a second sub-template.
  • the dynamic effect template a includes n dynamic effect sub-templates, which are respectively the first dynamic effect sub-template, the second dynamic effect sub-template...the nth dynamic effect sub-template.
  • Each animation sub-template includes sub-template a and sub-template b. Let's say each movie segment is 2.5 seconds long.
  • the first animation sub-template is used for the animation processing in the first movie segment (such as the first 2.5 seconds). Specifically, sub-template a in the first animation sub-template is used for the animation processing of the real-time image a captured by camera c in the first movie segment, and sub-template b in the first animation sub-template is used for the animation processing of the real-time image b captured by camera d in the first movie segment.
  • the second animation sub-template is used for the animation processing in the second movie segment (such as the second 2.5 seconds). Specifically, sub-template a in the second animation sub-template is used for the animation processing of the real-time image a collected by camera c in the second movie segment, and sub-template b in the second animation sub-template is used for the animation processing of the real-time image b collected by camera d in the second movie segment...
  • the nth animation sub-template is used for the animation processing in the nth movie segment (such as the nth 2.5 seconds). Specifically, sub-template a in the nth animation sub-template is used for the animation processing of the real-time image a collected by camera c in the nth movie segment, and sub-template b in the nth animation sub-template is used for the animation processing of the real-time image b collected by camera d in the nth movie segment.
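The one-to-one mapping described above, where the kth animation sub-template (with its sub-template a and sub-template b) processes the two cameras' real-time images during the kth movie segment, can be sketched as follows. All names here are hypothetical stand-ins; the real effects would be image-processing operations, not strings.

```python
SEGMENT_SECONDS = 2.5  # example segment duration from the text

def make_template_a(n):
    """Dynamic effect template a modeled as n animation sub-templates,
    each containing a sub-template a and a sub-template b."""
    return [{"sub_a": f"effect_a_{k}", "sub_b": f"effect_b_{k}"}
            for k in range(1, n + 1)]

def process_segment(template_a, k, frame_from_c, frame_from_d):
    """Apply the kth animation sub-template (1-indexed) to the real-time
    images of camera c and camera d, yielding preview image a and
    preview image b for the kth movie segment."""
    sub = template_a[k - 1]
    preview_a = (sub["sub_a"], frame_from_c)  # stand-in for the real effect
    preview_b = (sub["sub_b"], frame_from_d)
    return preview_a, preview_b

template_a = make_template_a(5)
pa, pb = process_segment(template_a, 2, "img_c", "img_d")
print(pa)  # ('effect_a_2', 'img_c')
print(pb)  # ('effect_b_2', 'img_d')
```

Because each camera gets its own sub-template, the two preview streams can carry different animation effects at the same moment, which is the improvement the surrounding text points to.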
  • S504 of the foregoing embodiment further includes S1401-S1402, and after S504, S1403 is also included:
  • the mobile phone displays interface c in response to event a.
  • the interface c is a viewfinder interface before starting dual-camera recording.
  • the interface c includes the real-time image c collected by the camera c and the real-time image d collected by the camera d.
  • the interface c also includes fragment options for multiple movie fragments.
  • the interface c may also be called the third interface
  • the real-time image c may be called the third real-time image
  • the real-time image d may be called the fourth real-time image.
  • the real-time image a is a real-time image collected during the recording process of the movie segment
  • the real-time image c is a real-time image collected during the recording preparation process of the movie segment (that is, when the interface c is displayed).
  • similarly, there is no essential difference between the real-time image d and the real-time image b; the reason is the same as above.
  • the interface c is a viewfinder interface before starting dual camera recording. That is to say, when the interface c is displayed, the recording does not actually start. Therefore, in the process of displaying the interface c, the mobile phone can adjust the viewing angle according to the movement of the mobile phone by the user. And the framing changes during the adjustment process will not be recorded in the video. In this embodiment, the process of adjusting the framing is called recording preparation.
  • the mobile phone jumps from interface b to interface c in response to user operation c on interface b. That is, in the first case, event a is user's operation c on interface b.
  • the mobile phone jumps back to interface c in response to the event that the recording of the kth movie segment is completed.
  • 1≤k≤n, where n is the number of dynamic sub-templates included in the dynamic template a, and both k and n are positive integers. That is to say, in the second case, event a is an event that the recording of the kth movie segment is completed, and event a at this time may also be called the first event.
  • S1401 will be described below for these two cases respectively.
  • the event a may be the user's operation c on the interface b, and the operation c is used to trigger the mobile phone to start recording preparations.
  • the mobile phone may receive user's operation c on interface b.
  • the operation c may be a preset gesture c of the user in the interface b.
  • the preset gesture c is a sliding gesture from right to left in the interface b.
  • the interface b includes a control e, which is used to trigger the mobile phone to start recording preparations.
  • Operation c may be a trigger operation (such as a click operation, a long press operation, etc.) on the control e.
  • operation c may be a user's click operation on control e 1202 in interface b 1201 shown in (a) in FIG. 12 .
  • the mobile phone may display an interface c 1203 shown in (b) in FIG. 12. The interface c 1203 is the viewfinder interface before starting dual-camera recording.
  • the interface c 1203 includes a real-time image c 1204 collected by a camera c (such as a rear camera) and a real-time image d 1205 collected by a camera d (such as a front camera).
  • the interface c 1203 also includes segment options of 5 movie segments, which are respectively: segment option 1206, segment option 1207, segment option 1208, segment option 1209 and segment option 1210.
  • segment options shown in (b) in FIG. 12 are only exemplary. In actual implementation, it is not limited to this.
  • the segment duration (such as 2.5s) may not be displayed in the segment option.
  • the fragment option may also be in the shape of a circle, a square, or the like.
  • the number of segment options varies with the number of dynamic sub-templates included in the dynamic template a.
  • event a may be an event that the recording of the kth movie segment is completed. For example, when the recording countdown (such as 2.5s) of the kth movie segment ends, event a is triggered.
  • the kth movie segment recorded each time may also be referred to as the first movie segment.
  • the mobile phone can detect whether the recording countdown of the kth movie segment ends. If it is detected that the recording countdown is over, interface c is displayed.
  • each segment option a corresponds to a recorded movie segment
  • each segment option b corresponds to an unrecorded movie segment.
  • segment option a may also be referred to as a first segment option
  • segment option b may also be referred to as a third segment option.
  • segment option a includes a segment cover
  • segment option b does not include a segment cover, so as to distinguish between recorded movie segments and unrecorded movie segments.
  • the mobile phone can display the interface c 1501 shown in (a) in FIG. 15 .
  • the fragment option a 1502 pointing to the first movie fragment in the interface c 1501 is displayed with a cover, while the rest of the fragment options do not display the cover.
  • the mobile phone can display the interface c 1503 shown in (b) in FIG. 15 .
  • the fragment option a 1504 pointing to the first movie fragment and the fragment option a 1505 pointing to the second movie fragment both display a cover, while the rest of the fragment options do not display the cover.
  • the mobile phone can display the interface c 1506 shown in (c) in FIG. 15 .
  • All the segment options in the interface c 1506 are segment option a, and correspondingly, segment option a 1507, segment option a 1508, segment option a 1509, segment option a 1510 and segment option a 1511 all display covers.
  • the cover can be selected from the video frames included in the movie segment pointed to by the segment option a.
  • the cover may be the preview image of the first frame or the preview image of the last frame of the corresponding movie segment.
  • the first case corresponds to the case of entering interface c for the first time during the recording process of the micro-movie
  • the second case corresponds to entering the interface c again during the recording process of the micro-movie.
  • the interface c further includes a window b
  • the window b is used to play the animation effects of each animation sub-template in the animation template a.
  • for example, the window b plays the animation effect of the motion effect sub-template (such as the first motion effect sub-template) corresponding to the movie segment to be recorded.
  • window b may also be referred to as the first window.
  • the window b is no longer displayed in the interface c.
  • the window b is not included in the interface c 1506 shown in (c) of FIG. 15 .
  • the mobile phone may hide the window b in response to the user's closing operation on the window b. This simplifies interface elements and is more conducive to previewing.
  • the mobile phone may display the interface c 1603 shown in (b) in FIG. 16, and the interface c 1603 includes a hidden identification 1604.
  • the hidden identification 1604 is used to trigger restoring the display of the window b 1602. That is to say, the window b 1602 is hidden as the hidden identification 1604.
  • recording preparations can be performed in the interface c. After the preparations are completed, dual-camera recording can be triggered. Multiple movie segments are recorded in sequence, and each animation sub-template is used in turn to perform animation processing on the real-time images collected in the corresponding movie segment. Specifically, for the kth movie segment, the recording process is as shown in S1402 below:
  • the mobile phone displays the interface d in response to the user's operation d on the interface c.
  • the operation d is used to trigger the mobile phone to start recording the kth movie segment.
  • the k-th movie segment is any one of the plurality of movie segments.
  • the kth movie segment corresponds to the first motion effect sub-template.
  • the interface d is a viewfinder interface where the mobile phone is recording, and the interface d includes a preview image a and a preview image b.
  • the preview image a is obtained by the mobile phone using the first animation sub-template to perform animation processing on the real-time image a collected by the camera c
  • the preview image b is obtained by the mobile phone using the first animation sub-template to perform animation processing on the real-time image b collected by the camera d.
  • the operation d can also be called the fourth operation
  • the interface d can also be called the fourth interface
  • the kth movie segment can also be called the first movie segment
  • the preview image a can also be called the first preview image
  • the preview image b may also be referred to as a second preview image.
  • segment option c is one of q segment options b.
  • the segment option c corresponds to an unrecorded movie segment, that is, the kth movie segment.
  • event b may also be referred to as a third event
  • segment option c may also be referred to as a fourth segment option.
  • the kth movie segment may be automatically selected by the mobile phone. That is to say, event b is an event that the mobile phone automatically selects fragment option c.
  • the mobile phone selects the first movie clip, the second movie clip...the n-th movie clip as the k-th movie clip to be recorded from front to back according to the order of the multiple movie clips.
  • the operation d may be a preset gesture c performed by the user on the interface c.
  • the preset gesture c is a sliding gesture from bottom to top in interface c 1603 shown in (b) in FIG. 16 .
  • the interface c includes a control f, and the control f is used to trigger the mobile phone to start dual-camera recording.
  • Operation d is a trigger operation (such as a click operation, a long press operation, etc.) on the control f.
  • operation d may be a user's click operation on control f 1605 in interface c 1603 shown in (b) in FIG. 16 .
  • the control f can also be called the first control.
  • the kth movie segment is manually selected by the user. That is to say, event b may be a user's manual selection operation on segment option c.
  • in response to the user's selection operation on the third segment option (that is, segment option c is the third segment option), the mobile phone can determine that the kth movie segment to be recorded is the third movie segment.
  • operation d may be the user's selection operation on segment option c.
  • the operation d may be the user's preset gesture c on the interface c when the fragment option c is selected.
  • the preset gesture c is a sliding gesture from bottom to top in interface c 1603 shown in (b) in FIG. 16 .
  • the interface c includes a control f, and the control f is used to trigger the mobile phone to start dual-camera recording.
  • Operation d is a trigger operation (such as a click operation, a long press operation, etc.) on the control f.
  • the mobile phone displays the interface d 1701 shown in FIG. 17 in response to the user's click operation on the control f 1605 in the interface c 1603 shown in (b) in FIG. 16 .
  • the interface d 1701 includes a preview image a 1702 and a preview image b 1703.
  • the preview image a 1702 is obtained after the mobile phone performs dynamic effect processing on the real-time image a collected by the camera c according to the sub-template a in the first dynamic effect sub-template;
  • the preview image b 1703 is obtained after the mobile phone performs dynamic effect processing on the real-time image b collected by the camera d according to the sub-template b in the first dynamic effect sub-template.
  • the preview image a and the preview image b displayed on the interface d are both preview images after dynamic effect processing. In this way, the effect after the motion effect processing can be viewed in real time from the interface d during the recording process.
  • the interface d further includes prompt information a, and the prompt information a is used to prompt the skill of recording dynamic video.
  • the interface d 1701 shown in FIG. 17 includes prompt information a 1704, and the specific content of the prompt information a 1704 is: automatically shoot a dynamic video without the user moving the mobile phone.
  • the interface d also includes a recording countdown of the kth movie segment. In this way, the remaining recording duration of the kth movie segment can be clearly prompted.
  • the mobile phone can respond to the user's operation f to switch the viewfinder frames of the two cameras in the viewfinder interface, so as to realize the flexible swap of real-time images.
  • operation f may also be referred to as a sixth operation.
  • the mobile phone blocks the user's operation f on the interface d.
  • the operation f is used to trigger the viewfinder frame of camera c and the viewfinder frame of camera d in the mobile phone interchange interface d.
  • the mobile phone does not respond to the user's operation f on the interface d.
  • in this way, the inconsistency, caused by swapping the viewfinder frames during animation processing, between the obtained preview and the animation effect of the animation sub-template corresponding to the kth movie segment can be avoided. This improves the consistency of the previews before and after.
  • the operation f may be a double-click operation on the preview image a or the preview image b, or a drag operation on the preview image a or the preview image b. Take operation f as an example of a double-click operation on preview image a or preview image b.
  • the mobile phone does not respond to the user's double-click operation on the preview image a 1702 or the preview image b 1703 in the interface d 1701 shown in FIG. 17 .
  • when the recording countdown of the kth movie segment ends, event a is triggered, the process returns to S1401, and interface c is displayed. The mobile phone then displays interface d in response to the user's operation d on interface c, and enters the recording of the next movie segment. This cycle continues until all n movie segments are recorded.
  • the mobile phone displays the interface c 1203 shown in (b) in FIG. 12 in response to the user's click operation on the control e 1202 in the interface b 1201 shown in (a) in FIG. 12, and at this point enters the recording preparation of the first movie segment. Then, the mobile phone displays the interface d 1701 shown in FIG. 17 in response to the user's click operation on the control f in the interface c 1203 shown in (b) in FIG. 12. At this point, the recording of the first movie segment starts, and when the 2.5s recording countdown ends, the recording of the first movie segment ends.
  • the mobile phone displays the interface c 1501 shown in (a) in FIG. 15 in response to the end of the 2.5s recording countdown of the first movie segment, and at this point enters the recording preparation of the second movie segment. Then, the mobile phone displays the interface d 1701 shown in FIG. 17 in response to the user's click operation on the control f in the interface c 1501 shown in (a) in FIG. 15. At this point, the recording of the second movie segment starts, and when the 2.5s recording countdown ends, the recording of the second movie segment ends.
  • the mobile phone displays the interface c 1503 shown in (b) in FIG. 15 in response to the end of the 2.5s recording countdown of the second movie segment, and at this point enters the recording preparation of the third movie segment. Then, the mobile phone displays the interface d 1701 shown in FIG. 17 in response to the user's click operation on the control f in the interface c 1503 shown in (b) in FIG. 15. At this point, the recording of the third movie segment starts, and when the 2.5s recording countdown ends, the recording of the third movie segment ends.
  • This cycle repeats until the recording of the 5th movie segment ends.
  • the mobile phone responds to the end of the 2.5s recording countdown of the fifth movie segment, and displays the interface c 1506 shown in (c) in FIG. 15 .
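The prepare-record cycle walked through above (interface c → operation d → interface d → countdown ends, i.e. event a → back to interface c, repeated for all segments) can be summarized as a loop. The function and callback names are illustrative; the callbacks stand in for the user interaction and the camera pipeline, which the patent describes but does not implement this way.

```python
def record_micro_movie(n_segments, wait_for_operation_d, record_segment):
    """Drive the cycle described above: show interface c (recording
    preparation) for segment k, wait for operation d (e.g. a click on
    control f), record the kth segment in interface d until its
    countdown ends (event a), then return to interface c, until all
    n movie segments are recorded."""
    recorded = []
    for k in range(1, n_segments + 1):
        wait_for_operation_d(k)             # interface c: user triggers recording
        recorded.append(record_segment(k))  # interface d: record until countdown ends
        # event a: countdown ended -> interface c is displayed again
    return recorded

log = []
clips = record_micro_movie(
    5,
    wait_for_operation_d=lambda k: log.append(f"prepare {k}"),
    record_segment=lambda k: f"clip {k}",
)
print(clips)  # ['clip 1', 'clip 2', 'clip 3', 'clip 4', 'clip 5']
```

The loop makes explicit that framing adjustments happen only in the preparation step, so they never appear in the recorded segments.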
  • the mobile phone generates a video file a in response to the event c.
  • the event c is used to trigger the mobile phone to save a video with a dynamic effect.
  • the video file a includes n segments of the first video stream and n segments of the second video stream, wherein the kth segment of the first video stream includes the multi-frame preview image a processed in the kth movie segment, and the kth segment of the second video stream includes the multi-frame preview image b processed in the kth movie segment.
  • event c may also be called a second event
  • video file a may also be called a first video file
  • the mobile phone may receive event c.
  • the event c may be an event automatically triggered by the mobile phone. For example, after all n movie clips are recorded, the event c is triggered. For another example, after all n movie clips are recorded, if the user does not perform any operation on the interface c within a preset time, the event c is triggered.
  • the event c may also be an event triggered by a user.
  • the interface c displayed by the mobile phone includes a control g, and the control g is used to trigger the mobile phone to save a video with a dynamic effect.
  • the event c may be a user's trigger operation on the control g (such as a click operation, a long press operation, etc.).
  • the interface c displayed by the mobile phone includes a control h, and the control h is used to trigger the mobile phone to display a control i and a control j in the interface c.
  • the control i is used to trigger the mobile phone to save the video with the dynamic effect
  • the control j is used to trigger the mobile phone to delete the video with the dynamic effect.
  • the event c is a trigger operation (such as a click operation, a long press operation, etc.) on the control i.
  • the mobile phone can display the interface c 1801 shown in (a) in FIG. 18 .
  • the interface c 1801 includes a control h 1802.
  • the mobile phone can receive the user's click operation on the control h 1802.
  • the mobile phone can display an interface c 1803 shown in (b) in FIG. 18 , and the interface c 1803 includes a control i 1804 and a control j 1805.
  • the control i 1804 is used to trigger the mobile phone to save the video with the dynamic effect
  • the control j 1805 is used to trigger the mobile phone to delete the video with the dynamic effect.
  • the event c is a click operation on the control i 1804.
  • the mobile phone generates video file a in response to event c.
  • in response to the event c, the mobile phone displays prompt information b on the interface c, where the prompt information b is used to prompt the progress of generating the video file. In this way, the generation progress can be displayed intuitively.
  • the event c is the user's click operation on the control i 1804 in the interface c 1803 shown in (b) in FIG. 18 .
  • the mobile phone can display the interface c 1901 shown in FIG. 19 in response to the user's click operation on the control i 1804.
  • the interface c 1901 includes prompt information b 1902, and the prompt information b 1902 prompts that the progress of generating the video file is 25%.
  • the video file a includes n pieces of first video streams, that is, the first piece of first video stream, the second piece of first video stream...the nth piece of first video stream.
  • splicing all the preview images a in the first movie segment (such as the first 2.5s) in time sequence yields the first segment of the first video stream
  • splicing all the preview images a in the second movie segment (such as the second 2.5s) in time sequence yields the second segment of the first video stream, and so on.
  • the video file a further includes n sections of the second video stream, namely: the first section of the second video stream, the second section of the second video stream...the nth section of the second video stream.
  • splicing all the preview images b in the first movie segment (such as the first 2.5s) in time sequence can obtain the first segment of the second video stream
  • splicing all the preview images b in the second movie segment (such as the second 2.5s) in time sequence yields the second segment of the second video stream
  • splicing all the preview images b in the nth movie segment (such as the nth 2.5s) in time sequence yields the nth segment of the second video stream.
  • the mobile phone generates n segments of the first video stream and n segments of the second video stream. In this way, a video file with a dynamic effect is obtained.
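The splicing described above — concatenating each movie segment's processed preview frames in time order into one segment of each stream — can be sketched as follows. The function name and data shapes are illustrative assumptions, not the patent's actual format.

```python
def splice_streams(segments):
    """segments: a list of per-movie-segment frame lists; each frame is a
    (preview_a, preview_b) pair already processed with the dynamic effect."""
    first_streams, second_streams = [], []
    for frames in segments:                # the k-th movie segment
        stream_a = [a for a, _ in frames]  # k-th segment of the first video stream
        stream_b = [b for _, b in frames]  # k-th segment of the second video stream
        first_streams.append(stream_a)
        second_streams.append(stream_b)
    # video file a holds n segments of each stream, indexed by movie segment
    return {"first": first_streams, "second": second_streams}

demo = [[("a1", "b1"), ("a2", "b2")], [("a3", "b3")]]  # two tiny segments
video_file_a = splice_streams(demo)
```

Keeping the two streams segment-aligned is what later lets a single segment be replayed or replaced during re-recording.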
  • the mobile phone can perform dynamic effect processing in real time on the real-time images captured by the two cameras according to the dynamic effect template selected by the user, and display the processed preview images in the viewfinder interface during recording.
  • the difficulty of recording a video with a dynamic effect can be reduced.
  • the result after the dynamic effect processing can be presented to the user in real time, which is conducive to previewing the recorded result in real time.
  • after the recording ends, the mobile phone generates a video file a with a dynamic effect. In this way, a video with dynamic effects can be obtained automatically.
  • to generate the video file a, n segments of the first video stream and n segments of the second video stream need to be obtained first; then, the video file a including the n segments of the first video stream and the n segments of the second video stream is generated.
  • in one implementation, the mobile phone obtains the n segments of the first video stream and the n segments of the second video stream in response to the event c. In this way, after the mobile phone is triggered to save the video with dynamic effects, all video streams can be obtained through centralized processing, which avoids repeatedly calling the same program multiple times.
  • in another implementation, the mobile phone, in response to the event that the recording of the kth movie segment is completed (such as the end of the recording countdown), processes and obtains the kth segment of the first video stream and the kth segment of the second video stream corresponding to the kth movie segment. In this way, the first video stream and the second video stream corresponding to a movie segment can be obtained promptly after the recording of that segment is completed.
  • obtaining the kth segment of the first video stream and the kth segment of the second video stream promptly also makes it convenient for the user to check the recording effect of each recorded movie segment in time, and to start re-recording if the result is unsatisfactory.
  • here, re-recording means recording a previously recorded movie segment again.
  • the re-recording process includes S2101-S2103.
  • the mobile phone responds to the user's operation e on the segment option d in the interface c by playing the rth segment of the first video stream in the viewfinder frame of the camera c in the interface c, and playing the rth segment of the second video stream in the viewfinder frame of the camera d in the interface c.
  • the segment option d is one of the p segment options a, and the operation e is a selection operation on the segment option d.
  • segment option d may also be referred to as a second segment option.
  • the second segment option corresponds to a second movie segment, and the second movie segment is composed of an r-th segment of the first video stream and an r-th segment of the second video stream.
  • the segment option a is a segment option corresponding to a recorded movie segment.
  • the mobile phone may receive the user's operation e on the fragment option d in the interface c.
  • the operation e may be a click operation or a long press operation or the like.
  • the mobile phone may display an interface c 2201 shown in (a) in FIG. 22. Both segment option 2202 and segment option 2203 have covers, indicating that the first movie segment and the second movie segment have been recorded.
  • the mobile phone can receive the user's click operation on the second fragment option 2203 .
  • the mobile phone may display an interface c 2204 shown in (b) in FIG. 22 in response to the user's click operation on the second fragment option 2203.
  • the interface c 2204 includes a view frame 2205 of camera c and a view frame 2206 of camera d. Wherein, the second segment of the first video stream corresponding to the second movie segment is played in the viewing frame 2205 , and the second segment of the second video stream corresponding to the second movie segment is played in the viewing frame 2206 .
  • the mobile phone displays prompt information c on the interface c in response to the user's operation d on the interface c.
  • the prompt information c is used to prompt whether to re-record the movie segment corresponding to the segment option d.
  • the prompt information c may also be referred to as first prompt information.
  • the mobile phone may receive the user's operation d on the interface c.
  • for the operation d, refer to the relevant description in S1402 above, which will not be repeated here.
  • take the case where the interface c includes a control f and the operation d is a click operation on the control f as an example.
  • the mobile phone can receive the user's click operation on the control f 2207 shown in (b) in FIG. 22 .
  • the mobile phone can display the interface c 2208 shown in (c) in FIG. 22 in response to the user's click operation on the control f 2207.
  • the interface c 2208 includes prompt information c 2209, whose specific content is: whether to re-record this segment of video.
  • in response to the operation d, the mobile phone does not directly enter recording, but first prompts the user whether to re-record. Thereby, more accurate re-recording can be realized.
  • the mobile phone displays an interface d in response to the user's operation g on the prompt information c, so as to reshoot the movie segment corresponding to the segment option d.
  • the operation g is used to trigger the mobile phone to start re-recording the movie segment corresponding to the segment option d.
  • operation g may also be referred to as a fifth operation.
  • the mobile phone may receive the user's operation g on the prompt information c.
  • the operation g may be a preset gesture d for the prompt information c.
  • the preset gesture d is, for example, a gesture of circling the word "retake" in the prompt information c 2209 shown in (c) of FIG. 22 .
  • the prompt information c includes a control 1 and a control 2, the control 1 is used to trigger the mobile phone to cancel the re-recording, and the control 2 is used to trigger the mobile phone to start the re-recording.
  • Operation g may be a click operation on control 2 .
  • the "cancel" button in the prompt information c 2209 shown in (c) in Figure 22 is control 1
  • the "retake” button is control 2.
  • the mobile phone displays an interface d in response to the user's operation g on the prompt information c.
  • for the interface d, refer to the relevant description in S1402 above; details will not be repeated here.
  • the mobile phone displays prompt information d on the interface c in response to the user's operation h on the interface c.
  • the operation h is used to trigger the mobile phone to exit the micro movie recording.
  • the prompt information d is used to prompt the user whether to keep the video streams (such as the first video stream and the second video stream) of the recorded movie segments.
  • the mobile phone saves the video stream of the recorded movie segment in response to the user's operation i on the prompt information d, wherein the operation i triggers the mobile phone to save the video stream.
  • the mobile phone displays interface c in response to the user's operation a on interface a.
  • the operation h is the user's sliding gesture from left to right in the interface c
  • the operation i is the user's click operation on the third control in the prompt information d (such as the "Keep" button in the prompt information d 2303 shown in (b) in Figure 23).
  • the mobile phone displays the interface c 2301 shown in (a) in FIG. 23 .
  • the mobile phone can display the interface c 2302 shown in (b) in FIG. 23 in response to the user's sliding gesture from left to right in the interface c 2301.
  • the interface c 2302 includes prompt information d 2303.
  • in response to the user's click operation on the "Keep" button in the prompt information d 2303, the mobile phone saves the first video stream corresponding to the first movie segment and the second video stream corresponding to the second movie segment. Then, when the user enters the interface a 601 shown in (a) in Figure 6 again, the mobile phone can display the interface c 2301 shown in (a) in Figure 23 in response to the user's click operation on the control c 602 in the interface a 601.
  • the interface c includes segment options, which can clearly indicate information such as the number and duration of movie segments, can clearly distinguish recorded movie segments from unrecorded ones, and can also make it convenient for the user to select a movie segment.
  • the interface c may not include fragment options.
  • when the mobile phone automatically selects the movie segment to be recorded, there is no need for the user to select it, so the interface c may not include segment options.
  • an embodiment of the present application further provides an electronic device, which may include: the above-mentioned display screen (such as a touch screen), a memory, and one or more processors.
  • the display screen, memory and processor are coupled.
  • the memory is used to store computer program code comprising computer instructions.
  • when the processor executes the computer instructions, the electronic device can execute the functions or steps performed by the mobile phone in the foregoing method embodiments.
  • for the structure of the electronic device, reference may be made to the structure of the mobile phone 400 shown in FIG. 4 .
  • the chip system 2400 includes at least one processor 2401 and at least one interface circuit 2402 .
  • the processor 2401 and the interface circuit 2402 may be interconnected through wires.
  • interface circuit 2402 may be used to receive signals from other devices, such as memory of an electronic device.
  • the interface circuit 2402 may be used to send signals to other devices (such as the processor 2401).
  • the interface circuit 2402 can read instructions stored in the memory, and send the instructions to the processor 2401 .
  • the electronic device may be made to execute various steps in the foregoing embodiments.
  • the chip system may also include other discrete devices, which is not specifically limited in this embodiment of the present application.
  • an embodiment of the present application also provides a computer storage medium. The computer storage medium includes computer instructions, and when the computer instructions run on the above electronic device, the electronic device is caused to perform the functions or steps performed by the mobile phone in the above method embodiments.
  • an embodiment of the present application also provides a computer program product. When the computer program product runs on a computer, the computer is caused to execute the functions or steps performed by the mobile phone in the above method embodiments.
  • the disclosed devices and methods can be implemented in other ways.
  • the device embodiments described above are only illustrative.
  • the division of the modules or units is only a logical function division; in actual implementation, there may be other division methods.
  • for example, multiple units or components may be combined or integrated into another device, or some features may be omitted or not implemented.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be through some interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
  • the unit described as a separate component may or may not be physically separated, and a component displayed as a unit may be one physical unit or multiple physical units, that is, it may be located in one place or distributed to multiple different places. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above-mentioned integrated units can be implemented in the form of hardware or in the form of software functional units.
  • if the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a readable storage medium.
  • based on this understanding, the technical solution of the embodiments of the present application, in essence, or the part that contributes to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions to cause a device (which may be a single-chip microcomputer, a chip, etc.) or a processor to execute all or part of the steps of the methods described in the embodiments of the present application.
  • the aforementioned storage medium includes: various media that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.

Abstract

The present application provides a video shooting method and an electronic device, relating to the technical field of photography. In a dual-camera recording scenario, a micro movie can be recorded without the user moving the electronic device to control the framing, thereby reducing the difficulty of shooting a micro movie in a dual-camera recording scenario. The electronic device displays a first interface. In response to a user's first operation on the first interface, the electronic device displays multiple template options on the first interface, each template option corresponding to a dynamic effect template for image processing. In response to the user's second operation on a first template option among the multiple template options, the electronic device displays a second interface, where the second interface is used to play the animation effect of the first dynamic effect template corresponding to the first template option. In response to the user's third operation on the second interface, the electronic device uses the first dynamic effect template to process a first real-time image captured by a first camera and a second real-time image captured by a second camera, so as to record a micro movie.

Description

A video shooting method and electronic device
This application claims priority to the Chinese patent application No. 202110604950.5, entitled "Video shooting method and electronic device", filed with the China National Intellectual Property Administration on May 31, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to the technical field of photography, and in particular to a video shooting method and an electronic device.
Background
At present, electronic devices such as mobile phones can provide not only a picture shooting function but also a video recording function. In the prior art, the video recording function is usually limited by the edge of the picture frame, and a dynamic video can only be recorded by the user moving the electronic device. In practice, however, ordinary users do not have the professional ability to control the framing of a mobile phone and record a dynamic video.
It can be seen that a solution that can realize dynamic video recording simply and conveniently is urgently needed, so as to reduce the difficulty of video shooting.
Summary
The present application provides a video shooting method and an electronic device. In a dual-camera recording scenario, a micro movie can be recorded without the user moving the electronic device to control the framing, thereby reducing the difficulty of shooting a micro movie in a dual-camera recording scenario.
In a first aspect, an embodiment of the present application provides a video shooting method, which is applied to an electronic device including multiple cameras. The electronic device displays a first interface, where the first interface is a viewfinder interface before the electronic device starts recording, and the first interface includes real-time images captured by two of the multiple cameras. In response to a user's first operation on the first interface, the electronic device displays multiple template options on the first interface. The first operation is used to trigger the electronic device to record a micro movie; each template option corresponds to a dynamic effect template for image processing, and the dynamic effect template is used to process preview images captured by at least two of the multiple cameras to obtain a corresponding animation effect. In response to the user's second operation on a first template option among the multiple template options, the electronic device displays a second interface, where the second interface is used to play the animation effect of the first dynamic effect template corresponding to the first template option. In response to the user's third operation on the second interface, the electronic device uses the first dynamic effect template to process a first real-time image captured by a first camera and a second real-time image captured by a second camera, so as to record a micro movie. The first camera is one of the multiple cameras, and the second camera is one of the multiple cameras other than the first camera.
In summary, with the video shooting method provided by the embodiments of the present application, in a dual-camera recording scenario, dynamic effect processing can be performed on the real-time images captured by the two cameras according to the dynamic effect template selected by the user, so as to obtain a micro movie with animation effects. In this way, micro movie recording in a dual-camera recording scenario can be realized, and rich dual-camera video content can be recorded. In addition, complex operations such as controlling the framing by the user are not required, which can reduce the difficulty of recording a micro movie.
In a possible design of the first aspect, using the first dynamic effect template to process the real-time image captured by the first camera and the real-time image captured by the second camera includes: the electronic device displays a third interface. The third interface is a viewfinder interface before the electronic device starts recording, so that recording preparation can be performed in the third interface. The third interface includes a third real-time image captured by the first camera and a fourth real-time image captured by the second camera. A micro movie recorded with the first dynamic effect template is composed of multiple movie segments; the first dynamic effect template includes multiple dynamic effect sub-templates, and the multiple movie segments correspond one-to-one to the multiple dynamic effect sub-templates. The electronic device receives a fourth operation of the user on the third interface, where the fourth operation is used to trigger the electronic device to record a first movie segment, and the first movie segment is any one of the multiple movie segments. The first segment corresponds to a first dynamic effect sub-template. In response to the fourth operation, the electronic device displays a fourth interface. The fourth interface is a viewfinder interface while the electronic device is recording, and includes a first preview image and a second preview image. The first preview image is obtained by the electronic device performing dynamic effect processing on the first real-time image using the first dynamic effect sub-template, and the second preview image is obtained by the electronic device performing dynamic effect processing on the second real-time image using the first dynamic effect sub-template.
That is to say, with the method of this embodiment, recording preparation can be performed before recording, thereby improving the effect of micro movie recording. Moreover, the dynamic effect template includes multiple dynamic effect sub-templates, which can be used for dynamic effect processing in multiple movie segments, so that a micro movie with richer animation effects can be obtained. Furthermore, during dual-camera video recording, the mobile phone can perform dynamic effect processing on the real-time images captured by the two cameras in real time according to the dynamic effect template selected by the user, and display the processed preview images in the viewfinder interface during recording. In this way, the difficulty of recording a video with dynamic effects can be reduced, and the result of the dynamic effect processing can be presented to the user in real time, which facilitates real-time preview of the recorded result.
In another possible design of the first aspect, after the electronic device displays the fourth interface, the method further includes: the electronic device displays the third interface in response to a first event, where the first event is an event indicating that recording of the first movie segment is completed.
That is to say, with the method of this embodiment, after each movie segment is recorded, the jump to the third interface can be triggered again, so that recording preparation can be performed before each movie segment is recorded. This helps further improve the effect of micro movie recording.
In another possible design of the first aspect, the third interface further includes a first window, and the first window is used to play the animation effect corresponding to the first dynamic effect sub-template.
That is to say, with the method of this embodiment, the animation effect of the corresponding dynamic effect sub-template can be viewed during the recording preparation stage. In this way, the framing can be adjusted more accurately according to the animation effect, thereby improving the recording effect.
In another possible design of the first aspect, each dynamic effect sub-template includes a first sub-template and a second sub-template. The first sub-template is used by the electronic device to perform dynamic effect processing on the first real-time image, and the second sub-template is used by the electronic device to perform dynamic effect processing on the second real-time image.
That is to say, with the method of this embodiment, the corresponding sub-template can be used for processing for each camera. Thus, different animation effects can be obtained for the preview images captured by the two cameras at the same moment, which can further improve the processing effect.
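The idea that each dynamic effect sub-template carries one sub-template per camera, so the two feeds get different animation effects at the same instant, can be sketched as follows. The concrete effects shown (brightening one feed, darkening the other) are invented examples for illustration, not effects from the patent:

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

Frame = List[int]  # a frame reduced to a flat list of pixel values for illustration

@dataclass
class EffectSubTemplate:
    """One per-segment dynamic effect sub-template with a separate transform per camera."""
    first: Callable[[Frame], Frame]   # applied to the first camera's real-time image
    second: Callable[[Frame], Frame]  # applied to the second camera's real-time image

def process_pair(sub: EffectSubTemplate, img1: Frame, img2: Frame) -> Tuple[Frame, Frame]:
    # Same instant, two cameras, two different animation effects.
    return sub.first(img1), sub.second(img2)

# Assumed example effects: brighten one feed, darken the other.
sub = EffectSubTemplate(
    first=lambda f: [min(255, v + 20) for v in f],
    second=lambda f: [max(0, v - 20) for v in f],
)
p1, p2 = process_pair(sub, [100, 200], [100, 200])
```

Pairing the transforms in one object mirrors the one-to-one mapping between movie segments and sub-templates described above.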
In another possible design of the first aspect, after using the first dynamic effect template to process the first real-time image captured by the first camera and the second real-time image captured by the second camera, the method further includes: the electronic device saves a first video file in response to a second event, where the second event is used to trigger the electronic device to save the processed video. The first video file includes multiple segments of a first video stream and multiple segments of a second video stream. The multiple segments of the first video stream correspond one-to-one to the multiple movie segments, and the multiple segments of the second video stream correspond one-to-one to the multiple movie segments. Each segment of the first video stream includes the multiple frames of first preview images processed in the corresponding movie segment, and each segment of the second video stream includes the multiple frames of second preview images processed in the corresponding movie segment.
That is to say, with the method of this embodiment, a video file of the micro movie can be generated after the micro movie recording is completed, so that the recorded micro movie can be played subsequently.
In another possible design of the first aspect, the third interface includes p first segment options, where p≥0, p is a natural number, and p is the number of movie segments whose recording has been completed. Each first segment option corresponds to a recorded movie segment, and the third interface further includes a first control. The method further includes: the electronic device receives a user's selection operation on a second segment option, where the second segment option is one of the p first segment options and corresponds to a second movie segment. In response to the user's selection operation on the second segment option, the electronic device plays, in the third interface, the multiple frames of first preview images processed in the second movie segment and the multiple frames of second preview images processed in the second movie segment. In response to the user's click operation on the first control, the electronic device displays first prompt information in the second interface, where the first prompt information is used to prompt whether to reshoot the second movie segment. In response to the user's fifth operation on the first prompt information, the electronic device displays the fourth interface, so as to reshoot the second movie segment.
That is to say, with the method of this embodiment, a recorded movie segment can be re-recorded, so that the quality of each movie segment in the recorded micro movie can be ensured. Moreover, before re-recording, the user is prompted whether to re-record, so as to avoid misoperation by the user.
In another possible design of the first aspect, the third interface includes q third segment options, where q≥0, q is a natural number, and q is the number of movie segments whose recording has not been completed. Each third segment option corresponds to an unrecorded movie segment, and the third interface further includes the first control. Before the electronic device receives the user's fourth operation on the third interface, the method further includes: the electronic device selects a fourth segment option in response to a third event, where the fourth segment option is one of the q third segment options and corresponds to the first movie segment. The fourth operation is the user's click operation on the first control while the fourth segment option is selected.
That is to say, with the method of this embodiment, the movie segment to be recorded can be determined according to the user's selection, without being limited by the order of the movie segments, which makes micro movie recording more flexible.
In another possible design of the first aspect, after the fourth interface is displayed, the method further includes: the electronic device does not respond to a user's sixth operation on the fourth interface, where the sixth operation is used to trigger the electronic device to swap the viewfinder frame of the first camera and the viewfinder frame of the second camera in the fourth interface. The sixth operation includes a long press operation or a drag operation.
That is to say, with the method of this embodiment, it can be avoided that, during dynamic effect processing, the obtained preview becomes inconsistent with the animation effect of the corresponding dynamic effect sub-template due to swapping the viewfinder frames, thereby improving the consistency of previews before and after.
In another possible design of the first aspect, different dynamic effect templates apply to different cameras. The method further includes: the electronic device starts the first camera and the second camera in response to the user's third operation on the second interface, where the first camera and the second camera are the cameras to which the first dynamic effect template applies.
That is to say, with the method of this embodiment, the degree of matching between the cameras used and the cameras to which the dynamic effect template applies can be improved, thereby improving the effect of the dynamic effect processing.
In another possible design of the first aspect, the first interface includes a second control, and the second control is used to trigger the electronic device to display the multiple template options. The first operation is a click operation or a long press operation on the second control.
That is to say, with the method of this embodiment, by providing the second control in the viewfinder interface before recording starts, micro movie recording can be conveniently triggered.
In another possible design of the first aspect, the first interface includes a third control. The electronic device displaying the second interface in response to the user's second operation on the first template option among the multiple template options includes: the electronic device selects the first template option in response to the user's selection operation on the first template option among the multiple template options. With the first template option selected, the electronic device displays the second interface in response to the user's click operation on the third control.
That is to say, in this embodiment, after the user selects the corresponding template option, the playing of the dynamic effect template is triggered only upon the user's click operation on the third control. This ensures that only a dynamic effect template that the user indeed wants to view is played.
In a second aspect, an embodiment of the present application provides an electronic device including multiple cameras, a display screen, a memory, and one or more processors. The display screen, the memory, and the processor are coupled. The memory is configured to store computer program code, and the computer program code includes computer instructions. When the computer instructions are executed by the processor, the electronic device is caused to perform the following steps: displaying a first interface, where the first interface is a viewfinder interface before recording starts and includes real-time images captured by two of the multiple cameras; in response to a user's first operation on the first interface, displaying multiple template options on the first interface, where the first operation is used to trigger recording of a micro movie, each template option corresponds to a dynamic effect template for image processing, and the dynamic effect template is used to process preview images captured by at least two of the multiple cameras to obtain a corresponding animation effect; in response to the user's second operation on a first template option among the multiple template options, displaying a second interface, where the second interface is used to play the animation effect of the first dynamic effect template corresponding to the first template option; and in response to the user's third operation on the second interface, using the first dynamic effect template to process a first real-time image captured by a first camera and a second real-time image captured by a second camera, so as to record a micro movie, where the first camera is one of the multiple cameras, and the second camera is one of the multiple cameras other than the first camera.
In a possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: displaying a third interface, where the third interface is a viewfinder interface before recording starts and includes a third real-time image captured by the first camera and a fourth real-time image captured by the second camera; a micro movie recorded with the first dynamic effect template is composed of multiple movie segments, the first dynamic effect template includes multiple dynamic effect sub-templates, and the multiple movie segments correspond one-to-one to the multiple dynamic effect sub-templates; receiving a fourth operation of the user on the third interface, where the fourth operation is used to trigger recording of a first movie segment, the first movie segment is any one of the multiple movie segments, and the first segment corresponds to a first dynamic effect sub-template; and in response to the fourth operation, displaying a fourth interface, where the fourth interface is a viewfinder interface during recording and includes a first preview image and a second preview image, the first preview image being obtained by performing dynamic effect processing on the first real-time image using the first dynamic effect sub-template, and the second preview image being obtained by performing dynamic effect processing on the second real-time image using the first dynamic effect sub-template.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: displaying the third interface in response to a first event, where the first event is an event indicating that recording of the first movie segment is completed.
In another possible design of the second aspect, the third interface further includes a first window, and the first window is used to play the animation effect corresponding to the first dynamic effect sub-template.
In another possible design of the second aspect, each dynamic effect sub-template includes a first sub-template and a second sub-template. The first sub-template is used to perform dynamic effect processing on the first real-time image, and the second sub-template is used to perform dynamic effect processing on the second real-time image.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: saving a first video file in response to a second event, where the second event is used to trigger saving of the processed video. The first video file includes multiple segments of a first video stream and multiple segments of a second video stream; the multiple segments of the first video stream correspond one-to-one to the multiple movie segments, and the multiple segments of the second video stream correspond one-to-one to the multiple movie segments; each segment of the first video stream includes the multiple frames of first preview images processed in the corresponding movie segment, and each segment of the second video stream includes the multiple frames of second preview images processed in the corresponding movie segment.
In another possible design of the second aspect, the third interface includes p first segment options, where p≥0, p is a natural number, and p is the number of movie segments whose recording has been completed; each first segment option corresponds to a recorded movie segment; and the third interface further includes a first control.
When the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: receiving a user's selection operation on a second segment option, where the second segment option is one of the p first segment options and corresponds to a second movie segment; in response to the user's selection operation on the second segment option, playing, in the third interface, the multiple frames of first preview images processed in the second movie segment and the multiple frames of second preview images processed in the second movie segment; in response to the user's click operation on the first control, displaying first prompt information in the second interface, where the first prompt information is used to prompt whether to reshoot the second movie segment; and in response to the user's fifth operation on the first prompt information, displaying the fourth interface so as to reshoot the second movie segment.
In another possible design of the second aspect, the third interface includes q third segment options, where q≥0, q is a natural number, and q is the number of movie segments whose recording has not been completed; each third segment option corresponds to an unrecorded movie segment; and the third interface further includes the first control.
When the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: selecting a fourth segment option in response to a third event, where the fourth segment option is one of the q third segment options and corresponds to the first movie segment; and the fourth operation is the user's click operation on the first control while the fourth segment option is selected.
In another possible design of the second aspect, when the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: not responding to a user's sixth operation on the fourth interface, where the sixth operation is used to trigger swapping of the viewfinder frame of the first camera and the viewfinder frame of the second camera in the fourth interface.
In another possible design of the second aspect, the sixth operation includes a long press operation or a drag operation.
In another possible design of the second aspect, different dynamic effect templates apply to different cameras.
When the computer instructions are executed by the processor, the electronic device is caused to further perform the following step: starting the first camera and the second camera in response to the user's third operation on the second interface, where the first camera and the second camera are the cameras to which the first dynamic effect template applies.
In another possible design of the second aspect, the first interface includes a second control, and the second control is used to trigger display of the multiple template options; the first operation is a click operation or a long press operation on the second control.
In another possible design of the second aspect, the first interface includes a third control.
When the computer instructions are executed by the processor, the electronic device is caused to further perform the following steps: selecting the first template option in response to the user's selection operation on the first template option among the multiple template options; and, with the first template option selected, displaying the second interface in response to the user's click operation on the third control.
In a third aspect, an embodiment of the present application provides a chip system applied to an electronic device including multiple cameras, a display screen, and a memory. The chip system includes one or more interface circuits and one or more processors. The interface circuit and the processor are interconnected through wires. The interface circuit is configured to receive signals from the memory of the electronic device and send the signals to the processor, where the signals include the computer instructions stored in the memory. When the processor executes the computer instructions, the electronic device performs the method according to the first aspect and any of its possible designs.
In a fourth aspect, the present application provides a computer storage medium including computer instructions. When the computer instructions run on an electronic device, the electronic device is caused to perform the method according to the first aspect and any of its possible designs.
In a fifth aspect, the present application provides a computer program product. When the computer program product runs on a computer, the computer is caused to perform the method according to the first aspect and any of its possible designs.
It can be understood that, for the beneficial effects achievable by the electronic device of the second aspect, the chip system of the third aspect, the computer storage medium of the fourth aspect, and the computer program product of the fifth aspect provided above, reference may be made to the beneficial effects of the first aspect and any of its possible designs, which will not be repeated here.
Brief Description of the Drawings
FIG. 1a is a schematic diagram of a dual-camera recording interface in portrait orientation according to an embodiment of the present application;
FIG. 1b is a schematic diagram of a dual-camera recording interface in landscape orientation according to an embodiment of the present application;
FIG. 2 is a schematic diagram of an entry interface for dual-camera recording according to an embodiment of the present application;
FIG. 3a is a schematic diagram of another entry interface for dual-camera recording according to an embodiment of the present application;
FIG. 3b is a schematic diagram of another entry interface for dual-camera recording according to an embodiment of the present application;
FIG. 4 is a schematic diagram of a hardware structure of a mobile phone according to an embodiment of the present application;
FIG. 5 is a flowchart of a video shooting method according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a mobile phone recording interface according to an embodiment of the present application;
FIG. 7 is a flowchart of another video shooting method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 9 is a flowchart of another video shooting method according to an embodiment of the present application;
FIG. 10 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 11 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 12 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 13 is a flowchart of another video shooting method according to an embodiment of the present application;
FIG. 14a is a schematic diagram of the composition of a video file according to an embodiment of the present application;
FIG. 14b is a flowchart of another video shooting method according to an embodiment of the present application;
FIG. 15 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 16 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 17 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 18 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 19 is a schematic diagram of a video saving interface according to an embodiment of the present application;
FIG. 20 is a schematic diagram of the composition of another video file according to an embodiment of the present application;
FIG. 21 is a flowchart of a video re-recording method according to an embodiment of the present application;
FIG. 22 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 23 is a schematic diagram of another mobile phone recording interface according to an embodiment of the present application;
FIG. 24 is a schematic structural diagram of a chip system according to an embodiment of the present application.
Detailed Description of Embodiments
Hereinafter, the terms "first" and "second" are used for descriptive purposes only and shall not be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, a feature defined with "first" or "second" may explicitly or implicitly include one or more of such features. In the description of the embodiments, unless otherwise specified, "multiple" means two or more.
The implementation of the embodiments of the present application will be described in detail below with reference to the accompanying drawings. The video shooting method provided by the embodiments of the present application can be applied to a dual-camera recording scenario. Dual-camera recording refers to a mode of recording video with two cameras turned on at the same time. In a dual-camera recording scenario, the viewfinder interface displayed by the mobile phone contains two channels of images captured by two cameras.
To facilitate understanding of the embodiments of the present application, the dual-camera recording scenario is first described here by taking the case where the electronic device is a mobile phone as an example, with reference to FIG. 1a and FIG. 1b.
The mobile phone may display the viewfinder interface 101 shown in (a) in FIG. 1a, which includes a real-time image 102 captured by a first camera (such as a rear main camera) and a real-time image 103 captured by a second camera (such as a front camera). Alternatively, the mobile phone may display the viewfinder interface 104 shown in (b) in FIG. 1a, which includes a real-time image 105 captured by a first camera (such as a rear telephoto camera) and a real-time image 106 captured by a second camera (such as a rear wide-angle camera). Alternatively, the mobile phone may display the viewfinder interface 107 shown in (c) in FIG. 1a, which includes a real-time image 108 captured by a first camera (such as a rear main camera) and a real-time image 109 captured by a second camera (such as a front camera).
The viewfinder interface 101 shown in (a) in FIG. 1a, the viewfinder interface 104 shown in (b) in FIG. 1a, and the viewfinder interface 107 shown in (c) in FIG. 1a are all viewfinder interfaces in portrait orientation.
In some embodiments, the mobile phone can also realize dual-camera recording in landscape orientation. In this embodiment, the mobile phone may display the viewfinder interface 113 shown in (a) in FIG. 1b, which includes a real-time image 114 captured by a first camera (such as a rear main camera) and a real-time image 115 captured by a second camera (such as a front camera). Alternatively, the mobile phone may display the viewfinder interface 116 shown in (b) in FIG. 1b, which includes a real-time image 117 captured by a first camera (such as a rear telephoto camera) and a real-time image 118 captured by a second camera (such as a rear wide-angle camera). Alternatively, the mobile phone may display the viewfinder interface 119 shown in (c) in FIG. 1b, which includes a real-time image 120 captured by a first camera (such as a rear main camera) and a real-time image 121 captured by a second camera (such as a front camera).
In the following embodiments, the solutions of the embodiments of the present application will be described mainly in portrait orientation (the form shown in FIG. 1a).
In a dual-camera recording scenario, the viewfinder interface may further include a mode identifier, which is used to indicate the preview cameras currently in use (such as the front camera and the rear camera) and to indicate the display layout, in the viewfinder interface, of the real-time images captured by the preview cameras.
For example, suppose the person image is the real-time image captured by the front camera and the building image is the real-time image captured by the rear camera. The viewfinder interface 101 shown in (a) in FIG. 1a includes a mode identifier 110, which indicates that the preview cameras currently in use are one rear camera and one front camera, and that the real-time image captured by the rear camera and the real-time image captured by the front camera are displayed in a top-bottom layout in the viewfinder interface. The viewfinder interface 104 shown in (b) in FIG. 1a includes a mode identifier 111, which indicates that the preview cameras currently in use are two rear cameras (such as a rear telephoto camera and a rear wide-angle camera), and that the real-time images captured by the two rear cameras are displayed in a top-bottom layout in the viewfinder interface. The viewfinder interface 107 shown in (c) in FIG. 1a includes a mode identifier 112, which indicates that the preview cameras currently in use are one rear camera and one front camera, and that the real-time image captured by the front camera and the real-time image captured by the rear camera are displayed in a picture-in-picture layout in the viewfinder interface.
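A minimal sketch of what such a mode identifier encodes (the cameras in use plus the display layout); the type and field names here are assumptions for illustration, not the method's actual data model:

```python
from dataclasses import dataclass
from enum import Enum

class Layout(Enum):
    TOP_BOTTOM = "top_bottom"                # two feeds stacked vertically
    PICTURE_IN_PICTURE = "picture_in_picture"  # one feed inset in the other

@dataclass
class ModeIdentifier:
    """What the on-screen mode identifier conveys: which preview cameras are in
    use and how their real-time images are laid out in the viewfinder."""
    cameras: tuple  # e.g. ("rear_main", "front") or ("rear_tele", "rear_wide")
    layout: Layout

mode_110 = ModeIdentifier(("rear_main", "front"), Layout.TOP_BOTTOM)
mode_112 = ModeIdentifier(("rear_main", "front"), Layout.PICTURE_IN_PICTURE)
```

The same camera pair can thus appear under different identifiers, since the identifier distinguishes layouts as well as cameras.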
In addition, the embodiments of the present application do not limit the specific way of triggering entry into dual-camera recording. In some embodiments, a control a is provided in the additional function menu interface (also called the "More" menu interface) of the camera application, and the control a is used to trigger the mobile phone to enable the dual-camera recording function. The mobile phone may receive the user's click operation on the control a and, in response, display an interface a, which is the viewfinder interface before dual-camera recording starts.
For example, the additional function menu interface 201 shown in FIG. 2 includes a control a 202. The mobile phone may receive the user's click operation on the control a 202 and, in response, display the viewfinder interface 101 shown in (a) in FIG. 1a, which is the viewfinder interface before dual-camera recording starts.
In other embodiments, the viewfinder interface of ordinary video recording includes a control b, which is used to trigger the mobile phone to display multiple mode options, each corresponding to a display layout (for example, a picture-in-picture layout). The mobile phone may receive the user's click operation on the control b and, in response, display a mode selection window in the viewfinder interface of ordinary recording. The mobile phone may receive the user's selection operation on a mode option a among the multiple mode options and, in response to the selection operation (such as a click operation), display the interface a. The interface a is the viewfinder interface before dual-camera recording starts, in which the real-time image captured by the first camera and the real-time image captured by the second camera are displayed in the display layout a corresponding to the mode option a. That is to say, in this embodiment, dual-camera recording is integrated into ordinary recording, and the user can switch from ordinary recording to dual-camera recording.
For example, the viewfinder interface 301 of ordinary recording shown in (a) in FIG. 3a includes a control b 302. The mobile phone may receive the user's click operation on the control b 302 and, in response, display the viewfinder interface 303 of ordinary recording shown in (b) in FIG. 3a. The viewfinder interface 303 includes multiple mode options: a mode option 304, a mode option 305, and a mode option 306. The mode option 304 corresponds to a display layout in which the real-time image captured by the front camera and the real-time image captured by the rear camera are arranged one above the other; the mode option 305 corresponds to a display layout in which the real-time images captured by two rear cameras are arranged one above the other; the mode option 306 corresponds to a display layout in which the real-time image captured by the front camera and the real-time image captured by the rear camera are arranged picture-in-picture. The mobile phone may receive the user's click operation on the mode option 304, that is, the mode option a is the mode option 304. In response to the user's click operation on the mode option 304, the mobile phone may display the interface a 307 shown in (c) in FIG. 3a. In the interface a 307, the real-time image 308 captured by the rear camera and the real-time image 309 captured by the front camera are displayed in a top-bottom layout.
In other embodiments, the tab bar of the camera application includes a multi-lens recording tab, and dual-camera recording can be entered directly by triggering this tab. Specifically, the camera application provides a multi-lens recording tab, and the mobile phone may receive the user's trigger operation on the tab (such as a click operation or a long press operation) and, in response, display the interface a, which is the viewfinder interface before dual-camera recording starts. In this way, dual-camera recording can be triggered through an independent tab in the camera application, avoiding functional compatibility problems with other tabs.
Taking the case where the user's trigger operation on the multi-lens recording tab is a click operation as an example, the camera application provides the multi-lens recording tab 310 shown in (a) in FIG. 3b. The mobile phone may receive the user's click operation on the tab 310 and, in response, display the interface a 311 shown in (b) in FIG. 3b. In the interface a 311, the real-time image 312 captured by the rear camera and the real-time image 313 captured by the front camera are displayed in a top-bottom layout. At this point, dual-camera recording is entered.
An embodiment of the present application provides a video shooting method, which can be applied to an electronic device. The electronic device can provide a video recording function, specifically a dual-camera recording function. During dual-camera recording, the electronic device can perform dynamic effect processing on the images captured by the two cameras according to the dynamic effect template selected by the user, so as to record a micro movie with animation effects.
In summary, with the video shooting method provided by the embodiments of the present application, a micro movie can be recorded without relying on complex operations such as the user's movement control of the electronic device, which reduces the difficulty of recording dynamic videos.
For example, the electronic device in the embodiments of the present application may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a personal digital assistant (PDA), an augmented reality (AR) / virtual reality (VR) device, or the like. The embodiments of the present application do not specifically limit the specific form of the electronic device.
以电子设备是手机为例。请参考图4,为本申请实施例提供的一种手机400的结构示意图。如图4所示,电子设备可以包括处理器410,外部存储器接口420,内部存储器421,通用串行总线(universal serial bus,USB)接口430,充电管理模块440,电源管理模块441,电池442,天线1,天线2,移动通信模块450,无线通信模块460,音频模块470,扬声器470A,受话器470B,麦克风470C,耳机接口470D,传感器模块480,按键490,马达491,指示器492,摄像头493,显示屏494,以及用户标识模块(subscriber identification module,SIM)卡接口495等。
可以理解的是,本实施例示意的结构并不构成对电子设备的具体限定。在另一些实施例中,电子设备可以包括比图示更多或更少的部件,或者组合某些部件,或者拆分某些部件,或者不同的部件布置。图示的部件可以以硬件,软件或软件和硬件的组合实现。
处理器410可以包括一个或多个处理单元,例如:处理器410可以包括应用处理器(application processor,AP),调制解调处理器,图形处理器(graphics processing unit,GPU),图像信号处理器(image signal processor,ISP),控制器,存储器,视频编解码器,数字信号处理器(digital signal processor,DSP),基带处理器,和/或神经网络处理器(neural-network processing unit,NPU)等。其中,不同的处理单元可以是独立的器件,也可以集成在一个或多个处理器中。
控制器可以是电子设备的神经中枢和指挥中心。控制器可以根据指令操作码和时序信号,产生操作控制信号,完成取指令和执行指令的控制。
处理器410中还可以设置存储器,用于存储指令和数据。在一些实施例中,处理器410中的存储器为高速缓冲存储器。该存储器可以保存处理器410刚用过或循环使用的指令或数据。如果处理器410需要再次使用该指令或数据,可从所述存储器中直接调用。避免了重复存取,减少了处理器410的等待时间,因而提高了系统的效率。
在一些实施例中,处理器410可以包括一个或多个接口。接口可以包括集成电路(inter-integrated circuit,I2C)接口,集成电路内置音频(inter-integrated circuit sound,I2S)接口,脉冲编码调制(pulse code modulation,PCM)接口,通用异步收发传输器(universal asynchronous receiver/transmitter,UART)接口,移动产业处理器接口(mobile industry processor interface,MIPI),通用输入输出(general-purpose input/output,GPIO)接口,用户标识模块(subscriber identity module,SIM)接口,和/或通用串行总线(universal serial bus,USB)接口等。
可以理解的是,本实施例示意的各模块间的接口连接关系,只是示意性说明,并不构成对电子设备的结构限定。在另一些实施例中,电子设备也可以采用上述实施例中不同的接口连接方式,或多种接口连接方式的组合。
充电管理模块440用于从充电器接收充电输入。其中,充电器可以是无线充电器,也可以是有线充电器。电源管理模块441用于连接电池442,充电管理模块440与处理器410。电源管理模块441接收电池442和/或充电管理模块440的输入,为处理器410,内部存储器421,外部存储器,显示屏494,摄像头493,和无线通信模块460等供电。
电子设备的无线通信功能可以通过天线1,天线2,移动通信模块450,无线通信模块460,调制解调处理器以及基带处理器等实现。
移动通信模块450可以提供应用在电子设备上的包括2G/3G/4G/5G等无线通信的解决方案。移动通信模块450可以包括至少一个滤波器,开关,功率放大器,低噪声放大器(low noise amplifier,LNA)等。
无线通信模块460可以提供应用在电子设备上的包括无线局域网(wireless local area networks,WLAN)(如无线保真(wireless fidelity,Wi-Fi)网络),蓝牙(bluetooth,BT),全球导航卫星系统(global navigation satellite system,GNSS),调频(frequency modulation,FM),近距离无线通信技术(near field communication,NFC),红外技术(infrared,IR)等无线通信的解决方案。
在一些实施例中,无线通信模块360可以包括NFC芯片,该NFC芯片可以包括NFC控制器(NFC controller,NFCC)。该NFC芯片能够对信号进行放大、模数转换及数模转换、存储等处理。NFCC用于负责通过天线进行数据的物理传输。NFCC可以包含在电子设备的NFC芯片中。设备主机(device host,DH)用于负责NFCC的管理,如初始化、配置和电源管理等。其中,DH可以包含在电子设备的主芯片中,也可以与电子设备的处理器集成在一起。
在一些实施例中,电子设备的天线1和移动通信模块450耦合,天线2和无线通信模块460耦合,使得电子设备可以通过无线通信技术与网络以及其他设备通信。所述无线通信技术可以包括全球移动通讯系统(global system for mobile communications,GSM),通用分组无线服务(general packet radio service,GPRS),码分多址接入(code division multiple access,CDMA),宽带码分多址(wideband code division multiple access,WCDMA),时分码分多址(time-division code division multiple access,TD-SCDMA),长期演进(long term evolution,LTE),BT,GNSS,WLAN,NFC,FM,和/或IR技术等。
电子设备通过GPU,显示屏494,以及应用处理器等实现显示功能。GPU为图像处理的微处理器,连接显示屏494和应用处理器。GPU用于执行数学和几何计算,用于图形渲染。处理器410可包括一个或多个GPU,其执行程序指令以生成或改变显示信息。
显示屏494用于显示图像,视频等。显示屏494包括显示面板。显示面板可以采用液晶显示屏(liquid crystal display,LCD),有机发光二极管(organic light-emitting diode,OLED),有源矩阵有机发光二极体或主动矩阵有机发光二极体(active-matrix organic light emitting diode,AMOLED),柔性发光二极管(flex light-emitting diode,FLED),Miniled,MicroLed,Micro-oLed,量子点发光二极管(quantum dot light emitting diodes,QLED)等。
电子设备可以通过ISP,摄像头493,视频编解码器,GPU,显示屏494以及应用处理器等实现拍摄功能。ISP用于处理摄像头493反馈的数据。例如,拍照时,打开快门,光线通过镜头被传递到摄像头感光元件上,光信号转换为电信号,摄像头感光元件将所述电信号传递给ISP处理,转化为肉眼可见的图像。ISP还可以对图像的噪点,亮度,肤色进行算法优化。ISP还可以对拍摄场景的曝光,色温等参数优化。在一些实施例中,ISP可以设置在摄像头493中。
摄像头493用于捕获静态图像或视频。物体通过镜头生成光学图像投射到感光元件。感光元件可以是电荷耦合器件(charge coupled device,CCD)或互补金属氧化物半导体(complementary metal-oxide-semiconductor,CMOS)光电晶体管。感光元件把光信号转换成电信号,之后将电信号传递给ISP转换成数字图像信号。ISP将数字图像信号输出到DSP加工处理。DSP将数字图像信号转换成标准的RGB,YUV等格式的图像信号。在一些实施例中,电子设备可以包括1个或N个摄像头493,N为大于1的正整数。
外部存储器接口420可以用于连接外部存储卡,例如Micro SD卡,实现扩展电子设备的存储能力。外部存储卡通过外部存储器接口420与处理器410通信,实现数据存储功能。例如将音乐,视频等文件保存在外部存储卡中。
内部存储器421可以用于存储计算机可执行程序代码,所述可执行程序代码包括指令。处理器410通过运行存储在内部存储器421的指令,从而执行电子设备的各种功能应用以及数据处理。例如,处理器410可以通过执行存储在内部存储器421中的指令,响应于用户展开显示屏494的操作,在显示屏484显示不同的内容。内部存储器421可以包括存储程序区和存储数据区。其中,存储程序区可存储操作系统,至少一个功能所需的应用程序(比如声音播放功能,图像播放功能等)等。存储数据区可存储电子设备使用过程中所创建的数据(比如音频数据,电话本等)等。此外,内部存储器421可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件,闪存器件,通用闪存存储器(universal flash storage,UFS)等。
电子设备可以通过音频模块470,扬声器470A,受话器470B,麦克风470C,耳机接口470D,以及应用处理器等实现音频功能。例如音乐播放,录音等。
按键490包括开机键,音量键等。按键490可以是机械按键。也可以是触摸式按键。电子设备可以接收按键输入,产生与电子设备的用户设置以及功能控制有关的键信号输入。马达491可以产生振动提示。马达491可以用于来电振动提示,也可以用于触摸振动反馈。指示器492可以是指示灯,可以用于指示充电状态,电量变化,也可以用于指示消息,未接来电,通知等。SIM卡接口495用于连接SIM卡。SIM卡可以通过插入SIM卡接口495,或从SIM卡接口495拔出,实现和电子设备的接触和分离。电子设备可以支持1个或N个SIM卡接口,N为大于1的正整数。
以下实施例中的方法均可以在具有上述硬件结构的手机400中实现,并主要结合竖屏形式下的双镜头录像视频场景,来对本申请实施例的方法进行说明。
本申请实施例提供一种视频拍摄方法,该方法可应用于手机中。该手机至少包括多个摄像头,并且该手机可提供双镜头录像的功能。如图5所示,该方法包括S501-S504。
S501、手机显示界面a,该界面a是手机开始录像前的取景界面,界面a包括多个摄像头中的摄像头a和摄像头b采集的实时图像。
其中,界面a也可以称为第一界面,下文中相同。摄像头a和摄像头b是多个摄像头中的两个摄像头。
基于前文说明可知,手机响应于用户对控件a的点击操作,可显示界面a。或者,手机响应于用户对多个模式选项中模式选项a的选择操作,也可显示界面a。或者,手机响应于用户对控件b的点击操作,也可显示界面a。并且,界面a中包括两个摄像头采集的实时图像。例如,图3a中的(c)示出的界面a 307中包括摄像头a(如后置主摄像头)采集的实时图像308和摄像头b(如前置摄像头)采集的实时图像309。
S502、手机响应于用户对该界面a的操作a,在界面a中显示多个模板选项。该操作a用于触发手机录制微电影,每个模板选项对应一种图像处理的动效模板。该动效模板用于处理多个摄像头中至少两个摄像头采集的预览图像并得到相应的动画效果。
其中,操作a也可以称为第一操作,下文中相同。
在S502之前,手机可接收用户对该界面a的操作a。其中,操作a可以是用户在界面a中执行的预设手势a(如滑动手势、长按手势)。或者,界面a中包括控件c,控件c用于触发手机显示多个模板选项。操作a是用户对控件c的触发操作(如点击操作、长按操作等)。其中,控件c也可以称为第二控件。
以操作a是用户对控件c的点击操作为例,图6中的(a)示出的界面a 601中包括控件c 602。手机可以接收用户对该控件c 602的点击操作。应注意,图6中的(a)示出的控件c 602的形态和位置仅为示例性的,实际实施时,并不以此为限。例如,控件c还可以显示在界面a的右边缘或者左边缘的位置。又如,控件c还可以是圆形或其他形状。
其中,动效模板是模拟摄像头在各种运动状态下拍摄的视频的动感效果的模板。其中,运动状态包括推、拉、摇、移、跟和/或甩等状态。每个动效模板中至少包括一种运动状态下的动感效果。为便于对动效模板的理解,下面对摄像头在推、拉、摇、移、跟或甩的运动状态的拍摄效果进行说明:
1、推:推是指使画面由大景别向小景别连续过渡的拍摄方法。推镜头一方面把主体从环境中分离出来,另一方面提醒观者对主体或主体的某个细节特别注意。
2、拉:拉与推正好相反,它把被摄主体在画面上由近至远、由局部到全体地展示出来,使得主体或主体的细节渐渐变小。拉镜头强调的是主体与环境的关系。
3、摇:摇是指摄像机的位置不动,只作角度的变化,其方向可以是左右摇或上下摇,也可以是斜摇或旋转摇。其目的是对被摄主体的各部位逐一展示,或展示规模,或巡视环境等。其中最常见的摇是左右摇,在电视节目中经常使用。
4、移:移是“移动”的简称,是指摄像机沿水平面作各方向移动并同时进行拍摄。移动拍摄要求较高,在实际拍摄中需要专用设备配合。移动拍摄可产生巡视或展示的视觉效果,如果被摄主体属于运动状态,使用移动拍摄可在画面上产生跟随的视觉效果。
5、跟:跟是指跟随拍摄,即摄像机始终跟随被摄主体进行拍摄,使运动的被摄主体始终在画面中。其作用是能更好地表现运动的物体。
6、甩:甩实际上是摇的一种,具体操作是在前一个画面结束时,镜头急骤地转向另一个方向。在摇的过程中,画面变得非常模糊,等镜头稳定时才出现一个新的画面。它的作用是表现事物、时间、空间的急剧变化,造成人们心理的紧迫感。
在S502中,以操作a是用户对控件c的点击操作为例,手机响应于用户对图6中的(a)示出的界面a 601中的控件c 602的点击操作,可以显示图6中的(b)示出的界面a 603。该界面a 603中包括4个模板选项,分别为:模板选项604,模板选项605,模板选项606和模板选项607。其中,模板选项604对应“好友欢聚”的动效模板,模板选项605对应“温馨一刻”的动效模板,模板选项606对应“亲密时光”的动效模板,模板选项607对应“悦己时刻”的动效模板。
应注意,图6中的(b)示出的模板选项的数量、位置以及形态都是示例性的,实际实施时,并不以此为限。例如,多个选项模板也可以显示在界面a的中间位置。又如,多个选项模板也可以纵向排列。又如,各个模板选项有不同的选项封面。
在一些实施例中,多个模板选项包括多个模板选项a和多个模板选项b,其中,每个模板选项a对应一种单镜头录像中的图像处理的动效模板,每个模板选项b对应一种双镜头录像中的图像处理的动效模板。应注意,由图3b中的(a)和图3b中的(b)所示的方式进入双镜头录像的界面后,显示的多个模板选项中通常仅包括双镜头录像的动效模板。
在本实施例中,为便于用户选择与双镜头录制场景相匹配的动效模板,如图7所示,该方法还包括S701:
S701、手机从多个模板选项中定位出多个模板选项b。其中,每个模板选项b对应一种双镜头录像中的图像处理的动效模板。
手机可根据各个模板选项的场景属性来定位出多个模板选项b。其中,场景属性用于指示模板选项适用的场景。其中,适用的场景包括单镜头录像场景或双镜头录像场景。具体的,若模板选项的场景属性是第一属性,则定位该模板选项为模板选项a。若模板选项的场景属性是第二属性,则定位该模板选项为模板选项b。例如,若模板选项的场景属性是1,则定位该模板选项是模板选项b。即:第二属性是1。
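为便于理解上述按场景属性定位模板选项b的过程,下面给出一段示意性的Python代码。其中的字段名(如scene_attr、name)以及数据结构仅为演示而假设,并非本申请的实际实现:

```python
def locate_dual_lens_templates(options):
    """从全部模板选项中定位出场景属性为第二属性(此处约定为1)的
    模板选项b,即适用于双镜头录像场景的动效模板选项。"""
    return [opt for opt in options if opt["scene_attr"] == 1]


template_options = [
    {"name": "单镜头模板", "scene_attr": 0},  # 第一属性:模板选项a
    {"name": "好友欢聚", "scene_attr": 1},    # 第二属性:模板选项b
    {"name": "温馨一刻", "scene_attr": 1},
]
print([o["name"] for o in locate_dual_lens_templates(template_options)])
# 输出: ['好友欢聚', '温馨一刻']
```

定位出的模板选项b即为S702中连续显示的多个模板选项。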
并且,S502进一步包括S702:
S702、手机响应于操作a,在界面a中显示多个模板选项。该操作a用于触发模板选项的显示。该多个模板选项是多个模板选项b。
具体的,从第一个模板选项b开始,连续显示多个模板选项b。
例如,图6中的(b)示出的界面a 603中包括的4个模板选项都是适用于双镜头录制场景的模板选项b。并且,模板选项604是定位出的第一个模板选项b,模板选项605是定位出的第二个模板选项b,模板选项606是定位出的第三个模板选项b,模板选项607是定位出的第四个模板选项b。
在本实施例中,针对双镜头录像的场景,显示模板选项b供用户选择。如此,可以使显示的模板选项与场景相适应,进而有利于后续快速选出相匹配的模板选项。
在另一些实施例中,为便于区分模板选项适用的场景(如单摄像头录制场景或双镜头录像场景),手机在显示多个模板选项时,在模板选项的预设位置处显示场景标识。其中,场景标识用于指示模板选项适用的场景。
在另一些实施例中,手机响应于操作a,在界面a中显示多个模板选项。与此同时,还会隐藏界面a中的多个功能图标和控件,以简化界面中的元素,有利于模板选项的选择。例如,相较于图6中的(a)示出的界面a 601:图6中的(b)示出的界面a 603,隐藏了变焦调节控件608、美颜控件609、模式标识610、闪光灯图标611、滤镜图标612、设置图标613以及镜头切换控件614。
S503、手机响应于用户对多个模板选项中模板选项c的操作b,显示界面b。该操作b用于触发手机播放动画效果。界面b用于播放模板选项c对应的动效模板a的动画效果。
其中,模板选项c也可以称为第一模板选项,操作b也可以称为第二操作,界面b也可以称为第二界面,动效模板a也可以称为第一动效模板。
在S503前,手机可接收用户对界面a的操作b。
在一些实施例中,操作b可以是用户对模板选项c的选择操作(如点击操作、长按操作等)。以操作b是用户对模板选项c的点击操作为例。手机可接收用户对图8示出的界面a 801中模板选项c 802的点击操作。
在另一些实施例中,用户对模板选项c的选择操作,只能触发手机选择动效模板a。而后,手机响应于用户对界面a的预设操作a,才能显示界面b。例如,该预设操作a可以是对界面a的预设手势b(如滑动手势)。或者,预设操作a是对界面a中未显示控件或图标的区域的长按操作。或者,界面a中包括控件d,控件d用于触发手机播放动画效果,预设操作a是用户对控件d的触发操作(如点击操作、长按操作等)。其中,控件d也可以称为第三控件。
以操作b包括用户对模板选项c的选择操作和对控件d的点击操作为例。如图9所示,S503包括S901-S904:
S901、手机接收用户对模板选项c的选择操作。该模板选项c是多个模板选项中的一个。模板选项c对应动效模板a。
S902、手机响应于该选择操作,选中模板选项c,在界面a中突出显示模板选项c。
例如,选择操作是用户对模板选项c的点击操作。手机可接收用户对图10中的(a)示出的界面a 1001中的模板选项c 1002的点击操作。该点击操作用于触发手机选择“温馨一刻”的动效模板,即“温馨一刻”的动效模板是动效模板a。该“温馨一刻”的动效模板是模板选项c 1002对应的动效模板。手机响应于用户对图10中的(a)示出的界面a 1001中的模板选项c 1002的点击操作,可显示图10中的(b)示出的界面a 1003,该界面a 1003中突出显示模板选项c 1004。
应注意,界面a中默认突出显示的是界面a中排在第一位或者中间位置的模板选项。例如,图6中的(b)示出的界面a 603中包括4个模板选项。其中,默认突出显示的是排在第一位的模板选项604。因此,在一种具体的实现方式中,若动效模板a是该第一位或者中间位置的模板选项对应的动效模板,则可以省略该S901和S902。
S903、手机接收用户对控件d的点击操作。
例如,手机可接收用户对图10中的(b)示出的界面a 1003中控件d 1005的点击操作。该点击操作用于触发手机播放“温馨一刻”的动效模板,即“温馨一刻”的动效模板是动效模板a。该“温馨一刻”的动效模板是模板选项c 1004对应的动效模板。应注意,图10中的(b)中示出的控件d 1005的形态和位置均为示例性的,实际实施时,并不以此为限。例如,控件d的形态也可以是一个圆角矩形内包括相机图标。又如,控件d也可以设置在界面a的右下角。
S904、手机响应于用户对控件d的点击操作,显示界面b。
由此可见,在本实施例中,手机响应于用户对模板选项c的选择操作,选中该模板选项c对应的动效模板a。而后,手机可根据用户进一步对界面a中的控件d的点击操作,来触发播放动效模板a。如此,可以在用户准确选中相应的模板选项后,才触发动效模板a的播放。从而可以提高触发播放的有序性。
在S503中,假设操作b包括用户对模板选项c的选择操作和对控件d的点击操作。手机响应于用户对图10中的(b)示出的控件d 1005的点击操作,可显示图11中的(a)示出的界面b 1101。该界面b 1101中包括窗口a 1102。该窗口a 1102用于播放动效模板a。
应理解,图11中的(a)示出的界面b仅为示例性的。该界面b是在界面a中的实时图像之上添加蒙层(如灰色蒙层),在蒙层之上显示多个模板选项、控件d等界面元素,并且在蒙层之上显示窗口a而得到的。但实际实施时,并不以此为限。
在一些实施例中,界面b可以是开始双镜头录像前的取景界面。与界面a不同的是,该界面b中包括窗口a。例如,界面b可以是图11中的(b)示出的界面b 1103。该界面b 1103中包括窗口a 1104,该窗口a 1104用于播放动效模板a。
在另一些实施例中,界面b可以是预设背景的界面。例如,预设背景可以是纯色背景。图11中的(c)示出的界面b 1105是纯黑背景的界面。该界面b 1105中包括窗口a 1106,该窗口a 1106用于播放动效模板a。
应理解,上述图11中的(a)示出的窗口a 1102、图11中的(b)示出的窗口a 1104和图11中的(c)示出的窗口a 1106的形态和位置仅为示例性的。实际实施时,并不以此为限。在一些实施例中,窗口a可以是全屏窗口。如此,则可以在播放动效模板过程中一比一还原出动效模板。在另一些实施例中,窗口a的形状与动效模板a适用的屏幕方向相适应。如此,便于指示用户调整屏幕方向。例如,动效模板a适用于对横屏形式下录制的视频进行动效处理,则窗口a的形状是宽度值比高度值大的矩形。又如,动效模板a适用于对竖屏形式下录制的视频进行动效处理,则窗口a的形状是宽度值比高度值小的矩形。
在一些实施例中,窗口a包括第一子窗口和第二子窗口,其中,第一子窗口用于播放子模板a,第二子窗口用于播放子模板b。第一子窗口和第二子窗口的显示布局与动效模板a适用的显示布局相匹配。如此,则可便于明确动效模板a适用的显示布局。其中,子模板a也可以称为第一子模板,子模板b也可以称为第二子模板。
在本实施例中,在显示界面b之前,手机检测动效模板a适用的显示布局b,其中,显示布局b包括上下显示布局、左右显示布局、横屏画中画显示布局或者竖屏画中画显示布局。而后,S503进一步包括:手机响应于操作b,显示界面b。该界面b中包括窗口a,该窗口a包括第一子窗口和第二子窗口,第一子窗口和第二子窗口以显示布局b来显示。其中,显示布局b也可以称为第一显示布局。
例如,手机响应于用户对图10中的(b)示出的界面a 1003中的控件d 1005的点击操作,可显示图11中的(a)示出的界面b 1101,该界面b 1101中包括窗口a 1102。该窗口a 1102包括第一子窗口1107和第二子窗口1108,其中,第一子窗口1107和第二子窗口1108以左右显示布局来显示,则表明图10中的(b)中的模板选项c 1004对应的动效模板适用于横屏形式下,以左右布局显示两个摄像头对应的预览流的场景。
在一些实施例中,该界面b中还包括多个模板选项,以便用户在界面b中重新选择动效模板a。如此,则无需返回至界面a,而在界面b中即可继续切换动效模板。例如,图11中的(a)示出的界面b 1101中包括多个模板选项,分别为:模板选项1109、模板选项1110、模板选项1111和模板选项1112。
S504、手机响应于用户对界面b的操作c,采用动效模板a处理摄像头c采集的实时图像a和摄像头d采集的实时图像b,以录制微电影。其中,摄像头c是多个摄像头中的一个摄像头,摄像头d是所述多个摄像头中除摄像头c之外的一个摄像头。
其中,操作c也可以称为第三操作,摄像头c也可以称为第一摄像头,实时图像a也可以称为第一实时图像,摄像头d也可以称为第二摄像头,实时图像b也可以称为第二实时图像。
在S504前,手机可接收用户对界面b的操作c,该操作c可以是用户在界面b中的预设手势c。例如,预设手势c是在界面b中从右向左的滑动手势。或者,界面b中包括控件e,该控件e用于触发手机开始微电影录制。操作c可以是对控件e的触发操作(如点击操作、长按操作等)。例如,操作c可以是用户对图12中的(a)示出的界面b 1201中控件e 1202的点击操作。
手机响应于用户对界面b的操作c,则可以采用动效模板a来进行动效处理,完成微电影的录制。例如,对摄像头c采集的实时图像a和摄像头d采集的实时图像b进行动效处理,以达到动效模板a的动画效果。
在一些实施例中,不同的动效模板适用于对不同的摄像头采集的预览图像进行动效处理。简言之,不同动效模板适用的摄像头是不同的。摄像头c和摄像头d是动效模板a适用的两个摄像头。
基于此,如图13所示,在本实施例中,该S504进一步包括S1301和S1302:
S1301、手机响应于用户对界面b的操作c,根据动效模板a适用的摄像头来启动摄像头c和摄像头d。
其中,动效模板a适用的摄像头c和摄像头d可以是前置摄像头、后置主摄像头、后置广角摄像头、后置超广角摄像头和后置长焦摄像头中任意两个摄像头的组合。
通常情况下,手机可以查询动效模板a的属性信息来获取其适用的摄像头。
S1302、在启动摄像头c和摄像头d后,手机采用动效模板a处理摄像头c采集的实时图像a和摄像头d采集的实时图像b,以录制微电影。
如此,则可以提高采用的摄像头与动效模板适用的摄像头的匹配度。从而可以提升动效处理的效果。
应注意,在本实施例中,摄像头c可能与摄像头a或摄像头b相同,也可能与摄像头a和摄像头b都不同;摄像头d可能与摄像头a或摄像头b相同,也可能与摄像头a和摄像头b都不同。
在另一些实施例中,摄像头c和摄像头d是在开始录制准备前就已开启的摄像头。例如,摄像头c是已开启的摄像头a,摄像头d是已开启的摄像头b。如此,可以在接收到操作c后,直接以已开启的两个摄像头分别作为摄像头c和摄像头d。从而减少确定摄像头的过程,可以快速进入动效处理。
综上所述,采用本申请实施例提供的视频拍摄方法,可以将在双镜头录像的场景下,依据用户选择的动效模板,来对两个摄像头采集的实时图像进行动效处理,得到具有动画效果的微电影。如此,则可实现双镜头录像场景下的微电影录制,从而可以录制得到丰富的双镜头视频内容。并且,无需用户控制取景等复杂的操作,可以降低录制微电影的难度。
在说明下文实施例之前,需要先说明的是:通常情况下,微电影中包括多个电影片段。相应的,动效模板a包括多个动效子模板,多个电影片段与多个动效子模板一一对应。每个动效子模板用于相应电影片段中采集到的实时图像的动效处理。从而可以处理得到动画效果更为丰富的微电影。
在一些实施例中,为了有针对性地对不同摄像头采集的实时图像进行动效处理,每个动效子模板进一步包括子模板a和子模板b。子模板a用于手机对实时图像a进行动效处理,子模板b用于手机对实时图像b进行动效处理。如此,则可以针对不同摄像头采用相应的子模板进行处理。从而针对同一时刻的预览图像处理得到不同的动画效果,可以进一步提高处理的效果。其中,子模板a也可以称为第一子模板,子模板b也可以称为第二子模板。
例如,动效模板a包括n个动效子模板,分别为第1个动效子模板、第2个动效子模板……第n个动效子模板。每个动效子模板包括子模板a和子模板b。假设每个电影片段都是2.5秒。如图14a所示,第1个动效子模板用于第1个电影片段(如第1个2.5秒)中的动效处理,具体的,第1个动效子模板中的子模板a用于第1个电影片段中摄像头c采集的实时图像a的动效处理,第1个动效子模板中的子模板b用于第1个电影片段中摄像头d采集的实时图像b的动效处理。同理,第2个动效子模板用于第2个电影片段(如第2个2.5秒)中的动效处理,具体的,第2个动效子模板中的子模板a用于第2个电影片段中摄像头c采集的实时图像a的动效处理,第2个动效子模板中的子模板b用于第2个电影片段中摄像头d采集的实时图像b的动效处理……第n个动效子模板用于第n个电影片段(如第n个2.5秒)中的动效处理,具体的,第n个动效子模板中的子模板a用于第n个电影片段中摄像头c采集的实时图像a的动效处理,第n个动效子模板中的子模板b用于第n个电影片段中摄像头d采集的实时图像b的动效处理。
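上述“第k个电影片段使用第k个动效子模板,子模板a、子模板b分别处理两路实时图像”的对应关系,可以用如下示意性的Python代码表达。其中的数据结构和字段名(如sub_a、sub_b)仅为演示而假设:

```python
def build_segment_plan(sub_templates, segment_duration=2.5):
    """为n个动效子模板生成逐电影片段的处理计划:第k个电影片段
    使用第k个动效子模板,其中子模板a处理摄像头c采集的实时图像a,
    子模板b处理摄像头d采集的实时图像b。"""
    return [
        {
            "segment": k,
            "duration": segment_duration,
            "camera_c_effect": sub["sub_a"],  # 子模板a -> 实时图像a
            "camera_d_effect": sub["sub_b"],  # 子模板b -> 实时图像b
        }
        for k, sub in enumerate(sub_templates, start=1)
    ]


plan = build_segment_plan([
    {"sub_a": "推", "sub_b": "拉"},  # 第1个动效子模板
    {"sub_a": "摇", "sub_b": "移"},  # 第2个动效子模板
])
print(plan[1]["segment"], plan[1]["camera_c_effect"])
```

这样,录制第k个电影片段时只需按plan[k-1]中的两个子模板分别处理两路实时图像。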
在另一些实施例中,如图14b所示,前述实施例的S504进一步包括S1401-S1402,在S504后,还包括S1403:
S1401、手机响应于事件a,显示界面c。该界面c是开始双摄像头录制前的取景界面。该界面c中包括摄像头c采集的实时图像c和摄像头d采集的实时图像d。该界面c中还包括多个电影片段的片段选项。
其中,界面c也可以称为第三界面,实时图像c可以称为第三实时图像,实时图像d可以称为第四实时图像。应注意,实时图像c和实时图像a并不存在本质上的区别,该两者都是摄像头c采集的实时图像,只不过是摄像头在不同的时段下采集的实时图像。实时图像a是在电影片段的录制过程中采集的实时图像,实时图像c是在电影片段的录制准备过程中(即显示界面c时)采集的实时图像。同样的,实时图像d和实时图像b也并不存在本质上的区别,理由同上。
其中,界面c是开始双摄像头录制前的取景界面。也就是说,在显示界面c时,并未真正开始录制。从而在显示该界面c的过程中,手机可根据用户对手机的移动来调整取景。并且调整过程中的取景变化不会被录制到视频中。在本实施例中,将该调整取景的过程称为录制准备。
在此,需要说明的是,显示界面c的情况有两种。第一种,手机响应于用户对界面b的操作c,从界面b跳转到界面c。也就是说,在第一种情况中,事件a是用户对界面b的操作c。第二种,手机响应于第k个电影片段录制完成的事件,跳回界面c。其中,1≤k≤n,n是动效模板a包括的动效子模板的数量。k和n都是正整数。也就是说,在第二种情况下,事件a是第k个电影片段录制完成的事件,此时的事件a也可以称为第一事件。下面将分别针对这两种情况来说明该S1401。
第一种情况,从界面b跳转到界面c。该情况下,事件a可以是用户对界面b的操作c,该操作c用于触发手机开始录制准备。
在S1401前,手机可接收用户对界面b的操作c。
其中,操作c可以是用户在界面b中的预设手势c。例如,预设手势c是在界面b中从右向左的滑动手势。或者,界面b中包括控件e,该控件e用于触发手机开始录制准备。操作c可以是对控件e的触发操作(如点击操作、长按操作等)。例如,操作c可以是用户对图12中的(a)示出的界面b 1201中控件e 1202的点击操作。
示例性的,手机响应于用户对图12中的(a)示出的界面b 1201中控件e 1202的点击操作,可以显示图12中的(b)示出的界面c 1203,该界面c 1203是开始双摄像头录制前的取景界面。该界面c 1203中包括摄像头c(如后置摄像头)采集的实时图像c 1204和摄像头d(如前置摄像头)采集的实时图像d 1205。该界面c 1203中还包括5个电影片段的片段选项,分别为:片段选项1206、片段选项1207、片段选项1208、片段选项1209和片段选项1210。
应理解,图12中的(b)中示出的片段选项的形态、数量以及位置等仅为示例性的。实际实施时,并不以此为限。例如,片段选项中也可以不显示片段时长(如2.5s)。又如,片段选项也可以是圆形、正方形等形状。其中,片段选项的数量因动效模板a包括的动效子模板数量的不同而不同。
第二种情况,在第k(1≤k≤n,k为正整数)个电影片段录制完成后,跳回界面c。该情况下,事件a可以是第k个电影片段录制完成的事件。例如,第k个电影片段的录制倒计时(如2.5s)结束时,触发事件a。其中,每次录制的第k个电影片段也可以称为第一电影片段。
在S1401之前,手机可检测第k个电影片段的录制倒计时是否结束。若检测到录制倒计时结束,则显示界面c。
在该情况下,显示界面c时,已完成了k个电影片段的录制。在一些实施例中,为了区分已完成录制的电影片段和未完成录制的电影片段,在界面c中区别显示片段选项a和片段选项b。其中,每个片段选项a对应一个已录制完成的电影片段,每个片段选项b对应一个未录制完成的电影片段。具体的,界面c中包括p个片段选项a和q个片段选项b。p≥0,q≥0,p和q都是自然数。通常而言,p=k,q=n-k。其中,片段选项a也可以称为第一片段选项,片段选项b也可以称为第三片段选项。
在一种具体的实现方式中,片段选项a中包括片段封面,片段选项b中不包括片段封面,以区分已完成录制的电影片段和未完成录制的电影片段。
例如,k=1,则手机可显示图15中的(a)示出的界面c 1501。该界面c 1501中指向第1个电影片段的片段选项a 1502显示有封面,而其余片段选项则不显示封面。
又如,k=2,则手机可显示图15中的(b)示出的界面c 1503。该界面c 1503中指向第1个电影片段的片段选项a 1504和指向第2个电影片段的片段选项a 1505均显示有封面,而其余片段选项则不显示封面。
又如,k=5,则手机可显示图15中的(c)示出的界面c 1506。该界面c 1506中所有片段选项均为片段选项a,相应的,片段选项a 1507、片段选项a 1508、片段选项a 1509、片段选项a 1510以及片段选项a 1511均显示有封面。
应注意,封面可以从该片段选项a指向的电影片段包括的视频帧中选取。例如,封面可以是相应电影片段的第1帧预览图像或者最后1帧预览图像。
应注意,上述两种情况先后出现在微电影录制的过程中,其中,第一种情况对应微电影录制过程中首次进入界面c的情况,第二种情况对应微电影录制过程中再次进入界面c的情况。
在上述两种情况的一些实施例中,界面c中还包括窗口b,窗口b用于播放动效模板a中各个动效子模板的动效。具体的,窗口b中播放的是即将要录制的电影片段对应的动效子模板(如第一动效子模板)的动画效果。如此,有利于在录制准备阶段中,参照窗口b中播放的动画效果来调整即将来录制的电影片段的取景。其中,窗口b也可以称为第一窗口。
例如,如图16中的(a)所示,当前选中的是第二个片段选项,则表明即将录制的是第二个电影片段,从而界面c 1601中的窗口b 1602中播放的是第二个电影片段对应的动效子模板的动画效果。
应注意,在所有电影片段都已录制完成的情况下,则不存在调整取景的需求,进而界面c中不再显示窗口b。例如,图15中的(c)示出的界面c 1506中不包括窗口b。
并且,在本实施例中,手机响应于用户对该窗口b的关闭操作,可隐藏窗口b。从而可以简化界面元素,更有利于预览。例如,手机响应于用户对图16中的(a)示出的界面c 1601中窗口b 1602右上角的关闭按钮“x”的点击操作,可显示图16中的(b)示出的界面c 1603,该界面c 1603中包括隐藏标识1604。该隐藏标识1604用于触发恢复显示窗口b 1602。也就是说,窗口b 1602隐藏为该隐藏标识1604。
在手机显示界面c后,可以在该界面c中进行录制准备。在准备完成后,则可触发进入双摄像头录制。依次进行多个电影片段的录制,同时需要利用各个动效子模板依次对各个电影片段中采集到的实时图像进行动效处理。具体的,针对第k个电影片段,录制过程如下述S1402所示:
S1402、手机响应于用户对界面c的操作d,显示界面d。该操作d用于触发手机开始录制第k个电影片段。第k个电影片段是多个电影片段中的任一电影片段。第k个电影片段对应第一动效子模板。该界面d是手机正在录像的取景界面,界面d中包括预览图像a和预览图像b。预览图像a是手机采用第一动效子模板对摄像头c采集的实时图像a进行动效处理得到的,预览图像b是手机采用第一动效子模板对摄像头d采集的实时图像b进行动效处理得到的。
其中,操作d也可以称为第四操作,界面d也可以称为第四界面,第k个电影片段也可以称为第一电影片段,预览图像a也可以称为第一预览图像,预览图像b也可以称为第二预览图像。
在S1402之前,手机需要确定即将录制的第k个电影片段。具体的,手机响应于事件b,选中片段选项c。片段选项c是q个片段选项b中的一个。片段选项c对应一个未完成录制的电影片段,即第k个电影片段。其中,事件b也可以称为第三事件,片段选项c也可以称为第四片段选项。
在一些实施例中,第k个电影片段可以是手机自动选中的。也就是说,事件b是手机自动选择片段选项c的事件。例如,手机按照多个电影片段的顺序由前至后依次选择第1个电影片段、第2个电影片段……第n个电影片段作为即将录制的第k个电影片段。并在界面c中突出显示该第k个电影片段对应的片段选项c。如此,则可以按照先后顺序依次录制各个电影片段,使录制的时序与电影片段的顺序一致。
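手机按电影片段的先后顺序自动选中下一个未录制完成的片段的逻辑,可示意如下。其中用布尔列表表示各片段是否已录制完成,仅为演示假设:

```python
def next_segment_to_record(recorded):
    """按电影片段的先后顺序,返回第一个未录制完成的片段序号k
    (从1开始);若n个片段全部录制完成,则返回None。"""
    for k, done in enumerate(recorded, start=1):
        if not done:
            return k
    return None


# 前2个片段已录制完成,即将录制的是第3个电影片段
print(next_segment_to_record([True, True, False, False, False]))
# 输出: 3
```

手机随后在界面c中突出显示该片段序号对应的片段选项c,使录制的时序与电影片段的顺序一致。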
在本实施例中,操作d可以是用户对界面c的预设手势c。例如,该预设手势c是对图16中的(b)示出的界面c 1603中从下向上的滑动手势。或者,该界面c中包括控件f,控件f用于触发手机开始双摄像头录制。操作d是对控件f的触发操作(如点击操作、长按操作等)。例如,操作d可以是用户对图16中的(b)示出的界面c 1603中控件f 1605的点击操作。该控件f也可以称为第一控件。
在另一些实施例中,第k个电影片段由用户手动选中。也就是说,事件b可以是用户手动对片段选项c的选择操作。例如,手机响应于用户对第3个片段选项(即片段选项c是第3个片段选项)的选择操作,则可确定即将录制的第k个电影片段是第3个电影片段。
在本实施例中,操作d可以是用户对片段选项c的选择操作。或者,操作d可以是在选中该片段选项c的情况下,用户对界面c的预设手势c。例如,该预设手势c是对图16中的(b)示出的界面c 1603中从下向上的滑动手势。或者,该界面c中包括控件f,控件f用于触发手机开始双摄像头录制。操作d是对控件f的触发操作(如点击操作、长按操作等)。
在S1402中,示例性的,以k=1为例。手机响应于用户对图16中的(b)示出的界面c 1603中控件f 1605的点击操作,显示图17示出的界面d 1701。该界面d 1701中包括预览图像a 1702和预览图像b 1703。其中,该预览图像a 1702是手机根据第一个动效子模板中的子模板a对摄像头c采集的实时图像a进行动效处理后得到;该预览图像b 1703是手机根据第一个动效子模板中的子模板b对摄像头d采集的实时图像b进行动效处理后得到的。
其中,界面d中显示的预览图像a和预览图像b都是经过动效处理后的预览图像。如此,则可以在录制过程中,从界面d中实时查看到动效处理后的效果。
在一些实施例中,界面d中还包括提示信息a,该提示信息a用于提示录制动感视频的技巧。例如,图17示出的界面d 1701中包括提示信息a 1704,该提示信息a 1704的具体内容为:无需用户移动手机,自动拍出动感视频。
在一些实施例中,界面d中还包括第k个电影片段的录制倒计时。如此,则可以明确提示第k个电影片段的剩余录制时长。
常规情况下,在双镜头录像过程中,手机可以响应于用户的操作f,对调取景界面中两个摄像头的取景框,以实现实时图像的灵活对调。其中,操作f也可以称为第六操作。
然而,在本申请的一些实施例中,为了使界面d中的预览与第k个电影片段对应的动效子模板的动画效果完全一致,手机屏蔽用户对界面d的操作f。该操作f用于触发手机互换界面d中摄像头c的取景框和摄像头d的取景框。换言之,手机不响应用户对界面d的操作f。如此,则可以避免在动效处理过程中,因对调而导致得到的预览与第k个电影片段对应的动效子模板的动画效果不一致。从而提高前后预览的一致性。
其中,操作f可以是对预览图像a或者预览图像b的双击操作,或者是对预览图像a或者预览图像b的拖拽操作。以操作f是对预览图像a或者预览图像b的双击操作为例。手机不响应用户对图17示出的界面d 1701中预览图像a 1702或者预览图像b 1703的双击操作。
在第k个电影片段的录制倒计时结束时,则会触发事件a,进而返回S1401中,显示界面c。而后手机响应于用户对界面c的操作d,显示界面d,进入下一个电影片段的录制。如此循环往复,直至最终n个电影片段全部录制完成,则循环结束。
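上述“录制准备(界面c)→录制(界面d)→倒计时结束(事件a)→返回界面c”的循环过程,可以用如下示意性代码概括。其中record_segment是为演示而假设的回调,代表一次S1402的录制过程:

```python
def record_micro_movie(n, record_segment):
    """依次录制第1至第n个电影片段:每个片段对应一次S1401(录制准备)
    和S1402(录制);片段的录制倒计时结束触发事件a,返回界面c,
    直至n个片段全部录制完成,循环结束。返回已录制的片段数。"""
    for k in range(1, n + 1):
        record_segment(k)  # 对应S1402:显示界面d,录制第k个电影片段
        # 录制倒计时结束 -> 事件a -> 显示界面c(S1401),准备下一片段
    return n


recorded = []
record_micro_movie(5, recorded.append)
print(recorded)
# 输出: [1, 2, 3, 4, 5]
```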
为便于对该循环过程的理解,下面以n=5,并且以手机顺序选中片段选项的方式为例来具体说明S1401-S1402的过程:
第1个电影片段的录制:手机响应于用户对图12中的(a)示出的界面b 1201中控件e 1202的点击操作,显示图12中的(b)示出的界面c 1203,此时进入第1个电影片段的录制准备。而后,手机响应于用户对图12中的(b)示出的界面c 1203中控件f的点击操作,显示图17示出的界面d 1701。此时进入第1个电影片段的录制,直至2.5s录制倒计时结束,则第1个电影片段录制结束。
第2个电影片段的录制:手机响应于第1个电影片段的2.5s录制倒计时结束,显示图15中的(a)示出的界面c 1501,此时进入第2个电影片段的录制准备。而后,手机响应于用户对图15中的(a)示出的界面c 1501中控件f的点击操作,显示图17示出的界面d 1701。此时进入第2个电影片段的录制,直至2.5s录制倒计时结束,则第2个电影片段录制结束。
第3个电影片段的录制:手机响应于第2个电影片段的2.5s录制倒计时结束,显示图15中的(b)示出的界面c 1503,此时进入第3个电影片段的录制准备。而后,手机响应于用户对图15中的(b)示出的界面c 1503中控件f的点击操作,显示图17示出的界面d 1701。此时进入第3个电影片段的录制,直至2.5s录制倒计时结束,则第3个电影片段录制结束。
如此循环往复,直至第5个电影片段录制结束。手机响应于第5个电影片段的2.5s录制倒计时结束,显示图15中的(c)示出的界面c 1506。
S1403、手机响应于事件c,生成视频文件a。该事件c用于触发手机保存具有动感效果的视频。该视频文件a包括n段第一视频流和n段第二视频流,其中,第k段第一视频流包括第k个电影片段中处理得到的多帧预览图像a,第k段第二视频流包括第k个电影片段中处理得到的多帧预览图像b。
其中,事件c也可以称为第二事件,视频文件a也可以称为第一视频文件。
在S1403之前,手机可接收事件c。其中,该事件c可以是手机自动触发的事件。例如,在n个电影片段全部录制完成后,则触发事件c。又如,在n个电影片段全部录制完成后,若用户在预设时间内对界面c无任何操作,则触发事件c。
或者,该事件c也可以是用户触发的事件。例如,在n个电影片段全部录制完成后,手机显示的界面c中包括控件g,该控件g用于触发手机保存具有动感效果的视频。事件c可以是用户对该控件g的触发操作(如点击操作、长按操作等)。又如,在n个电影片段全部录制完成后,手机显示的界面c中包括控件h,该控件h用于触发手机在界面c中显示控件i和控件j。其中,控件i用于触发手机保存具有动感效果的视频,控件j用于触发手机删除该具有动感效果的视频。事件c是对控件i的触发操作(如点击操作、长按操作等)。
例如,假设n=5,则5个电影片段全部录制完成后,手机可显示图18中的(a)示出的界面c 1801。该界面c 1801中包括控件h 1802。手机可接收用户对该控件h 1802的点击操作。手机响应于用户对该控件h 1802的点击操作,可显示图18中的(b)示出的界面c 1803,该界面c 1803中包括控件i 1804和控件j 1805。控件i 1804用于触发手机保存具有动感效果的视频,控件j 1805用于触发手机删除该具有动感效果的视频。事件c是对控件i 1804的点击操作。
而后,手机响应于事件c,生成视频文件a。在一些实施例中,手机响应于事件c,在界面c中显示提示信息b,该提示信息b用于提示生成视频文件的进度。如此,则可以直观的显示生成进度。
例如,事件c是用户对图18中的(b)示出的界面c 1803中控件i 1804的点击操作。手机响应于用户对该控件i 1804的点击操作,可以显示图19示出的界面c 1901。该界面c 1901中包括提示信息b 1902,该提示信息b 1902提示生成视频文件的进度为25%。
示例性的,如图20所示,视频文件a包括n段第一视频流,即:第1段第一视频流、第2段第一视频流……第n段第一视频流。其中,将第1个电影片段中(如第1个2.5s)的所有预览图像a按时序拼接则可得到第1段第一视频流,将第2个电影片段中(如第2个2.5s)的所有预览图像a按时序拼接则可得到第2段第一视频流……将第n个电影片段中(如第n个2.5s)的所有预览图像a按时序拼接则可得到第n段第一视频流。
并且,视频文件a还包括n段第二视频流,即:第1段第二视频流、第2段第二视频流……第n段第二视频流。其中,将第1个电影片段中(如第1个2.5s)的所有预览图像b按时序拼接则可得到第1段第二视频流,将第2个电影片段中(如第2个2.5s)的所有预览图像b按时序拼接则可得到第2段第二视频流……将第n个电影片段中(如第n个2.5s)的所有预览图像b按时序拼接则可得到第n段第二视频流。
最后,手机生成包括n段第一视频流和n段第二视频流的视频文件a。如此,则得到了具有动感效果的视频文件。
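将各电影片段中处理得到的预览图像按时序拼接为n段第一视频流和n段第二视频流,再合成视频文件a的过程,可示意如下。其中帧用字符串占位,函数名与结构均为演示假设:

```python
def build_video_file(frames_a_per_segment, frames_b_per_segment):
    """frames_x_per_segment[k-1]是第k个电影片段中按时序排列的
    预览图像a/b序列;逐片段拼接得到n段第一视频流和n段第二视频流,
    合成为视频文件a。"""
    return {
        "first_streams": [list(f) for f in frames_a_per_segment],
        "second_streams": [list(f) for f in frames_b_per_segment],
    }


video_file_a = build_video_file(
    [["a1-1", "a1-2"], ["a2-1", "a2-2"]],  # 2个片段的预览图像a
    [["b1-1", "b1-2"], ["b2-1", "b2-2"]],  # 2个片段的预览图像b
)
print(len(video_file_a["first_streams"]), len(video_file_a["second_streams"]))
# 输出: 2 2
```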
综上所述,采用本申请实施例的方法,在双镜头录像视频的过程中,手机可以根据用户选择的动效模板实时对两个摄像头采集的实时图像进行动效处理,并将处理得到的预览图像显示在录制中的取景界面中。如此,可以降低录制得到具有动感效果的视频的难度。而且可以实时向用户呈现动效处理后的结果,有利于实时预览录制的结果。
在录制结束后,手机生成具有动感效果的视频文件a。如此,可以智能得到具有动感效果的视频。
进一步的,在生成视频文件a之前,需要得到n段第一视频流和n段第二视频流。而后,生成包括n段第一视频流和n段第二视频流的视频文件a。
在一些实施例中,手机响应于事件c,得到n段第一视频流和n段第二视频流。如此,则可以在触发手机保存具有动感效果的视频后,集中处理得到所有的视频流。从而可以避免反复多次调用同一程序。
在另一些实施例中,手机响应于第k个电影片段录制完成的事件(如录制倒计时结束),处理得到该第k个电影片段对应的第k段第一视频流和第k段第二视频流。如此,则可以在相应电影片段的录制完成后,及时得到与该电影片段对应的第一视频流和第二视频流。
并且,在一种具体的实现方式中,及时得到第k段第一视频流和第k段第二视频流,还有利于用户及时查看到已完成录制的各个电影片段的录制效果。并且在录制效果不佳的情况下启动重录。其中,重录即重新录制。具体地,如图21所示,重录的过程包括S2101-S2103。
S2101、在至少一个电影片段录制完成后,手机响应于用户对界面c中片段选项d的操作e,在界面c中摄像头c的取景框内播放第r段第一视频流,在界面c中摄像头d的取景框内播放第r段第二视频流。其中,r是片段选项d对应的电影片段的序号。片段选项d是p个片段选项a中的一个片段选项。该操作e是对片段选项d的选择操作。
其中,片段选项d也可以称为第二片段选项。第二片段选项对应第二电影片段,第二电影片段由第r段第一视频流和第r段第二视频流组成。片段选项a是已录制完成的电影片段对应的片段选项。
在S2101之前,手机可以接收用户对界面c中片段选项d的操作e。其中,该操作e可以是点击操作或者长按操作等。
示例性的,在第1个电影片段和第2个电影片段录制完成后,手机可显示图22中的(a)示出的界面c 2201,该界面c 2201中共包括5个片段选项,其中片段选项2202和片段选项2203均有封面,表示第1个电影片段和第2个电影片段已录制完成。手机可接收用户对第2个片段选项2203的点击操作。手机响应于用户对第2个片段选项2203的点击操作,可显示图22中的(b)示出的界面c 2204。该界面c 2204包括摄像头c的取景框2205和摄像头d的取景框2206。其中,取景框2205中播放第2个电影片段对应的第2段第一视频流,取景框2206中播放第2个电影片段对应的第2段第二视频流。
S2102、手机响应于用户对界面c的操作d,在界面c中显示提示信息c。该提示信息c用于提示是否重录该片段选项d对应的电影片段。
其中,提示信息c也可以称为第一提示信息。
在S2102之前,手机可以接收用户对界面c的操作d。其中,关于操作d的说明,可参见前文S1402中的相关说明,此处不再赘述。
示例性的,以界面c中包括控件f,操作d是对控件f的点击操作为例。手机可接收用户对图22中的(b)示出的控件f 2207的点击操作。手机响应于用户对该控件f 2207的点击操作,可显示图22中的(c)示出的界面c 2208。该界面c 2208中包括提示信息c 2209,该提示信息c 2209的具体内容是:是否重录该段视频。
在本实施例中,手机响应于操作d,并未直接进入录制,而是先提示用户是否需要重录。从而可以更准确的实现重录。
S2103、手机响应于用户对提示信息c的操作g,显示界面d,以重拍该片段选项d对应的电影片段。其中,该操作g用于触发手机开始重新录制该片段选项d对应的电影片段。
其中,操作g也可以称为第五操作。
在S2103之前,手机可以接收用户对提示信息c的操作g。其中,操作g可以是对提示信息c的预设手势d。例如,预设手势d是对图22中的(c)示出的提示信息c 2209中“重拍”的圈中手势。或者,提示信息c中包括控件1和控件2,控件1用于触发手机取消重录,控件2用于触发手机开始重录。操作g可以是对控件2的点击操作。例如,图22中的(c)所示的提示信息c 2209中的“取消”按钮为控件1,“重拍”按钮为控件2。
手机响应于用户对提示信息c的操作g,显示界面d。关于该界面d,可参见前文S1402中的相关说明,此处不再赘述。
前文实施例中,都是针对顺利完成至少一个电影片段的录制的情况在说明。在实际中,在未完成多个电影片段的录制之前,用户可能会退出录制。
在该情况中,手机响应于用户对界面c的操作h,在界面c中显示提示信息d。其中,操作h用于触发手机退出微电影录制。提示信息d用于提示用户是否保留已录制完成的电影片段的视频流(如第一视频流和第二视频流)。手机响应于用户对提示信息d的操作i,保存已录制完成的电影片段的视频流,其中,操作i用于触发手机保存视频流。而后,当用户再次进入界面a,手机响应于用户对界面a的操作a,显示界面c。其中,界面c中包括多个片段选项,并且,多个片段选项中指向上次已完成录制的电影片段的片段选项a与指向上次未完成录制的电影片段的片段选项b区别显示。如此,则可以在下次进入录制时,在上一次保留的电影片段的视频流的基础上进一步录制。
示例性的,操作h是用户在界面c中从左向右的滑动手势,操作i是用户对提示信息d中第三控件(如图23中的(b)所示的提示信息d 2304中的“保留”按钮)的点击操作。在完成2个电影片段的录制之后,手机显示图23中的(a)示出的界面c 2301。手机响应于用户在该界面c 2301中从左到右的滑动手势,可显示图23中的(b)示出的界面c 2302。该界面c 2302中包括提示信息d 2303。手机响应于用户对该提示信息d 2303中的“保留”按钮的点击操作,保存第1个电影片段对应的第1段视频流和第2个电影片段对应的第2段视频流。而后,当用户再次进入图6中的(a)示出的界面a 601,手机响应于用户对该界面a 601中的控件c 602的点击操作,可显示图23中的(a)示出的界面c 2301。
最后,需要说明的是,界面c中包括片段选项,则可以明确指示电影片段的数量、时长等信息,还可以明确指示已录制完成的电影片段和未录制完成的电影片段,或者,也可以方便用户选择电影片段。而在另一些实施例中,为了简化界面元素,界面c中也可以不包括片段选项。例如,在由手机自动选定即将录制的电影片段的实施例中,不存在用户选择即将录制的电影片段的需求,从而界面c中可以不包括片段选项。
本申请另一些实施例提供了一种电子设备,该电子设备可以包括:上述显示屏(如触摸屏)、存储器和一个或多个处理器。该显示屏、存储器和处理器耦合。该存储器用于存储计算机程序代码,该计算机程序代码包括计算机指令。当处理器执行计算机指令时,电子设备可执行上述方法实施例中手机执行的各个功能或者步骤。该电子设备的结构可以参考图5所示的手机400的结构。
本申请实施例还提供一种芯片系统,如图24所示,该芯片系统2400包括至少一个处理器2401和至少一个接口电路2402。处理器2401和接口电路2402可通过线路互联。例如,接口电路2402可用于从其它装置(例如电子设备的存储器)接收信号。又例如,接口电路2402可用于向其它装置(例如处理器2401)发送信号。示例性的,接口电路2402可读取存储器中存储的指令,并将该指令发送给处理器2401。当所述指令被处理器2401执行时,可使得电子设备执行上述实施例中的各个步骤。当然,该芯片系统还可以包含其他分立器件,本申请实施例对此不作具体限定。
本申请实施例还提供一种计算机存储介质,该计算机存储介质包括计算机指令,当所述计算机指令在上述电子设备上运行时,使得该电子设备执行上述方法实施例中手机执行的各个功能或者步骤。
本申请实施例还提供一种计算机程序产品,当所述计算机程序产品在计算机上运行时,使得所述计算机执行上述方法实施例中手机执行的各个功能或者步骤。
通过以上实施方式的描述,所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,仅以上述各功能模块的划分进行举例说明,实际应用中,可以根据需要而将上述功能分配由不同的功能模块完成,即将装置的内部结构划分成不同的功能模块,以完成以上描述的全部或者部分功能。
在本申请所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。例如,以上所描述的装置实施例仅仅是示意性的,例如,所述模块或单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,例如多个单元或组件可以结合或者可以集成到另一个装置,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是一个物理单元或多个物理单元,即可以位于一个地方,或者也可以分布到多个不同地方。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本申请各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。
所述集成的单元如果以软件功能单元的形式实现并作为独立的产品销售或使用时,可以存储在一个可读取存储介质中。基于这样的理解,本申请实施例的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的全部或部分可以以软件产品的形式体现出来,该软件产品存储在一个存储介质中,包括若干指令用以使得一个设备(可以是单片机,芯片等)或处理器(processor)执行本申请各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(read only memory,ROM)、随机存取存储器(random access memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
以上内容,仅为本申请的具体实施方式,但本申请的保护范围并不局限于此,任何在本申请揭露的技术范围内的变化或替换,都应涵盖在本申请的保护范围之内。因此,本申请的保护范围应以所述权利要求的保护范围为准。

Claims (15)

  1. 一种视频拍摄方法,其特征在于,应用于包括多个摄像头的电子设备,所述方法包括:
    所述电子设备显示第一界面;其中,所述第一界面是所述电子设备开始录像前的取景界面,所述第一界面包括所述多个摄像头中的两个摄像头采集的实时图像;
    所述电子设备响应于用户在所述第一界面的第一操作,在所述第一界面显示多个模板选项;所述第一操作用于触发所述电子设备录制微电影,每个模板选项对应一种图像处理的动效模板,所述动效模板用于处理所述多个摄像头中至少两个摄像头采集的预览图像并得到相应的动画效果;
    所述电子设备响应于用户对所述多个模板选项中第一模板选项的第二操作,显示第二界面;其中,所述第二界面用于播放所述第一模板选项对应的第一动效模板的动画效果;
    所述电子设备响应于用户在所述第二界面的第三操作,采用所述第一动效模板处理第一摄像头采集的第一实时图像和第二摄像头采集的第二实时图像,以录制微电影;其中,所述第一摄像头是所述多个摄像头中的一个摄像头,所述第二摄像头是所述多个摄像头中除第一摄像头之外的一个摄像头。
  2. 根据权利要求1所述的方法,其特征在于,所述采用所述第一动效模板处理第一摄像头采集的实时图像和第二摄像头采集的实时图像,包括:
    所述电子设备显示第三界面;其中,所述第三界面是所述电子设备开始录像前的取景界面;所述第三界面包括所述第一摄像头采集的第三实时图像和所述第二摄像头采集的第四实时图像;采用所述第一动效模板录制的微电影由多个电影片段组成,所述第一动效模板包括多个动效子模板,所述多个电影片段与所述多个动效子模板一一对应;
    所述电子设备接收用户在所述第三界面的第四操作,所述第四操作用于触发所述电子设备录制第一电影片段,所述第一电影片段是所述多个电影片段中的任一电影片段;所述第一电影片段对应第一动效子模板;
    所述电子设备响应于所述第四操作,显示第四界面;其中,所述第四界面是所述电子设备正在录像的取景界面,所述第四界面中包括第一预览图像和第二预览图像;所述第一预览图像是所述电子设备采用所述第一动效子模板对所述第一实时图像进行动效处理得到的,所述第二预览图像是所述电子设备采用所述第一动效子模板对所述第二实时图像进行动效处理得到的。
  3. 根据权利要求2所述的方法,其特征在于,在所述电子设备显示第四界面后,所述方法还包括:所述电子设备响应于第一事件,显示所述第三界面;其中,所述第一事件是所述第一电影片段录制完成的事件。
  4. 根据权利要求2或3所述的方法,其特征在于,所述第三界面中还包括第一窗口,所述第一窗口用于播放第一动效子模板对应的动画效果。
  5. 根据权利要求2-4中任一项所述的方法,其特征在于,每个动效子模板包括第一子模板和第二子模板;其中,第一子模板用于所述电子设备对所述第一实时图像进行动效处理,第二子模板用于所述电子设备对所述第二实时图像进行动效处理。
  6. 根据权利要求2-5中任一项所述的方法,其特征在于,在所述采用所述第一动效模板处理第一摄像头采集的第一实时图像和第二摄像头采集的第二实时图像之后,所述方法还包括:
    所述电子设备响应于第二事件,保存第一视频文件;其中,所述第二事件用于触发所述电子设备保存处理后的视频;所述第一视频文件包括多段第一视频流和多段第二视频流;多段第一视频流和多个电影片段一一对应,多段第二视频流和多个电影片段一一对应;每段第一视频流包括相应电影片段中处理得到的多帧所述第一预览图像,每段第二视频流包括相应电影片段中处理得到的多帧所述第二预览图像。
  7. 根据权利要求2-6中任一项所述的方法,其特征在于,所述第三界面中包括p个第一片段选项;其中,p≥0,p是自然数,p是录制完成的电影片段的片段数量;每个第一片段选项对应一个已录制完成的电影片段;所述第三界面中还包括第一控件;
    所述方法还包括:所述电子设备接收用户对第二片段选项的选择操作,所述第二片段选项是p个第一片段选项中的一个;所述第二片段选项对应第二电影片段;
    所述电子设备响应于用户对所述第二片段选项的选择操作,在所述第三界面中播放所述第二电影片段中处理得到的多帧所述第一预览图像和所述第二电影片段中处理得到的多帧所述第二预览图像;
    所述电子设备响应于用户对所述第一控件的点击操作,在所述第三界面中显示第一提示信息;所述第一提示信息用于提示是否重拍所述第二电影片段;
    所述电子设备响应于用户对所述第一提示信息的第五操作,显示第四界面,以重拍所述第二电影片段。
  8. 根据权利要求2-7中任一项所述的方法,其特征在于,所述第三界面中包括q个第三片段选项;其中,q≥0,q是自然数,q是未录制完成的电影片段的片段数量;每个第三片段选项对应一个未录制完成的电影片段;所述第三界面中还包括第一控件;
    在所述电子设备接收用户在所述第三界面的第四操作之前,所述方法还包括:所述电子设备响应于第三事件,选中第四片段选项;所述第四片段选项是q个第三片段选项中的一个;所述第四片段选项对应第一电影片段;
    其中,所述第四操作是在选中所述第四片段选项的情况下,用户对所述第一控件的点击操作。
  9. 根据权利要求2-8中任一项所述的方法,其特征在于,在所述显示第四界面之后,所述方法还包括:
    所述电子设备不响应用户对所述第四界面的第六操作,所述第六操作用于触发所述电子设备互换所述第四界面中所述第一摄像头的取景框和所述第二摄像头的取景框。
  10. 根据权利要求9所述的方法,其特征在于,所述第六操作包括长按操作或者拖拽操作。
  11. 根据权利要求1-10中任一项所述的方法,其特征在于,不同动效模板适用不同的摄像头;所述方法还包括:
    所述电子设备响应于用户在所述第二界面的第三操作,启动所述第一摄像头和所述第二摄像头;所述第一摄像头和所述第二摄像头是所述第一动效模板适用的摄像头。
  12. 根据权利要求1-11中任一项所述的方法,其特征在于,所述第一界面中包括第二控件,所述第二控件用于触发所述电子设备显示多个模板选项;所述第一操作是对所述第二控件的点击操作或长按操作。
  13. 根据权利要求1-12中任一项所述的方法,其特征在于,所述第一界面中包括第三控件;
    其中,所述电子设备响应于用户对所述多个模板选项中第一模板选项的第二操作,显示第二界面,包括:
    所述电子设备响应于用户对所述多个模板选项中第一模板选项的选择操作,选中第一模板选项;
    在所述电子设备选中第一模板选项的情况下,所述电子设备响应于用户对所述第三控件的点击操作,显示第二界面。
  14. 一种电子设备,其特征在于,所述电子设备包括多个摄像头、显示屏、存储器和一个或多个处理器;所述多个摄像头、显示屏、所述存储器和所述处理器耦合;所述存储器用于存储计算机程序代码,所述计算机程序代码包括计算机指令,当所述计算机指令被所述处理器执行时,使得所述电子设备执行权利要求1-13中任一项所述的方法。
  15. 一种计算机可读存储介质,所述计算机可读存储介质中存储有指令,其特征在于,当所述指令在电子设备上运行时,使得所述电子设备执行如权利要求1-13中任一项所述的方法。