WO2024067494A1 - Video material editing method and device - Google Patents
Video material editing method and device
- Publication number
- WO2024067494A1 (PCT application No. PCT/CN2023/121138)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video
- split
- track
- screen
- editing
- Prior art date
Links
- 238000000034 method Methods 0.000 title claims abstract description 85
- 239000000463 material Substances 0.000 claims abstract description 178
- 239000012634 fragment Substances 0.000 claims description 43
- 238000004590 computer program Methods 0.000 claims description 24
- 230000008569 process Effects 0.000 claims description 18
- 238000012545 processing Methods 0.000 claims description 17
- 230000004044 response Effects 0.000 claims description 8
- 230000008859 change Effects 0.000 claims description 4
- 238000013507 mapping Methods 0.000 claims description 4
- 238000003672 processing method Methods 0.000 claims description 4
- 238000003780 insertion Methods 0.000 claims description 3
- 230000037431 insertion Effects 0.000 claims description 3
- 230000000694 effects Effects 0.000 abstract description 24
- 230000006870 function Effects 0.000 description 13
- 238000010586 diagram Methods 0.000 description 8
- 230000008676 import Effects 0.000 description 7
- 230000001960 triggered effect Effects 0.000 description 6
- 230000000007 visual effect Effects 0.000 description 3
- 230000003993 interaction Effects 0.000 description 2
- 239000007787 solid Substances 0.000 description 2
- 230000003190 augmentative effect Effects 0.000 description 1
- 230000000739 chaotic effect Effects 0.000 description 1
- 238000012790 confirmation Methods 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000011161 development Methods 0.000 description 1
- 238000012986 modification Methods 0.000 description 1
- 230000004048 modification Effects 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 239000004065 semiconductor Substances 0.000 description 1
Classifications
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/036—Insert-editing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04845—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/0486—Drag-and-drop
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04883—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures for inputting data by handwriting, e.g. gesture or text
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0487—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser
- G06F3/0488—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures
- G06F3/04886—Interaction techniques based on graphical user interfaces [GUI] using specific features provided by the input device, e.g. functions controlled by the rotation of a mouse with dual sensing arrangements, or of the nature of the input device, e.g. tap gestures based on pressure sensed by a digitiser using a touch-screen or digitiser, e.g. input of commands through traced gestures by partitioning the display area of the touch-screen or the surface of the digitising tablet into independently controllable areas, e.g. virtual keyboards or menus
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/02—Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
- G11B27/031—Electronic editing of digitised analogue information signals, e.g. audio or video signals
- G11B27/034—Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
-
- G—PHYSICS
- G11—INFORMATION STORAGE
- G11B—INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
- G11B27/00—Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
- G11B27/10—Indexing; Addressing; Timing or synchronising; Measuring tape travel
- G11B27/34—Indicating arrangements
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/44—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
- H04N21/44016—Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N5/00—Details of television systems
- H04N5/222—Studio circuitry; Studio devices; Studio equipment
- H04N5/262—Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
- H04N5/265—Mixing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2203/00—Indexing scheme relating to G06F3/00 - G06F3/048
- G06F2203/048—Indexing scheme relating to G06F3/048
- G06F2203/04803—Split screen, i.e. subdividing the display area or the window area into separate subareas
Definitions
- the present invention relates to a video material editing method and device.
- the present disclosure provides a video material editing method and device.
- the present disclosure provides a video material editing method, comprising:
- the plurality of video track segments are formed based on the plurality of video materials, at least one video material among the plurality of video materials is used to form one video track segment among the plurality of video track segments, and the video materials used to form different video track segments among the plurality of video track segments are different video materials among the plurality of video materials;
- the timeline intervals corresponding to the multiple video track segments on the video editing track at least partially overlap
- the video editing image comprises the multiple split-screen areas indicated by the video split-screen template, one split-screen area in the video editing image is used to display the video image of one video track fragment among the multiple video track fragments, and different split-screen areas in the video editing image are used to display the video images of different video track fragments among the multiple video track fragments.
- it also includes: entering a video editing page based on the video split-screen template adopted by the multiple video track fragments, and displaying a video editing image formed by the video images of the multiple video track fragments according to the video split-screen template in the video editing page.
- one of the video track clips is set as a video track clip on the main track, and all other video track clips are set as video track clips on the picture-in-picture track.
- setting one of the video track segments as a video track segment on the main track, and setting all other video track segments as video track segments on the picture-in-picture track comprises:
- the video track fragment formed by the first obtained video material is set as the video track fragment on the main track, and the video track fragments formed by other video materials are set as the video track fragments on the picture-in-picture track.
- displaying a video editing image formed by the video images of the plurality of video track segments according to the video split-screen template includes:
- determining the sizes and positions of the video images of the multiple video track segments in the canvas based on the video editing track information of the multiple video track segments, the mapping relationship between the multiple split-screen areas indicated by the video split-screen template and the video editing track, and the size of the canvas;
- the video images of the multiple video track segments are filled in the canvas, and the canvas is displayed.
- the video images of the plurality of video track segments are displayed to display a video editing image formed according to the video split-screen template.
- the durations of the multiple video track segments formed based on the multiple video materials are consistent with the length of the timeline, and the start times of the multiple video track segments are aligned on the timeline.
- one or more video processing methods such as video speed change, insertion or splicing of video fragments of specified content are used to process the at least one video material to obtain a video track fragment with a duration consistent with the length of the timeline.
- it also includes: responding to a video split-screen template switching instruction, displaying a video editing image formed by the video images of the multiple video track fragments according to the video split-screen template indicated by the video split-screen template switching instruction; the layout of the multiple split-screen areas indicated by the video split-screen template indicated by the switching instruction is different from that of the video split-screen template used before the switching.
- it also includes: responding to a preview playback instruction, playing the video images of the multiple video track segments based on the timeline to form a video editing image according to the video split-screen template; wherein, during the playback process, when the preview playback position is located in the timeline interval covered by the video track segments on the timeline, the video images of the video track segments are displayed in the video editing image; if the preview playback position is located outside the timeline interval covered by the video track segments on the timeline, a preset background is displayed in the split-screen area corresponding to the video track segments in the video editing image.
- it also includes: responding to an adjustment instruction for a video track segment, adjusting the position, direction or size of the video image of the video track segment in the corresponding split-screen area; or, exchanging the split-screen areas corresponding to the video images of different video track segments; or, replacing the video track segment in the split-screen area; or, mirroring the video image of the video track segment in the corresponding split-screen area.
- the present disclosure provides a video material editing device, comprising:
- An acquisition module used to acquire multiple video materials
- a template determination module used to determine a video split screen template, wherein the video split screen template is used to indicate multiple split screen areas located in the same video image;
- a video processing module used to display multiple video track segments on the video editing track, and to form a video editing image from the video images of the plurality of video track segments according to the video split-screen template;
- a display module used for displaying the video editing image
- the multiple video track segments are formed based on the multiple video materials, at least one video material among the multiple video materials is used to form one video track segment among the multiple video track segments, and the video materials used to form different video track segments among the multiple video track segments are different video materials among the multiple video materials;
- the timeline intervals corresponding to the multiple video track segments on the video editing track at least partially overlap
- the video editing image comprises the multiple split-screen areas indicated by the video split-screen template, one split-screen area in the video editing image is used to display the video image of one video track fragment among the multiple video track fragments, and different split-screen areas in the video editing image are used to display the video images of different video track fragments among the multiple video track fragments.
- the present disclosure provides an electronic device, comprising: a memory and a processor;
- the memory is configured to store computer program instructions
- the processor is configured to execute the computer program instructions, so that the electronic device implements the video material editing method as described above.
- the present disclosure provides a readable storage medium comprising computer program instructions; when an electronic device executes the computer program instructions, the electronic device implements the video material editing method as described above.
- the present disclosure provides a computer program product.
- when an electronic device executes the computer program product, the electronic device implements the video material editing method as described above.
- the disclosed embodiments provide a method and device for editing video materials, wherein the method comprises: obtaining a plurality of video materials and determining a video split-screen template, displaying a plurality of video track segments formed based on the plurality of video materials on a video editing track, and displaying a video editing image formed by the video images of the plurality of video track segments according to the video split-screen template, so as to display the video images of the plurality of video track segments in a split-screen manner through the plurality of split-screen areas in the video editing image; wherein at least one of the plurality of video materials is used to form a video track segment among the plurality of video track segments, and the video materials used to form different video track segments are not completely the same; when performing video split-screen splicing, the timeline intervals corresponding to the plurality of video track segments on the video editing track at least partially overlap.
- the method disclosed in the present invention can realize the rapid split-screen splicing of multiple video materials through video editing tools without manual adjustment, thereby meeting the video editing needs of users who want to quickly splice video materials, and automatically performing split-screen splicing of videos improves the efficiency of video material editing.
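- for readers tracing the data flow, the following is a minimal sketch in Python of the entities the method manipulates: video materials, the track segments built from them, and a split-screen template indicating the areas of one video image. The class and field names are hypothetical illustrations, not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class VideoMaterial:
    path: str
    duration: float  # seconds

@dataclass
class TrackSegment:
    material: VideoMaterial
    start: float      # start position on the shared timeline, in seconds
    duration: float   # segment duration, in seconds

@dataclass
class SplitScreenTemplate:
    # each area: (center_x, center_y, width, height) in normalized canvas
    # coordinates, matching the -0.5..0.5 convention used later for FIG. 2
    areas: List[Tuple[float, float, float, float]]

def build_segments(materials: List[VideoMaterial]) -> List[TrackSegment]:
    """One segment per material; all start at t=0 so their timeline
    intervals at least partially overlap, as the method requires."""
    return [TrackSegment(m, start=0.0, duration=m.duration) for m in materials]
```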
- FIG1 is a schematic diagram of a process of a video material editing method provided by an embodiment of the present disclosure
- FIG2 is a schematic diagram of the present disclosure exemplarily showing a video split-screen splicing method that supports two video track segments using a left-right split-screen video split-screen template;
- FIG3 is a schematic diagram of a flow chart of a video material editing method provided by another embodiment of the present disclosure.
- FIGS. 4A to 4I are schematic diagrams of human-computer interaction interfaces provided by the present disclosure.
- FIG. 5 is a schematic diagram of the structure of a video material editing device provided by an embodiment of the present disclosure.
- the present disclosure provides a video material editing method and device, by displaying multiple video track segments formed based on multiple video materials on a video editing track, and displaying a video editing image formed by the video images of the multiple video track segments according to a video split-screen template, so as to display the video images of the multiple video track segments in a split-screen manner through multiple split-screen areas in the video editing image; wherein at least one video material among the multiple video materials is used to form one video track segment among the multiple video track segments, and the video materials used to form different video track segments are not completely the same; when performing video split-screen splicing, the timeline intervals corresponding to the multiple video track segments on the video editing track at least partially overlap, ensuring the video split-screen splicing effect.
- the disclosed method can realize the rapid split-screen splicing of multiple video materials through a video editing tool without manual adjustment, meeting the video editing needs of users who want to quickly splice video materials, and automatically performing video split-screen splicing improves the efficiency of video material editing.
- the video editing tool can provide users with visual components to trigger video split-screen splicing, which is easy to operate for users and helps to improve the user's editing experience.
- the video material editing method disclosed in the present invention is performed by an electronic device.
- the electronic device may be a tablet computer, a mobile phone (such as a folding screen mobile phone, a large screen mobile phone, etc.), a wearable device, a vehicle-mounted device, an augmented reality (AR)/virtual reality (VR) device, a laptop computer, an ultra-mobile personal computer (UMPC), a netbook, a personal digital assistant (PDA), a smart TV, a smart screen, a high-definition TV, a 4K TV, a smart speaker, a smart projector, and other Internet of Things (IOT) devices.
- the present invention does not impose any restrictions on the specific type of the electronic device.
- the present invention does not limit the type of operating system of the electronic device. For example, Android system, Linux system, Windows system, iOS system, etc.
- in the following embodiments, the present disclosure takes an electronic device as an example and elaborates on the provided video material editing method with reference to the accompanying drawings and application scenarios.
- FIG1 is a flow chart of a video material editing method provided by an embodiment of the present disclosure. Referring to FIG1 , the method of this embodiment includes:
- S101 Acquire multiple video materials.
- the electronic device may display a material selection page to the user, wherein the material selection page is used to aggregate and display identification information of video materials that the user can select for editing, such as displaying videos and images in the electronic device's photo album in the form of thumbnails.
- the electronic device can obtain multiple video materials based on the user's selection.
- the present disclosure does not limit the number of video materials selected by the user, the video content, the format, resolution and other parameters of the video materials.
- the minimum number of video materials that need to be obtained can be set. For example, at least 2 video materials need to be obtained. If the number of video materials is less than the minimum number of video materials required to be obtained, the electronic device can display a prompt message to the user, prompting the user to add video materials.
- if too many video materials are obtained, split-screen splicing may result in a chaotic video editing image and a poor splicing effect. Therefore, the maximum number of video materials that can be obtained can be set. When the maximum number of video materials that can be obtained is exceeded, the electronic device can also display a prompt message to the user, prompting the user to delete video materials. Of course, the maximum number of video materials that can be obtained may also be left unlimited.
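- as an illustration of the bounds just described, a small check might look like the sketch below; the minimum of 2 follows the example in the text, while the maximum of 9 and the prompt wording are assumptions.

```python
# the 2-material minimum follows the example above; the 9-material maximum
# is an assumed value, since the disclosure leaves the upper limit open
MIN_MATERIALS = 2
MAX_MATERIALS = 9

def check_material_count(n: int):
    """Return a prompt message when the count is out of bounds, else None."""
    if n < MIN_MATERIALS:
        return "Too few materials selected: please add video materials."
    if n > MAX_MATERIALS:
        return "Too many materials selected: please remove some materials."
    return None
```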
- S102 Determine a video split-screen template, where the video split-screen template is used to indicate multiple split-screen areas in the same video image.
- the electronic device may determine a video split-screen template in response to a video split-screen splicing instruction input by a user, or the electronic device may determine a video split-screen template after the user starts editing but before selecting a video material.
- the electronic device may display a target control in a page displayed to the user, and the user may input a video split screen splicing instruction to the electronic device by operating the target control.
- the target control may be, but is not limited to, displayed in a material selection page.
- the target control may be implemented in any manner, and the present disclosure does not limit the size, position, style, effect, and other parameters of the target control.
- the style of the target control may be displayed in the form of text, letters, numbers, icons, pictures, and the like.
- the electronic device displays a pop-up window to ask whether the user needs to perform split-screen splicing of video materials; based on the user's confirmation, a video split-screen splicing instruction is input to the electronic device. If the user chooses to skip, the video material can be edited by entering the video editing page based on the acquired multiple video materials in the existing manner of the video editing tool in the electronic device. Of course, the user is not limited to this and may also input the video split-screen splicing instruction to the electronic device through other methods such as voice, gestures, or gesture combinations.
- the number of split-screen areas indicated by the video split-screen template determined in this step may be consistent with or inconsistent with the number of video materials selected by the user, and this disclosure does not limit this.
- a video split-screen template with a consistent number of split-screens may be matched based on the number of video materials; in other embodiments, a video split-screen template may be determined based on a preset method or randomly selected or based on a strategy for forming video track segments based on video materials.
- the present disclosure does not limit the implementation method for determining the video split-screen template.
- there is no particular order between step S101 and step S102 in this embodiment.
- the electronic device may pre-store video split screen templates supporting multiple different numbers of split screen areas to form a video split screen template set.
- the video split screen template set is matched based on the number of video materials selected by the user for this video clip, and the video split screen template to be used is determined from multiple successfully matched candidate splicing templates.
- the video split screen template to be used may be determined based on one or more of the number of times the candidate splicing template is used, the number of times it is collected, or the information of the video split screen template used by the user in the historical video clip, or any candidate splicing template may be determined as the video split screen template to be used in a random manner.
- video split-screen templates may also be pre-specified.
- video split-screen templates corresponding to 2-split screens, 3-split screens, and 4-split screens may be pre-specified.
- the pre-specified video split-screen template with the corresponding number of split screens may be determined as the video split-screen template to be used based on the determined number of split screens.
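- the matching-and-selection strategy described above could be sketched as follows; the field names (times_used, times_collected) and the tie-breaking rule are illustrative assumptions rather than the disclosure's exact logic.

```python
import random
from dataclasses import dataclass

@dataclass
class SplitTemplate:
    name: str
    area_count: int          # number of split-screen areas the template indicates
    times_used: int = 0      # illustrative usage statistic
    times_collected: int = 0

def choose_template(template_set, material_count, randomize=False):
    """Match templates whose area count equals the material count, then pick one."""
    candidates = [t for t in template_set if t.area_count == material_count]
    if not candidates:
        return None
    if randomize:
        return random.choice(candidates)
    # otherwise prefer the candidate used / collected most often
    return max(candidates, key=lambda t: (t.times_used, t.times_collected))
```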
- S103 Display multiple video track segments on the video editing track, and display a video editing image formed by video images of the multiple video track segments according to the video split-screen template.
- after obtaining multiple video materials and determining the video split-screen template, the split-screen splicing page can be entered, in which the video editing image is formed and displayed.
- multiple video track segments are formed based on multiple video materials, and at least one video material among the multiple video materials is used to form a video track segment among the multiple video track segments.
- the present disclosure does not limit the specific implementation method of forming video track segments through video materials. For example, it is possible to generate video track segments corresponding to the video materials without performing any video processing on the video materials; or the corresponding video track segments can be obtained by performing one or more video processing methods such as splicing, speed change, repeated looping, inserting video segments of specified content, etc. on at least one video material used to form a video track segment. And the duration of the multiple video track segments formed can be exactly the same, or not exactly the same, or completely different, and the present disclosure does not limit this.
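- one possible way to make a material's duration consistent with a timeline length, using the speed change or repeated looping mentioned above, is sketched below; the returned dictionaries are a hypothetical representation of the processing to apply, not the tool's actual API.

```python
import math

def normalize_duration(material_duration, timeline_length, mode="speed"):
    """Return the processing needed to make one material span the timeline."""
    if mode == "speed":
        # playback-rate factor: <1 slows the material down, >1 speeds it up
        return {"speed_factor": material_duration / timeline_length}
    if mode == "loop":
        # repeat the material, truncating the last repetition
        return {"loops": math.ceil(timeline_length / material_duration)}
    raise ValueError("unsupported mode: " + mode)
```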
- the video materials used to form different video track segments in the plurality of video track segments are different video materials in the plurality of video materials; that is, the video materials forming different video track segments may be completely different video materials among the multiple video materials, or may be video materials that are not completely the same.
- for example, the video materials forming different video track segments are completely different video materials; or, video material 1 and video material 2 may be used to form video track segment 1, and video material 2 and video material 3 may be used to form video track segment 2.
- in the latter case, the video materials forming different video track segments are not completely the same video materials.
- multiple video track segments formed based on multiple video materials are displayed on corresponding video editing tracks, wherein one video editing track corresponds to one video track segment, and this solution may include multiple video editing tracks, and the number of video editing tracks may be consistent with the number of video track segments.
- the timeline intervals corresponding to the multiple video track segments on the video editing track are at least partially overlapped.
- the timeline length of the video editing track can be determined by the video track segment with the longest duration.
- if the durations of the multiple video track segments are consistent and the corresponding timeline intervals on the video editing track are aligned, it can be understood that the multiple video track segments completely overlap in the time dimension. If the durations of the multiple video track segments are inconsistent, or the durations are consistent but the segments are not completely aligned on the video editing track, different video track segments may overlap at different time positions on the video editing track.
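- the timeline rules just described can be stated compactly as below; the functions assume segment objects with start and duration fields, such as the TrackSegment sketch given earlier, which is an assumption of this illustration.

```python
def timeline_length(segments):
    """The timeline length is set by the segment that ends last."""
    return max(s.start + s.duration for s in segments)

def intervals_overlap(a, b):
    """Two segments overlap when their [start, start+duration) intervals intersect."""
    return a.start < b.start + b.duration and b.start < a.start + a.duration
```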
- the electronic device can display a split-screen splicing page to the user, and display a video editing image in the split-screen splicing page, so that the user can preview the splicing effect of split-screen splicing of the video material through the split-screen splicing page.
- the video editing image has the multiple split-screen areas indicated by the video split-screen template, one split-screen area in the video editing image is used to display the video image of one video track segment among the multiple video track segments, and different split-screen areas in the video editing image are used to display the video images of different video track segments among the multiple video track segments. It can also be understood that the multiple split-screen areas in the video editing image correspond one-to-one with the multiple video track segments.
- the size of the video editing image may be different from the size of the display area displaying the video editing image in the split-screen splicing page.
- the size of the video editing image can be adjusted to adapt to the size of the corresponding display area to ensure that the video editing image can be fully displayed on the display screen of the electronic device.
- the overlapping video track clips at different preview display positions may differ. For example, at some preview display positions, all video track clips overlap, while at some preview display positions, only some video track clips overlap.
- if the preview display position on the timeline is located within the timeline interval corresponding to the video track segment, the split-screen area corresponding to the video track segment in the video editing image displays the video image of the video track segment at the corresponding preview display position; if the preview display position on the timeline is located outside the timeline interval corresponding to the video track segment, the split-screen area corresponding to the video track segment in the video editing image displays a preset background, such as a black background, presenting a black-screen display effect.
- in addition to a black background, other set backgrounds can also be displayed, such as other solid-color backgrounds or patterned backgrounds.
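- a hedged sketch of this display rule follows: each split-screen area either shows its segment's frame at the local time or the preset background. The PRESET_BACKGROUND constant and the tuple return format are illustrative assumptions.

```python
PRESET_BACKGROUND = "black"  # any other solid-color or patterned background works too

def area_content(segment, preview_position):
    """Decide what a segment's split-screen area shows at a preview position."""
    inside = segment.start <= preview_position < segment.start + segment.duration
    if inside:
        # show the segment's frame at the corresponding local time
        return ("frame", preview_position - segment.start)
    return ("background", PRESET_BACKGROUND)
```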
- the method disclosed in the present invention can realize the rapid split-screen splicing of multiple video track segments formed based on multiple video materials through a video editing tool without manual adjustment, thereby meeting the editing needs of users who want rapid video split-screen splicing, and automatically performing video split-screen splicing improves the efficiency of video material editing.
- displaying the video editing image formed by the video images of the plurality of video track segments according to the video split-screen template can be implemented in the following manner:
- Step a1: Obtain the size of the canvas corresponding to the video clip.
- Step a2: Determine the sizes and positions of the video images of the multiple video track fragments in the canvas based on the video editing track information of the multiple video track fragments, the mapping relationship between the multiple split-screen areas indicated by the video split-screen template and the video editing track, and the size of the canvas.
- split-screen areas corresponding to the multiple video track segments are determined; the size of the canvas and the size of the video image can be consistent, so the split-screen areas indicated by the video split-screen template can be mapped in different canvas areas of the canvas, and there is a correspondence between the split-screen area and the canvas area. Determining the split-screen area corresponding to each video track segment is equivalent to determining the canvas area corresponding to each video track segment, and the size and position of the split-screen area corresponding to the video track segment are the size and position of the video image of the video track segment in the canvas.
- Step a3: Based on the sizes and positions of the video images of the plurality of video track segments in the canvas, adjust the sizes of the video images of the multiple video track segments to be consistent with the sizes of the corresponding split-screen areas, and then fill the resized video images into the canvas based on the positions of the video images of the video track segments in the canvas.
- Step a4: Display the canvas to show the video editing image formed by the video images of the multiple video track segments according to the video split-screen template.
- the size of the canvas filled with the video image of multiple video track segments may be inconsistent with the size of the screen area used to display the video editing image in the display screen of the electronic device.
- the size of the canvas and the size of the video image on the canvas can be adjusted to adapt to the size of the corresponding screen area. It should be noted that the adjustment of the size of the canvas and the size of the video image here is for adapting the display, and does not affect the size of the target video that is exported after the video split screen splicing and the entire editing is completed.
- taking the left-right two-way split video split-screen template shown as box S1 in FIG. 2 as an example,
- the split-screen area S1a on the left side corresponding to video track segment 1 and the split-screen area S1b on the right side corresponding to video track segment 2 can be determined.
- the size of the video clip canvas is consistent with the size of the video split-screen template
- determine the size and position of the video image of video track segment 1 in the canvas, wherein the height of the video image of video track segment 1 is consistent with the height of the canvas, and the width of the video image of video track segment 1 is equal to half the width of the canvas;
- determine the position of the video image of video track segment 1 in the canvas: assuming that the center of the canvas is the origin and the normalized coordinate range along the horizontal and vertical axes is -0.5 to 0.5 (the coordinates of some positions are shown in FIG. 2), the coordinate position of the center of video track segment 1 in the canvas is (-0.25, 0).
- for video track segment 2, a similar method is used, and it can be determined that the height of the video image of video track segment 2 in the canvas is consistent with the height of the canvas, the width is half the width of the canvas, and the normalized coordinate position of the center of video track segment 2 in the canvas is (0.25, 0). Afterwards, the video images of the resized video track segments 1 and 2 are filled into the canvas according to the normalized coordinate positions of their centers in the canvas.
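- the worked example above can be reproduced with a short helper; the 1920x1080 canvas size is only a sample value, while the half-width areas and the (-0.25, 0) / (0.25, 0) normalized centers follow FIG. 2 as described.

```python
def left_right_layout(canvas_width, canvas_height):
    """Sizes and normalized center positions for the two-way left-right split."""
    area_size = (canvas_width / 2, canvas_height)  # half width, full height
    return {
        "segment_1": {"size": area_size, "center": (-0.25, 0.0)},  # left half
        "segment_2": {"size": area_size, "center": (0.25, 0.0)},   # right half
    }

# e.g. a 1920x1080 canvas gives two 960x1080 areas centered at (-0.25, 0) and (0.25, 0)
print(left_right_layout(1920, 1080))
```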
- FIG. 2 uses a video split-screen template with two video track segments in a left-right split screen as an example for explanation.
- the processing process of using the video split-screen template to realize video split-screen splicing for a larger number of video track segments is similar.
- FIG. 3 is a schematic diagram of a flow chart of a video material editing method provided by another embodiment of the present disclosure.
- the layout of the multiple split-screen areas indicated by the video split-screen template indicated by the video split-screen template switching instruction is different from that indicated by the video split-screen template used before the switching, but the number of split-screen areas supported is the same.
- the layout of the multiple split-screen areas indicated by the video split-screen template can be reflected by the size and position of the multiple split-screen areas.
- the electronic device responds to the video split screen template switching instruction, applies the video split screen template indicated by the video split screen template switching instruction to multiple video track segments, forms a video editing image based on the video images of the multiple video track segments according to the video split screen template indicated by the video split screen template switching instruction, and displays it in the split screen splicing page, so that the user can preview the video split screen splicing effect.
- the implementation method of forming a video editing image by applying the video split screen template indicated by the video split screen template switching instruction to multiple video track segments and the implementation method of displaying the video editing image are similar to the implementation method shown in Figure 1, and the detailed description of the embodiment shown in Figure 1 can be referred to. For the sake of brevity, it will not be repeated here.
- the electronic device can display one or more video split-screen templates in the split-screen splicing page for the user to choose, respond to the user's trigger operation on any of the video split-screen templates, and obtain the video split-screen template switching instruction.
- the present invention meets the editing needs of users by showing other video split-screen templates to choose from, supporting users in switching video split-screen templates and replacing the video images of video track segments, and presenting the split-screen splicing effect of the video editing image formed based on the selected video split-screen template.
- the method further includes:
- the split-screen splicing page may display a playback control, and the user can operate the playback control to input a preview playback instruction to the electronic device; the electronic device responds to the preview playback instruction and plays, based on the timeline, the video editing image formed by the video images of the multiple video track segments according to the video split-screen template, wherein the effect of the preview playback is that the multiple video track segments are played simultaneously in their respective corresponding split-screen areas.
- if the preview playback position is located within the timeline interval covered by the video track segment on the timeline, the video image of the video track segment is displayed in the video editing image through the corresponding split-screen area; if the preview playback position is located outside the timeline interval covered by the video track segment on the timeline, the preset background is displayed in the split-screen area corresponding to the video track segment in the video editing image.
- for example, during the interval covered by both segments, the two video track segments are played simultaneously in their respective corresponding split-screen areas. From 5 to 8 seconds, since the duration of video track segment 1 is not sufficient, its corresponding timeline interval does not cover the 5 to 8 seconds on the timeline; in this period, the split-screen area S1a displays a black background, and the split-screen area S1b displays the video image of video track segment 2. After the playback is completed, the playhead automatically returns to the starting time position of the timeline.
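- this 5-to-8-second behaviour can be checked with a tiny self-contained snippet; the exact durations (5 seconds for segment 1, 8 seconds for segment 2) are inferred from the example rather than stated verbatim, so they are assumptions of this sketch.

```python
# inferred durations: segment 1 covers 0-5 s, segment 2 covers 0-8 s
segments = {"S1a": (0.0, 5.0), "S1b": (0.0, 8.0)}  # area -> (start, duration)

def shows_frame(start, duration, t):
    """True if the preview position t falls inside the segment's interval."""
    return start <= t < start + duration

for t in (2.0, 6.0):
    view = {area: ("frame" if shows_frame(s, d, t) else "black background")
            for area, (s, d) in segments.items()}
    print(t, view)
# 2.0 {'S1a': 'frame', 'S1b': 'frame'}
# 6.0 {'S1a': 'black background', 'S1b': 'frame'}
```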
- some user trigger operations can interrupt the preview playback. For example, during the playback process, clicking to switch the selected video track segment, switching the video split screen, or clicking the display area in the split screen splicing page used to display the video editing image (i.e., the preview playback screen), etc., will trigger the pause playback.
- the visual effect of the pause playback is that multiple video track segments are paused at the same time. On this basis, if the user triggers the playback control again, it will continue to play from the preview display position that triggered the interruption.
- the user can also control the pause playback by triggering the playback control again.
- the present disclosure supports users to preview and play the video split-screen splicing effect in the split-screen splicing page. Through the preview, users can also clearly understand whether the editing effect of the video exported according to the current video split-screen method meets expectations.
- the method further includes:
- S106 In response to an adjustment instruction for a video track segment, adjust the position, direction or size of the video image of the video track segment in the corresponding split-screen area; or, exchange the split-screen areas corresponding to the video images of different video track segments; or, replace the video track segment; or, mirror the video image of the video track segment in the corresponding split-screen area.
- the adjustment instructions input based on different triggering methods correspond to different adjustment methods as shown above.
- the position, direction or size of the video image of the selected video track segment in the corresponding split-screen area can be adjusted by gestures or gesture combination control;
- the split-screen areas corresponding to the two video track segments can be exchanged by dragging the video track segment to the split-screen area corresponding to other video track segments and letting go;
- the electronic device can be triggered to display the corresponding function panel, and the control provided in the function panel can be used to trigger the replacement of the video track segment or the mirroring of the video image of the video track segment in the corresponding split-screen area.
- the adjustments shown above can be, but are not limited to being, applied to the entire video image of the video track segment.
- triggering different adjustments can be achieved through but not limited to the above examples.
- the adjustment of the position, size and direction of the video image of the video track segment in the split-screen area can also be triggered by the corresponding controls in the split-screen splicing page.
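- two of these adjustments, exchanging split-screen areas and mirroring, could be modeled as below; the area_to_segment mapping and the scale_x transform field are assumptions of this sketch, not the tool's actual data model.

```python
def swap_areas(area_to_segment, area_a, area_b):
    """Exchange which segment each of two split-screen areas displays."""
    area_to_segment[area_a], area_to_segment[area_b] = (
        area_to_segment[area_b],
        area_to_segment[area_a],
    )

def mirror_image(transform):
    """Mirror a segment's video image by flipping its horizontal scale."""
    transform = dict(transform)
    transform["scale_x"] = -transform.get("scale_x", 1.0)
    return transform
```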
- the present disclosure supports the user to make one or more adjustments to the video track segments as described above during the video split-screen splicing process, thereby obtaining a video splicing effect that meets the user's expectations and meeting the user's video editing needs.
- the user can choose to execute any step from step S104 to step S106, and can also repeat one or more steps as needed.
- the order of the multiple steps is not limited. For example, the user can execute step S105 to preview, then execute step S106 to adjust the video track segment, and then execute step S105 to preview and play.
- if the user determines that the video split-screen splicing effect meets expectations, the electronic device can enter the video editing page to perform other editing operations, video export operations, and the like.
- referring to FIG. 3, optionally, the method may also include:
- multiple video track fragments and the video split-screen splicing information corresponding to the multiple video track fragments can be imported into the corresponding editing draft file in response to the user's trigger operation, and the corresponding editing page can be displayed.
- the video images of the multiple video track fragments are displayed in the preview area of the editing page to form a video editing image according to the video split-screen template.
- one of the multiple video track segments is set as a video track segment on the main track, and all other video track segments except the video segment on the main track are set as video track segments on the picture-in-picture track.
- video track segments corresponding to the video materials are formed based on multiple video materials. Then, in the editing draft file, based on the order in which the multiple video materials are obtained, the video track segment formed by the first obtained video material can be set as the video track segment on the main track, and the video track segments formed by other video materials can be set as the video track segments on the picture-in-picture track.
- the existing logic framework of the video editing tool can be used to respond to the user's subsequent editing operations according to the picture-in-picture processing logic.
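- a minimal sketch of this track assignment on import might look like the following; the draft dictionary layout is an assumption, and only the rule that the first-acquired material's segment goes to the main track comes from the text.

```python
def assign_tracks(segments_in_acquisition_order):
    """First-acquired segment goes to the main track, the rest to PiP tracks."""
    draft = {"main_track": None, "pip_tracks": []}
    for index, segment in enumerate(segments_in_acquisition_order):
        if index == 0:
            draft["main_track"] = segment
        else:
            draft["pip_tracks"].append(segment)
    return draft
```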
- the user can perform a new editing operation through the editing page and then export the edited target video, or the user can export the edited target video without performing a new editing operation.
- the electronic device is, for example, a mobile phone, in which an application supporting the video editing function (hereinafter referred to as application 1) is installed.
- Figures 4A-4I are schematic diagrams of human-computer interaction interfaces provided in embodiments of the present disclosure.
- Application 1 can display a user interface 11 as shown in FIG4A on the mobile phone.
- User interface 11 is used to display the material selection page of application 1.
- the material selection page can be entered through the creation entrance provided in the main page of application 1, or through other entrances/paths provided by application 1.
- the present disclosure does not limit the way to enter the material selection page.
- application 1 can aggregate and display the identification of the materials to the user on the material selection page, select materials based on the user's operations, trigger split-screen splicing, enter editing projects, and so on.
- the user interface 11 includes an area 101 , a control 102 , and a control 103 .
- thumbnails of images, photos, videos and other materials contained in the album can be displayed in area 101, and can be displayed according to different categories. Multiple tags are set in area 101, and thumbnails of materials under corresponding tags are aggregated and displayed based on the selected tags. Due to the size limitation of area 101, it may not be possible to display thumbnails of all materials at once. Users can view more materials by sliding up and down. In area 101, each material corresponds to a display area a1, and thumbnails of corresponding materials are displayed in area a1. If it is a video material, the duration of the video material can also be displayed in area a1.
- a selection mark and sequence information can be displayed in area a1, and the display style of area a1 corresponding to the material selected by the user (such as color, edge lines of area a1, brightness, etc.) can be different to distinguish the material selected by the user from the material not selected.
- the control 102 is used to trigger the video split screen splicing for the acquired video material, so the control 102 can also be understood as an entrance to the split screen splicing page.
- the name of the control 102 can be "Splicing".
- Control 103 is used to trigger the import of the selected video material into the editing draft and enter the editing page to edit the video material, wherein application 1 can perform a set of functions on the imported material in the editing page, such as adding special effects, stickers, picture-in-picture, text, audio, etc.
- the present disclosure does not limit the display styles of controls 102 and 103 .
- application 1 can generate video material of a preset length based on the image or photo.
- the present disclosure does not limit the preset length.
- the preset length can be 2 seconds, 3 seconds, etc.
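- a trivial sketch of turning a selected image into a fixed-length video material, using the 2-second example value above; the dictionary fields are illustrative assumptions.

```python
def image_to_material(image_path, preset_length=2.0):
    """Wrap a still image as a short video material of a preset length."""
    return {"path": image_path, "duration": preset_length, "from_image": True}
```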
- the user selects 4 video materials, and when entering the split-screen splicing page, application 1 forms video track segments 1 to 4 corresponding to the 4 video materials based on the 4 video materials.
- the formed video track segments 1 to 4 can be understood as the original video materials 1 to 4.
- after application 1 receives an operation performed by the user in user interface 11 shown in FIG. 4A, such as clicking control 102, application 1 generates a video split-screen splicing instruction based on the user's operation and responds to the video split-screen splicing instruction.
- Application 1 exemplarily displays user interface 12 as shown in FIG. 4B on the mobile phone.
- User interface 12 is used to display a split-screen splicing page, on which the video split-screen splicing effect can be previewed and adjusted.
- the user interface 12 includes: an area 104 , an area 105 , an area 106 , a control 107 , and a control 108 .
- area 104 is a preview area of a video editing image formed by video images of multiple video materials according to the video split-screen splicing template determined by application 1.
- application 1 automatically matches video split-screen template 1 for multiple video materials, as shown in FIG4B , the four split-screen areas indicated by video split-screen template 1 have the same size and are arranged in 2 rows and 2 columns, video material 1 corresponds to the split-screen area in the 1st row and 1st column, video material 2 corresponds to the split-screen area in the 1st row and 2nd column, video material 3 corresponds to the split-screen area in the 2nd row and 1st column, and video material 4 corresponds to the split-screen area in the 2nd row and 2nd column.
- Area 105 is used to display a progress bar, which can reflect the playback progress and also show the timeline of the video editing track, and the progress bar supports manual dragging by the user.
- Area 106 may include a label 109, where label 109 is used to trigger the display, in area 106, of more video split-screen templates that support four video materials.
- When the user taps label 109, a preset number of video split-screen templates supporting four video materials may be displayed in area 106, arranged in a set manner, for example horizontally from left to right, and positioned by default on the currently used video split-screen template 1, which may be displayed at the leftmost position of area 106.
- Exemplarily, as shown in FIG. 4B, four video split-screen templates are displayed in area 106, namely video split-screen templates 1 to 4, and video split-screen template 1 is selected by default.
- Assuming the user taps video split-screen template 2, application 1 may exemplarily display the user interface 13 shown in FIG. 4C on the mobile phone: area 104 in user interface 13 displays a canvas, the video images of the four video materials form a video editing image on the canvas according to the multiple split-screen areas indicated by video split-screen template 2, the corresponding video editing image is displayed in area 104, and video split-screen template 2 in area 106 is in the selected state. Displaying, in area 106, video split-screen templates whose number of split-screen areas matches the number of video materials selected by the user makes it easy for the user to quickly select and switch to the template they want to use, which helps improve editing efficiency.
- area 106 may also include other labels, for example, a label named "AA" in area 106, which may correspond to a specified function panel. No specific limitation is made here.
- the function labels set in the split-screen splicing page can be determined according to needs.
- Control 107 is an entry for exiting the split screen splicing page.
- When application 1 receives a user's trigger operation on control 107, it can return to display the material selection page shown in FIG. 4A.
- The material selection page is restored to the state it was in before the split-screen splicing page was entered, and the previously selected materials are still in the selected state.
- the user can enter the split screen splicing page through the control 102 for multiple times, and then exit the split screen splicing page through the control 107 to return to the material selection page.
- Control 108 is a playback control that can control area 104 to play, along the timeline, the video editing image formed by the multiple video materials according to the video split-screen template, thereby presenting the effect of each video material playing or pausing simultaneously in its corresponding split-screen area.
- When the split-screen splicing page shown in FIG. 4B is entered from the material selection page, the playback progress defaults to the time position of the first frame and playback is paused.
- The user can reposition the preview display position by dragging the progress bar in area 105, and can control the four video materials to play or pause simultaneously in their corresponding split-screen areas by operating control 108. Since the durations of the multiple video materials differ, the split-screen area corresponding to a shorter video material can be, but is not limited to being, displayed as a black screen after that material finishes playing.
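- The playback behavior just described, where a shorter material's area falls back to a background once its own interval on the timeline is exhausted, can be sketched roughly as follows. The helper names are illustrative assumptions; the 5-second and 8-second durations are the example pair the disclosure uses when discussing the two-way split of FIG. 2.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TrackSegment:
    material_id: str
    start: float     # start position on the shared timeline, in seconds
    duration: float  # segment duration, in seconds

def frame_source_at(segment: TrackSegment, t: float) -> Optional[float]:
    """Return the in-segment time whose frame should be shown at timeline time t,
    or None if t lies outside the segment's timeline interval, in which case the
    preset background (e.g. a black screen) is shown instead."""
    if segment.start <= t < segment.start + segment.duration:
        return t - segment.start
    return None

if __name__ == "__main__":
    segments = [
        TrackSegment("material_1", start=0.0, duration=5.0),
        TrackSegment("material_2", start=0.0, duration=8.0),
    ]
    for t in (2.0, 6.5):
        for seg in segments:
            src = frame_source_at(seg, t)
            shown = f"frame at {src:.1f}s" if src is not None else "preset background"
            print(f"t={t:.1f}s  {seg.material_id}: {shown}")
```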
- the split-screen areas corresponding to different video materials displayed in area 104 are operable, and the user can select a video material through the split-screen area corresponding to the video material and can adjust the video material in area 104 .
- a user can select video material by clicking on the split-screen area corresponding to the video material in area 104.
- the display style of the selected video material in area 104 can be different from the display style of other unselected video materials.
- the selected video material can be displayed semi-transparently. Any of the adjustments described above can be performed on the selected video material through different operations.
- For example, the position of a video material's video image within its split-screen area can be adjusted by long-pressing to select the material and dragging it within that area; when the dragged material reaches the boundary of the split-screen area, application 1 can prompt the user through vibration, text, or other methods. In user interface 14 shown in FIG. 4D, the user selects video material 2 by clicking its split-screen area and drags it in the direction indicated by the arrow, moving the video image of video material 2 downward within the corresponding split-screen area; the portion of the image area of video material 2 that moves outside the split-screen area may not be displayed in area 104.
- The size of a video material within its split-screen area can also be adjusted, for example by selecting the material and pinching with two fingers within that area; when the boundary of the video material aligns with the size of the corresponding split-screen area, application 1 can provide a prompt by vibration, text, or other means. In user interface 15 shown in FIG. 4E, the user selects video material 2 by clicking the split-screen area corresponding to video material 2 and reduces the size of its video image in that area by a two-finger operation (enlarging a video material is implemented similarly).
- The split-screen areas of different video materials can also be exchanged: long-pressing to select a video material, dragging it to the position of another material's split-screen area, and releasing triggers the exchange, after which area 104 shows the split-screen splicing effect with the exchanged areas and each video material adapts to the size of its new split-screen area. When a video material is selected, the border of its image area can be highlighted, and the highlight follows the dragged material until the finger is released. Other display styles can also be used to highlight the video material selected for adjusting the split-screen area, and the method is not limited to highlighting the border of the image area.
- For example, the selected video material can also be highlighted by adding a mask with a specific effect to it.
- See, for example, user interface 16 shown in FIG. 4F and user interface 17 shown in FIG. 4G. Referring to user interface 16, the user selects video material 2 by long-pressing the split-screen area corresponding to video material 2, and triggers the exchange of split-screen areas between video material 1 and video material 2 by dragging video material 2 to the position of the split-screen area corresponding to video material 1.
- The state shown in user interface 16 is the state in which the exchange has been triggered but the finger has not yet been released: video material 1 has moved to the position of video material 2's split-screen area, but video material 2 is not yet displayed in video material 1's split-screen area.
- After the user releases the finger, as shown in user interface 17, the split-screen areas of video material 1 and video material 2 are successfully exchanged, and after the exchange the two video materials are adaptively resized to fit their split-screen areas.
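- At the data level, exchanging split-screen areas amounts to swapping the two materials' entries in the material-to-area mapping and then refitting each video image to the size of its new area; a minimal sketch, assuming a dictionary that maps material identifiers to area rectangles:

```python
def swap_areas(layout: dict, material_a: str, material_b: str) -> None:
    """Swap the split-screen areas of two materials in place.

    `layout` maps material id -> area rectangle; after the swap each material
    is expected to be re-scaled to fit the size of its new area."""
    layout[material_a], layout[material_b] = layout[material_b], layout[material_a]
```

- After the swap, the same canvas-filling step that fits each video image into its rectangle produces the adaptively resized result shown in user interface 17.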
- Referring again to FIG. 4B, when the user single-clicks the split-screen area of a video material, a function panel can be triggered to display.
- The function panel can be used to perform some editing operations on the video material, such as replacing the material, vertical flip, horizontal flip, rotation, etc.
- Assuming application 1 receives a trigger operation, such as a single-click operation, on the split-screen area corresponding to video material 2 in area 104, application 1 can exemplarily display the user interface 18 shown in FIG. 4H. User interface 18 includes a function panel a2 corresponding to video material 2, and the function panel a2 can include controls 110 to 113.
- Control 110 is used to trigger the replacement of a video material. After the user operates control 110 (such as by clicking it), application 1 can enter the material selection page, allowing the user to select another piece of material (a video material, or a video material generated based on an image or photo) to replace video material 2. In addition, the present disclosure can allow a video material to be imported repeatedly.
- Control 111 is used to trigger the video image of the video material to flip vertically in the corresponding split-screen area.
- When application 1 receives the user's trigger operation (such as a click operation) on control 111, application 1 vertically flips the video image of video material 2 within the split-screen area corresponding to video material 2, presenting a left-right mirror-symmetrical display.
- Control 112 is used to trigger the video image of the video material to flip horizontally in the corresponding split-screen area.
- When application 1 receives the user's trigger operation (such as a click operation) on control 112, application 1 horizontally flips the video image of video material 2 within the split-screen area corresponding to video material 2, presenting a horizontally mirror-symmetrical display.
- Control 113 is used to trigger the video image of the video material to rotate in the split-screen area.
- the rotation angle can be a preset angle. For example, each time the control 113 is triggered, the selected video material rotates 90 degrees to the left or 90 degrees to the right.
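- The flip and rotate controls can be thought of as accumulating a simple per-clip display transform in the editing draft. The sketch below is one possible model, with mirror flags and a rotation angle stepped in the preset 90-degree increments; none of its names come from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class ClipTransform:
    flip_vertical: bool = False
    flip_horizontal: bool = False
    rotation_deg: int = 0  # kept in {0, 90, 180, 270}

    def toggle_vertical_flip(self) -> None:    # corresponds to control 111
        self.flip_vertical = not self.flip_vertical

    def toggle_horizontal_flip(self) -> None:  # corresponds to control 112
        self.flip_horizontal = not self.flip_horizontal

    def rotate(self, step_deg: int = 90) -> None:  # control 113, preset angle
        self.rotation_deg = (self.rotation_deg + step_deg) % 360

if __name__ == "__main__":
    t = ClipTransform()
    t.toggle_horizontal_flip()
    t.rotate()   # one tap: rotate by the preset 90 degrees
    t.rotate()
    print(t)     # ClipTransform(flip_vertical=False, flip_horizontal=True, rotation_deg=180)
```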
- It should be understood that the function panel may also include other controls for triggering adjustments and is not limited to controls 110 to 113 illustrated here.
- the user interface 12 further includes an import control 114.
- When application 1 receives a trigger operation on the import control 114, application 1 records the information of the multiple video materials and the currently used video split-screen template in the editing draft file, jumps to the editing page, and displays, in the preview area of the editing page, the video editing image formed by the video images of the multiple video materials according to the currently used video split-screen template.
- In the editing draft file, material track information is assigned to each of the multiple video materials.
- The first video material selected by the user, that is, video material 1, is the video material on the main track, and the remaining video materials, that is, video materials 2 to 4, are video materials on the picture-in-picture tracks (which can also be understood as picture-in-picture materials).
- The order of the multiple picture-in-picture tracks is arranged according to the order in which the corresponding video materials were acquired, as sketched below.
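- Under the draft-file behavior described above, track assignment depends only on acquisition order: the first acquired material goes to the main track and every other material gets a picture-in-picture track in order. A minimal sketch under that assumption (the function and key names are illustrative):

```python
def assign_tracks(material_ids: list) -> dict:
    """Assign the first acquired material to the main track and each remaining
    material to its own picture-in-picture track, ordered by acquisition."""
    if not material_ids:
        raise ValueError("at least one video material is required")
    tracks = {"main": material_ids[0], "pip": []}
    for index, mid in enumerate(material_ids[1:], start=1):
        tracks["pip"].append({"pip_track_index": index, "material": mid})
    return tracks

if __name__ == "__main__":
    print(assign_tracks(["material_1", "material_2", "material_3", "material_4"]))
    # {'main': 'material_1', 'pip': [{'pip_track_index': 1, 'material': 'material_2'}, ...]}
```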
- Assuming that the user does not switch the video split-screen template during editing, when application 1 receives a trigger operation on the import control 114, it exemplarily displays the user interface 19 shown in FIG. 4I on the mobile phone.
- the preview area 115 included in the user interface 19 displays the video images of the video materials 1 to 4 according to the video split-screen template 1 to form a video editing image.
- Application 1 can then respond to the user's editing operations on the editing page according to the main-track-material and picture-in-picture-track-material logic. For example, a picture-in-picture material can be deleted, in which case the split-screen area corresponding to that material displays the preset background. As another example, when a picture-in-picture material is added: if it is added to an existing picture-in-picture track, the material fills the split-screen area of that track but covers a different timeline interval on the video editing track; if a new picture-in-picture track is added, the newly added picture-in-picture material is displayed covering the specified split-screen area.
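- One way to read that draft-file behavior is that each picture-in-picture track holds segments whose timeline intervals must not overlap, so a newly added picture-in-picture material either fits an existing track over a different interval or forces a new track; a toy check under that assumption:

```python
def can_add_to_existing_pip_track(track_intervals, new_start: float, new_end: float) -> bool:
    """A new picture-in-picture segment fits an existing track only if its
    timeline interval does not overlap any segment already on that track."""
    return all(new_end <= start or new_start >= end for start, end in track_intervals)

if __name__ == "__main__":
    existing = [(0.0, 8.0)]  # the split-screen segment imported earlier
    print(can_add_to_existing_pip_track(existing, 8.0, 12.0))  # True: different interval
    print(can_add_to_existing_pip_track(existing, 4.0, 10.0))  # False: needs a new pip track
```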
- After split-screen splicing is completed and the editing page is entered, by reusing the picture-in-picture processing logic already supported by application 1, the user can still manually adjust the picture size as needed, without the video editing page having to implement separate split-screen splicing processing logic. This reduces the complexity of application 1's processing logic while still meeting the user's split-screen splicing and editing needs.
- the user can also preview the video split-screen splicing effect and adjust the video material in the split-screen splicing page and then enter the editing project by operating the import control 114, that is, the editing page can be entered through the import control displayed in the user interface shown in any of the embodiments shown in Figures 4C to 4I.
- FIG. 5 is a schematic diagram of the structure of a video material editing device provided by an embodiment of the present disclosure.
- the device 500 provided by this embodiment includes:
- the acquisition module 501 is used to acquire multiple video materials.
- the template determination module 502 is used to determine a video split-screen template, where the video split-screen template is used to indicate multiple split-screen areas in the same video image.
- the video processing module 503 is used to display multiple video track segments on the video editing track, and form a video editing image with the video images of the multiple video track segments according to the video split-screen template.
- the display module 504 is used to display the video editing image.
- the multiple video track segments are formed based on the multiple video materials, at least one video material among the multiple video materials is used to form one video track segment among the multiple video track segments, and the video materials used to form different video track segments among the multiple video track segments are different video materials among the multiple video materials;
- the timeline intervals corresponding to the multiple video track segments on the video editing track at least partially overlap
- the video editing image has the multiple split-screen areas indicated by the video split-screen template, One split-screen area in the video editing image is used to display a video image of a video track segment in the multiple video tracks, and different split-screen areas in the video editing image are used to display images of different video track segments in the multiple video track segments.
- The video processing module 503 is also used to enter the video editing page based on the video split-screen template adopted by the multiple video track segments, and to display, on the video editing page, the video editing image formed by the video images of the multiple video track segments according to the video split-screen template.
- The video processing module 503 is also used to, after entering the video editing page, set, in the corresponding editing draft file, one of the video track segments as the video track segment on the main track and set all other video track segments as video track segments on picture-in-picture tracks.
- the video processing module 503 is specifically used to set the video track segment formed by the first acquired video material as the video track segment on the main track, and set the video track segments formed by other video materials as the video track segments on the picture-in-picture track based on the order of acquiring the multiple video materials when the multiple video track segments correspond one-to-one to the multiple video materials.
- The video processing module 503 is specifically used to: obtain the size of the canvas used for video editing; determine the sizes and positions of the video images of the multiple video track segments in the canvas based on the video editing track information of the multiple video track segments, the mapping relationship between the multiple split-screen areas indicated by the video split-screen template and the video editing tracks, and the size of the canvas; and fill the video images of the multiple video track segments into the canvas based on those sizes and positions.
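- A rough sketch of this size-and-position computation, assuming the template's split-screen areas are stored as normalized rectangles and the canvas uses a center-origin coordinate system whose axes run from -0.5 to 0.5, as in the left/right two-way split example of FIG. 2, where each image ends up half the canvas wide with normalized centers at (-0.25, 0) and (0.25, 0). The names below are assumptions of the sketch, not identifiers from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class Rect:
    # Normalized area rectangle: top-left corner (x, y) and size (w, h),
    # all as fractions of the canvas.
    x: float
    y: float
    w: float
    h: float

def place_on_canvas(area: Rect, canvas_w: int, canvas_h: int) -> dict:
    """Compute the pixel size of a segment's video image and the normalized
    center coordinates of that image on a canvas whose origin is its center
    and whose axes run from -0.5 to 0.5."""
    width_px = round(area.w * canvas_w)
    height_px = round(area.h * canvas_h)
    center_x = area.x + area.w / 2 - 0.5
    center_y = area.y + area.h / 2 - 0.5
    return {"size_px": (width_px, height_px), "center_norm": (center_x, center_y)}

if __name__ == "__main__":
    # Left/right two-way split on a 1920x1080 canvas.
    left = Rect(0.0, 0.0, 0.5, 1.0)
    right = Rect(0.5, 0.0, 0.5, 1.0)
    print(place_on_canvas(left, 1920, 1080))
    # {'size_px': (960, 1080), 'center_norm': (-0.25, 0.0)}
    print(place_on_canvas(right, 1920, 1080))
    # {'size_px': (960, 1080), 'center_norm': (0.25, 0.0)}
```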
- the display module 504 is specifically used to display the canvas to show the video editing image formed by the video images of the multiple video track segments according to the video split-screen template.
- the durations of the multiple video track segments formed based on the multiple video materials are consistent with the length of the timeline, and the start times of the multiple video track segments are aligned on the timeline.
- When one of the multiple video track segments is formed based on at least one video material among the multiple video materials, the video processing module 503 is specifically used to process the at least one video material by one or more video processing methods, including video speed change and insertion or splicing of video clips of specified content, to obtain a video track segment whose duration is consistent with the length of the timeline.
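- One way to read this step is that a material whose duration differs from the timeline length is brought to that length either by retiming it (video speed change) or by splicing in clips of specified content. The sketch below only computes the retime factor or the amount of padding, leaving the actual frame processing to the editing engine; its function and parameter names are assumptions.

```python
def fit_to_timeline(material_duration: float, timeline_length: float,
                    mode: str = "speed") -> dict:
    """Describe how to turn a material of `material_duration` seconds into a
    track segment exactly `timeline_length` seconds long.

    mode == "speed": retime the material (a factor below 1 slows it down,
                     stretching it to fill the timeline).
    mode == "pad":   keep the original speed and splice in `pad_seconds` of
                     clips with specified content (e.g. a still or black clip).
    """
    if material_duration <= 0 or timeline_length <= 0:
        raise ValueError("durations must be positive")
    if mode == "speed":
        return {"mode": "speed", "factor": material_duration / timeline_length}
    if mode == "pad":
        pad = max(timeline_length - material_duration, 0.0)
        return {"mode": "pad", "pad_seconds": pad}
    raise ValueError(f"unknown mode: {mode}")

if __name__ == "__main__":
    print(fit_to_timeline(5.0, 8.0, mode="speed"))  # {'mode': 'speed', 'factor': 0.625}
    print(fit_to_timeline(5.0, 8.0, mode="pad"))    # {'mode': 'pad', 'pad_seconds': 3.0}
```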
- the video processing module 503 is also used to respond to a video split-screen template switching instruction, and form a video editing image based on the video images of the multiple video track fragments according to the video split-screen template indicated by the video split-screen template switching instruction; the video split-screen template indicated by the video split-screen template switching instruction is different from the layout of the multiple split-screen areas indicated by the video split-screen template used before the switching.
- the display module 504 is further used to display the video images of the multiple video track segments according to the video editing image formed by the video split-screen template indicated by the video split-screen template switching instruction.
- the display module 504 is also used to respond to the preview playback instruction, and display the video images of the multiple video track segments based on the timeline to form a video editing image according to the video split-screen template; wherein, during the playback process, when the preview playback position is located in the timeline interval covered by the video track segments on the timeline, the video images of the video track segments are displayed in the video editing image through the corresponding split-screen area; if the preview playback position is located outside the timeline interval covered by the video track segments on the timeline, the preset background is displayed in the split-screen area corresponding to the video track segments in the video editing image.
- The video processing module 503 is also used to, in response to an adjustment instruction for a video track segment, adjust the position, direction, or size of the video image of the video track segment in the corresponding split-screen area; or exchange the split-screen areas corresponding to the video images of different video track segments; or replace the video track segment in the split-screen area; or display the video image of the video track segment mirrored in the corresponding split-screen area.
- the device provided in this embodiment can be used to execute the technical solution of any of the aforementioned method embodiments. Its implementation principle and technical effects are similar. Please refer to the detailed description of the aforementioned method embodiments. For the sake of brevity, they will not be repeated here.
- the present disclosure provides an electronic device, comprising: one or more processors; a memory; and one or more computer programs; wherein the one or more computer programs are stored in the memory; when the one or more processors execute the one or more computer programs, the electronic device implements the video material editing method of the previous embodiment.
- The present disclosure provides a chip system, which is applied to an electronic device including a memory and a sensor; the chip system includes a processor, and the processor is configured to execute the video material editing method of the foregoing embodiments.
- the present disclosure provides a computer-readable storage medium having a computer program stored thereon.
- When the computer program is executed by a processor of an electronic device, the video material editing method of the foregoing embodiments is implemented.
- the present disclosure provides a computer program product.
- When the computer program product is run on a computer, the computer is enabled to execute the video material editing method of the foregoing embodiments.
- all or part of the functions can be implemented by software, hardware, or a combination of software and hardware.
- When implemented using software, the functions can be implemented in whole or in part in the form of a computer program product.
- the computer program product includes one or more computer instructions.
- the computer can be a general-purpose computer, a special-purpose computer, a computer network, or other programmable device.
- the computer instructions can be stored in a computer-readable storage medium.
- The computer-readable storage medium can be any available medium that can be accessed by the computer, or a data storage device, such as a server or a data center, that integrates one or more available media.
- the available medium can be a magnetic medium (for example, a floppy disk, a hard disk, a tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state drive (SSD)), etc.
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- Television Signal Processing For Recording (AREA)
- Management Or Editing Of Information On Record Carriers (AREA)
Abstract
A video material editing method and apparatus. The method includes: acquiring multiple video materials and determining a video split-screen template; displaying, on the video editing track, multiple video track segments formed based on the multiple video materials; and displaying the video editing image formed by the video images of the multiple video track segments according to the video split-screen template, so that the video images of the multiple video track segments are presented in split screens through the multiple split-screen areas of the video editing image. The timeline intervals corresponding to the multiple video track segments on the video editing track at least partially overlap, which ensures that the resulting video has a video split-screen splicing effect. The method enables quick split-screen splicing of multiple video materials through a video editing tool without manual adjustment by the user, meets the user's need to quickly splice video materials, and the automatic split-screen splicing improves video material editing efficiency, which helps improve user experience.
Description
本申请要求于2022年9月30日递交的中国专利申请第202211231854.1号的优先权,在此全文引用上述中国专利申请公开的内容以作为本申请的一部分。
本公开涉及一种视频素材剪辑方法及装置。
随着互联网技术的快速发展,用户通过电子设备中的应用程序可以对视频或者图像进行剪辑得到视觉效果丰富的视频。在视频剪辑场景中,用户常常想要将多个视频素材进行拼接,以在屏幕中同时展示多个视频素材的内容。目前,用户是通过手动对视频素材进行排版从而得到想要的拼接效果,采用这种方式视频剪辑效率较低。
发明内容
为了解决上述技术问题,本公开提供了一种视频素材剪辑方法及装置。
本公开提供了一种视频素材剪辑方法,包括:
获取多个视频素材;
确定视频分屏模板,所述视频分屏模板用于指示位于同一视频图像中的多个分屏区域;
在视频编辑轨道上展示多个视频轨道片段,并显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像;
其中,所述多个视频轨道片段是基于所述多个视频素材所形成的,所述多个视频素材中至少一个视频素材用于形成所述多个视频轨道片段中的一个视频轨道片段,用于形成所述多个视频轨道片段中不同视频轨道片段的视频素
材是所述多个视频素材中的不同视频素材;
所述多个视频轨道片段在所述视频编辑轨道上对应的时间线区间至少是部分重叠的;
所述视频编辑图像中具有所述视频分屏模板所指示的所述多个分屏区域,所述视频编辑图像中的一个分屏区域用于展现所述多个视频轨道中的一个视频轨道片段的视频图像,所述视频编辑图像中的不同分屏区域用于展现所述多个视频轨道片段中不同视频轨道片段的图像。
在一些实施例中,还包括:基于所述多个视频轨道片段所采用的视频分屏模板,进入视频剪辑页面,并在所述视频剪辑页面中显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
在一些实施例中,进入所述视频剪辑页面后,在相应的剪辑草稿文件中,设置其中一个所述轨道视频片段为主轨道上的视频轨道片段,设置其他所有所述视频轨道片段为画中画轨道上的视频轨道片段。
在一些实施例中,所述设置其中一个所述轨道视频片段为主轨道上的视频轨道片段,设置其他所有所述视频轨道片段为画中画轨道上的视频轨道片段,包括:
所述多个视频轨道片段与所述多个视频素材一一对应时,基于获取所述多个视频素材的先后顺序,设置由第一个获取的视频素材形成的视频轨道片段为主轨道上的视频轨道片段,设置由其他视频素材分别形成的视频轨道片段为画中画轨道上的视频轨道片段。
在一些实施例中,所述显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像,包括:
获取视频剪辑对应的画布的尺寸;
基于获取所述多个视频轨道片段的视频编辑轨道信息、所述视频分屏模板所指示的多个分屏区域与视频编辑轨道之间的映射关系以及所述画布的尺寸,确定所述多个视频轨道片段的视频图像分别在所述画布中的尺寸以及位置;
基于所述多个视频轨道片段的视频图像分别在所述画布中的尺寸以及位置,将所述多个视频轨道片段的视频图像填充在所述画布中,并显示所述画
布,以展示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
在一些实施例中,基于所述多个视频素材形成的所述多个视频轨道片段的时长与所述时间线的长度一致,且所述多个视频轨道片段的起始时间在所述时间线上对齐。
在一些实施例中,基于所述多个视频素材中至少一个视频素材形成所述多个视频轨道片段中的一个视频轨道片段时,采用视频变速、插入或者拼接指定内容的视频片段中的一种或者多种视频处理方式针对所述至少一个视频素材进行处理得到时长与所述时间线的长度一致的视频轨道片段。
在一些实施例中,还包括:响应视频分屏模板切换指令,显示所述多个视频轨道片段的视频图像按照所述视频分屏模板切换指令所指示的视频分屏模板所形成的视频编辑图像;所述视频分屏模板切换指令所指示的视频分屏模板与切换之前所采用的视频分屏模板所指示的多个分屏区域的布局不同。
在一些实施例中,还包括:响应预览播放指令,基于所述时间线播放所述多个视频轨道片段的视频图像按照所述视频分屏模板形成的视频编辑图像;其中,在播放过程中,预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间时,在所述视频编辑图像中显示所述视频轨道片段的视频图像;若预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间之外,在所述视频编辑图像中所述视频轨道片段对应的分屏区域显示预设背景。
在一些实施例中,还包括:响应针对视频轨道片段的调整指令,调整所述视频轨道片段的视频图像在相应分屏区域中的位置、方向或者尺寸;或者;交换不同所述视频轨道片段的视频图像对应的分屏区域;或者;替换所述分屏区域中的视频轨道片段;或者,将所述视频轨道片段的视频图像在相应分屏区域中镜像显示。
本公开提供了一种视频素材剪辑装置,包括:
获取模块,用于获取多个视频素材;
模板确定模块,用于确定视频分屏模板,所述视频分屏模板用于指示位于同一视频图像中的多个分屏区域;
视频处理模块,用于在视频编辑轨道上展示多个视频轨道片段,并将所述
多个视频轨道片段的视频图像按照所述视频分屏模板形成视频编辑图像;
显示模块,用于显示所述视频编辑图像;
其中,所述多个视频轨道片段是基于所述多个视频素材所形成的,所述多个视频素材中至少一个视频素材用于形成所述多个视频轨道片段中的一个视频轨道片段,用于形成所述多个视频轨道片段中不同视频轨道片段的视频素材是所述多个视频素材中的不同视频素材;
所述多个视频轨道片段在所述视频编辑轨道上对应的时间线区间至少是部分重叠的;
所述视频编辑图像中具有所述视频分屏模板所指示的所述多个分屏区域,所述视频编辑图像中的一个分屏区域用于展现所述多个视频轨道中的一个视频轨道片段的视频图像,所述视频编辑图像中的不同分屏区域用于展现所述多个视频轨道片段中不同视频轨道片段的图像。
本公开提供一种电子设备,包括:存储器和处理器;
所述存储器被配置为存储计算机程序指令;
所述处理器被配置为执行所述计算机程序指令,电子设备执行所述计算机程序指令,使得所述电子设备实现如上述所述的视频素材剪辑方法。
本公开提供一种可读存储介质,包括:计算机程序指令;电子设备执行所述计算机程序指令,使得所述电子设备实现如上述所述的视频素材剪辑方法。
本公开提供一种计算机程序产品,电子设备执行所述计算机程序产品,使得所述电子设备实现如电子设备执行所述计算机程序产品,使得所述电子设备实现如上述所述的视频素材剪辑方法。
本公开实施例提供一种视频素材剪辑方法及装置,其中,该方法包括:获取多个视频素材,并确定视频分屏模板,通过在视频编辑轨道上展示基于多个视频素材所形成的多个视频轨道片段,并显示多个视频轨道片段的视频图像按照视频分屏模板所形成的视频编辑图像,以通过视频编辑图像中的多个分屏区域分屏展示多个视频轨道片段的视频图像;其中,所述多个视频素材中至少一个视频素材用于形成多个视频轨道片段中的一个视频轨道片段,用于形成不同视频轨道片段的视频素材不完全相同;在进行视频分屏拼接时,所述多个视频轨道片段在所述视频编辑轨道上对应的时间线区间至少是部分重叠的。
本公开的方法能够通过视频剪辑工具实现多个视频素材的快速分屏拼接,无需手动进行调整,满足用户想要快速拼接视频素材的视频剪辑需求,且自动进行视频分屏拼接提高了视频素材剪辑效率。
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理。
为了更清楚地说明本公开实施例,下面将对实施例中所需要使用的附图作简单地介绍,显而易见地,对于本领域普通技术人员而言,在不付出创造性劳动性的前提下,还可以根据这些附图获得其他的附图。
图1为本公开一实施例提供的视频素材剪辑方法的流程示意图;
图2为本公开示例性地示出的支持2个视频轨道片段采用左右二分屏的视频分屏模板进行视频分屏拼接的示意图;
图3为本公开另一实施例提供的视频素材剪辑方法的流程示意图;
图4A至图4I为本公开提供的人机交互界面示意图;
图5为本公开一实施例提供的视频素材剪辑装置的结构示意图。
为了能够更清楚地理解本公开的上述目的、特征和优点,下面将对本公开的方案进行进一步描述。需要说明的是,在不冲突的情况下,本公开的实施例及实施例中的特征可以相互组合。
在下面的描述中阐述了很多具体细节以便于充分理解本公开,但本公开还可以采用其他不同于在此描述的方式来实施;显然,说明书中的实施例只是本公开的一部分实施例,而不是全部的实施例。
示例性地,本公开提供一种视频素材剪辑方法及装置,通过在视频编辑轨道上展示基于多个视频素材所形成的多个视频轨道片段,并显示多个视频轨道片段的视频图像按照视频分屏模板所形成的视频编辑图像,以通过视频编辑图像中的多个分屏区域分屏展示多个视频轨道片段的视频图像;其中,所述多个视频素材中至少一个视频素材用于形成多个视频轨道片段中的一个视频
轨道片段,用于形成不同视频轨道片段的视频素材不完全相同;在进行视频分屏拼接时,多个视频轨道片段在视频编辑轨道上对应的时间线区间至少是部分重叠的,保证视频分屏拼接效果。本公开的方法能够通过视频剪辑工具实现多个视频素材的快速分屏拼接,无需手动进行调整,满足用户想要快速拼接视频素材的视频剪辑需求,且自动进行视频分屏拼接提高了视频素材剪辑效率。
此外,视频剪辑工具可以向用户提供可视化组件触发视频分屏拼接,对于用户来说操作方便,有利于提升用户的剪辑体验感受。
其中,本公开的视频素材剪辑方法由电子设备来执行。电子设备可以是平板电脑、手机(如折叠屏手机、大屏手机等)、可穿戴设备、车载设备、增强现实(augmented reality,AR)/虚拟现实(virtual reality,VR)设备、笔记本电脑、超级移动个人计算机(ultra-mobile personal computer,UMPC)、上网本、个人数字助理(personal digital assistant,PDA)、智能电视、智慧屏、高清电视、4K电视、智能音箱、智能投影仪等物联网(the internet of things,IOT)设备,本公开对电子设备的具体类型不作任何限制。其中,本公开对电子设备的操作系统的类型不做限定。例如,Android系统、Linux系统、Windows系统、iOS系统等。
基于前述描述,本公开以实施例将以电子设备为例,结合附图和应用场景,对本公开提供的视频素材剪辑方法进行详细阐述。
图1为本公开一实施例提供的视频素材剪辑方法的流程示意图。请参阅图1所示,本实施例的方法包括:
S101、获取多个视频素材。
一些实施例中,电子设备可向用户展示素材选择页面,其中,素材选择页面用于聚合展示可供用户选择进行编辑的视频素材的标识信息,例如以缩略图的方式展示电子设备的相册中的视频以及图像,电子设备可以基于用户的选择获取多个视频素材。
其中,本公开对于用户选择的多个视频素材的数量、视频内容以及视频素材的格式、分辨率等等参数均不做限定。
一些实施例中,为了保证视频分屏拼接的效果,可以设置最少需要获取的视频素材的数量,例如,最少需要获取2个视频素材,若用户选择的视频素
材数量少于最少需要获取的视频素材的数量,电子设备可以向用户展示提示信息,提示用户增加视频素材。
另一些实施例中,为了避免过多数量的视频素材所形成视频轨道片段数量过多,分屏拼接可能会导致分屏拼接形成的视频编辑图像的画面混乱,拼接效果较差,因此,可以设置最多可以获取的视频素材数量,当超过最多可以获取的视频素材数量时,电子设备也可以向用户展示提示信息,提示用户删减视频素材,当然,也可以不限制最多可以获取的视频素材数量。
S102、确定视频分屏模板,所述视频分屏模板用于指示位于同一视频图像中的多个分屏区域。
电子设备可以响应用户输入的视频分屏拼接指令确定视频分屏模板,或者,电子设备也可以在用户开始进行剪辑创作但未选择视频素材之前先确定视频分屏模板。
在一些实施例中,电子设备可以在向用户展示的页面中显示目标控件,用户可通过操作目标控件向电子设备输入视频分屏拼接指令。示例性地,目标控件可以但不限于展示在素材选择页面中。其中,目标控件可以通过任意方式实现,本公开对于目标控件的尺寸、位置、样式、效果等参数不做限定。例如,目标控件的样式可采用文字、字母、数字、图标、图片等显示形式。
另一些实施例中,电子设备向用户展示弹窗询问用户是否需要进行视频素材分屏拼接,基于用户确认,向电子设备输入视频分屏拼接指令,若用户选择跳过,则可以按照电子设备中的视频剪辑工具中现有的方式基于获取的多个视频素材进入视频剪辑页面进行视频素材剪辑。当然,也不限于用户通过语音、手势、组合手势等其他方式向电子设备输入视频分屏拼接指令。
其中,本步骤确定的视频分屏模板所指示的分屏区域的数量可以与用户选择的视频素材的数量一致,也可以不一致,本公开对此不做限定。一些实施例中,可以基于视频素材的数量匹配分屏数量一致的视频分屏模板;另一些实施例中,也可以基于预设的方式或者随机选定或者基于视频素材形成视频轨道片段的策略确定视频分屏模板。
本公开对于确定视频分屏模板的实现方式不做限定。
此外,还需要说明的是,本实施例中步骤S101与步骤S102的执行顺序
可以不分先后。
电子设备中可以预先存储支持多个不同分屏区域数量的视频分屏模板组成视频分屏模板集合,当接收到视频分屏拼接指令时,基于本次视频剪辑用户选择的视频素材的数量从视频分屏模板集合中进行匹配,匹配成功的多个候选拼接模板中确定要使用的视频分屏模板。其中,可以基于候选拼接模板的使用次数、收藏次数或者用户在历史视频剪辑中使用的视频分屏模板的信息中的一项或多项确定要使用的视频分屏模板,或者,也可以采用随机的方式确定任意一个候选拼接模板为要使用的视频分屏模板。
另一些实施例中,也可以预先指定视频分屏模板,例如,可以预先指定2分屏、3分屏、4分屏分别对应的视频分屏模板,当接收到视频分屏拼接指令,可以基于确定的分屏数量将预先指定的相应分屏数量的视频分屏模板确定为要使用的视频分屏模板。
S103、在视频编辑轨道上展示多个视频轨道片段,并显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
获取多个视频素材以及确定了视频分屏模板之后,可以进入分屏拼接页,在进入分屏拼接页时,可以基于多个视频素材形成与分频分屏模板指示的分屏区域数量一致的视频轨道片段,并在视频编辑轨道上展示多个视频轨道片段以及在分屏拼接页中展示多个视频轨道片段的视频图像按照视频分屏模板所形成的视频编辑图像。
其中,多个视频轨道片段是基于多个视频素材形成的,且多个视频素材中至少一个视频素材用于形成多个视频轨道片段中的一个视频轨道片段。本公开对于通过视频素材形成视频轨道片段的具体实现方式不做限定。例如,可以不对视频素材进行任何视频处理生成与视频素材相对应的视频轨道片段;也可以通过对用于形成一视频轨道片段的至少一个视频素材进行拼接、变速、重复循环、插入指定内容的视频片段等一种或多种视频处理方式进行处理得到相应的视频轨道片段。且形成的多个视频轨道片段的时长可以完全相同,也可以不完全相同,也可以完全不同,本公开对此不做限定。
此外,用于形成多个视频轨道片段中不同视频轨道片段的视频素材是多个视频素材中的不同视频素材,即,形成不同视频轨道片段的视频素材可以是
多个视频素材中完全不同的视频素材,也可以是不完全相同的视频素材。例如,有视频素材1至3,可以基于视频素材1至3分别形成视频轨道片段1至3,该方式即形成不同视频轨道片段的视频素材是完全不同的视频素材;或者,也可以基于视频素材1和2用于形成视频轨道片段1,基于视频素材2和视频素材3用于形成视频轨道片段2,该方式即形成不同视频轨道片段的视频素材是不完全相同的视频素材。
在视频剪辑中,基于多个视频素材形成的多个视频轨道片段展示于相应的视频编辑轨道上,其中,一个视频编辑轨道对应展示一个视频轨道片段,且本方案可以包括多个视频编辑轨道,视频编辑轨道的数量可以与视频轨道片段的数量一致。
本实施例中,多个视频轨道片段在视频编辑轨道上对应的时间线区间至少是部分重叠的。其中,视频编辑轨道的时间线长度可以由时长最长的视频轨道片段决定。示例性地,若多个视频轨道片段的时长一致,且在视频编辑轨道上对应的时间线区间是对齐的,则可以理解为多个视频轨道片段在时间维度上的完全重叠的。若多个视频轨道的时长不一致,或者,多个视频轨道片段的时长一致,但在视频编辑轨道上不完全对齐,则可能存在视频剪辑轨道上不同时刻位置发生重叠的视频轨道片段不同。因此,通过保证多个视频轨道片段在视频编辑轨道上对应的时间线区间至少部分重叠,能够保证进行视频分屏拼接在时间线区间重叠的范围内是具有明显的分屏拼接效果的。
电子设备可以向用户展示分屏拼接页,并在分屏拼接页中显示视频编辑图像,使得用户能够通过分屏拼接页预览视频素材进行分屏拼接的拼接效果。其中,视频编辑图像中具有视频分屏模板指示的多个分屏区域,视频编辑图像中的一个分屏区域用于展现多个视频轨道中的一个视频轨道片段的视频图像,且视频编辑图像中的不同分屏区域用于展现多个视频轨道片段中不同视频轨道片段的图像。也可以理解为,视频编辑图像中的多个分屏区域是与多个视频轨道片段之间一一对应的。其中,视频编辑图像的尺寸可能与分屏拼接页中展示视频编辑图像的显示区域的尺寸不同,可以通过对视频编辑图像的尺寸进行调整适配相应的显示区域的尺寸,保证视频编辑图像能够完整地显示在电子设备的显示屏幕上。
由于多个视频轨道片段的时长可能存在差异以及多个视频轨道片段在视频编辑轨道上的时间线区间的对齐情况,可能导致不同预览显示位置上发生重叠的视频轨道片段有差异,例如,一些预览显示位置上所有的视频轨道片段均重叠,一些预览显示位置上仅有部分视频轨道片段发生重叠。
因此,在显示视频编辑图像时,可以基于下述方式实现:
若时间线上预览显示位置位于视频轨道片段相对应的时间线区间中,则视频编辑图像中视频轨道片段对应的分屏区域显示视频轨道片段在相应预先显示位置对应的视频图像;若时间线上预览显示位置位于视频轨道片段相对应的时间线区间之外,则视频编辑图像中视频轨道片段对应的分屏区域显示预设背景,例如黑色背景,呈现黑屏显示效果,当然,也可以显示其他设定的背景,例如其他纯色背景或者带图案的背景。
本公开的方法能够通过视频剪辑工具实现基于多个视频素材形成的多个视频轨道片段的快速分屏拼接,无需手动进行调整,满足用户想要快速视频分屏拼接的剪辑需求,且自动进行视频分屏拼接提高了视频素材剪辑效率。
在图1所示实施例的基础上,显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像可以通过如下方式实现:
步骤a1、获取视频剪辑对应的画布的尺寸。
步骤a2、基于获取所述多个视频轨道片段的视频编辑轨道信息、所述视频分屏模板所指示的多个分屏区域与视频编辑轨道之间的映射关系以及所述画布的尺寸,确定所述多个视频轨道片段的视频图像分别在所述画布中的尺寸以及位置。
在一些实施例中,基于多个视频轨道片段与视频分屏模板所指示的多个分屏区域之间的对应关系,确定多个视频轨道片段分别对应的分屏区域;画布的尺寸与视频图像的尺寸可以是一致,因此,视频分屏模板所指示的各分屏区域可以映射在画布的不同画布区域中,分屏区域与画布区域之间具有对应关系,确定各视频轨道片段对应的分屏区域相当于确定了各视频轨道片段对应的画布区域,视频轨道片段所对应的分屏区域的尺寸和位置即为视频轨道片段的视频图像分别在画布中的尺寸以及位置。
步骤a3、基于所述多个视频轨道片段的视频图像分别在所述画布中的尺
寸,将所述多个视频轨道片段的视频图像的尺寸调整为与相应分屏区域大小一致,之后,基于视频轨道片段的视频图像在画布中的位置将调整尺寸之后的视频图像填充在所述画布中。
步骤a4、显示所述画布,以展示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
其中,填充有多个视频轨道片段的视频图像的画布的尺寸可能与电子设备的显示屏幕中用于展示视频编辑图像的屏幕区域尺寸不一致,在显示时,可以对画布的尺寸以及画布上的视频图像的尺寸进行调整,以适配相应屏幕区域的和尺寸,需要说明的是,此处调整画布的尺寸以及视频图像的尺寸是用于适配显示,不影响在视频分屏拼接以及整个剪辑完成导出剪辑好的目标视频的尺寸。参照图2所示,以2个视频轨道片段进行视频分屏拼接为例,假设相应的视频分屏模板为左右二分屏的视频分屏模板,左右二分屏的视频分屏模板示例性如图2中的方框S1所示,将视频轨道片段1和视频轨道片段2分屏拼接时,按照视频轨道片段1和2分别对应的视频编辑轨道顺序与分屏拼接模板中分屏区域S1a和S1b之间的对应关系,可以确定视频轨道片段1对应左侧的分屏区域S1a和视频轨道片段2对应右侧的分屏区域S1b。假设,视频剪辑的画布的尺寸与视频分屏模板的尺寸一致,接下来,基于分屏区域S1a的尺寸和位置,确定视频轨道片段1的视频图像在画布中的尺寸和位置,其中,视频轨道片段1的视频图像的高度与画布的高度一致,视频轨道片段1的视频图像的宽度等于画布的宽度的一半;再基于分屏区域S1a的位置,确定视频轨道片段1的视频图像在画布中的位置,假设画布中心为原点,沿横坐标轴和纵坐标轴,归一化的坐标范围均为-0.5至0.5,一些位置的坐标点如图2中所示,视频轨道片段1的中心在画布中的坐标位置为(-0.25,0)。对于视频轨道片段2采用类似的方式处理,可以确定视频轨道片段2的视频图像在画布中的高度与画布一致,宽度为画布的宽度的一半、视频素材2的中心在画布中的归一化坐标位置为(0.25,0)。之后,将调整尺寸的视频轨道片段1和2的视频图像按照各自的中心在画布中的归一化坐标位置填充至画布中即可。
需要说明的是,图2是以2个视频轨道片段采用左右二分屏的视频分屏
模板为例进行示例说明,针对更多数量的视频轨道片段采用视频分屏模板实现视频分屏拼接的处理过程类似。
图3为本公开另一实施例提供的视频素材剪辑方法的流程示意图。
参照图3所示,可选地,在图1所示实施例的基础上,S103之后还包括:
S104、响应视频分屏模板切换指令,显示所述多个视频轨道片段的视频图像按照所述视频分屏模板切换指令所指示的视频分屏模板所形成的视频编辑图像。
其中,视频分屏模板切换指令所指示的视频分屏模板与切换之前所采用的视频分屏模板所指示的多个分屏区域的布局不同,但支持的分屏区域的数量是相同的。其中,视频分屏模板所指示的多个分屏区域的布局可以通过多个分屏区域的尺寸以及位置来体现。
电子设备通过响应视频分屏模板切换指令,将视频分屏模板切换指令所指示的视频分屏模板应用于多个视频轨道片段,基于多个视频轨道片段的视频图像按照视频分屏模板切换指令所指示的视频分屏模板形成视频编辑图像,并展示在分屏拼接页中,使得用户预览视频分屏拼接效果。其中,为多个视频轨道片段应用视频分屏模板切换指令所指示的视频分屏模板形成视频编辑图像的实现方式以及展示视频编辑图像的实现方式与图1所示的实施方式类似,可参照图1所示实施例的详细描述,简明起见,此处不再赘述。
其中,电子设备可以在分屏拼接页中展示一个或者多个视频分屏模板供用户选择,响应用户针对其中任一视频分屏模板的触发操作,获取视频分屏模板切换指令。
本公开通过向用户展示可供选择的其他视频分屏模板,并支持用户切换视频分屏模板,更换视频轨道片段的图像视频基于视频分屏模板形成的视频编辑图像呈现的分屏拼接效果,满足用户的剪辑需求。
参照图3所示,可选地,在图1所示实施例的基础上,S103之后,还包括:
S105、响应预览播放指令,基于所述时间线播放所述多个视频轨道片段的视频图像按照所述视频分屏模板形成的视频编辑图像。
在一些实施例中,在分屏拼接页中可以展示播放控件,用户可以通过操作
播放控件,向电子设备输入预览播放指令,电子设备响应预览播放指令,基于所述时间线播放所述多个视频轨道片段的视频图像按照所述视频分屏模板形成的视频编辑图像,其中,预览播放呈现的效果是多个视频轨道片段在各自对应的分屏区域中同时播放。
其中,在播放过程中,预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间时,在所述视频编辑图像中通过相应分屏区域显示所述视频轨道片段的视频图像;若预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间之外,在所述视频编辑图像中所述视频轨道片段对应的分屏区域显示预设背景。
结合图2所示的视频分屏模板为例,假设视频轨道片段1的时长为5秒,视频轨道片段2的时长为8秒,视频轨道片段1和2在视频编辑轨道的时间线区间是对齐的,则在时间线上的0至5秒内2个视频轨道片段在各自对应的分屏区域内同时播放,在第5至8秒,由于视频轨道片段1的时长不够,其对应的时间线区间未覆盖时间线上的第5至8秒,则第5至8秒这一时间段内,分屏区域S1a显示黑色背景,分屏区域S1b中显示视频轨道片段2的视频图像。在播放结束后,自动定位至时间线的起始时间位置。
此外,在预览播放过程中,用户的一些触发操作可以中断预览播放,例如,在播放过程中点击切换选中的视频轨道片段、切换视频分屏或者点击分屏拼接页中用于显示视频编辑图像(即预览播放画面)的显示区域等等,则会触发暂停播放,暂停播放呈现的视觉效果是指多个视频轨道片段同时暂停播放。在此基础上,若用户再次触发播放控件,则会从触发中断的预览显示位置继续播放。
此外,在预览播放过程中,用户再次触发播放控件也能够控制暂停播放。
本公开通过支持用户在分屏拼接页中预览播放视频分屏拼接效果,用户通过预览也可以清楚了解到按照当前的视频分屏方式导出的视频的剪辑效果是否符合预期。
参照图3所示,可选地,在图1所示实施例的基础上,S103之后,还包括:
S106、响应针对视频轨道片段的调整指令,调整视频轨道片段的视频图
像在相应分屏区域中的位置、方向或者尺寸;或者,交换不同视频轨道片段的视频图像对应的分屏区域;或者;替换视频轨道片段;或者,将视频轨道片段的视频图像在相应分屏区域中镜像显示。
其中,基于不同触发方式输入的调整指令对应如上所示的不同的调整方式。例如,在分屏拼接页中展示视频编辑图像的显示区域中选中要调整的视频轨道片段后,可以通过手势或者手势组合控制调整选中的视频轨道片段的视频图像在相应分屏区域中的位置、方向或者尺寸;选中要调整的视频轨道片段后,可以通过拖动视频轨道片段至其他视频轨道片段所对应的分屏区域中并松手,可以实现交换两个视频轨道片段对应的分屏区域;选中要调整的视频轨道片段后,可以通过触发电子设备显示相应地功能面板,基于功能面板中提供的控件触发替换视频轨道片段或者将视频轨道片段的视频图像在相应分屏区域中镜像显示。需要说明的是,如上所示的调整可以但不限于是针对视频轨道片段的所有视频图像的。
当然,触发不同调整可以但不限于通过如上示例实现,例如,对于视频轨道片段的视频图像在分屏区域中的位置、尺寸以及方向的调整也可以通过分屏拼接页中的相应控件触发。
本公开通过支持用户在视频分屏拼接过程中对视频轨道片段进行如上一种或多种调整,得到满足用户预期的视频拼接效果,满足用户的视频剪辑需求。
需要说明的是,用户可以选择执行步骤S104至步骤S106中的任一步骤,也可以根据需求重复执行其中的一个或者多个步骤,当用户执行其中多个步骤时,多个步骤的先后顺序不做限定,例如,用户可以执行步骤S105进行预览,之后再执行步骤S106对视频轨道片段进行调整,然后再执行步骤S105进行预览播放。
在图1以及图3所示的步骤S104至S106的基础上,用户确定视频分屏拼接效果满足预期,电子设备可以进入视频剪辑页面,执行其他剪辑操作、导出视频操作等等。参照图3所示,可选地,还可以包括:
S107、基于所述多个视频轨道片段所采用的视频分屏模板,进入视频剪辑页面,并在视频剪辑页面中显示所述多个视频轨道片段的视频图像按照所
述视频分屏模板所形成的视频编辑图像。
分屏拼接完成后,可以响应用户的触发操作将多个视频轨道片段以及多个视频轨道片段对应的视频分屏拼接的信息导入相应的剪辑草稿文件中,并展示相应的剪辑页面,在剪辑页面的预览区域中显示多个视频轨道片段的视频图像按照视频分屏模板形成视频编辑图像。
且在相应剪辑草稿文件中,设置多个视频轨道片段中的一个轨道视频片段为主轨道上的视频轨道片段,设置除主轨道上的视频片段之外的其他所有视频轨道片段为画中画轨道上的视频轨道片段。
在一些实施例中,基于多个视频素材形成与视频素材一一对应的视频轨道片段,则可以在剪辑草稿文件中,基于获取多个视频素材的先后顺序设置由第一个获取的视频素材形成的视频轨道片段为主轨道上的视频轨道片段,设置由其他视频素材分别形成的视频轨道片段为画中画轨道上的视频轨道片段。
为多个视频轨道片段赋予主轨道以及画中画轨道,在视频剪辑工具中可以利用视频剪辑工具现有的逻辑框架,按照画中画的处理逻辑响应用户后续的剪辑操作。
需要说明的是,在进入剪辑页面之后,用户可以通过剪辑页面执行新的剪辑操作,之后再导出剪辑好的目标视频,或者,也可以不执行新的剪辑操作导出剪辑好的目标视频。
为了更加清楚地介绍本公开提供的视频素材剪辑方法,接下来,结合图4A-图4I,介绍本公开的视频素材剪辑方法的具体实现过程。为了便于说明,图4A-图4I中,以电子设备为手机,手机中安装有支持视频剪辑功能的APP1(简称应用1)。
请参阅图4A-图4I,图4A-图4I为本公开实施例提供的人机交互界面示意图。
应用1可以在手机上显示如图4A所示的用户界面11,用户界面11用于显示应用1的素材选择页面,可以通过应用1的主页面中提供的创作入口进入素材选择页面,或者,可以通过应用1提供的其他入口/路径进入素材选择页面,本公开对于进入素材选择页面的方式不做限定。其中,应用1可以在素材选择页面向用户聚合展示素材的标识、基于用户操作选中素材、触发分屏
拼接、进入剪辑工程等等。
参照图4A所示,用户界面11包括:区域101、控件102以及控件103。
其中,区域101中可以展示相册中包含的图像、照片、视频等素材的缩略图,且可以按照不同类别展示,在区域101中设置多个标签,基于选中的标签聚合展示相应标签下的素材的缩略图,由于区域101尺寸的限制,可能一次无法显示全部的素材的缩略图,用户可以通过上下滑动操作查看更多素材。区域101中,每个素材对应一展示区域a1,区域a1中展示相应素材的缩略图,如果是视频素材,还可以在区域a1中显示视频素材的时长。当用户选中某个素材,可以在区域a1中显示选中标记以及顺序信息,且被用户选中的素材所对应的区域a1的显示样式(如颜色、区域a1的边缘线条、亮度等等)可以不同,以区分用户选中的素材和未选中的素材。
控件102用于触发针对获取的视频素材进行视频分屏拼接,因此,控件102也可以理解为是进入分屏拼接页的入口。控件102的名称可以为“拼接”。
控件103用于触发将选中的视频素材导入剪辑草稿中并进入剪辑页面中进行视频素材剪辑,其中,应用1可以在剪辑页面中对导入的素材执行一个功能集合,例如,添加特效、贴纸、画中画、文本、音频等等。
其中,本公开对于控件102和控件103的显示样式不作限定。
需要说明的是,若用户选择素材为图像或者照片,在基于控件102进入分屏拼接页或者直接通过控件103进入视频剪辑页时,应用1可基于图像或者照片生成预设时长的视频素材,本公开对于预设时长不限定,例如,预设时长可以为2秒、3秒等。
假设在素材选择页面中,用户选择了4个视频素材,且在进入分屏拼接页时,应用1基于4个视频素材形成与4个视频素材一一对应的视频轨道片段1至4,此处以未进行任何视频处理为例进行说明,因此,形成的视频轨道片段1至4可以理解为是原始的视频素材1至4。
在应用1接收到用户在图4A所示的用户界面11中执行如点击控件102的操作后,应用1基于用户的操作生成视频分屏拼接指令,并响应视频分屏拼接指令,应用1在手机上示例性地显示如图4B所示的用户界面12,用户界面12用于显示分屏拼接页,在分屏拼接页可以预览视频分屏拼接效果以及
对视频分屏拼接效果进行调整。
用户界面12中包括:区域104、区域105、区域106、控件107以及控件108。
其中,区域104为多个视频素材的视频图像按照应用1确定的视频分屏拼接模板形成的视频编辑图像的预览区域。假设,应用1自动为多个视频素材匹配视频分屏模板1,如图4B所示,视频分屏模板1所指示的4个分屏区域尺寸相同,且按照2行2列排列,视频素材1对应第1行第1列的分屏区域,视频素材2对应第1行第2列的分屏区域,视频素材3对应第2行第1列的分屏区域,视频素材4对应第2行第2列的分屏区域。
区域105用于展示进度条。其中,进度条能够体现播放进度,还能够示出视频编辑轨道的时间线。且进度条支持用户手动拖动。
区域106中可以包括标签109,其中,标签109用于触发在区域106中显示更多支持4个视频素材的视频分屏模板,当用户点击标签109,区域106中可以显示预设数量个支持4个视频素材的视频分屏模板,按照设定的方式排列显示,例如按照从左向右的方式水平排列显示,且默认定位至当前所采用的视频分屏模板1上,视频分屏模板1可以显示在区域106的最左侧位置。示例性地,如图4B所示,区域106中展示4个视频分屏模板,分别为视频分屏模板1至4,默认选中视频分屏模板1,假设用户点击视频分屏模板2,应用1可以示例性地在手机上显示如图4C所示的用户界面13,用户界面13中区域104展示画布,画布上4个视频素材的视频图像按照视频分屏模板2所指示多个分屏区域形成视频编辑图像,并将相应的视频编辑图像显示在区域104中,且区域106中视频分屏模板2为选中状态。在区域106中展示分屏区域数量与用户选择的视频素材数量一致的视频分屏模板,能够方便用户快速选择切换自己想要使用的视频分屏模板,有利于提升编辑效率。
需要说明的是,区域106中还可以包括其他标签,例如,区域106中名称为“AA”的标签,该标签可以对应指定的功能面板,此处不做具体限定,在分屏拼接页中设置何种功能的标签可根据需求而定。
控件107为退出分屏拼接页的入口,应用1接收到用户针对控件107的触发操作,可以返回显示如图4A所示的素材选择页面,素材选择页面为进入
前状态,之前选中的素材仍为选中状态。用户可以多次通过控件102进入分屏拼接页,再通过控件107退出分屏拼接页返回至素材选择页面。
控件108为播放控件,能够控制区域104中按照时间线展示多个视频素材按照视频分屏模板形成的视频编辑图像,从而呈现出各视频素材在各自对应的分屏区域中同时播放或者暂停的效果。在一些实施例中,由素材选择页面进入图4B所示的分屏拼接页,播放进度默认为第一帧的时间位置,且处于暂停播放状态。用户可以通过拖动区域105中的进度条重新定位预览显示位置,并通过操作控件108控制4个视频素材在各自对应的分屏区域同时播放/暂停。由于多个视频素材时长不同,其中,时长短的视频素材播放结束后相应的分屏区域可以但不限于黑屏显示。
请继续参阅图4B所示,区域104中展示不同视频素材对应的分屏区域是可操作的,用户通过视频素材对应的分屏区域选中视频素材以及可以对区域104中的视频素材进行调整。
在一些实施例中,用户可以通过点击区域104中视频素材对应的分屏区域选中视频素材,区域104中被选中的视频素材的显示样式可以与其他未被选中的视频素材的显示样式不同,例如,被选中的视频素材可以半透明显示,针对选中的视频素材通过不同的操作,可以执行如前所述的任一调整。
一、调整视频素材的视频图像在相应分屏区域中的位置
假设,长按选中视频素材并在该视频素材对应的分屏区域内拖动,可以移动视频素材的视频图像在相应分屏区域的位置,当视频素材拖动到达分屏区域边界位置,应用1可以通过震动或者文字或者其他方式提示用户。例如图4D所示的用户界面14,用户通过点击视频素材2分屏区域选中视频素材2,并沿箭头所示的方向拖动视频素材2,移动视频素材2的视频图像在相应分屏区域中向下,其中,移动至分屏区域之外的视频素材2的图像区域部分可以不显示在区域104中。
二、调整视频素材在相应分屏区域中的尺寸
假设,用户通过双指同时长按选中视频素材并在该视频素材对应的区域内通过双指缩放可以调整视频素材在分屏区域中的尺寸,当视频素材的边界与相应分屏区域尺寸对齐时,应用1可以通过震动或者文字或者其他方式提
示用户。例如图4E所示的用户界面15,用户通过点击视频素材2对应的分屏区域选中视频素材2,通过双指操作缩小视频素材2的视频图像在相应分屏区域中的尺寸。(放大视频素材的实现方式类似)。
三、交换不同视频素材的分屏区域
长按选中视频素材并拖动视频素材至其他视频素材的分屏区域的位置,松开手指,触发分屏区域交换,并在区域104中展示交换分屏区域后的视频分屏拼接效果,交换分屏区域后视频素材自适应相应的分屏区域的尺寸。其中,选中某个视频素材时,该视频素材对应的图像区域边缘框可以高亮,当拖动选中的视频素材到其他分屏区域位置时,手指拖动的视频素材对应的视频图像区域边缘框始终保持高亮显示,松手触发视频素材交换分屏区域或视频素材回归原始的分屏区域时,相应视频素材的视频图像区域边缘框高亮消失。
需要说明的是,还可以采用其他显示样式突出显示用户选中的调整分屏区域的视频素材,并不限于图像区域边框高亮的方式,例如,还可以通过为选中的视频素材添加具有特定效果的蒙层的方式突出显示。
例如图4F所示的用户界面16以及图4G所示的用户界面17,其中,参阅用户界面16所示,用户通过长按视频素材2对应的分屏区域选中视频素材2,通过拖动视频素材2至视频素材1对应的分屏区域的位置,触发视频素材1和视频素材2交换分屏区域,用户界面16示出的状态为触发交换分屏区域但未松开手指的状态,视频素材1移动至视频素材2的分屏区域的位置,但视频素材2还未显示在视频素材1的分屏区域中。用户松开手指后,如用户界面17所示,视频素材1和视频素材2成功交换分屏区域,且成功交换分屏区域后,2个视频素材自适应调整尺寸适应分屏区域的尺寸。
请继续参阅图4B所示,用户通过单击视频素材的分屏区域时可以触发显示功能面板,功能面板可以用于执行一些针对视频素材的编辑操作,例如,素材、垂直翻转、水平翻转以及旋转等等。
假设应用1接收到针对区域104中视频素材2对应的分屏区域的触发操作(如单击操作),则应用1可以示例性地显示如图4H所示的用户界面18,用户界面18中包括:视频素材2对应的功能面板a2,功能面板a2中可以包括:控件110至控件113。
控件110,用于触发替换视频素材,用户操作控件110(如点击)后,应用1可以进入素材选择页面,使得用户在素材选择页面中选择另一段素材(视频素材或者基于图像/照片生成的视频素材)替换视频素材2,其中,本公开可以允许视频素材重复导入。
控件111用于触发视频素材的视频图像在相应分屏区域中垂直翻转,应用1在接收到用户针对控件111的触发操作(如点击操作),应用1在视频素材2对应的分屏区域内将视频素材2的视频图像垂直翻转,呈现左右镜像对称显示。
控件112用于触发视频素材的视频图像在相应分屏区域中水平翻转,应用1在接收到用户针对控件112的触发操作(如点击操作),应用1在视频素材2对应的分屏区域内将视频素材2的视频图像水平翻转,呈现水平镜像对称显示。
控件113用于触发视频素材的视频图像在分屏区域中旋转,旋转角度可以为预先设定的角度,例如,每触发一次控件113,选中的视频素材向左旋转90度或者也可以向右旋转90度。
应理解,功能面板中还可以包括其他触发调整的控件,并不限于此处示例的控件110至控件113。
继续参阅图4B所示的用户界面12,还包括:导入控件114。应用1接收到针对导入控件114的触发操作,应用1将多个视频素材以及当前的采用的视频分屏模板的信息导入剪辑草稿文件中进行记录,并跳转至剪辑页面,在剪辑页面的预览区域中显示多个视频素材的视频图像按照当采用的视频分屏模板形成的视频编辑图像。
其中,在剪辑草稿文件中,为多个视频素材分别赋予素材轨道信息,用户选择的第1个视频素材,即视频素材1为主轨道上的视频素材,其余视频素材为画中画轨道上的视频素材(也可以理解为主轨素材),即视频素材2至3为画中画轨道上的视频素材(也可以理解为画中画素材),且多个画中画轨道的顺序按照获取相应视频素材的顺序排列。
假设在视频剪辑过程中用户未切换视频分屏模板,应用1接收到针对导入控件114的触发操作,示例性地在手机上显示如图4I所示的用户界面19,
用户界面19包括的预览区域115中显示视频素材1至4的视频图像按照视频分屏模板1形成的视频编辑图像。之后,应用1可以在剪辑页面中按照主轨道素材以及画中画轨道素材的方式响应用户的剪辑操作;例如,可以删除某个画中画素材,该画中画素材对应的分屏区域显示预设背景;又如,新增画中画,若是在已有的画中画轨道中新增画中画素材,则该画中画素材填充至已有的画中画轨道的分屏区域,但在视频编辑轨道上覆盖不同的时间线区间;若是新增画中画轨道,则新增画中画素材覆盖指定的分屏区域中显示。
在视频分屏拼接完成进入剪辑页面后,通过复用应用1支持的画中画处理逻辑,用户依然可以基于需求重新手动调整画面大小,无需在视频剪辑页中再单独设置视频分屏拼接的处理逻辑,在满足用户的视频分屏拼接剪辑需求的基础上还能够降低应用1的处理逻辑复杂度。
需要说明的是,用户也可以在分屏拼接页中预览视频分屏拼接效果以及对视频素材进行调整之后再通过操作导入控件114进入剪辑工程,即可以通过图4C至图4I任一实施例所示的用户界面中显示的导入控件进入剪辑页面。
图5为本公开一实施例提供的视频素材剪辑装置的结构示意图。请参阅图5所示,本实施例提供的装置500包括:
获取模块501,用于获取多个视频素材。
模板确定模块502,用于确定视频分屏模板,所述视频分屏模板用于指示位于同一视频图像中的多个分屏区域。
视频处理模块503,用于在视频编辑轨道上展示多个视频轨道片段,并将所述多个视频轨道片段的视频图像按照所述视频分屏模板形成视频编辑图像。
显示模块504,用于显示所述视频编辑图像。
其中,所述多个视频轨道片段是基于所述多个视频素材所形成的,所述多个视频素材中至少一个视频素材用于形成所述多个视频轨道片段中的一个视频轨道片段,用于形成所述多个视频轨道片段中不同视频轨道片段的视频素材是所述多个视频素材中的不同视频素材;
所述多个视频轨道片段在所述视频编辑轨道上对应的时间线区间至少是部分重叠的;
所述视频编辑图像中具有所述视频分屏模板所指示的所述多个分屏区域,
所述视频编辑图像中的一个分屏区域用于展现所述多个视频轨道中的一个视频轨道片段的视频图像,所述视频编辑图像中的不同分屏区域用于展现所述多个视频轨道片段中不同视频轨道片段的图像。
在一些实施例中,视频处理模块503,还用于基于所述多个视频轨道片段所采用的视频分屏模板,进入视频剪辑页面,并在所述视频剪辑页面中显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
在一些实施例中,所述视频处理模块503,还用于进入所述视频剪辑页面后,在相应的剪辑草稿文件中,设置其中一个所述轨道视频片段为主轨道上的视频轨道片段,设置其他所有所述视频轨道片段为画中画轨道上的视频轨道片段。
在一些实施例中,所述视频处理模块503,具体用于在所述多个视频轨道片段与所述多个视频素材一一对应时,基于获取所述多个视频素材的先后顺序,设置由第一个获取的视频素材形成的视频轨道片段为主轨道上的视频轨道片段,设置由其他视频素材分别形成的视频轨道片段为画中画轨道上的视频轨道片段。
在一些实施例中,视频处理模块503,具体用于获取视频剪辑对应的画布的尺寸;基于获取所述多个视频轨道片段的视频编辑轨道信息、所述视频分屏模板所指示的多个分屏区域与视频编辑轨道之间的映射关系以及所述画布的尺寸,确定所述多个视频轨道片段的视频图像分别在所述画布中的尺寸以及位置;基于所述多个视频轨道片段的视频图像分别在所述画布中的尺寸以及位置,将所述多个视频轨道片段的视频图像填充在所述画布中。
所述显示模块504,具体用于显示所述画布,以展示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
在一些实施例中,基于所述多个视频素材形成的所述多个视频轨道片段的时长与所述时间线的长度一致,且所述多个视频轨道片段的起始时间在所述时间线上对齐。
在一些实施例中,基于所述多个视频素材中至少一个视频素材形成所述多个视频轨道片段中的一个视频轨道片段时,所述视频处理模块503,具体用
于采用视频变速、插入或者拼接指定内容的视频片段中的一种或者多种视频处理方式针对所述至少一个视频素材进行处理得到时长与所述时间线的长度一致的视频轨道片段。
在一些实施例中,视频处理模块503,还用于响应视频分屏模板切换指令,基于所述多个视频轨道片段的视频图像按照所述视频分屏模板切换指令所指示的视频分屏模板所形成的视频编辑图像;所述视频分屏模板切换指令所指示的视频分屏模板与切换之前所采用的视频分屏模板所指示的多个分屏区域的布局不同。
显示模块504,还用于显示所述多个视频轨道片段的视频图像按照所述视频分屏模板切换指令所指示的视频分屏模板所形成的视频编辑图像。
在一些实施例中,显示模块504,还用于响应预览播放指令,基于所述时间线显示所述多个视频轨道片段的视频图像按照所述视频分屏模板形成的视频编辑图像;其中,在播放过程中,预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间时,在所述视频编辑图像中通过相应分屏区域显示所述视频轨道片段的视频图像;若预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间之外,在所述视频编辑图像中所述视频轨道片段对应的分屏区域显示预设背景。
在一些实施例中,视频处理模块503,还用于响应针对视频轨道片段的调整指令,调整所述视频轨道片段的视频图像在相应分屏区域中的位置、方向或者尺寸;或者;交换不同所述视频轨道片段的视频图像对应的分屏区域;或者;替换所述分屏区域中的视频轨道片段;或者,将所述视频轨道片段的视频图像在相应分屏区域中镜像显示。
本实施例提供的装置可以用于执行前述任一方法实施例的技术方案,其实现原理以及技术效果类似,可参照前述方法实施例的详细描述,简明起见,此处不再赘述。
示例性地,本公开提供一种电子设备,包括:一个或多个处理器;存储器;以及一个或多个计算机程序;其中一个或多个计算机程序被存储在存储器中;一个或多个处理器在执行一个或多个计算机程序时,使得电子设备实现前文实施例的视频素材剪辑方法。
示例性地,本公开提供一种芯片系统,芯片系统应用于包括存储器和传感器的电子设备;芯片系统包括:处理器;当处理器执行前文实施例的视频素材剪辑方法。
示例性地,本公开提供一种计算机可读存储介质,其上存储有计算机程序,计算机程序被处理器使得电子设备执行时实现前文实施例的视频素材剪辑方法。
示例性地,本公开提供一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行前文实施例的视频素材剪辑方法。
在上述实施例中,全部或部分功能可以通过软件、硬件、或者软件加硬件的组合来实现。当使用软件实现时,可以全部或部分地以计算机程序产品的形式实现。计算机程序产品包括一个或多个计算机指令。在计算机上加载和执行计算机程序指令时,全部或部分地产生按照本公开实施例的流程或功能。计算机可以是通用计算机、专用计算机、计算机网络、或者其他可编程装置。计算机指令可以存储在计算机可读存储介质中。计算机可读存储介质可以是计算机能够存取的任何可用介质或者是包含一个或多个可用介质集成的服务器、数据中心等数据存储设备。可用介质可以是磁性介质,(例如,软盘、硬盘、磁带)、光介质(例如,DVD)、或者半导体介质(例如,固态硬盘(solid state disk,SSD))等。
需要说明的是,在本文中,诸如“第一”和“第二”等之类的关系术语仅仅用来将一个实体或者操作与另一个实体或操作区分开来,而不一定要求或者暗示这些实体或操作之间存在任何这种实际的关系或者顺序。而且,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者设备不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者设备所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括所述要素的过程、方法、物品或者设备中还存在另外的相同要素。
以上所述仅是本公开的具体实施方式,使本领域技术人员能够理解或实现本公开。对这些实施例的多种修改对本领域的技术人员来说将是显而易见
的,本文中所定义的一般原理可以在不脱离本公开的精神或范围的情况下,在其它实施例中实现。因此,本公开将不会被限制于本文所述的这些实施例,而是要符合与本文所公开的原理和新颖特点相一致的最宽的范围。
Claims (14)
- 一种视频素材剪辑方法,包括:获取多个视频素材;确定视频分屏模板,所述视频分屏模板用于指示位于同一视频图像中的多个分屏区域;在视频编辑轨道上展示多个视频轨道片段,并显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像;其中,所述多个视频轨道片段是基于所述多个视频素材所形成的,所述多个视频素材中至少一个视频素材用于形成所述多个视频轨道片段中的一个视频轨道片段,用于形成所述多个视频轨道片段中不同视频轨道片段的视频素材是所述多个视频素材中的不同视频素材;所述多个视频轨道片段在所述视频编辑轨道上对应的时间线区间至少是部分重叠的;所述视频编辑图像中具有所述视频分屏模板所指示的所述多个分屏区域,所述视频编辑图像中的一个分屏区域用于展现所述多个视频轨道中的一个视频轨道片段的视频图像,所述视频编辑图像中的不同分屏区域用于展现所述多个视频轨道片段中不同视频轨道片段的图像。
- 根据权利要求1所述的方法,还包括:基于所述多个视频轨道片段所采用的视频分屏模板,进入视频剪辑页面,并在所述视频剪辑页面中显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
- 根据权利要求2所述的方法,其中,进入所述视频剪辑页面后,在相应的剪辑草稿文件中,设置其中一个所述轨道视频片段为主轨道上的视频轨道片段,设置其他所有所述视频轨道片段为画中画轨道上的视频轨道片段。
- 根据权利要求3所述的方法,其中,所述设置其中一个所述轨道视频片段为主轨道上的视频轨道片段,设置其他所有所述视频轨道片段为画中画轨道上的视频轨道片段,包括:所述多个视频轨道片段与所述多个视频素材一一对应时,基于获取所述 多个视频素材的先后顺序,设置由第一个获取的视频素材形成的视频轨道片段为主轨道上的视频轨道片段,设置由其他视频素材分别形成的视频轨道片段为画中画轨道上的视频轨道片段。
- 根据权利要求1至4任一项所述的方法,其中,所述显示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像,包括:获取视频剪辑对应的画布的尺寸;基于获取所述多个视频轨道片段的视频编辑轨道信息、所述视频分屏模板所指示的多个分屏区域与视频编辑轨道之间的映射关系以及所述画布的尺寸,确定所述多个视频轨道片段的视频图像分别在所述画布中的尺寸以及位置;基于所述多个视频轨道片段的视频图像分别在所述画布中的尺寸以及位置,将所述多个视频轨道片段的视频图像填充在所述画布中,并显示所述画布,以展示所述多个视频轨道片段的视频图像按照所述视频分屏模板所形成的视频编辑图像。
- 根据权利要求1至5任一项所述的方法,其中,基于所述多个视频素材形成的所述多个视频轨道片段的时长与所述时间线的长度一致,且所述多个视频轨道片段的起始时间在所述时间线上对齐。
- 根据权利要求6所述的方法,其中基于所述多个视频素材中至少一个视频素材形成所述多个视频轨道片段中的一个视频轨道片段时,采用视频变速、插入或者拼接指定内容的视频片段中的一种或者多种视频处理方式针对所述至少一个视频素材进行处理得到时长与所述时间线的长度一致的视频轨道片段。
- 根据权利要求1至7任一项所述的方法,还包括:响应视频分屏模板切换指令,显示所述多个视频轨道片段的视频图像按照所述视频分屏模板切换指令所指示的视频分屏模板所形成的视频编辑图像;所述视频分屏模板切换指令所指示的视频分屏模板与切换之前所采用的视频分屏模板所指示的多个分屏区域的布局不同。
- 根据权利要求1至8任一项所述的方法,还包括:响应预览播放指令,基于所述时间线播放所述多个视频轨道片段的视频 图像按照所述视频分屏模板形成的视频编辑图像;其中,在播放过程中,预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间时,在所述视频编辑图像中通过相应分屏区域显示所述视频轨道片段的视频图像;若预览播放位置位于时间线上所述视频轨道片段覆盖的时间线区间之外,在所述视频编辑图像中所述视频轨道片段对应的分屏区域显示预设背景。
- 根据权利要求1至9任一项所述的方法,还包括:响应针对视频轨道片段的调整指令,调整所述视频轨道片段的视频图像在相应分屏区域中的位置、方向或者尺寸;或者;交换不同所述视频轨道片段的视频图像对应的分屏区域;或者;替换所述分屏区域中的视频轨道片段;或者,将所述视频轨道片段的视频图像在相应分屏区域中镜像显示。
- 一种视频素材剪辑装置,包括:获取模块,用于获取多个视频素材;模板确定模块,用于确定视频分屏模板,所述视频分屏模板用于指示位于同一视频图像中的多个分屏区域;视频处理模块,用于在视频编辑轨道上展示多个视频轨道片段,并将所述多个视频轨道片段的视频图像按照所述视频分屏模板形成视频编辑图像;显示模块,用于显示所述视频编辑图像;其中,所述多个视频轨道片段是基于所述多个视频素材所形成的,所述多个视频素材中至少一个视频素材用于形成所述多个视频轨道片段中的一个视频轨道片段,用于形成所述多个视频轨道片段中不同视频轨道片段的视频素材是所述多个视频素材中的不同视频素材;所述多个视频轨道片段在所述视频编辑轨道上对应的时间线区间至少是部分重叠的;所述视频编辑图像中具有所述视频分屏模板所指示的所述多个分屏区域,所述视频编辑图像中的一个分屏区域用于展现所述多个视频轨道中的一个视频轨道片段的视频图像,所述视频编辑图像中的不同分屏区域用于展现所述多个视频轨道片段中不同视频轨道片段的图像。
- 一种可读存储介质,包括:计算机程序指令;电子设备执行所述计算机程序指令,使得所述电子设备实现如权利要求1至10任一项所述的视频素材剪辑方法。
- 一种电子设备,包括:存储器和处理器;所述存储器被配置为存储计算机程序指令;所述处理器被配置为执行所述计算机程序指令,电子设备执行所述计算机程序指令,使得所述电子设备实现如权利要求1至10任一项所述的视频素材剪辑方法。
- 一种计算机程序产品,电子设备执行所述计算机程序产品,使得所述电子设备实现如权利要求1至10任一项所述的视频素材剪辑方法。
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP23821110.6A EP4373104A1 (en) | 2022-09-30 | 2023-09-25 | Video material clipping method and apparatus |
US18/389,621 US20240119971A1 (en) | 2022-09-30 | 2023-12-19 | Video material editing method and apparatus |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211231854.1 | 2022-09-30 | ||
CN202211231854.1A CN117857719A (zh) | 2022-09-30 | 2022-09-30 | 视频素材剪辑方法及装置 |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US18/389,621 Continuation US20240119971A1 (en) | 2022-09-30 | 2023-12-19 | Video material editing method and apparatus |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2024067494A1 (zh) | 2024-04-04 |
Family
ID=89473796
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2023/121138 WO2024067494A1 (zh) | 2022-09-30 | 2023-09-25 | 视频素材剪辑方法及装置 |
Country Status (4)
Country | Link |
---|---|
US (1) | US20240119971A1 (zh) |
EP (1) | EP4373104A1 (zh) |
CN (1) | CN117857719A (zh) |
WO (1) | WO2024067494A1 (zh) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118474274B (zh) * | 2024-07-11 | 2024-10-29 | 广州手拉手互联网股份有限公司 | 基于人工智能的短视频生成方法及系统 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109120950A (zh) * | 2018-09-30 | 2019-01-01 | 北京金山安全软件有限公司 | 视频拼接方法、装置、终端设备和存储介质 |
CN112449232A (zh) * | 2020-10-28 | 2021-03-05 | 北京三快在线科技有限公司 | 界面展示方案的生成方法、装置、设备及存储介质 |
CN113411664A (zh) * | 2020-12-04 | 2021-09-17 | 腾讯科技(深圳)有限公司 | 基于子应用的视频处理方法、装置和计算机设备 |
CN114374872A (zh) * | 2021-12-08 | 2022-04-19 | 卓米私人有限公司 | 视频生成方法、装置、电子设备及存储介质 |
CN114450935A (zh) * | 2019-08-02 | 2022-05-06 | 黑魔法设计私人有限公司 | 视频编辑系统、方法和用户界面 |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5836770A (en) * | 1996-10-08 | 1998-11-17 | Powers; Beth J. | Multimedia product for use in physical fitness training and method of making |
KR100386579B1 (ko) * | 2000-07-18 | 2003-06-02 | 엘지전자 주식회사 | 멀티 소스용 포맷 변환 장치 |
US20080278628A1 (en) * | 2006-10-06 | 2008-11-13 | Sharp Kabushiki Kaisha | Content display device, content display method, content display system, content display program, and recording medium |
JP2012004739A (ja) * | 2010-06-15 | 2012-01-05 | Sony Corp | 情報処理装置、情報処理方法、及びプログラム |
CN113542581A (zh) * | 2020-04-22 | 2021-10-22 | 华为技术有限公司 | 多路录像的取景方法、图形用户界面及电子设备 |
- 2022
  - 2022-09-30 CN CN202211231854.1A patent/CN117857719A/zh active Pending
- 2023
  - 2023-09-25 WO PCT/CN2023/121138 patent/WO2024067494A1/zh unknown
  - 2023-09-25 EP EP23821110.6A patent/EP4373104A1/en active Pending
  - 2023-12-19 US US18/389,621 patent/US20240119971A1/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109120950A (zh) * | 2018-09-30 | 2019-01-01 | 北京金山安全软件有限公司 | 视频拼接方法、装置、终端设备和存储介质 |
CN114450935A (zh) * | 2019-08-02 | 2022-05-06 | 黑魔法设计私人有限公司 | 视频编辑系统、方法和用户界面 |
CN112449232A (zh) * | 2020-10-28 | 2021-03-05 | 北京三快在线科技有限公司 | 界面展示方案的生成方法、装置、设备及存储介质 |
CN113411664A (zh) * | 2020-12-04 | 2021-09-17 | 腾讯科技(深圳)有限公司 | 基于子应用的视频处理方法、装置和计算机设备 |
CN114374872A (zh) * | 2021-12-08 | 2022-04-19 | 卓米私人有限公司 | 视频生成方法、装置、电子设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
EP4373104A1 (en) | 2024-05-22 |
US20240119971A1 (en) | 2024-04-11 |
CN117857719A (zh) | 2024-04-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6986186B2 (ja) | 可視化編集方法、装置、デバイス及び記憶媒体 | |
US11698721B2 (en) | Managing an immersive interface in a multi-application immersive environment | |
KR101867644B1 (ko) | 멀티-애플리케이션 환경 | |
US20170024110A1 (en) | Video editing on mobile platform | |
US8261191B2 (en) | Multi-point representation | |
WO2021258821A1 (zh) | 视频编辑方法、装置、终端及存储介质 | |
US20120299968A1 (en) | Managing an immersive interface in a multi-application immersive environment | |
US20130263031A1 (en) | Dynamic user interface for previewing live content | |
WO2024067494A1 (zh) | 视频素材剪辑方法及装置 | |
US11941728B2 (en) | Previewing method and apparatus for effect application, and device, and storage medium | |
WO2022126664A1 (zh) | 视频编辑方法、终端设备及计算机可读存储介质 | |
WO2023030122A1 (zh) | 信息回复方法、装置、电子设备、可读存储介质及程序产品 | |
WO2024067692A1 (zh) | 一种信息展示方法及装置 | |
WO2022194070A1 (zh) | 应用程序的视频处理方法和电子设备 | |
JP2009129223A (ja) | 画像編集装置と画像編集プログラムと記録媒体と画像編集方法 | |
US20240295950A1 (en) | Video collection presentation method and apparatus, electronic device, and readable storage medium | |
JP2009129224A (ja) | 画像操作装置と画像操作プログラムと記録媒体と画像操作方法 | |
WO2024104468A1 (zh) | 视频剪辑方法及装置 | |
US20240170025A1 (en) | Video editing method and apparatus | |
WO2024046268A1 (zh) | 渲染层级顺序调整方法及装置 | |
WO2023066270A1 (zh) | 视频生成方法、装置、电子设备及可读存储介质 | |
US12125503B2 (en) | Method, apparatus, electronic device, and readable storage medium for video editing | |
US20230289048A1 (en) | Managing An Immersive Interface in a Multi-Application Immersive Environment | |
WO2024131648A1 (zh) | 视频剪辑方法、装置、电子设备及可读存储介质 | |
JP2024522681A (ja) | 映像生成方法、装置、デバイス及び記憶媒体 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| ENP | Entry into the national phase | Ref document number: 2023579311; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 2023821110; Country of ref document: EP; Effective date: 20231219 |