CN117201865A - Video editing method, electronic equipment and storage medium


Info

Publication number
CN117201865A
Authority
CN
China
Prior art keywords
video
frame
added
video frame
frames
Prior art date
Legal status
Pending
Application number
CN202210910809.2A
Other languages
Chinese (zh)
Inventor
韩笑
Current Assignee
Honor Device Co Ltd
Original Assignee
Honor Device Co Ltd
Application filed by Honor Device Co., Ltd.
Publication of CN117201865A

Landscapes

  • Television Signal Processing For Recording (AREA)

Abstract

An embodiment of the present application provides a video editing method, an electronic device, and a storage medium, relating to the technical field of video editing. A target video is generated according to target video frames in a first video and the video frames in the first video to which material has been added, where the target video frames exist in a second video, and the second video is the video obtained after a first operation is performed on the first video. The method provided by the embodiment of the application can determine how the video frames to which material has been added change as the first video is adjusted, so that the change of the material better matches the user's expectation, the video and the material change consistently as a whole, and the result is displayed to the user in the first interface, which improves the convenience of video editing and the user experience.

Description

Video editing method, electronic equipment and storage medium
Technical Field
The present application relates to the field of video editing technologies, and in particular, to a method for editing video, an electronic device, and a storage medium.
Background
With the rapid development of Internet technology, more and more users like to share videos on social platforms. To make a video more interesting and visually appealing and attract viewers' attention, users often edit the video with video editing software, for example adding material such as music, special effects, text, stickers, and filters to enhance the video's display effect. While editing, a user may need to adjust the video repeatedly, and after material has been added, how the material should change as the video is adjusted is a problem that currently needs to be solved.
Disclosure of Invention
The present application provides a video editing method, an electronic device, and a storage medium, which enable material to change along with adjustments to the video during editing and improve the user experience.
In a first aspect, an embodiment of the present application provides a method for editing video, including:
detecting a first operation from a first interface, where the first interface is an interface for editing video, the first operation requests deleting part of the video frames in the first video or adding video frames to the first video, and material has been added to the first video;
generating a target video according to the target video frames in the first video and the video frames to which the material has been added in the first video, where the target video frames exist in a second video, and the second video is the video obtained after the first operation is performed on the first video.
The video frames to which the material has been added in the first video are the video frames in which the material is displayed in the first video before the first operation. From the target video frames in the first video, the electronic device can determine which video frames are retained in the second video after the first operation. According to whether the target video frames in the first video include the video frames to which the material has been added, the video frames for adding the material in the second video after the first operation can be determined, and the material is then added to the determined video frames when the target video is generated. The method provided by the embodiment of the application can determine how the video frames to which the material has been added change as the first video is adjusted, and display the result to the user in the first interface, which improves the convenience of video editing and the user experience.
In one possible implementation manner, generating the target video according to the target video frames in the first video and the video frames to which the material has been added in the first video includes: determining the video frames for adding the material in the second video according to the target video frames in the first video and the video frames to which the material has been added in the first video; and generating the target video according to the second video.
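For illustration only, the bookkeeping behind this implementation can be sketched by modeling a video as frame indices and a material as an inclusive index range; MaterialSpan and the helper names below are assumptions, not terms from the patent:

```python
# Illustrative sketch only: a video is modeled as frame indices and a
# material as an inclusive index range. MaterialSpan and the helper
# names are assumptions, not terms from the patent.

from dataclasses import dataclass

@dataclass
class MaterialSpan:
    first: int  # index of the first frame to which the material is added
    last: int   # index of the last frame to which the material is added

def target_frames_after_delete(total: int, deleted: range) -> list[int]:
    """Frames of the first video that survive the first operation; these
    are the 'target video frames' that also exist in the second video."""
    return [i for i in range(total) if i not in deleted]

def retained_material_frames(span: MaterialSpan, target: list[int]) -> list[int]:
    """Intersection of the material's frames with the target video frames."""
    return [i for i in target if span.first <= i <= span.last]

# Example: a 10-frame first video with material on frames 2..6;
# the first operation deletes frames 0..3 from the head.
target = target_frames_after_delete(10, range(0, 4))
print(retained_material_frames(MaterialSpan(2, 6), target))  # [4, 5, 6]
```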
In one possible implementation manner, after the first operation is detected from the first interface, the video editing method provided by the embodiment of the application further includes: obtaining a mapping relationship between the start-stop position of the material in the first video and the video frames to which the material has been added in the first video, where the start-stop position includes a start position and an end position, and the mapping relationship is either a first mapping relationship or a second mapping relationship. The first mapping relationship is: the start position of the material changes along with the first frame of the video frames to which the material has been added in the first video. The second mapping relationship is: the start position of the material changes along with the first frame of the video frames to which the material has been added in the first video, and the end position of the material changes along with the last frame of those video frames.
The first mapping relationship is determined after a third operation is detected from the first interface, where the third operation adds the material to the first video. The second mapping relationship is determined after a second operation is detected from the first interface, where the second operation requests adjusting the start-stop position of the material in the first video. That is, after adding the material, the user may adjust its start-stop position in the first video in ways that include: adjusting a handle, splitting and then deleting, intercepting, long-pressing and moving the material, and the like.
Considering that a user who adjusts the start or end position of a material has presumably placed it where desired, the present application sets different conditions depending on whether the user has adjusted the start-stop position of the material; that is, the conditions corresponding to the first and second mapping relationships differ. Under the first mapping relationship, when video frames to which the material has been added are deleted from the first video, or video frames are added, the number of frames to which the material is added may remain unchanged while the content of the corresponding video frames changes. Under the second mapping relationship, when video frames to which the material has been added are deleted from the first video, the number of frames to which the material is added decreases while the content of the remaining corresponding video frames stays unchanged. After detecting the first operation from the first interface, the mobile phone obtains the mapping relationship between the start-stop position of the material in the first video and the video frames to which the material has been added, so as to determine which case governs how the material changes after the video is adjusted. Because the two mapping relationships correspond to different conditions, the same first operation produces different material changes depending on the material's mapping relationship. Based on this scheme, the position of the material can be adjusted more flexibly, the overall change of the video and the material better matches the effect the user expects, and the user is spared from having to re-adjust the position of the added material after adjusting the video, which improves the convenience of video editing and the user experience.
Determining the video frames for adding the material in the second video according to the target video frames in the first video and the video frames to which the material has been added in the first video includes: determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship.
The mobile phone determines the first frame of the video frames for adding the material in the second video according to which of the video frames to which the material has been added in the first video are included in the target video frames, and determines the number of the video frames for adding the material in the second video according to whether the mapping relationship corresponding to the material is the first mapping relationship or the second mapping relationship.
It should be noted that for material whose initial length is associated with the picture content, such as subtitles, and for material whose initial length is associated with the overall length of the video clip, such as music, the mapping relationship is the second mapping relationship regardless of whether the user adjusts the start position and/or end position of the material.
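A minimal sketch of how the two mapping relationships might be tracked per material, assuming illustrative names (MappingRelation, Material) and treating music and subtitles as the length-bound types described above:

```python
# Sketch with assumed names (MappingRelation, Material are not the
# patent's data structures). Music and subtitles are treated as the
# length-bound types that always use the second mapping relationship.

from dataclasses import dataclass
from enum import Enum, auto

class MappingRelation(Enum):
    FIRST = auto()   # only the start position follows its anchor frame
    SECOND = auto()  # both start and end positions follow their anchor frames

@dataclass
class Material:
    kind: str        # e.g. "text", "sticker", "music", "subtitle"
    first: int       # index of the first material-added frame in the first video
    last: int        # index of the last material-added frame in the first video
    mapping: MappingRelation = MappingRelation.FIRST

def on_material_added(m: Material) -> None:
    # Third operation: adding a material yields the first mapping
    # relationship, except for length-bound types such as music/subtitles.
    if m.kind in ("music", "subtitle"):
        m.mapping = MappingRelation.SECOND
    else:
        m.mapping = MappingRelation.FIRST

def on_start_stop_adjusted(m: Material) -> None:
    # Second operation: once the user adjusts the material's start/stop
    # position, the material switches to the second mapping relationship.
    m.mapping = MappingRelation.SECOND
```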
In one possible implementation manner, determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship includes:
in a case where the first operation deletes part of the video frames in the first video starting from the first frame of the first video and deletes the first frame of the video frames to which the material has been added, determining the first frame of the remaining target video frames as the first frame of the video frames for adding the material in the second video;
in a case where the mapping relationship is the second mapping relationship, determining the end frame of the video frames to which the material has been added in the first video as the end frame of the video frames for adding the material in the second video;
in a case where the mapping relationship is the first mapping relationship, determining the number of the video frames to which the material has been added in the first video as the number of the video frames for adding the material in the second video.
That is, after the first video is adjusted: when the mapping relationship is the first mapping relationship, the number of frames of the video frames to which the material is added is unchanged, and the end frame of the video frames for adding the material in the second video has no fixed picture content; when the mapping relationship is the second mapping relationship, the number of frames of the video frames to which the material is added is reduced.
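As a hedged sketch of the head-trim case above (the function name and frame indices are illustrative assumptions, not the patent's own API; the mapping relationship is passed as a plain string to keep the example self-contained):

```python
# Sketch with assumed names; the mapping relationship is passed as a
# plain string ("first"/"second") to keep the example self-contained.

def material_span_after_head_trim(first: int, last: int, cut: int,
                                  mapping: str) -> tuple[int, int]:
    """Return (new_first, new_last) of the material span, in first-video
    frame indices, after frames [0, cut) are deleted from the head."""
    if cut <= first:
        return first, last                 # the material's first frame survives
    new_first = cut                        # first frame of the remaining target frames
    if mapping == "second":
        return new_first, last             # end frame keeps its content; span shrinks
    length = last - first                  # first mapping: frame count is preserved,
    return new_first, new_first + length   # so the covered picture content shifts

# Material on frames 2..6; the first operation deletes frames 0..3 (cut = 4):
print(material_span_after_head_trim(2, 6, 4, "first"))   # (4, 8): same length
print(material_span_after_head_trim(2, 6, 4, "second"))  # (4, 6): same end frame
```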
In one possible implementation manner, determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship includes: in a case where the first operation deletes part of the video frames in the first video starting from the end frame of the first video and deletes the end frame of the video frames to which the material has been added, determining the end frame of the remaining target video frames as the end frame of the video frames for adding the material in the second video.
In one possible implementation manner, after the video frames for adding the material in the second video are determined, the video editing method provided by the embodiment of the application further includes: detecting a fourth operation from the first interface, where the fourth operation requests that all of the video frames to which the material has been added in the first video be included in the target video frames; and in response to the fourth operation, restoring the displayed start and end positions of the material to the first frame and the end frame of the video frames to which the material was added in the first video. The fourth operation occurs after the first operation.
In one possible implementation, the first video is divided into a first segment and a second segment between a first video frame and a second video frame; among the video frames to which the material has been added in the first video, the frames from the first frame up to the first video frame are located in the first segment, and the frames from the second video frame up to the end frame are located in the second segment. That is, after the material is added, it is displayed in part of the video frames of the first segment and in part of the video frames of the second segment.
In one possible implementation manner, determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship includes: when the first operation deletes part of the video frames in the first video starting from the end frame of the first segment or the first frame of the second segment, without deleting the first frame of the video frames to which the material has been added, then if the mapping relationship is the second mapping relationship, determining those video frames to which the material has been added in the first video that also belong to the target video frames as the video frames for adding the material in the second video;
if the mapping relationship is the first mapping relationship, determining the first frame of the video frames to which the material has been added in the first video as the first frame of the video frames for adding the material in the second video, and determining the number of the video frames to which the material has been added in the first video as the number of the video frames for adding the material in the second video.
In one possible implementation manner, determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship includes: in a case where the first operation deletes part of the video frames in the first video starting from the end frame of the first segment and deletes the first frame of the video frames to which the material has been added, determining the first frame of the second segment as the first frame of the video frames for adding the material in the second video.
In one possible implementation manner, at least one segment exists between the first segment and the second segment in the first video, and determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship includes: if the mapping relationship is the second mapping relationship, determining the video frames to which the material has been added in the first video, together with the video frames of a segment added between them (or excluding the video frames of a segment deleted from between them), as the video frames for adding the material in the second video.
In one possible implementation manner, determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship includes: in a case where the playback speed of the first segment is adjusted to a first playback speed and the playback speed of the second segment is adjusted to a second playback speed, determining, according to the first playback speed, the number of frames by which the material-added video frames of the first segment increase or decrease, and determining, according to the second playback speed, the number of frames by which the material-added video frames of the second segment increase or decrease; and determining the video frames for adding the material in the second video according to these two frame-count changes.
Based on this scheme, the playback speeds of multiple video segments can be adjusted at the same time and can differ from one another, which makes it more convenient to set the playback effect of the video and improves the user experience.
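As a rough illustration with hypothetical names, assuming resampling semantics in which playing at s times the speed divides a span's frame count by s (the patent does not spell out the arithmetic):

```python
# Rough illustration with hypothetical names, assuming resampling
# semantics: playing at s times the speed divides a span's frame count
# by s (2x halves it, 0.5x doubles it).

def rescaled_frame_count(frames: int, speed: float) -> int:
    """Frames a span occupies after its segment plays at `speed` times
    the original rate."""
    return max(1, round(frames / speed))

# Material covers 30 frames in the first segment and 20 in the second.
seg1 = rescaled_frame_count(30, 2.0)   # 15 frames at the first playback speed (2x)
seg2 = rescaled_frame_count(20, 0.5)   # 40 frames at the second playback speed (0.5x)
print(seg1 + seg2)                     # 55 material-added frames in the second video
```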
In one possible implementation manner, determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship includes: in a case where the first operation adds a transition animation between the first segment and the second segment, determining the coverage duration of the transition animation, and determining, according to the coverage duration, the number of frames to delete from the video frames to which the material has been added in the first video, from the first frame up to the first video frame. That is, after the transition is added, the number of material-added video frames in the first segment is reduced.
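A hedged sketch of this trimming rule, assuming a constant frame rate and illustrative names (the patent does not specify either):

```python
# Hedged sketch: the transition overlaps the tail of the first segment,
# so material frames covered by the overlap are trimmed. The 30 fps
# frame rate and all names are illustrative assumptions.

def material_frames_after_transition(frames_in_first_segment: int,
                                     coverage_seconds: float,
                                     fps: float = 30.0) -> int:
    """Material-added frames left in the first segment after a transition
    whose coverage duration is `coverage_seconds` is added."""
    trimmed = round(coverage_seconds * fps)
    return max(0, frames_in_first_segment - trimmed)

# A 0.5 s transition at 30 fps trims 15 of the 40 material-added frames.
print(material_frames_after_transition(40, 0.5))  # 25
```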
In a possible implementation manner, the method provided by the embodiment of the application further includes: in a case where the first operation adds new video frames among the video frames to which the material has been added in the first video, if the type of the material added in the first video is music, extending the playing content of the music according to the number of the newly added video frames.
In a possible implementation manner, the method provided by the embodiment of the application further includes: when any video frame of the first video is deleted, deleting the playing content of the music from the end of the music.
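The two music rules above can be sketched together with a single signed frame delta; the helper name and signature are assumptions:

```python
# Hedged sketch (assumed helper): music is length-bound, so its playing
# content is extended when frames are inserted inside its span and is
# trimmed from its end when frames are deleted.

def adjust_music_frames(music_frames: int, delta_video_frames: int) -> int:
    """Positive delta: frames were added, extend the music accordingly.
    Negative delta: frames were deleted, trim the music from its end."""
    return max(0, music_frames + delta_video_frames)

print(adjust_music_frames(300, +30))  # 330: extended after frames are added
print(adjust_music_frames(300, -45))  # 255: trimmed from the end after deletion
```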
In one possible implementation, in a case where the type of the material added to the first video is a video-type picture-in-picture, the maximum number of video frames of the first video to which the material can be added is determined according to the length of the added picture-in-picture video itself.
In a second aspect, an embodiment of the present application provides an electronic device, including: one or more processors; one or more memories, on which a plurality of application programs are installed; the memory stores one or more programs which, when executed by the processor, cause the electronic device to perform the video editing method of any possible implementation of the first aspect.
In a third aspect, an embodiment of the present application provides an apparatus, where the apparatus is included in an electronic device, and the apparatus has functions of implementing the behaviors of the electronic device in the foregoing aspects and in the possible implementations of the foregoing aspects. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules or units corresponding to the functions described above, such as a display module or unit, a detection module or unit, a processing module or unit, and the like.
In a fourth aspect, embodiments of the present application provide a computer readable storage medium having instructions stored therein, which when run on a computer, cause the computer to perform the method of editing video described in the first aspect.
In a fifth aspect, embodiments of the present application provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the method of editing video of the first aspect described above.
The technical effects obtained by the second, third, fourth and fifth aspects are similar to the technical effects obtained by the corresponding technical means in the first aspect, and are not described in detail herein.
Drawings
FIG. 1 is a schematic view of an exemplary video editing scenario according to an embodiment of the present application;
FIG. 2 is a schematic diagram of a video editing scenario provided in an embodiment of the present application;
FIG. 3 is a schematic diagram of a video editing scenario provided by an embodiment of the present application;
FIG. 4 is a schematic flow chart of an exemplary method for video editing according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an example of an edited video provided by an embodiment of the application;
FIG. 6 is a schematic flow chart diagram of a method for editing video according to another embodiment of the present application;
FIG. 7 is a schematic diagram showing a change of a material along with a video length when editing a video according to an embodiment of the present application;
FIG. 8 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 9 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 10 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 11 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 12 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 13 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 14 is a schematic diagram showing a change of material along with a video length when editing a video according to another embodiment of the present application;
FIG. 15 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 16 is a schematic diagram showing a change of material along with a video length when editing a video according to another embodiment of the present application;
FIG. 17 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 18 is a schematic diagram showing a change of material along with a video length when editing a video according to an embodiment of the present application;
FIG. 19 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 20 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 21 is a schematic diagram showing a change of material along with a video length when editing a video according to another embodiment of the present application;
FIG. 22 is a schematic diagram of a scene of video editing according to an embodiment of the present application;
FIG. 23 is a schematic diagram showing a change of a material along with a video length when editing a video according to another embodiment of the present application;
FIG. 24 is a block diagram showing a software architecture of an example of a mobile phone according to an embodiment of the present application;
FIG. 25 is a schematic structural diagram of an example of a mobile phone 100 according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, embodiments of the present application will be described in further detail below with reference to the accompanying drawings. The terms "first" and "second" are used below for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defining "a first" or "a second" may explicitly or implicitly include one or more such feature. In the description of the present embodiment, unless otherwise specified, the meaning of "plurality" is two or more.
For ease of understanding, terms related to the embodiments of the present application are first described:
(1) Video editing: the process by which a user, according to his or her own ideas, turns an existing video file into videos with different expressive effects, for example combining multiple images into one video, adding material to a video, or splitting and recombining multiple video segments.
(2) Material: visual and auditory content added to a video to increase its expressiveness; material types include themes, filters, background music, text, stickers, special effects, picture-in-picture, and the like. The material length refers to the duration corresponding to the number of video frames in the video to which the material is continuously added.
(3) Video track: in the interface for editing video, an area for previewing video frames of a video to be edited (i.e., a first video) is displayed. After the user selects the video to be edited, the electronic equipment determines the track length according to the playing time of the video to be edited, creates a video track according to the track length, and adds video frames for previewing the video to be edited into the video track. Through video frames in the video track, the user can roughly preview the content of the video to be edited, so that the later user can position to a position where video editing is required based on the preview content.
(4) Material track: in an interface for editing video, an area of an identification of an added material is displayed, and the length of a material track is the same as the length of a video track.
(5) Time axis: the number axis representing time is used for displaying the playing time of the video to be edited, so that a user can conveniently control the picture content displayed at a certain time point. The unit length of the time axis is fixed, and the head and tail are distinguished according to the positive direction of the time axis. The lengths referred to below are all lengths in the temporal sense determined according to the time axis.
In addition to a mobile phone, the electronic device according to the embodiment of the present application may be a tablet computer, a personal computer (PC), a personal digital assistant (PDA), a smart watch, a netbook, a wearable electronic device, an augmented reality (AR) device, a virtual reality (VR) device, a vehicle-mounted device, a smart car, a smart speaker, a robot, smart glasses, a smart television, or the like.
It should be noted that, in some possible implementations, the electronic device may also be referred to as a terminal device, a User Equipment (UE), or the like, which is not limited by the embodiment of the present application.
The following description takes a mobile phone as an example of the electronic device. FIG. 1 shows a schematic view of a video editing scenario provided in an embodiment of the present application. After the user unlocks the mobile phone, the display shows the home screen interface, as shown in FIG. 1 (a). The home screen interface includes icons of a plurality of applications, for example the icon 101 of the "gallery" application. The user may click the icon 101 to open the gallery, and in response to the user clicking the icon 101 in the home screen interface, the mobile phone displays the interface 102 shown in FIG. 1 (b). The interface 102 includes the video 103 to be edited, images, and other videos. The video 103 to be edited may have been shot by the mobile phone. In response to the user selecting the video 103 to be edited, the mobile phone displays the interface 104 shown in FIG. 1 (c). The interface 104 is a playback interface for the video 103 and includes an editing control 105; when the user needs to edit the video, the user can click the editing control 105, and in response to this operation, the mobile phone displays the interface 106 shown in FIG. 1 (d).
The interface 106 is an interface for editing the video 103 to be edited, and corresponding to the first interface, the interface 106 includes a preview area 107, a track area 108, and a control area 111, and the mobile phone displays the video 103 to be edited in the preview area 107. The track area 108 includes a video track 109 and a cursor 110, and the video track 109 can slide left and right to display video frames for previewing the video 103 to be edited. Cursor 110 is used to facilitate the user in determining the currently located video frame. The control area 111 includes a plurality of controls for adding filters, music, text, special effects, and the like, and the control area 111 can slide left and right to display more controls for editing video.
If the user wants to adjust the playing duration of the video 103 to be edited, the user may click the video track 109. In response, the mobile phone displays the left handle 201 at the head end of the video track 109; the interface before the mobile phone detects the click operation is shown in FIG. 2 (a), and the interface after the click operation is detected is shown in FIG. 2 (b). The mobile phone may also display a right handle (not shown) at the tail end of the video track 109. A handle is a control for adjusting length. The user can adjust a handle to delete part of the video frames from, or add video frames to, the video to be edited, thereby adjusting its playing duration. When the playing duration of the video to be edited is shortened or lengthened, the length of the video track is shortened or lengthened accordingly.
Adjusting the handles includes holding and dragging left handle 201 to the right, or holding and dragging right handle to the left. If the mobile phone detects an operation of pressing and dragging the left handle 201 to the right by the user, in response to the user operation, a part of video frames in the video 103 to be edited is deleted from the first frame of the video 103 to be edited, and the video track is shortened to the right. If the mobile phone detects that the user holds down and drags the right handle to the left, the video track is shortened to the left by deleting part of the video frames in the video 103 to be edited from the tail frame of the video 103 to be edited in response to the user operation. The pruned video frames are not displayed in the video track. Adjusting the handles further includes holding and dragging left handle 201 to the left, holding and dragging right handle to the right to redisplay the pruned video frames in the video track. Wherein, the first frame refers to the video frame when a video starts, and the last frame refers to the video frame when a video ends.
It should be noted that the user may also operate the controls in the control area 111 for adjusting the video 103 to be edited itself, so as to adjust its playing duration. For example, a portion of the video frames is extracted from the video to be edited through the "intercept" control. For example, the video to be edited is divided into a plurality of segments through the "divide" control, and one or more of the segments are then deleted through the "delete" control. For example, the playback speed of the video is changed through a control. For example, a transition is added through a control; a transition is a changeover between two video segments, and after a transition is added, a portion of the video frames of the two segments may overlap. For example, video frames are added to the video to be edited through a control for adding new video content, and so on.
If the user wants to add material to the video 103 to be edited, the user may operate a control for adding material in the control area 111 to add material of a certain length at a target position. The length of a material refers to the duration corresponding to the number of consecutive video frames in the video to which the material is added, and may also be called the duration for which the material is continuously displayed in the video. The start position of a material refers to the position in the video of the first frame of those consecutive video frames, i.e., the position of the video frame at which display of the material begins. The end position of a material refers to the position in the video of the last frame of those consecutive video frames, i.e., the position of the video frame at which display of the material stops. For example, if the video frames to which the material is continuously added run from 00:00 to 00:02, the length of the material is 2 seconds, its start position is the 0th second of the video to be edited, and its end position is the 2nd second. When the start or end position of a material changes, the picture content of the corresponding material-added video frames may change while the length of the material, i.e., the number of material-added video frames, stays unchanged.
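The 2-second example can be made concrete with a small conversion sketch, assuming an illustrative constant frame rate of 30 fps (the patent does not fix one):

```python
# Concrete version of the 00:00-00:02 example, assuming an illustrative
# constant frame rate of 30 fps (the patent does not fix one).

FPS = 30

def seconds_to_frame(t: float) -> int:
    return round(t * FPS)

first_frame = seconds_to_frame(0.0)       # frame 0: start position at second 0
last_frame = seconds_to_frame(2.0) - 1    # frame 59: last frame before second 2
print(last_frame - first_frame + 1)       # 60 frames, i.e. a 2-second material
```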
For example, FIG. 3 shows a schematic view of the scenario when adding material. In the interface shown in FIG. 3 (a), the user positions the cursor 110 at the first frame of the video 103 to be edited and clicks the "text" control; in response, the mobile phone adds text-type material to the video 103. As shown in FIG. 3 (b), the mobile phone creates a material track 301 according to the length of the video track 109 and adds an identifier 302 representing the text material to the material track. The mobile phone determines the start position of the identifier 302 in the material track 301 according to the position of the cursor 110, and determines the length of the identifier 302 in the material track 301 according to the initial adding length of the material, thereby determining the position of the identifier 302. The video track 109 and the material track 301 are arranged vertically in the display interface, with their start positions aligned. The user can adjust the length of the material after adding it: when the mobile phone detects an operation of clicking the identifier 302, it displays, in response, the handle 303 and the handle 304 shown in FIG. 3 (c), and the user drags the handle 303 or the handle 304 to adjust the start or end position of the material in the material track 301, thereby changing the adding position or length of the material, as shown in FIG. 3 (d).
The video segment corresponding to the start position of an added material is the segment the material belongs to, and the material moves and is deleted only together with that segment. For material types such as text and stickers, the initial adding length is preset; for material types such as music, picture-in-picture, and subtitles, the initial adding length is related to the content of the material itself.
It should be noted that the position of the video frame at which the cursor 110 is located determines the start position of a material when it is added. The user can click a position on the video track 109 to set the position of the cursor 110, and the cursor position can be reset each time material is added, making it more convenient for the user to add material.
It should be noted that, the user may also directly press the identifier 302 and move the identifier 302 in the material track 301 to change the starting position and the ending position of the material. The user can also adjust the length of the material in a mode of deleting after segmentation.
It should be noted that, in the embodiment of the present application, a corresponding material track may be created for each type of material added to the video to be edited; for example, when background music, text, and a sticker are added, a music material track, a text material track, and a sticker material track are created respectively. In addition, the identifiers of multiple materials of the same type may be displayed on the same material track. In practical applications, to distinguish the various material tracks, a corresponding track-type identifier may be placed in front of each track. The identifier representing a material added to a track may be the material's name and attributes, for example a music title or a special-effect name. Moreover, when a material is being edited, its material track is displayed enlarged; unedited materials are displayed above the video track as lines, where the length of a line represents the span covered by the material and different materials can be represented by lines of different colors.
The method for editing video provided by the embodiment of the application is described below with reference to the above scenario. Fig. 4 shows a video editing method according to an embodiment of the present application, where the method is applied to a mobile phone. As shown in fig. 4, the method includes:
s401, the mobile phone detects an operation of requesting editing of the first video.
For the operation of requesting to edit the first video, reference may be made to the embodiment shown in FIG. 1; in practical applications, the user may also request to edit the first video in other ways.
S402, the mobile phone displays a first interface, wherein the first interface is an interface for editing video.
The first interface may include an interface as shown in (d) of fig. 1, an interface as shown in (a) and (b) of fig. 2, and an interface as shown in (a) to (d) of fig. 3. In practical applications, the first interface may also include other interfaces.
S403, after detecting the operation of adding the material, the mobile phone adds the material in the first video.
In the embodiment of the present application, the user may add multiple materials; each time the user adds a material, the mobile phone detects the adding operation and adds the material to the first video (the video to be edited). Adding material of the same type multiple times also counts as multiple materials.
S404, the mobile phone detects a first operation from the first interface, where the first operation requests deleting part of the video frames in the first video or adding video frames to the first video.
The first operation may be an operation of adjusting a handle as described in the embodiment of fig. 2, or an operation of adjusting a control for adjusting the video 103 itself to be edited in the control area 111, which is not limited in the present application.
S405, the mobile phone generates a target video according to the target video frames in the first video and the video frames to which the material has been added in the first video, where the target video frames exist in a second video, and the second video is the video obtained after the first operation is performed on the first video.
Here, a target video frame is a video frame that exists in both the first video and the second video. FIG. 5 shows a schematic comparison between the first video and the second video, with the upper image representing the first video and the lower image representing the second video. As shown in FIG. 5 (a), the first operation requests deleting part of the video frames in the first video; after the deletion, the remaining video frames of the first video form the second video, and those remaining frames are the target video frames. As shown in FIG. 5 (b), the first operation requests adding video frames to the first video; after the addition, all video frames of the first video exist in the second video, and all video frames of the first video are target video frames.
The video frames to which the material has been added in the first video are the frames in which the material is displayed before the first operation; from them, the start and end positions of the material before the first operation can be determined, and from the target video frames the mobile phone can determine which frames are retained in the second video after the first operation. Generating the target video according to the target video frames in the first video and the material-added video frames in the first video includes: the mobile phone determines the video frames for adding the material in the second video according to whether the target video frames include the material-added video frames of the first video, i.e., according to the intersection of the two sets of frames, and generates the target video according to the second video. For example, the first frame of the video frames for adding the material in the second video is determined according to whether the first frame of the material-added video frames is among the target video frames; or the end frame of the video frames for adding the material in the second video is determined according to whether the end frame of the material-added video frames is among the target video frames. Whether to delete the added material is determined according to whether all material-added video frames of the first video are included, and whether the number of material-added video frames after the first operation needs to increase or decrease is determined likewise. After the second video is obtained through the first operation, it can be saved directly to generate the target video, and the video frames for adding the material in the second video are the material-added video frames of the target video.
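A minimal sketch of the intersection test described here, with assumed names and a simplified survival rule:

```python
# Minimal sketch with assumed names and a simplified survival rule: the
# material is kept only if its frames intersect the target video frames.

def material_survives(target: set[int], material: set[int]) -> bool:
    """True if at least one material-added frame is retained (a real
    implementation would also check the minimum adding length)."""
    return bool(target & material)

target_frames = set(range(4, 10))      # frames kept by the first operation
print(material_survives(target_frames, set(range(2, 7))))  # True: frames 4..6 remain
print(material_survives(target_frames, set(range(0, 3))))  # False: material deleted
```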
Because the first operation can take many forms, the method provided by the embodiment of the application can flexibly determine how the material-added video frames change as the first video is adjusted, and then display the changed material identifier to the user in the material track whenever the material's start position, end position, or length changes, so that the user can see in time how the material changes with the first operation, which improves the convenience of video editing and the user experience.
Considering that a user who adjusts the start or end position of a material has presumably added the material to the video in the way he or she desires, FIG. 6 shows a video editing method provided by an embodiment of the present application. As shown in FIG. 6, the method includes:
s601, the mobile phone detects a first operation from a first interface, the first operation requests to delete part of video frames in a first video or to add video frames in the first video, and materials are added in the first video.
S602, obtaining a mapping relationship between the start-stop position of the material in the first video and the video frames to which the material has been added in the first video, where the start-stop position includes a start position and an end position.
In the embodiment of the present application, the mobile phone determines the mapping relationship between the start-stop position of the material in the first video and the video frames to which the material has been added according to whether the start position and/or end position of the material has been adjusted. The mapping relationship is either a first mapping relationship or a second mapping relationship, which correspond to different conditions under which the position of the material is adjusted.
The user performs an operation on a control for adding material on the first interface (corresponding to the third operation), for example clicking the "text" control shown in FIG. 3 (a). After detecting the adding operation from the first interface, the mobile phone adds the material to the first video according to the cursor position and the material's default initial adding length; at this point the material is in its initial state. The mobile phone generates the first mapping relationship, which is: the start position of the material changes along with the first frame of the material-added video frames in the first video. That is, the picture content corresponding to the material's start position is the picture content of the first frame of the material-added video frames, while the end position has no fixed corresponding picture content. Under the first mapping relationship, when material-added video frames are deleted from the first video, the length of the material may stay unchanged while the content of the video frames corresponding to the material changes.
If, after the material has been added, the mobile phone detects a second operation from the first interface that requests adjusting the start-stop position of the material in the first video, i.e., the user adjusts the start position and/or end position of the material (for example, as shown in FIG. 3 (c)), then after the second operation the material changes from its initial state to a non-initial state, and the mobile phone generates the second mapping relationship, which is: the start position of the material changes along with the first frame of the material-added video frames in the first video, and the end position changes along with the last frame of those frames. That is, the picture content corresponding to the start position is fixed to that of the first frame, and the picture content corresponding to the end position is fixed to that of the last frame. Under the second mapping relationship, when material-added video frames are deleted from the first video, the length of the material is shortened while the content of the remaining corresponding video frames stays unchanged; when video frames are added to the first video, the new length of the material is determined according to the position of the added frames, the type of the material, and so on.
The mobile phone stores the mapping relationship and the length of each material, and the mapping relationships of multiple materials of the same type may differ. For example, if the user does not adjust the start and/or end position of a first text material after adding it, its mapping relationship is the first mapping relationship; if the user adjusts the start and/or end position of a second text material after adding it, its mapping relationship is the second mapping relationship. When storing the first mapping relationship, the mobile phone also stores the time point of the first frame of the material-added video frames in the first video. When storing the second mapping relationship, the mobile phone also stores the time points of both the first frame and the last frame, so that when a first operation is detected later it can determine whether the first frame or the last frame has been deleted.
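The stored record described here might look like the following sketch; the field names and the choice to store the anchors as time-axis seconds are assumptions:

```python
# Sketch of the stored per-material record; field names and storing the
# anchors as time-axis seconds are assumptions, not the patent's format.

from dataclasses import dataclass
from typing import Optional

@dataclass
class MaterialRecord:
    length_s: float                      # material length on the time axis
    mapping: str                         # "first" or "second"
    start_time_s: float                  # time point of the first material-added frame
    end_time_s: Optional[float] = None   # stored only for the second mapping

def make_record(mapping: str, start_s: float, end_s: float) -> MaterialRecord:
    if mapping == "first":
        # First mapping: only the first-frame time point is stored.
        return MaterialRecord(end_s - start_s, "first", start_s)
    # Second mapping: both anchors are stored so that a later first
    # operation can check whether the first or last frame was deleted.
    return MaterialRecord(end_s - start_s, "second", start_s, end_s)
```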
After detecting a first operation from the first interface, the mobile phone obtains the mapping relationship between the start-stop position of the material in the first video and the video frames to which the material has been added, so as to determine which case governs how the material changes after the video is adjusted.
S603, determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material has been added in the first video, and the mapping relationship.
Specifically, the mobile phone determines the first frame of the video frames for adding the material in the second video according to which of the video frames to which the material has been added in the first video are included in the target video frames, and determines the number of the video frames for adding the material in the second video, i.e., whether the length of the material added in the second video needs to change, according to whether the mapping relationship corresponding to the material is the first mapping relationship or the second mapping relationship.
Considering that a user who adjusts the start or end position of a material has presumably matched the material to the picture content he or she wants, the second mapping relationship is generated after the mobile phone detects that the user has adjusted the start or end position of the material, and the first mapping relationship is generated when the user has not. Because the two mapping relationships correspond to different conditions, the same first operation produces different material changes depending on the material's mapping relationship. Therefore, the method provided by the embodiment of the application can adjust the position of the material more flexibly, ensure that the overall change of the video and the material matches the effect the user expects, and spare the user from having to re-adjust the position of the added material after adjusting the video, which improves the convenience of video editing and the user experience.
It should be noted that for material whose initial length is related to the picture content, such as AI subtitles, and for material whose initial length is related to the overall length of the video segment, such as music, the mapping relationship is the second mapping relationship regardless of whether the user adjusts the material's start and/or end position. AI subtitles, i.e., the intelligent subtitle function, are subtitles generated automatically using artificial intelligence (AI) technology; at present this is mainly used to convert speech in a video into text through speech recognition and present it on screen in the form of subtitles.
In some scenarios the first video is a single video clip, or the material is added entirely within one video clip; such scenarios are referred to below as single-segment scenarios. In other scenarios the first video may consist of two or more video clips, and when the first video has multiple clips, material may be added across clips. For example, the first video is divided into a first segment and a second segment between a first video frame and a second video frame; among the material-added video frames, the frames from the first frame up to the first video frame are located in the first segment, and the frames from the second video frame up to the last frame are located in the second segment. Such scenarios are referred to below as cross-segment scenarios.
Specific scenarios are described below in connection with the material's mapping relationship, the form of the first operation, the type of the material, and so on. To show more intuitively how the material's position changes with the video, FIGS. 7 to 22 display the track areas alone, with the video track and material track of the first video and the video track and material track of the second video arranged vertically in sequence.
Scene 1: the mapping relation of the single-segment scene and the materials is a first mapping relation
FIG. 7 is a schematic diagram showing how the start position of a material changes with the video length according to an embodiment of the present application. FIG. 7 (a) shows the first video and the corresponding material track, where the arrow indicates an operation of dragging the left handle 701 of the first video to the right. FIG. 7 (b) shows the second video and the corresponding material track obtained after the first operation is performed, where the arrow indicates an operation of dragging the left handle of the second video to the left (corresponding to the fourth operation). FIG. 7 (c) shows the third video obtained after the fourth operation is performed; the video frames deleted by the first operation are redisplayed in the video track corresponding to the third video.
As shown in FIG. 7 (a), a material is added to the first video; the first frame of the material-added video frames is the video frame at position A, and the last frame is the video frame at position C. An operation of deleting part of the video frames of the first video starting from its first frame is detected, and the operation deletes the first frame (position A) of the material-added video frames; that is, the user drags the left handle 701 of the first video to the right, the video track is shortened toward the right, and the video frames between the first frame of the first video and position B are deleted. The remaining video frames, i.e., the target video frames, constitute the second video. As shown in FIG. 7 (b), the start position of the material moves to the right along with the video clip: the first frame of the target video frames (position B) is determined as the first frame of the video frames for adding the material in the second video, and the number of video frames between position A and position C is determined as the number of frames for adding the material in the second video. In other words, the length of the material is unchanged, but the content of the corresponding video frames changes. In this scenario, if the user drags the left handle of the second video to the left after the first operation and the mobile phone detects this operation, the start position of the material moves to the left along with the video clip, as shown in FIG. 7 (c); when the deleted video frames are restored up to position A, the start position of the material is restored to its original start position.
In one case, after performing the first operation, the mobile phone determines whether the length of the material is greater than the length of the video itself, and if so, shortens the material from its tail end. That is, if the number of target video frames is smaller than the number of frames between position A and position C, the head end of the material is kept unchanged and the tail portion of the material is deleted. If the length corresponding to the target video frames is smaller than the minimum adding length of the material, the added material is deleted.
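To make the rule of scene 1 concrete, the following is a minimal sketch in Kotlin of the first mapping relation under a head trim. It assumes a hypothetical representation in which a material covers a half-open range of timeline frame indices; all names (MaterialSpan, headTrimFirstMapping, minLength) are illustrative and not taken from the embodiment.

```kotlin
// A material covering timeline frames [start, end) of the video.
data class MaterialSpan(val start: Int, val end: Int) {
    val length: Int get() = end - start
}

// First mapping relation, head trim (scene 1): the start of the material
// follows the first remaining (target) video frame, the length is
// preserved and then clipped to the new video length, and the material
// is deleted (null) if it falls below a minimum adding length.
fun headTrimFirstMapping(
    span: MaterialSpan,
    videoLength: Int,   // frame count of the first video
    trimmed: Int,       // frames deleted from the head
    minLength: Int = 1  // assumed minimum adding length
): MaterialSpan? {
    val newVideoLength = videoLength - trimmed
    // Re-anchor at the new first frame only if the covered head was deleted.
    val newStart = maxOf(span.start - trimmed, 0)
    // Keep the length, then shorten from the tail if the video is too short.
    val newEnd = minOf(newStart + span.length, newVideoLength)
    return if (newEnd - newStart < minLength) null else MaterialSpan(newStart, newEnd)
}
```

For the example of fig. 7, headTrimFirstMapping(MaterialSpan(10, 40), videoLength = 60, trimmed = 20) yields MaterialSpan(0, 30): the start re-anchors to position B and the 30-frame length is preserved.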
The arrows and the vertically arranged legends in the subsequent figs. 8 to 23 are similar to those in fig. 7: an arrow indicates the direction in which a handle is dragged, and the solid black areas represent the deleted parts. For brevity, these conventions are not described again.
Fig. 8 is a schematic diagram of the end position of a material changing with the video length according to an embodiment of the present application. As shown in fig. 8 (a), a material is added to the first video, which includes two segments, and the material is located in the first segment. An operation of deleting part of the video frames in the first video starting from the tail frame of the first segment is detected; that is, the user drags the right handle 801 of the first segment to the left and the video track of the first segment shortens to the left. If the total track length of the first video is sufficient, neither the picture corresponding to the start position of the material nor the length of the material changes, as shown in fig. 8 (b).
Fig. 9 is a schematic diagram of the end position of still another material changing with the video length according to an embodiment of the present application. Continuing from the embodiment of fig. 8, as shown in fig. 9 (a) and (b), after the video track of the first segment shortens to the left, if the total track length of the first video is insufficient, the picture corresponding to the start position of the material is unchanged, the tail frame of the target video frames is determined as the tail frame of the video frames for adding the material in the second video, and the length of the material is shortened. If the user drags the right handle of the first segment to the right after the first operation and the operation is detected, the end position of the material moves to the right with the video segment, as shown in fig. 9 (c), and the end position of the material can be restored to its original end position.
Scene 2: single-segment scenario in which the material has the second mapping relation
Fig. 10 is a schematic diagram of the start position of still another material changing with the video length according to an embodiment of the present application. As shown in fig. 10 (a), a material is added to the first video; the first frame of the video frames to which the material is added is the video frame at position A, and the last frame is the video frame at position C. An operation of deleting part of the video frames in the first video starting from the first frame of the first video is detected, and the first frame (position A) of the video frames to which the material is added is deleted; that is, the user drags the left handle 1001 of the first video to the right, the video track shortens to the right, the video frames between the first frame of the first video and position B are deleted, and the remaining video frames, namely the target video frames, constitute the second video. As shown in fig. 10 (b), the start position of the material moves to the right with the video clip: the first frame (position B) of the target video frames is determined as the first frame of the video frames for adding the material in the second video, and the tail frame (position C) of the video frames to which the material is added in the first video is determined as the tail frame of the video frames for adding the material in the second video; that is, the covered range runs from position B to position C, and the length of the material is shortened. In this scenario, if the user drags the left handle of the second video to the left after the first operation and the operation is detected, the start position of the material moves to the left with the video clip, as shown in fig. 10 (c), and the start position of the material can be restored to its original start position.
In one case, if the operation of deleting part of the video frames in the first video starts from the first frame of the first video and deletes all video frames from the first frame of the first video to position C, that is, the target video frames do not include any video frame to which the material is added in the first video, the added material is deleted. In this case, even when the pruned video frames are restored beyond position A, the deleted material cannot be recovered. The same applies to scene 1.
Fig. 11 is a schematic diagram of the end position of a material changing with the video length according to an embodiment of the present application. As shown in fig. 11 (a), the mobile phone detects an operation of deleting part of the video frames in the first video starting from the tail frame of the first video; that is, the user drags the right handle 1101 of the first video to the left, the video track shortens to the left, the video frames between the tail frame of the first video and position B are deleted, and the remaining video frames, namely the target video frames, constitute the second video. As shown in fig. 11 (b), the tail frame of the target video frames is determined as the tail frame of the video frames for adding the material in the second video; the picture corresponding to the start position of the material is unchanged, and the length of the material is shortened. If the user drags the right handle of the second video to the right after the first operation and the operation is detected, the end position of the material moves to the right with the video clip, as shown in fig. 11 (c), and the end position of the material can be restored to its original end position.
In one case, if the operation of deleting part of the video frames in the first video starts from the tail frame of the first video and deletes back past the first frame of the video frames to which the material is added, the target video frames do not include any video frame to which the material is added in the first video; the added material is then deleted and cannot be recovered. The same applies to scene 1.
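For comparison, a corresponding sketch of the second mapping relation of scene 2, under the same assumed frame-index representation (names again illustrative): both the picture at the start position and the picture at the end position stay bound to their original video frames, so the material shortens rather than sliding.

```kotlin
data class MaterialSpan(val start: Int, val end: Int)

// Head trim (fig. 10): `trimmed` frames are deleted from the head.
// The end stays bound to the same content frame, so it shifts left with
// the timeline; deleted covered frames are lost, and the material
// disappears (null) once no covered frame survives.
fun headTrimSecondMapping(span: MaterialSpan, trimmed: Int): MaterialSpan? {
    val newStart = maxOf(span.start - trimmed, 0)
    val newEnd = span.end - trimmed
    return if (newEnd <= newStart) null else MaterialSpan(newStart, newEnd)
}

// Tail trim (fig. 11): the video now ends at `newVideoLength`; the start
// picture is unchanged and the material shortens from the tail.
fun tailTrimSecondMapping(span: MaterialSpan, newVideoLength: Int): MaterialSpan? {
    val newEnd = minOf(span.end, newVideoLength)
    return if (newEnd <= span.start) null else MaterialSpan(span.start, newEnd)
}
```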
Scene 3: cross-segment scenario in which the material has the first mapping relation
Fig. 12 is a schematic diagram of the position of still another material changing with the video length according to an embodiment of the present application. As shown in fig. 12 (a), the first video is divided into a first segment and a second segment, the first segment preceding the second segment on the video track; the right handle of the first segment and the left handle of the second segment coincide as handle 1201. Dragging the handle 1201 to the left deletes part of the video frames in the first video starting from the tail frame of the first segment; dragging the handle 1201 to the right deletes part of the video frames in the first video starting from the first frame of the second segment. As shown in fig. 12 (b), as long as the target video frames include the first frame of the video frames to which the material is added in the first video, dragging the handle 1201 left or right leaves the picture corresponding to the start position of the material unchanged, while the picture corresponding to the end position of the material changes as the handle 1201 moves.
If dragging the handle 1201 to the left is detected, as shown in fig. 12 (c) and (d), and the first frame of the video frames to which the material is added in the first video is deleted, the first frame of the second segment is determined as the first frame of the video frames for adding the material in the second video. Thereafter the start position of the material moves with the second segment; when the handle 1201 is dragged to the right, as shown in fig. 12 (e), to restore the deleted video frames, the start position of the material cannot be restored to its original start position.
If a new segment is added between the first segment and the second segment, or, where at least one further segment exists between them, a segment between the first segment and the second segment is deleted, the picture corresponding to the start position of the material is unchanged while the content of the video frames to which the material is added changes.
In this scene, if the total length of the video track is sufficient, the length of the material is unchanged; if the total length of the video track is insufficient, the length of the material is shortened. For the changes in material position caused by moving the left handle of the first segment or the right handle of the second segment, refer to scene 1.
Scene 4: cross-segment scenario in which the material has the second mapping relation
Fig. 13 is a schematic diagram of the position of still another material changing with the video length according to an embodiment of the present application. As shown in fig. 13 (a), when the handle 1301 is dragged to the left, the video frames between position A and position B in the first video are deleted starting from the tail frame (position B) of the first segment. Since the target video frames include both the first frame and the tail frame of the video frames to which the material is added in the first video, the picture content corresponding to the start position of the material is unchanged, and the picture content corresponding to the end position (position C) of the material is also unchanged. As shown in fig. 13 (b), the video frames from the first frame of the video frames to which the material is added up to position A, together with the video frames between position B and position C, are the video frames for adding the material in the second video, and the length of the material is shortened.
As shown in fig. 13 (c) and (d), if the handle 1301 is dragged to the left so that part of the video frames in the first video are deleted starting from the tail frame of the first segment, and the first frame of the video frames to which the material is added in the first video is deleted, that is, the material in the first segment is deleted, then the first frame of the second segment is determined as the first frame of the video frames for adding the material in the second video, the picture content corresponding to the end position of the material is unchanged, and the material length is shortened accordingly. Thereafter the start position of the material moves with the second segment; when the handle 1301 is dragged to the right again to restore the deleted video frames, as shown in fig. 13 (e), the start position of the material cannot be restored to its original start position. For the changes in material position caused by moving the left handle of the first segment or the right handle of the second segment, refer to scene 2.
If a new segment is added between the first segment and the second segment, or, where at least one further segment exists between them, a segment between the first segment and the second segment is deleted, then the video frames to which the material is added in the first video plus the added segment, or those video frames minus the deleted segment, are determined as the video frames for adding the material in the second video. The picture content corresponding to the start position and to the end position of the material is unchanged, and the length of the material grows with the added segment or shrinks with the deleted segment, as in the sketch below.
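As a sketch of this cross-segment rule (scene 4), with delta standing for the number of timeline frames inserted (positive) or deleted (negative) strictly between the covered start and end; the function name and representation are illustrative:

```kotlin
// Cross-segment material under the second mapping relation: the start
// and end pictures are unchanged, so only the end index moves and the
// material grows or shrinks by exactly `delta` frames.
fun resizeAcrossSegments(start: Int, end: Int, delta: Int): Pair<Int, Int> {
    val newEnd = end + delta
    require(newEnd > start) { "a deleted segment cannot exceed the covered range" }
    return start to newEnd
}
```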
Scene 5: the initial length of the material is related to the overall length of the video, and the material is not adjusted
For materials whose initial length is related to the overall length of the video, for example materials of the music type, the way the material changes with the video follows scene 2 and scene 4. The following describes the aspects in which music differs from text, special effects, and the like.
Music has its own length and content. When music is added, it is always added starting from the origin of the time axis. When the start position or the end position of the music is adjusted, the music content is always played from the beginning of the music, and the length of the music is shortened from its tail end forward.
If the user adds music without adjusting its position, the music material is in its initial state: the music has the second mapping relation with all video clips, the start position of the music corresponds to the origin of the time axis, and the end position of the music is determined from the length of the music and the length of the first video on the time axis. If the music is longer than the first video, the music is automatically cut to the current length of the first video; for example, if the first video is 2 minutes long and the music is 3 minutes long, the first 2 minutes of the music are added to the first video and the content from the 2nd to the 3rd minute is not added. If the music is shorter than the first video, the entire piece of music is added to the first video; for the remaining music track, in one implementation no music is added automatically and its content is determined by user operations, and in another implementation the existing music may be looped.
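The initial-state rule for music reduces to taking the head of the track that fits, as in this sketch (durations in seconds; names illustrative; the looping variant mentioned above is not shown):

```kotlin
// Music in its initial state starts at the origin of the time axis;
// only as much of the track as fits into the video is added. The same
// rule is re-applied whenever the video length changes.
fun addedMusicRange(musicLength: Double, videoLength: Double): ClosedFloatingPointRange<Double> =
    0.0..minOf(musicLength, videoLength)

// Example from the embodiment: a 3-minute track on a 2-minute video
// contributes only its first 2 minutes.
// addedMusicRange(180.0, 120.0) == 0.0..120.0
```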
Figs. 14 to 18 are schematic diagrams of the position of a music material changing with the video length according to an embodiment of the present application. As shown in fig. 14 (a), the handle 1401 is dragged to the right to delete part of the video frames; or, as shown in fig. 14 (b), the middle handle 1402 is dragged to the left to delete part of the video frames, or a video clip is deleted. Any of these shortens the overall length of the first video; the start position of the music remains at the origin of the time axis, and music of the corresponding length is cut from the tail end of the music. For example, if the video track is shortened by 30 seconds and the length of the first video changes from 2 minutes to 1 minute 30 seconds, the first 1 minute 30 seconds of the music are added to the first video and the music content from 1 minute 30 seconds to 2 minutes is deleted.
As shown in fig. 15 (a) and (b), if a video clip is added or the video track is lengthened to the right so that the overall length of the first video increases, the music length increases accordingly. If the video length changes from 2 minutes to 4 minutes, so that the entire piece of music can be included, the content of the 2nd to 3rd minutes of the music is added to the first video; in this case the 3rd to 4th minutes of the first video still have a corresponding music track, but unless the user adds a second piece of music, that part of the video has no music content.
When the music is shorter than the first video, the user may add a second music material to the first video. As shown in fig. 16 (a), the second music material may be added after the first music material, and the identifications of the two music materials may be displayed in one music track. When it is detected that the length of the first video is shortened, for example by the operation of dragging the middle handle to the left shown in fig. 16 (b), the content at the tail end of the second music material is shortened first, to improve the user experience.
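A sketch of this behavior, generalized to a list of music materials on one track (the mutable-list representation and names are assumptions): when the video shortens, trimming starts at the tail of the last material and works backwards.

```kotlin
// Shorten the music track by `cut` seconds, trimming the tail of the
// last material first; fully consumed materials are removed.
fun trimMusicTrack(lengths: MutableList<Double>, cut: Double) {
    var remaining = cut
    for (i in lengths.indices.reversed()) {  // last material first
        val take = minOf(lengths[i], remaining)
        lengths[i] -= take
        remaining -= take
        if (remaining <= 0.0) break
    }
    lengths.removeAll { it <= 0.0 }          // drop fully consumed materials
}
```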
Scene 6: the initial length of the material is related to the overall length of the video, and the material is adjusted
After adding music, the user can adjust its length and position: for example, shorten or lengthen the music by adjusting a handle; shorten the music by splitting and deleting (dividing the music into several passages and deleting one or more of them); or change the position at which the music is added by long-pressing the identification of the music material and moving it (for example, moving the music from minutes 0-1 to minutes 2-3 on the time axis).
After the length and position of the music material are adjusted, music is added to only part of the video frames in the first video; that is, the music covers some but not all of the segments, and the second mapping relation holds between the start and end positions of the music and the pictures of the video frames to which the music is added in the first video. When the length of a segment covered by the music changes, the music length changes with the covered segment, not with the uncovered segments.
For example, as shown in fig. 17, the left handle 1601 of the first video is dragged to the right to prune part of the video frames starting from the first frame of the first video. After the pruning, the start position of the music remains at the origin of the time axis, that is, at the first frame of the target video frames; the picture of the video frame corresponding to the end position of the music is unchanged, and the length of the music is shortened from its tail end forward. For example, as shown in fig. 17 (a), the first video is 3 minutes long and the added music is 2 minutes long, with the start and end positions of the music corresponding to the first frame and to the video frame at the 2nd minute of the first video, respectively. When the first video is shortened by 1 minute, it becomes the content from the 1st to the 3rd minute; the music then covers the content from the 1st to the 2nd minute of the first video, and the music content is the content from 0 seconds to the 1st minute, as shown in fig. 17 (b).
As shown in fig. 18 (a), the first segment is a segment covered by the music: for example, background music plays while the first segment plays. The second segment is an uncovered segment, with no background music during its playback. When the length of the uncovered segment changes, for example the second segment is lengthened to the right as shown in fig. 18 (b), the music does not change. For example, if the length of the first video increases from 3 minutes to 4 minutes, the added video content has no relation to the music added to the first 2 minutes of the first video, and neither the length nor the content of the music changes.
It should be noted that multiple music materials may be added to the first video; when the user adjusts the position of one of them so that it is in the non-initial state, all the other music materials are also in the non-initial state.
Scene 7: adjusting the playing speed of the clip
When the playing speed of the first video is adjusted, the picture content of the video frames corresponding to the start and end positions of the material is always kept unchanged, and the number of video frames for adding the material in the second video is determined from the playing speeds of the first video detected before and after the first operation. In the embodiment of the present application, the playing speeds of multiple video clips can be adjusted simultaneously, and the playing speeds of the clips may differ.
Figs. 19 and 20 are schematic diagrams of the length of a material changing with the playing speed of a video according to an embodiment of the present application. When the first video is a single video clip and the playing speed of the first video is changed, the length of the material changes in the same proportion as the speed-change multiple. Fig. 19 (a) shows the state before adjustment, with the playing speed of the first video at 1x. As shown in fig. 19 (b), when the playing speed of the first video is adjusted from 1x to 2x, the number of video frames to which the material is added in the first video decreases and the material length is shortened. As shown in fig. 19 (c), when the playing speed of the first video is adjusted from 1x to 0.2x, the number of video frames to which the material is added in the first video increases and the material length grows.
When the first video consists of multiple video clips, the covered portion in each clip changes in proportion to the change of that clip's playing speed and does not follow the playing-speed changes of other clips. For example, the first video includes a first segment and a second segment, and the video frames to which the material is added span the two segments. When the playing speed of the first segment is adjusted to a first playing speed and the playing speed of the second segment is adjusted to a second playing speed, the increase or decrease in the number of video frames to which the material is added in the first segment is determined from the first playing speed, and the increase or decrease in the number of video frames to which the material is added in the second segment is determined from the second playing speed; the video frames for adding the material in the second video are then determined from these two increases or decreases.
As shown in fig. 20 (a), the playing speeds of the first clip and the second clip are both 1x. When the playing speed of the first clip is adjusted from 1x to 2x, as shown in fig. 20 (b), the number of video frames to which the material is added in the first clip decreases, the number in the second clip does not change, and the length of the material corresponding to the first clip becomes shorter. When the playing speed of the first clip is adjusted from 1x to 2x and the playing speed of the second clip is adjusted from 1x to 0.2x, as shown in fig. 20 (c), the number of video frames to which the material is added in the first clip decreases while the number in the second clip increases, so the material length corresponding to the first clip becomes shorter and that corresponding to the second clip becomes longer. The method provided by the embodiment of the present application thus makes it easier to set the playing effect of the video and improves the user experience.
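Because each covered segment scales independently, the material length after a speed change is a sum over the covered parts, as in this sketch (durations in seconds at the original 1x speed; names illustrative):

```kotlin
// A part of the material-covered range lying in one segment, with that
// segment's playing speed. Playing `duration` seconds of source at
// `speed`x occupies duration / speed seconds on the timeline.
data class CoveredPart(val duration: Double, val speed: Double)

fun materialLengthAfterSpeedChange(parts: List<CoveredPart>): Double =
    parts.sumOf { it.duration / it.speed }

// Fig. 20 (c): the part in the first clip at 2x shrinks, the part in
// the second clip at 0.2x grows.
// materialLengthAfterSpeedChange(listOf(CoveredPart(10.0, 2.0), CoveredPart(10.0, 0.2)))
//     == 5.0 + 50.0
```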
Scene 8: adding transitions
When the first video includes multiple clips, the user can add a transition between two adjacent clips and select different transition animations to enrich the visual effect of the video. Taking a first video that includes a first segment and a second segment as an example: as shown in fig. 21 (a), before the transition is added, the tail frame of the first segment adjoins the first frame of the second segment; as shown in fig. 21 (b), after the transition is added, the first segment and the second segment overlap on the time axis, the first few frames of the second segment overlapping the last few frames of the first segment, so the video is shortened. The number of overlapping video frames may be determined according to the length of the transition animation.
If materials of the same type are added to both the first segment and the second segment, adding a transition may cause the two materials to conflict. For example, if text-type materials are added to both segments, the video picture may contain too many elements after the transition is added, so the content of the video frame cannot be seen clearly; if music-type materials are added to both segments, two pieces of music play simultaneously after the transition is added, so the music content cannot be heard clearly. In this case, the video frames to which the material is added in the first segment are shortened, giving priority to keeping the start position of the material in the second segment on the first frame of the second segment. Specifically, after detecting an operation of adding a transition animation between the first segment and the second segment, the mobile phone determines the coverage duration of the transition animation, and then determines, according to the coverage duration, the number of frames to delete from the video frames to which the material is added in the first segment, counting back from their tail frame.
It should be noted that, after the added transition is canceled or switched, the shortened video frames in the first segment can be restored to their original length.
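The transition bookkeeping can be sketched as follows (the fps value and all names are assumptions; the original covered length is kept elsewhere so that canceling the transition can restore it):

```kotlin
// Frames removed from the tail of the first segment's material when a
// transition with the given coverage duration is added.
fun transitionOverlapFrames(coverageSeconds: Double, fps: Int = 30): Int =
    (coverageSeconds * fps).toInt()

// New covered length of the first segment's material; the original
// value should be retained so it can be restored when the transition
// is canceled or switched.
fun shortenForTransition(materialFrames: Int, coverageSeconds: Double, fps: Int = 30): Int =
    maxOf(materialFrames - transitionOverlapFrames(coverageSeconds, fps), 0)
```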
Scene 9: the material type is picture-in-picture
Picture-in-picture is a video presentation mode in which one video plays full screen while another plays in a small area of the picture. Fig. 22 shows the effect of adding a picture-in-picture. The interface shown in fig. 22 (a) includes a preview area 2201, a video track 2202, and a control area 2203; the mobile phone displays the video 2204 to be edited in the preview area 2201. When an operation on the "picture-in-picture" control is detected, the mobile phone displays a picture-in-picture 2205 in the preview area 2201, as shown in fig. 22 (b). The picture-in-picture 2205 floats above the video 2204 to be edited; that is, it can cover part of the picture content of the video 2204 to be edited. The picture-in-picture is not shown in the material track diagram.
The picture-in-picture 2205 may be of the video type or the picture type. For the changes of a picture-type picture-in-picture, refer to what is described in scenes 1 to 4.
For a video-type picture-in-picture, the initial length at the time of addition is the initial length of the picture-in-picture video itself. If the user does not adjust the video picture-in-picture material after adding it, its changes follow what is described in scene 1 and scene 3: if the video track is long enough, the length of the video picture-in-picture is unchanged; if the video track is too short to play the video picture-in-picture from beginning to end, the length of the video picture-in-picture is shortened and the picture-in-picture is not added to the first video in its entirety. It should be noted that, unlike a text material, when the video picture-in-picture has been shortened because the video track was too short, if the user continues adding clips to the first video, the length of the video picture-in-picture grows with the video length; that is, the video frames to which the video picture-in-picture is added increase, up to at most the initial length of the video picture-in-picture itself.
If the user shortens the video picture-in-picture after adding it, its changes follow what is described in scene 2 and scene 4. It should be noted that, when video frames are added to the first video, the maximum length to which the video picture-in-picture can be adjusted is its own initial duration, as shown in fig. 23. Fig. 23 (a) shows the first video before the first operation is performed: the tail frame of the video frames to which the video picture-in-picture is added in the first video is the tail frame of the second segment, and the duration of those video frames is less than the initial duration of the video picture-in-picture itself. When a clip is added between the first segment and the second segment in the first video, as shown in fig. 23 (b), the video frames to which the video picture-in-picture is added increase; but once the video picture-in-picture reaches its original length, those video frames have reached their maximum length and no longer grow as clips are added.
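The cap on a video picture-in-picture can be sketched in one expression (durations in seconds; names illustrative): growth with newly added clips is clamped to the picture-in-picture's own initial duration.

```kotlin
// Covered length of a video picture-in-picture after `addedLength`
// seconds of clips are inserted into the covered range, clamped to
// the picture-in-picture's own initial duration.
fun pipLengthAfterClipAdded(
    currentLength: Double,
    addedLength: Double,
    initialPipLength: Double
): Double = minOf(currentLength + addedLength, initialPipLength)
```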
In summary, according to the embodiment of the present application, based on the mapping relation between the start and end positions of the material and the video frames, the video frames to which the material is added in the first video, and the target video frames after the length of the first video changes, it is determined how the material changes with the video length in various scenarios, such as single-segment, cross-segment, adding video frames, deleting video frames, and different material types. The changes of the material thus better match the user's expectations, the video and the material change consistently as a whole, and the user experience is improved.
Fig. 24 is a diagram of the software architecture of a mobile phone implementing the video editing method according to an embodiment of the present application. The layered architecture divides the software into several layers, each with a distinct role and division of labor. The layers communicate with each other through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the Android Runtime and system libraries, and the kernel layer.
The application layer may include a series of application packages. As shown in fig. 24, the application package may include applications for cameras, gallery, calendar, phone calls, map, navigation, WLAN, bluetooth, music, video editing, short message, etc.
The application framework layer provides an application programming interface (application programming interface, API) and programming framework for application programs of the application layer. The application framework layer includes a number of predefined functions.
As shown in fig. 24, the application framework layer may include a window manager, a content provider, a view system, a phone manager, a resource manager, a notification manager, a display manager, a gesture manager, and the like.
The window manager is used for managing window programs. The window manager can acquire the size of the display screen, judge whether a status bar exists, lock the screen, intercept the screen and the like.
The content provider is used to store and retrieve data and make such data accessible to applications. The data may include video, images, audio, calls made and received, browsing history and bookmarks, phonebooks, etc.
The view system includes visual controls, such as controls to display text, controls to display pictures, and the like. The view system may be used to build applications. The display interface may be composed of one or more views. For example, a display interface including a text message notification icon may include a view displaying text and a view displaying a picture.
The telephony manager is used to provide the communication functions of the electronic device 100. Such as the management of call status (including on, hung-up, etc.).
The resource manager provides various resources for the application program, such as localization strings, icons, pictures, layout files, video files, and the like.
The notification manager allows an application to display notification information in the status bar. It can be used to convey notification-type messages that disappear automatically after a short stay without user interaction, for example notifications that a download is complete or message alerts. The notification manager may also present notifications as a chart or scroll-bar text in the system top status bar, such as notifications of applications running in the background, or as a dialog window on the screen. For example, text is prompted in the status bar, a prompt tone sounds, the electronic device vibrates, or an indicator light blinks.
The Android Runtime includes a core library and virtual machines. The Android Runtime is responsible for scheduling and management of the Android system.
The core library consists of two parts: one part is the functionality that the Java language needs to call, and the other part is the core library of Android.
The application layer and the application framework layer run in a virtual machine. The virtual machine executes java files of the application program layer and the application program framework layer as binary files. The virtual machine is used for executing the functions of object life cycle management, stack management, thread management, security and exception management, garbage collection and the like.
The system library may include a plurality of functional modules. For example: surface manager (surface manager), media Libraries (Media Libraries), three-dimensional graphics processing Libraries (e.g., openGL ES), 2D graphics engines (e.g., SGL), etc.
The surface manager is used to manage the display subsystem and provides a fusion of 2D and 3D layers for multiple applications.
The media libraries support playback and recording of a variety of commonly used audio and video formats, as well as still image files, and the like. The media libraries may support a variety of audio and video encoding formats, such as MPEG4, H.264, MP3, AAC, AMR, JPG, and PNG.
The three-dimensional graphic processing library is used for realizing three-dimensional graphic drawing, image rendering, synthesis, layer processing and the like.
The 2D graphics engine is a drawing engine for 2D drawing.
The kernel layer is a layer between hardware and software. The kernel layer at least contains display drivers.
The hardware layer at least comprises a display screen.
It will be appreciated that the above described software architecture is exemplary only and not limiting of the handset software architecture, and that in other embodiments, the handset may have more or less architecture, as the application is not limited in this regard.
When the mobile phone detects that the user opens the gallery, selects a video in the gallery, and clicks the editing control, then in response to the detected operations the mobile phone jumps from the gallery to the video editing interface, allowing the user to edit the video.
A hardware configuration of the mobile phone 100 for implementing the above method is described below with reference to fig. 25.
The handset 100 may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charge management module 140, a power management module 141, a battery 142, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, a sensor module 180, keys 190, a motor 191, an indicator 192, a camera 193, a display 194, and a subscriber identity module (subscriber identification module, SIM) card interface 195, etc. The sensor module 180 may include a pressure sensor 180A, a gyro sensor 180B, an air pressure sensor 180C, a magnetic sensor 180D, an acceleration sensor 180E, a distance sensor 180F, a proximity sensor 180G, a fingerprint sensor 180H, a temperature sensor 180J, a touch sensor 180K, an ambient light sensor 180L, a bone conduction sensor 180M, and the like.
The processor 110 may include one or more processing units, such as: the processor 110 may include an application processor (application processor, AP), a modem processor, a graphics processor (graphics processing unit, GPU), an image signal processor (image signal processor, ISP), a controller, a video codec, a digital signal processor (digital signal processor, DSP), a baseband processor, and/or a neural network processor (neural-network processing unit, NPU), etc. Wherein the different processing units may be separate devices or may be integrated in one or more processors.
The controller can generate operation control signals according to the instruction operation codes and the time sequence signals to finish the control of instruction fetching and instruction execution.
A memory may also be provided in the processor 110 for storing instructions and data. In some embodiments, the memory in the processor 110 is a cache memory. The memory may hold instructions or data that the processor 110 has just used or uses cyclically. If the processor 110 needs to use the instructions or data again, it can call them directly from the memory. This avoids repeated accesses and reduces the waiting time of the processor 110, thereby improving the efficiency of the system.
In some embodiments, the processor 110 may include one or more interfaces. The interfaces may include an integrated circuit (inter-integrated circuit, I2C) interface, an integrated circuit built-in audio (inter-integrated circuit sound, I2S) interface, a pulse code modulation (pulse code modulation, PCM) interface, a universal asynchronous receiver transmitter (universal asynchronous receiver/transmitter, UART) interface, a mobile industry processor interface (mobile industry processor interface, MIPI), a general-purpose input/output (GPIO) interface, a subscriber identity module (subscriber identity module, SIM) interface, and/or a universal serial bus (universal serial bus, USB) interface, among others.
The charge management module 140 is configured to receive a charge input from a charger. The charger can be a wireless charger or a wired charger. In some wired charging embodiments, the charge management module 140 may receive a charging input of a wired charger through the USB interface 130. In some wireless charging embodiments, the charge management module 140 may receive wireless charging input through a wireless charging coil of the cell phone 100. The charging management module 140 can also supply power to the mobile phone through the power management module 141 while charging the battery 142.
The power management module 141 is used for connecting the battery 142, and the charge management module 140 and the processor 110. The power management module 141 receives input from the battery 142 and/or the charge management module 140 to power the processor 110, the internal memory 121, the display 194, the camera 193, the wireless communication module 160, and the like. The power management module 141 may also be configured to monitor battery capacity, battery cycle number, battery health (leakage, impedance) and other parameters. In other embodiments, the power management module 141 may also be provided in the processor 110. In other embodiments, the power management module 141 and the charge management module 140 may be disposed in the same device.
The wireless communication function of the mobile phone 100 may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, a modem processor, a baseband processor, and the like.
The wireless communication module 160 may provide solutions for wireless communication including wireless local area network (wireless local area networks, WLAN) (e.g., wireless fidelity (wireless fidelity, wi-Fi) network), bluetooth (BT), global navigation satellite system (global navigation satellite system, GNSS), frequency modulation (frequency modulation, FM), near field wireless communication technology (near field communication, NFC), infrared technology (IR), etc. applied to the handset 100. The wireless communication module 160 may be one or more devices that integrate at least one communication processing module. The wireless communication module 160 receives electromagnetic waves via the antenna 2, modulates the electromagnetic wave signals, filters the electromagnetic wave signals, and transmits the processed signals to the processor 110. The wireless communication module 160 may also receive a signal to be transmitted from the processor 110, frequency modulate it, amplify it, and convert it to electromagnetic waves for radiation via the antenna 2.
The mobile phone 100 implements display functions through a GPU, a display 194, an application processor, and the like. The GPU is a microprocessor for image processing, and is connected to the display 194 and the application processor. The GPU is used to perform mathematical and geometric calculations for graphics rendering. Processor 110 may include one or more GPUs that execute program instructions to generate or change display information. The display screen 194 is used to display images, videos, and the like.
The internal memory 121 may be used to store computer executable program code including instructions. The internal memory 121 may include a storage program area and a storage data area. The storage program area may store an application program (such as a sound playing function, an image playing function, etc.) required for at least one function of the operating system, etc. The storage data area may store data (e.g., audio data, phonebook, etc.) created during use of the handset 100, etc. In addition, the internal memory 121 may include a high-speed random access memory, and may further include a nonvolatile memory such as at least one magnetic disk storage device, a flash memory device, a universal flash memory (universal flash storage, UFS), and the like. The processor 110 performs various functional applications and data processing of the mobile phone 100 by executing instructions stored in the internal memory 121 and/or instructions stored in a memory provided in the processor.
The handset 100 may implement audio functions through an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, an earphone interface 170D, an application processor, and the like. Such as music playing, recording, etc.
The pressure sensor 180A is used to sense a pressure signal and can convert the pressure signal into an electrical signal. In some embodiments, the pressure sensor 180A may be disposed on the display screen 194. There are many types of pressure sensors, such as resistive, inductive, and capacitive pressure sensors. A capacitive pressure sensor may comprise at least two parallel plates of conductive material; when a force is applied to the pressure sensor 180A, the capacitance between the electrodes changes, and the handset 100 determines the strength of the pressure from the change in capacitance. When a touch operation acts on the display 194, the mobile phone 100 detects the intensity of the touch operation through the pressure sensor 180A, and may also calculate the position of the touch from the detection signal of the pressure sensor 180A. In some embodiments, touch operations that act on the same touch position but with different intensities may correspond to different operation instructions. For example, when a touch operation whose intensity is smaller than a first pressure threshold acts on the short message application icon, an instruction to view the short message is executed; when a touch operation whose intensity is greater than or equal to the first pressure threshold acts on the short message application icon, an instruction to create a new short message is executed.
The gyro sensor 180B may be used to determine the motion gesture of the cell phone 100. In some embodiments, the angular velocity of the handset 100 about three axes (i.e., x, y, and z axes) may be determined by the gyro sensor 180B. The gyro sensor 180B may be used for photographing anti-shake. For example, when the shutter is pressed, the gyro sensor 180B detects the shake angle of the mobile phone 100, calculates the distance to be compensated by the lens module according to the angle, and makes the lens counteract the shake of the mobile phone 100 through the reverse motion, thereby realizing anti-shake. The gyro sensor 180B may also be used for navigating, somatosensory game scenes.
The acceleration sensor 180E can detect the magnitude of acceleration of the mobile phone 100 in various directions (typically three axes). The magnitude and direction of gravity can be detected when the handset 100 is stationary. The method can also be used for recognizing the gesture of the mobile phone, and is applied to the applications of horizontal and vertical screen switching, pedometers and the like.
The touch sensor 180K is also referred to as a "touch device". The touch sensor 180K may be disposed on the display screen 194, and together they form a touch screen, also called a "touchscreen". The touch sensor 180K is used to detect a touch operation acting on or near it. The touch sensor may pass the detected touch operation to the application processor to determine the type of touch event. Visual output related to the touch operation may be provided through the display 194. In other embodiments, the touch sensor 180K may also be disposed on the surface of the electronic device 100 at a position different from that of the display 194.
The keys 190 include a power key, a volume key, and the like. The keys 190 may be mechanical keys or touch keys. The handset 100 may receive key inputs and generate key signal inputs related to user settings and function control of the handset 100.
The indicator 192 may be an indicator light, may be used to indicate a state of charge, a change in charge, a message indicating a missed call, a notification, etc.
The SIM card interface 195 is used to connect a SIM card. A SIM card may be inserted into the SIM card interface 195 or removed from it to make contact with or separate from the handset 100. The handset 100 may support 1 or N SIM card interfaces, N being a positive integer greater than 1. The SIM card interface 195 may support Nano SIM cards, Micro SIM cards, and the like. Multiple cards, of the same or different types, may be inserted into the same SIM card interface 195 simultaneously. The SIM card interface 195 may also be compatible with different types of SIM cards and with external memory cards. The handset 100 interacts with the network through the SIM card to implement functions such as calls and data communication. In some embodiments, the handset 100 employs an eSIM, that is, an embedded SIM card; the eSIM card can be embedded in the handset 100 and cannot be separated from it.
It should be understood that the structure illustrated in the embodiments of the present application is not limited to the specific embodiment of the mobile phone 100. In other embodiments of the application, the handset 100 may include more or fewer components than shown, or certain components may be combined, or certain components may be split, or different arrangements of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The embodiment of the application also provides electronic equipment, which comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, which when executed by the processor performs the steps of any of the various method embodiments described above.
Embodiments of the present application also provide a computer readable storage medium storing a computer program which, when executed by a processor, implements steps for implementing the various method embodiments described above.
Embodiments of the present application provide a computer program product which, when run on a mobile terminal, causes the mobile terminal to perform steps that enable the implementation of the method embodiments described above.
In addition, embodiments of the present application also provide an apparatus, which may be embodied as a chip, component or module, which may include a processor and a memory coupled to each other; the memory is configured to store computer-executable instructions, and when the device is operated, the processor may execute the computer-executable instructions stored in the memory, so that the chip performs the methods in the above method embodiments.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/electronic device and method may be implemented in other manners. For example, the apparatus/electronic device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted, or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
In addition, each functional unit in the embodiments of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above-described embodiments, and may be implemented by a computer program to instruct related hardware, where the computer program may be stored in a computer readable storage medium, and the computer program may implement the steps of the method embodiments described above when executed by a processor. Wherein the computer program comprises computer program code which may be in source code form, object code form, executable file or some intermediate form etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing device/terminal apparatus, recording medium, computer memory, read-only memory (ROM), random access memory (random access memory, RAM), electrical carrier signals, telecommunications signals, and software distribution media. Such as a U-disk, removable hard disk, magnetic or optical disk, etc. In some jurisdictions, computer readable media may not be electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed over a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In the description above, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It should be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. The terms "comprising," "including," "having," and variations thereof mean "including but not limited to," unless expressly specified otherwise.
It should also be understood that, in the description of the present application, "/" means "or" unless otherwise indicated; for example, A/B may represent A or B. "And/or" herein merely describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may represent: A alone, both A and B, and B alone.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [a described condition or event] is detected" may be interpreted, depending on the context, as "upon determining", "in response to determining", "upon detecting [the described condition or event]", or "in response to detecting [the described condition or event]".
Reference in the specification to "one embodiment" or "some embodiments" or the like means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," and the like in the specification are not necessarily all referring to the same embodiment, but mean "one or more but not all embodiments" unless expressly specified otherwise.
The above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (22)

1. A method of editing video, the method comprising:
detecting a first operation from a first interface, wherein the first interface is an interface for editing a video, the first operation requests deleting part of the video frames in a first video or adding video frames to the first video, and a material is added in the first video;
generating a target video according to a target video frame in the first video and the video frame to which the material is added in the first video, wherein the target video frame exists in a second video, and the second video is the video corresponding to the first video after the first operation is performed on the first video.
2. The method of claim 1, wherein generating the target video according to the target video frames in the first video and the video frames to which the material is added in the first video comprises:
determining video frames for adding the material in the second video according to the target video frames in the first video and the video frames to which the material is added in the first video;
and generating the target video according to the second video.
3. The method of claim 2, wherein after detecting the first operation from the first interface, the method further comprises:
obtaining a mapping relation between a start-stop position of the material in the first video and the video frames to which the material is added in the first video, wherein the start-stop position comprises a start position and an end position, and the mapping relation comprises a first mapping relation or a second mapping relation, the first mapping relation being that the start position of the material changes along with the first frame of the video frames to which the material is added in the first video, and the second mapping relation being that the start position of the material changes along with the first frame of the video frames to which the material is added in the first video and the end position of the material changes along with the last frame of the video frames to which the material is added in the first video;
wherein determining the video frames for adding the material in the second video according to the target video frames in the first video and the video frames to which the material is added in the first video comprises:
determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation.
4. The method of claim 3, wherein the second mapping relation is determined after a second operation is detected from the first interface, the second operation requesting adjustment of the start-stop position of the material in the first video.
5. The method of claim 3, wherein the first mapping relation is determined after a third operation is detected from the first interface, the third operation being for adding the material to the first video.
6. The method of claim 3, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation comprises:
in a case where the first operation is to delete some of the video frames in the first video starting from the first frame of the first video, and the first frame of the video frames to which the material is added in the first video is deleted, determining the first frame of the remaining target video frames as the first frame of the video frames for adding the material in the second video.
7. The method of claim 6, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation further comprises:
in a case where the mapping relation is the second mapping relation, determining the last frame of the video frames to which the material is added in the first video as the last frame of the video frames for adding the material in the second video.
8. The method of claim 6, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation further comprises:
in a case where the mapping relation is the first mapping relation, determining the number of the video frames to which the material is added in the first video as the number of the video frames for adding the material in the second video.
9. The method of claim 3, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation comprises:
in a case where the first operation is to delete some of the video frames in the first video starting from the last frame of the first video, and the last frame of the video frames to which the material is added in the first video is deleted, determining the last frame of the remaining target video frames as the last frame of the video frames for adding the material in the second video.
10. The method according to any one of claims 6 to 9, wherein after determining the video frames for adding the material in the second video, the method further comprises:
detecting a fourth operation from the first interface, the fourth operation requesting that all of the video frames to which the material is added in the first video be included in the target video frames;
in response to the fourth operation, restoring the start and end positions at which the material is displayed to the first frame and the last frame of the video frames to which the material is added in the first video.
11. The method of claim 3, wherein the first video is divided into a first segment and a second segment between a first video frame and a second video frame, the first frame of the video frames to which the material is added in the first video and the first video frame being located in the first segment, and the last frame of the video frames to which the material is added in the first video and the second video frame being located in the second segment.
12. The method of claim 11, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation comprises:
in a case where the first operation is to delete some of the video frames in the first video starting from the last frame of the first segment or from the first frame of the second segment, the first frame of the video frames to which the material is added in the first video is not deleted, and the mapping relation is the second mapping relation, determining the video frames that are both video frames to which the material is added in the first video and target video frames as the video frames for adding the material in the second video.
13. The method of claim 12, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation further comprises:
in a case where the mapping relation is the first mapping relation, determining the first frame of the video frames to which the material is added in the first video as the first frame of the video frames for adding the material in the second video, and determining the number of the video frames to which the material is added in the first video as the number of the video frames for adding the material in the second video.
14. The method of claim 11, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation comprises:
in a case where the first operation is to delete some of the video frames in the first video starting from the last frame of the first segment, and the first frame of the video frames to which the material is added in the first video is deleted, determining the first frame of the second segment as the first frame of the video frames for adding the material in the second video.
15. The method of claim 11, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation comprises:
in a case where the first operation is to add a segment within the video frames to which the material is added in the first video, or to delete a segment from the video frames to which the material is added in the first video, and the mapping relation is the second mapping relation, determining the video frames to which the material is added together with the added segment, or the video frames to which the material is added with the deleted segment removed, as the video frames for adding the material in the second video.
16. The method of claim 11, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation comprises:
in a case where the first operation is to adjust the playing speed of the first segment to a first playing speed and adjust the playing speed of the second segment to a second playing speed, determining, according to the first playing speed, the number of frames by which the video frames to which the material is added in the first segment increase or decrease, and determining, according to the second playing speed, the number of frames by which the video frames to which the material is added in the second segment increase or decrease;
determining the video frames for adding the material in the second video according to the number of frames by which the video frames to which the material is added in the first segment increase or decrease and the number of frames by which the video frames to which the material is added in the second segment increase or decrease.
17. The method of claim 11, wherein determining the video frames for adding the material in the second video according to the target video frames in the first video, the video frames to which the material is added in the first video, and the mapping relation comprises:
in a case where the first operation is to add a transition animation between the first segment and the second segment, determining a coverage duration of the transition animation;
determining, according to the coverage duration, the number of frames to be deleted, starting from the first video frame, from the video frames to which the material is added in the first video.
18. The method according to any one of claims 1 to 17, further comprising:
in a case where the first operation is to add new video frames within the video frames to which the material is added in the first video, if the type of the material added in the first video is music, extending the playing content of the music according to the number of the new video frames.
19. The method of claim 18, wherein the method further comprises:
when any video frame in the first video is deleted, the playing content of the music is shortened from the end of the music.
20. The method according to any one of claims 1 to 17, wherein, in a case where the type of the material added in the first video is a picture-in-picture of a video type, a maximum number of the video frames to which the material is added in the first video is determined according to the duration of the added picture-in-picture video itself.
21. An electronic device, comprising: one or more processors; and one or more memories, wherein the one or more memories store one or more programs that, when executed by the one or more processors, cause the electronic device to perform the method of any one of claims 1 to 20.
22. A computer-readable storage medium having instructions stored therein which, when run on a computer, cause the computer to perform the method of any one of claims 1 to 20.
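
To make the frame bookkeeping recited in claims 3 and 6 to 8 concrete, the following is a minimal sketch of how an editor could recompute a material's attached frame range after frames are deleted from the head of the first video. It is illustrative only: the names Mapping, Material, and delete_from_head are hypothetical and not part of the claimed method.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Mapping(Enum):
    FOLLOW_FIRST = auto()           # first mapping relation: the start position follows
                                    # the first attached frame (frame count is preserved)
    FOLLOW_FIRST_AND_LAST = auto()  # second mapping relation: the start position follows
                                    # the first attached frame and the end position
                                    # follows the last attached frame

@dataclass
class Material:
    start: int       # index of the first video frame the material is attached to
    end: int         # index of the last video frame the material is attached to
    mapping: Mapping

def delete_from_head(m: Material, deleted: int, total: int) -> Material:
    """Recompute the attached range after the first `deleted` frames of a
    `total`-frame video are removed (hypothetical helper)."""
    last = total - deleted - 1                    # last valid index in the second video
    new_start = max(m.start - deleted, 0)         # claim 6: first remaining target frame
    if m.mapping is Mapping.FOLLOW_FIRST_AND_LAST:
        new_end = m.end - deleted                 # claim 7: the last attached frame is kept
    else:
        new_end = new_start + (m.end - m.start)   # claim 8: the frame count is kept
    return Material(new_start, min(new_end, last), m.mapping)
```

For example, for material attached to frames 10 to 49 of a 100-frame video, deleting the first 20 frames yields frames 0 to 29 under the second mapping relation and frames 0 to 39 (the original 40-frame count) under the first.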
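
Claim 16 scales the material's extent when the playing speed of each segment changes. A minimal sketch under the assumption that playing a segment at speed s divides its frame count by s (so 2x halves it and 0.5x doubles it); both function names are illustrative:

```python
def rescaled_frames(attached_frames: int, speed: float) -> int:
    # assumed model: a segment played at `speed` contributes attached_frames / speed frames
    return round(attached_frames / speed)

def attached_frames_after_speed_change(first_seg: int, second_seg: int,
                                       first_speed: float, second_speed: float) -> int:
    # the material's new extent is the sum of the two segments' rescaled contributions
    return rescaled_frames(first_seg, first_speed) + rescaled_frames(second_seg, second_speed)
```

For instance, 60 attached frames in the first segment at 2x speed and 30 attached frames in the second segment at 0.5x speed give 30 + 60 = 90 attached frames in the second video.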
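
Claims 18 and 19 tie the length of a music material to the attached frames. A sketch assuming a fixed frame rate; extend_music and trim_music are hypothetical names, not an API from the application:

```python
def extend_music(duration_ms: int, added_frames: int, fps: float = 30.0) -> int:
    """Claim 18: lengthen the music's playing content to cover newly added frames."""
    return duration_ms + round(added_frames * 1000 / fps)

def trim_music(duration_ms: int, deleted_frames: int, fps: float = 30.0) -> int:
    """Claim 19: shorten the music from its end when video frames are deleted."""
    return max(duration_ms - round(deleted_frames * 1000 / fps), 0)
```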
CN202210910809.2A 2022-05-30 2022-07-29 Video editing method, electronic equipment and storage medium Pending CN117201865A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN2022106010618 2022-05-30
CN202210601061 2022-05-30

Publications (1)

Publication Number Publication Date
CN117201865A true CN117201865A (en) 2023-12-08

Family

ID=88996712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210910809.2A Pending CN117201865A (en) 2022-05-30 2022-07-29 Video editing method, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117201865A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination