CN113242464A - Video editing method and device - Google Patents

Video editing method and device Download PDF

Info

Publication number
CN113242464A
CN113242464A
Authority
CN
China
Prior art keywords
video
area
input
thumbnail
thumbnails
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110116746.9A
Other languages
Chinese (zh)
Inventor
杨友华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202110116746.9A priority Critical patent/CN113242464A/en
Publication of CN113242464A publication Critical patent/CN113242464A/en
Pending legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44016Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving splicing one content stream with another content stream, e.g. for substituting a video clip
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/441Acquiring end-user identification, e.g. using personal code sent by the remote control or by inserting a card
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The application discloses a video editing method and device, and belongs to the technical field of video processing. The video editing method comprises the following steps: receiving a first input for a first thumbnail within a first area in a video editing page, the first area being used for displaying thumbnails in one-to-one correspondence with at least one video; in response to the first input, dividing a first video corresponding to the first thumbnail into a plurality of video segments; displaying a plurality of thumbnails corresponding one-to-one to the plurality of video segments in a second area in the video editing page; receiving a second input for a second thumbnail within the second area; and displaying the second thumbnail in a third area in the video editing page in response to the second input, the third area being used for displaying thumbnails of the candidate video segments of the spliced video. This alleviates the problem that video editing input is cumbersome.

Description

Video editing method and device
Technical Field
The application belongs to the technical field of communication, and particularly relates to a video editing method and device.
Background
At present, when clipping and editing a video, video cutting software or a video cutting function is usually used: a start point and an end point are set in a video containing the required material, the video segment between the start point and the end point is extracted, and a new video is generated, thereby cutting out the required material. Then, a plurality of cut video segments are selected through video splicing software or a video splicing function and spliced into a final video. However, this input method is cumbersome: a video segment must be cut out of each video and stored, and the stored video segments must then be selected again from the local video library.
Disclosure of Invention
The embodiment of the application aims to provide a video editing method and device, and the problem that video editing input is complicated can be solved.
In order to solve the technical problem, the present application is implemented as follows:
In a first aspect, an embodiment of the present application provides a video editing method, the method comprising: receiving a first input for a first thumbnail within a first area in a video editing page, the first area being used for displaying thumbnails in one-to-one correspondence with at least one video; in response to the first input for the first thumbnail, dividing a first video corresponding to the first thumbnail into a plurality of video segments; displaying a plurality of thumbnails corresponding one-to-one to the plurality of video segments in a second area in the video editing page; receiving a second input for a second thumbnail within the second area; and displaying the second thumbnail within a third area in the video editing page in response to the second input for the second thumbnail within the second area, the third area being used for displaying thumbnails of the candidate video segments of the spliced video.
In a second aspect, an embodiment of the present application provides a video editing apparatus, comprising: a first receiving module, configured to receive a first input for a first thumbnail within a first area in a video editing page, the first area being used for displaying thumbnails in one-to-one correspondence with at least one video; a first dividing module, configured to divide, in response to the first input for the first thumbnail, a first video corresponding to the first thumbnail into a plurality of video segments; a first display module, configured to display a plurality of thumbnails corresponding one-to-one to the plurality of video segments in a second area of the video editing page; a second receiving module, configured to receive, after the plurality of thumbnails are displayed in the second area, a second input for a second thumbnail within the second area; and a second display module, configured to display the second thumbnail in a third area in the video editing page in response to the second input for the second thumbnail within the second area, the third area being used for displaying thumbnails of the candidate video segments of the spliced video.
In a third aspect, an embodiment of the present application provides an electronic device, which includes a processor, a memory, and a program or instructions stored on the memory and executable on the processor, where the program or instructions, when executed by the processor, implement the steps of the video editing method according to the first aspect.
In a fourth aspect, the present application provides a readable storage medium, on which a program or instructions are stored, which when executed by a processor implement the steps of the video editing method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the video editing method according to the first aspect.
In the embodiment of the application, the video editing page is divided into three areas: the first area is used for displaying thumbnails of the original video material, the second area is used for displaying thumbnails of the divided video segments, and the third area is used for displaying thumbnails of the candidate video segments of the spliced video. The dividing and splicing inputs of a video can thus be integrated into the same video editing page: when editing a video, the user can add the video segments divided from the original material in the first area to the second area, and add video segments from the second area to the third area, without repeatedly cutting, storing, and re-selecting videos, which alleviates the problem that video editing input is cumbersome.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an alternative embodiment of a video editing method provided herein;
FIG. 2 is an interface schematic diagram of yet another alternative embodiment of a video editing method provided herein;
FIG. 3 is a schematic flow chart diagram illustrating an alternative embodiment of a video editing method provided herein;
FIG. 4 is a block diagram of an alternative embodiment of a video editing apparatus provided in the present application;
FIG. 5 is a block diagram of an alternative embodiment of an electronic device as provided herein.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "first", "second", and the like in the description and claims of the present application are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used are interchangeable under appropriate circumstances, so that the embodiments of the application are capable of operating in sequences other than those illustrated or described herein. In addition, "and/or" in the specification and claims means at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects before and after it.
The video editing method provided by the embodiment of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof.
As shown in fig. 1, an embodiment of the present application provides a video editing method, which may include the following steps 101 to 105:
step 101:
a first input is received for a first thumbnail within a first region in a video editing page.
The video editing page may be a page corresponding to a video editing function in video editing software, or a page in a browser, and the like, which is not limited in the embodiment of the present application. The video editing page may be displayed on a display screen of the electronic device. When the display screen cannot show all the contents of the video editing page at once, the remaining contents may be displayed by sliding or page-turning. For example, on the touch display screen of a mobile phone, the user may touch a blank part of the video editing page and, without lifting the finger, slide up, down, left, or right to bring the rest of the page into view, or turn pages through a page-turning input.
In an embodiment of the present application, a video editing page may include three regions: a first region for displaying a thumbnail of an original video material; a second area for displaying thumbnails of the divided video clips; and a third area for displaying thumbnails of the candidate video segments of the spliced video. That is, the contents displayed in the three areas are thumbnails of the corresponding videos/video clips.
For example, fig. 2 is an alternative interface interaction diagram of a video editing page. As shown in fig. 2, the video editing page includes a first area 201, a second area 202, and a third area 203. In some application scenarios, the contents of the three areas may not fit on the screen simultaneously due to the limits of the screen display area; in that case, the page can be browsed by the sliding or page-turning manner described above. In an alternative example, the display space of the three areas in the video editing page is preset and fixed; when the content of any one area cannot be completely displayed in its preset display space, that area may be displayed in pages. For example, in the second area 202 shown in fig. 2, there are too many thumbnails to present fully within the area, so the thumbnails are paged, and the user can select which page of thumbnails is currently displayed through an input on the page-turning icon 205.
The first area is used for displaying at least one thumbnail corresponding to at least one video.
The thumbnail may be a dynamic image generated from a plurality of frame images extracted from the corresponding video (or video segment). For example, the dynamic image may be in GIF (Graphics Interchange Format). Using a dynamic image generated from multiple frames as the cover of a video displays more information than a static image, allowing the user to quickly preview more of the video's content and identify a video from its thumbnail. Representing a video as a card-shaped thumbnail also makes it convenient to click, drag, sort, and otherwise manipulate, so operation is quick and convenient.
Optionally, the number of extracted frames or the extraction time interval may be set in advance, and the specific setting mode may be user-defined or factory default setting.
For example, it may be preset that 10 frames are extracted from the first video to generate the dynamic image. When the first video needs to be added to the first area, 10 frames are extracted from the first video at equal time intervals, the 10 frames are composited into a dynamic image in extraction order, and the dynamic image is used as the thumbnail of the first video and displayed dynamically in the first area.
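A minimal sketch of this equal-interval sampling step (the function name and the default of 10 frames are illustrative choices, not specified by the patent):

```python
def thumbnail_frame_times(duration_s: float, num_frames: int = 10) -> list:
    """Timestamps (seconds) at which to grab frames for a dynamic thumbnail.

    The video is split into num_frames equal intervals and one frame is
    sampled at the midpoint of each, so the cover previews the whole video
    rather than clustering frames at one end.
    """
    if num_frames <= 0 or duration_s <= 0:
        return []
    step = duration_s / num_frames
    return [round((i + 0.5) * step, 3) for i in range(num_frames)]
```

For a 20-second video this yields 10 timestamps from 1.0 s to 19.0 s; the frames grabbed at those times would then be composited into the dynamic image in order.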
The video displayed in the first area may be part or all of the video in the local video set, and optionally, the user may select part of the video in the local video set to be displayed in the first area. The videos in the local video collection may be locally generated (e.g., recorded or edited) or downloaded over a network to a local video.
For example, the thumbnails displayed in the first area may be displayed in a certain layout. In one example, when thumbnails are displayed in the first region, the display may be performed based on a preset layout manner in an order selected by a user.
For example, when browsing videos on the internet, a user can download interesting videos and store them locally on the mobile phone; as another example, a user can record a video through the mobile phone and store it locally. The user may then open the video editing software, choose to "open" the local video collection, and select in the video list at least one video that may be needed for editing; after the user confirms the selection, thumbnails of the selected videos are displayed in the first area. Specifically, after the user confirms the selection of at least one video, 15 frames of images are randomly extracted from each video, the extracted 15 frames are composited into a dynamic thumbnail in their order within the video, and the thumbnails of the videos selected by the user are appended after the thumbnails currently displayed in the first area. If more material is needed, videos from the local video collection can continue to be selected and added to the first area.
The first input is used for indicating that the video corresponding to the thumbnail image receiving the first input is subjected to video segmentation and is segmented into a plurality of video segments. The first input may be an operation input received through an interactive device such as a mouse, a keyboard, a touch screen, and the like, and a specific form of the operation input may be preset, for example, a user may press a thumbnail for a long time on the touch screen, or click the thumbnail through a left mouse button, and the like. After receiving the first input, step 102 is performed.
Step 102:
In response to the first input for the first thumbnail, the first video corresponding to the first thumbnail is divided into a plurality of video segments. Each video segment is a portion of the first video, different video segments do not overlap in time, and together the divided video segments make up the first video.
Optionally, the video may be segmented according to its images or sounds. Alternatively, the video may be divided equally by time: for example, after the user selects "divide equally by time" in the menu option 204 shown in fig. 2, a new menu may pop up offering options of dividing into 2, 3, 4, or 5 equal parts, and after the user selects an option, the video is divided by time into the selected number of equal segments. Alternatively, the user may set several division times in the video, and the video is then divided into a plurality of video segments at those times: for example, after the user selects "custom division" in the menu option 204 of fig. 2, a time axis may be displayed below the corresponding thumbnail, and a division time can be set at any position along the time axis. Optionally, when the user clicks a position on the time axis, the image frame corresponding to that time in the video may be shown in the thumbnail's display area, so that the user can preview the video's content at that moment.
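The "divide equally by time" option can be sketched as follows (a hypothetical helper returning segment boundaries in seconds):

```python
def split_equally(duration_s: float, parts: int) -> list:
    """Divide [0, duration_s] into `parts` equal, non-overlapping
    (start, end) segments, as for the 2/3/4/5 equal-parts menu options."""
    if parts < 1 or duration_s <= 0:
        return []
    step = duration_s / parts
    return [(round(i * step, 3), round((i + 1) * step, 3)) for i in range(parts)]
```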
The former is an automatic segmentation method: the user does not need to set division times, and the video is instead segmented according to its own characteristics, such as image, sound, and time. This avoids the operation of manually selecting division times and improves efficiency. Moreover, when a video is segmented based on characteristics such as image, sound, and time, video segments with different backgrounds, persons, scenes, and music can be obtained by making full use of those characteristics, which better matches the user's segmentation intent and yields a more precise segmentation result, avoiding division times that are set incorrectly because manual selection is not precise enough.
Illustratively, as shown in fig. 3, when the step 102 is executed to divide the first video corresponding to the first thumbnail into a plurality of video segments, the following steps may be executed:
step 1021:
in the first video, the moment when the change of the image picture and/or the sound intensity of the adjacent frame reaches the preset condition is determined, and the segmentation moment is obtained.
The change of the image picture of adjacent frames refers to the change between the pictures of two adjacent frames. Optionally, the change may be determined from the pixel values of the images; for example, the change may be measured as the difference between the mean pixel values of two adjacent frames.
Alternatively, the change of the image picture may be determined by comparing the scene, the object, and the background in two adjacent image frames to judge the degree of change. For example, on the basis of person (object) recognition, if the persons in two adjacent image frames are recognized as not being the same person, the change is judged to reach the preset condition; on the basis of background recognition, if the difference between the mean background colors of two adjacent image frames exceeds a preset threshold, the preset condition is judged to be reached.
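The mean-pixel-value criterion can be sketched as below. Frames are modeled as 2-D lists of grayscale values, and the threshold of 30 is an illustrative assumption, not a value from the patent:

```python
def mean_pixel_change(prev_frame, next_frame) -> float:
    """Absolute difference between the mean pixel values of two frames.

    Frames are 2-D lists of grayscale values (a stand-in for decoded images).
    """
    def mean(frame):
        flat = [p for row in frame for p in row]
        return sum(flat) / len(flat)
    return abs(mean(next_frame) - mean(prev_frame))

def is_segmentation_point(prev_frame, next_frame, threshold: float = 30.0) -> bool:
    """Treat the gap between two adjacent frames as a segmentation moment
    when the mean-pixel-value change reaches the preset condition."""
    return mean_pixel_change(prev_frame, next_frame) >= threshold
```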
For example, as shown in fig. 3, when determining a time when the change of the image reaches a preset condition in the first video in step 1021 is performed, and obtaining the segmentation time, the following steps 1211 to 1213 may be performed:
step 1211:
executing preset image recognition processing aiming at the first video, wherein the preset image recognition processing comprises at least one processing mode of the following steps: object recognition, scene recognition, background recognition.
The object recognition, the scene recognition, and the background recognition may be performed by using an existing image recognition method, for example, an object and a background are recognized in an image frame by using an image threshold segmentation method, extracting an image contour, and the like, and an image recognition model for recognizing a scene is obtained by deep learning and a large amount of image training, which is not specifically limited in this embodiment of the present application.
Step 1212:
and determining adjacent image frames with changes of objects and/or scenes and/or backgrounds according to the recognition result of the preset image recognition processing.
After at least one of an object, a scene, and a background is identified in the images of the first video, two adjacent image frames in which a change exists are determined. For example, if a different person appears in the next frame, or the person in the previous frame disappears in the next frame, the objects in the two adjacent frames are judged to be different, and a change is determined to exist; or, if the scene of the previous frame is recognized as indoors and the scene of the next frame as outdoors, the two adjacent frames can be judged to be in different scenes, that is, a change exists between the two adjacent frames.
An alternative embodiment is as follows. First, a plurality of frames are extracted from the first video at preset intervals, and the object and/or scene and/or background of each extracted frame is recognized. Then, in extraction order, each pair of temporally adjacent extracted frames is compared to see whether the object and/or scene and/or background has changed. If there is a change, further frames may be extracted between the two frames, the preset image recognition processing performed on them, and the presence of a change determined iteratively from the recognition results. If there is no change, the image frames between the two extracted frames need not be recognized. Through this comparison, which is similar to a binary search, the number of times the preset image recognition processing is executed can be reduced, lowering both computation time and memory usage.
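This coarse-to-fine search can be sketched as below. Here `label_of(i)` stands in for the expensive recognition of frame `i` (returning, say, a scene or person id); the function and parameter names are illustrative, and the sketch assumes at most one change within each refined interval:

```python
def find_change_boundaries(label_of, num_frames: int, stride: int = 8) -> list:
    """Frame indices i where label_of(i - 1) != label_of(i).

    Frames are first sampled every `stride` frames; only intervals whose
    sampled endpoints disagree are refined by halving, like a binary
    search, so label_of (the costly recognition step) runs far fewer
    times than a frame-by-frame scan would require.
    """
    if num_frames < 2:
        return []
    boundaries = []

    def refine(lo, hi):
        # lo and hi are frame indices whose labels are known to differ.
        if hi - lo == 1:
            boundaries.append(hi)  # the change lies between frames lo and hi
            return
        mid = (lo + hi) // 2
        if label_of(mid) != label_of(lo):
            refine(lo, mid)
        if label_of(hi) != label_of(mid):
            refine(mid, hi)

    samples = list(range(0, num_frames, stride))
    if samples[-1] != num_frames - 1:
        samples.append(num_frames - 1)
    for lo, hi in zip(samples, samples[1:]):
        if label_of(lo) != label_of(hi):
            refine(lo, hi)
    return boundaries
```

A division time would then be set at each returned boundary index.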
Step 1213:
the segmentation instants are set according to the instants of adjacent image frames at which there is a change.
If a change in adjacent image frames is found, a split time may be set between two image frames. After all the adjacent image frames with changes in the first video are found, corresponding division time is set between every two adjacent image frames with changes.
Next, after performing step 1021, perform step 1022:
the first video is divided into a plurality of video segments based on the division time.
After the division time is obtained, the first video is divided into a plurality of video segments according to the division time. For example, for a 15-second first video, a first division time of 2 seconds, a second division time of 5 seconds, a third division time of 8 seconds, and a fourth division time of 14 seconds are obtained, and then the first video may be divided into a first video segment of 0-2 seconds, a second video segment of 2-5 seconds, a third video segment of 5-8 seconds, a fourth video segment of 8-14 seconds, and a fifth video segment of 14-15 seconds.
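The 15-second example above can be reproduced with a small helper (hypothetical name; times in seconds):

```python
def split_at_times(duration_s: float, split_times: list) -> list:
    """Cut [0, duration_s] into (start, end) segments at the given
    user-set division times, dropping any times outside the video."""
    cuts = [0.0] + sorted(t for t in split_times if 0.0 < t < duration_s) + [duration_s]
    return list(zip(cuts, cuts[1:]))
```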
Optionally, in addition to setting the division times according to image changes, the division times may also be set according to changes in sound intensity. For example, a percentage threshold for sound-intensity change may be preset. The strongest and weakest sound-intensity values in the video are found, and their difference is taken as the maximum sound-intensity difference. Then, for every two adjacent sound sampling points, the difference between their intensities is computed as a percentage of the maximum sound-intensity difference. If this percentage exceeds the threshold, a division time is set between the two sampling points; otherwise, no division time is set there.
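A sketch of this sound-based rule, with sampled intensities as a list (the 50% default threshold is an illustrative assumption):

```python
def sound_split_indices(intensities: list, pct_threshold: float = 50.0) -> list:
    """Indices i such that a division time should be set between sound
    samples i-1 and i: the intensity jump between them, as a percentage
    of the video's full intensity range, exceeds pct_threshold."""
    rng = max(intensities) - min(intensities)
    if rng == 0:
        return []  # constant sound: no division times
    splits = []
    for i in range(1, len(intensities)):
        pct = abs(intensities[i] - intensities[i - 1]) / rng * 100.0
        if pct > pct_threshold:
            splits.append(i)
    return splits
```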
In the embodiment of the application, a plurality of video segmentation modes can be provided for a user to select. In this case, when the first input for the first thumbnail is received, a selection of a plurality of division methods by the user may be received, and the video may be divided according to the division method selected by the user. Therefore, multiple segmentation modes can be provided for a user, the user can conveniently select one segmentation mode for segmentation, and the operation efficiency is improved.
Accordingly, in this example, step 101 receiving a first input for a first thumbnail within a first region in a video editing page may include steps 1011 to 1013 as follows:
step 1011, receiving a seventh input for the first thumbnail;
step 1012, responding to a seventh input, and popping up and displaying a plurality of options for representing a plurality of segmentation modes;
step 1013, a first input for a first option of the plurality of options is received.
The first input and the seventh input may be operation inputs received through an interactive device such as a mouse, a keyboard, a touch screen, and the like, and a specific form of the operation inputs may be preset. For example, the seventh input may be a long press of a thumbnail, and as shown in fig. 2, after the user long presses the thumbnail in the first area, the seventh input is received based on the long press operation of the user, and in response to the seventh input, an option menu 204 of a plurality of options pops up at the upper right of the thumbnail, and the plurality of options displayed in the option menu 204 are respectively used for representing different dividing manners. When a user clicks one of the options, a first input for the option is received, and therefore, the segmentation mode selected by the user is determined to be the segmentation mode represented by the option clicked by the user.
Step 103:
and displaying a plurality of thumbnails corresponding to the video clips in a one-to-one manner in a second area of the video editing page.
The second area is used for displaying thumbnails of the divided video segments. After the division into a plurality of video segments, a corresponding thumbnail can be generated for each video segment; the detailed implementation of thumbnail generation may be as described in step 101 and is not repeated here. After the plurality of thumbnails corresponding one-to-one to the plurality of video segments are obtained, they are added to the second area for display, optionally appended after the existing thumbnails. Optionally, in addition to displaying the thumbnails of the divided video segments in the second area, the thumbnails corresponding to undivided videos may remain displayed in their original area.
When the plurality of thumbnails corresponding to the plurality of video segments are displayed, an engaging display effect can be provided to the user through a dynamic special effect. The dynamic special effect may be displayed after the first input is received, so that a dynamic, interactive display effect is presented during the processing time of dividing the video and generating the thumbnails, improving the user experience. For example, the dynamic special effect may show fragmented image elements appearing around the screen and then aggregating to the locations of the newly added thumbnails, thereby forming the thumbnails. The foregoing is by way of example only and is not intended as a limitation on the embodiments of the present application.
As an alternative example, as shown in fig. 3, step 103 of displaying, in the second area of the video editing page, the plurality of thumbnails corresponding one-to-one to the plurality of video segments may be performed as the following step 1031:
Displaying the first thumbnail exploding into a plurality of fragments that scatter into the second area, forming a dynamic special effect of the plurality of thumbnails.
Step 104:
a second input is received for a second thumbnail within a second region.
The second input is used for indicating that the video segment corresponding to the thumbnail for which the second input is received is to be added to the third area, namely the area for displaying the thumbnails of the candidate video clips of the spliced video. The second input may be an operation input received through an interactive device such as a mouse, a keyboard, or a touch screen, and its specific form may be preset. For example, the user may touch the second thumbnail and, without lifting the finger, drag it to the third area; or the user may right-click the second thumbnail with a mouse and select the "add as a candidate video clip" option in the pop-up right-click menu, and the like, which is not limited in the embodiments of the present application.
Step 105:
in response to a second input for a second thumbnail within the second region, the second thumbnail is displayed within a third region in the video editing page.
The third area is used for displaying thumbnails of the candidate video clips of the spliced video. After the second input for the second thumbnail within the second area is received, the second thumbnail is displayed within the third area. The original thumbnails in the third area may be kept unchanged, and the second thumbnail is appended after them. Optionally, any thumbnail for which the second input is received may still remain displayed in its original area after it is added to the third area for display.
Optionally, the first input and the second input described above may each be received for a plurality of thumbnails at the same time. For example, the user may first select a plurality of thumbnails in the first area and/or the second area and then perform the first input, indicating that the videos or video segments corresponding to the selected thumbnails are to be divided. For another example, the user may select a plurality of thumbnails in the first area and/or the second area and then perform the second input, indicating that the selected thumbnails are to be added and displayed within the third area.
As an alternative example, after displaying the second thumbnail in the third area in the video editing page, the method may further include:
step 106, receiving a third input;
and step 107, responding to a third input, splicing the candidate video clips corresponding to all the thumbnails displayed in the third area to obtain a target video.
The third input is used for indicating that the candidate video segments corresponding to all the thumbnails in the third area are to be spliced. The candidate video segments displayed in the third area are selected from the first area and/or the second area, and may include original, undivided videos from the first area and/or divided video segments from the second area.
Optionally, when the videos/video clips in the third area are spliced, the splicing may be performed according to their display order in the third area. Specifically, before the third input is received at step 106, a fourth input for sorting the thumbnails within the third area may also be received. For example, the user may long-press any thumbnail in the three areas to put every thumbnail into a draggable state, release the finger, and then touch the thumbnail to be moved and drag it to the target position. The thumbnail dragging method in this example may be used for sorting within any one of the three areas, and may also be used to drag a thumbnail from one area to another.
Furthermore, in response to a fourth input, the display order of the thumbnails in the third area may be sorted according to an instruction of the fourth input, and accordingly, when step 107 is executed to splice the candidate video segments corresponding to all the thumbnails displayed in the third area to obtain the target video, the candidate video segments corresponding to all the thumbnails may be spliced into the target video according to the display order of the thumbnails in the third area. Optionally, after the target video is spliced, the target video may be saved to a designated location (such as a local or network address), and then, three areas in the video editing page may not be cleared, and the current video/video clip/candidate video clip is still retained, so that the user may enter into the next video splicing by using the current content.
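The drag-to-sort and splice-in-display-order behavior can be sketched as list operations. The sketch below is a minimal model under assumed names; it treats each "segment" as a list of frames for illustration, whereas a real implementation would concatenate actual video data.

```python
# Minimal sketch of steps 106-107: the candidate segments are spliced in the
# display order of their thumbnails in the third area, which the fourth input
# may have re-sorted by dragging.

def reorder(thumbnails, src, dst):
    """Fourth input: drag the thumbnail at index src to index dst."""
    items = list(thumbnails)
    items.insert(dst, items.pop(src))
    return items

def splice(thumbnails, segments_by_thumbnail):
    """Third input: concatenate segments following the thumbnail order."""
    return [frame
            for thumb in thumbnails
            for frame in segments_by_thumbnail[thumb]]

segments = {"A": ["a1", "a2"], "B": ["b1"], "C": ["c1", "c2"]}
order = reorder(["A", "B", "C"], 2, 0)   # drag thumbnail C to the front
target_video = splice(order, segments)   # segments of C, then A, then B
```

The key design point from the text is that the splice order is derived from the on-screen order, so sorting the thumbnails is the only ordering input the user needs to give.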
In a specific embodiment in which a video in the first area is added to the third area, before the third input is received in step 106, the following steps 108 and 109 are performed:
step 108, receiving a fifth input aiming at a third thumbnail in the first area;
The fifth input is used for indicating that the video corresponding to the thumbnail for which the fifth input was received is to be added to the third area. The fifth input is similar to the second input and is not described in detail here. Alternatively, the fifth input may be the same operation as the second input.
And step 109, responding to a fifth input, and displaying a third thumbnail in the third area.
Step 109 may be similar to step 105, and reference may be made to the description of step 105, which is not repeated herein.
Further, the video or video segment corresponding to a thumbnail displayed in the second area or the third area can still be divided again. It should be noted that if the video or video segment corresponding to a thumbnail in the third area has already been divided, the division may not be continued even when a division instruction is received. Optionally, in one example, if a different division manner is selected, the already-divided video or video segment in the third area may be re-divided according to that different manner, and the result placed into the second area.
As an alternative example, before receiving the second input for the second thumbnail within the second area, a sixth input for a third thumbnail may also be received, the third thumbnail being one of the thumbnails displayed in the second area or the third area. Then, in response to the sixth input, the video or video segment corresponding to the third thumbnail is divided into a plurality of video segments, and a plurality of thumbnails corresponding one-to-one to those video segments are displayed in the second area.
In the embodiment of the application, the video editing page is divided into three areas: the first area is used for displaying thumbnails of the original video material, the second area is used for displaying thumbnails of the divided video clips, and the third area is used for displaying thumbnails of the candidate video clips of the spliced video, so that the inputs for dividing and splicing a video can be integrated into the same video editing page.
When the user edits a video, video segments obtained by dividing the original material in the first area can be added to the second area, and video segments in the second area can be added to the third area. In some alternative examples, undivided original video material in the first area may be placed directly into the third area. In other alternative examples, a video segment in the second area, or a video/video segment in the third area, may be further divided, with the division result placed into the second area. In still other examples, videos in the third area that are no longer intended as candidate segments of the spliced video may be removed: if a thumbnail continues to be displayed in its original area after being added from the first or second area to the third area, the unneeded candidate segment may be deleted directly from the third area; if the thumbnail is not retained in its original area, that is, the thumbnail is moved from the first or second area into the third area, the thumbnail of the unneeded candidate segment may be moved back to its original area.
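The three-area linkage can be modeled as three lists with move and divide operations. The class and naming below are purely illustrative (they do not appear in the patent); the sketch models only the case where a divided segment's thumbnail is appended to the candidate area and an unneeded candidate is deleted.

```python
# Illustrative data model for the three-area workflow: first area holds
# original material, second area holds divided segments, third area holds
# candidates for the spliced video.

class EditingPage:
    def __init__(self, originals):
        self.first = list(originals)   # original video material
        self.second = []               # divided video segments
        self.third = []                # candidate segments for splicing

    def divide(self, thumb, n):
        """Divide a video into n segments; segment thumbnails go to area 2."""
        self.second += [f"{thumb}-seg{i}" for i in range(1, n + 1)]

    def add_candidate(self, thumb):
        """Second/fifth input: add a thumbnail to the third area."""
        self.third.append(thumb)

    def remove_candidate(self, thumb):
        """Delete an unneeded candidate from the third area."""
        self.third.remove(thumb)

page = EditingPage(["video1"])
page.divide("video1", 3)          # second area gets video1-seg1..seg3
page.add_candidate("video1")      # an undivided original as a candidate
page.add_candidate("video1-seg2")
page.remove_candidate("video1")   # drop the unneeded candidate
```

Whether a thumbnail is copied into the third area (remaining in its original area) or moved there is an optional behavior in the text; the model above uses the copy variant.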
In this way, through linked operations on the thumbnails in the three areas, the user can conveniently divide and splice a plurality of videos. The operation is flexible, the input required for video editing is greatly simplified, and the problem of cumbersome video editing input is solved.
Optionally, the video editing method provided in the embodiments of the present application may be executed by the electronic device under the control of application software installed in the electronic device, for example an APP (application program) on a mobile phone, an input system component on a mobile phone, application software on a computer, and the like.
The embodiments of the present application also provide a video editing apparatus. It should be noted that, for the video editing method provided in the embodiments of the present application, the executing body may be a video editing apparatus, or a control module in the video editing apparatus for executing the video editing method. In the embodiments of the present application, the video editing method provided herein is described by taking a video editing apparatus executing the video editing method as an example.
The video editing apparatus provided in the embodiments of the present application is described in detail below with reference to the accompanying drawings through specific embodiments and application scenarios thereof. For the content that is not described in detail in the video editing apparatus provided in the embodiment of the present application, reference may be made to the video editing method provided in the embodiment of the present application, and details are not described herein again.
As shown in fig. 4, the present embodiment provides a video editing apparatus 10, where the apparatus 10 includes a first receiving module 11, a first dividing module 12, a first display module 13, a second receiving module 14, and a second display module 15.
The first receiving module 11 is configured to receive a first input for a first thumbnail in a first area in a video editing page; the first area is used for displaying at least one thumbnail corresponding to at least one video one by one;
the first dividing module 12 is configured to divide a first video corresponding to a first thumbnail into a plurality of video segments in response to a first input for the first thumbnail;
the first display module 13 is configured to display a plurality of thumbnails corresponding to the plurality of video segments one to one in a second area of the video editing page;
the second receiving module 14 is configured to receive a second input for a second thumbnail in a second area after displaying, in the second area of the video editing page, a plurality of thumbnails corresponding to the plurality of video segments one to one;
the second display module 15 is configured to display a second thumbnail in a third area in the video editing page in response to a second input for the second thumbnail in the second area; and the third area is used for displaying thumbnails of the candidate video clips of the spliced video.
In an optional example, the apparatus 10 may further include:
the third receiving module is used for receiving a third input after the second thumbnail is displayed in a third area in the video editing page;
and the splicing module is used for responding to a third input, splicing the candidate video clips corresponding to all the thumbnails displayed in the third area to obtain the target video.
Further, in an optional example, the apparatus 10 may further include: a fourth receiving module, configured to receive a fourth input before receiving the third input; the sorting module is used for responding to a fourth input and sorting the display sequence of the thumbnails in the third area according to the indication of the fourth input; the splicing module can splice the candidate video clips corresponding to all the thumbnails into the target video according to the display sequence of the thumbnails in the third area.
Based on this, an alternative example is provided, the apparatus 10 may further comprise:
a fifth receiving module for receiving a fifth input for a third thumbnail within the first area before receiving the third input;
and a third display module for displaying a third thumbnail in a third area in response to a fifth input.
In an alternative embodiment, the apparatus 10 may further comprise:
a sixth receiving module for receiving a sixth input for a third thumbnail before receiving a second input for a second thumbnail within the second area; the third thumbnail is one of thumbnails displayed in the second area or the third area;
the second division module is used for responding to a sixth input and dividing the video or the video segment corresponding to the third thumbnail into a plurality of video segments;
and the fourth display module is used for displaying, in the second area, a plurality of thumbnails corresponding one-to-one to the plurality of video segments obtained by dividing the video or video segment corresponding to the third thumbnail.
As an optional example, the first segmentation module 12 may include:
the first determining module is used for determining the moment when the change of the image picture and/or the sound intensity of the adjacent frame reaches the preset condition in the first video to obtain the segmentation moment;
and the third segmentation module is used for segmenting the first video into a plurality of video segments based on the segmentation moment.
Wherein the first determining module may include:
the execution module is used for executing preset image recognition processing on the first video, wherein the preset image recognition processing comprises at least one of the following processing modes: object identification, scene identification and background identification;
the second determining module is used for determining adjacent image frames with changes of objects and/or scenes and/or backgrounds according to the recognition result of the preset image recognition processing;
and the setting module is used for setting the segmentation moment according to the moment of the adjacent image frame with the change.
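The division logic described by the modules above (detect where adjacent frames change enough, then cut at those moments) can be sketched as follows. This is a hedged sketch under stated assumptions: the per-frame "change scores" stand in for inter-frame image and/or sound-intensity differences, which in practice would come from the preset image recognition or audio analysis; frames are modeled as list entries.

```python
# Sketch of segmentation-moment detection: a change score between adjacent
# frames is compared against a preset threshold, and the video is cut at
# every index where the threshold is reached.

def find_split_times(change_scores, threshold):
    """Return indices where the adjacent-frame change reaches the threshold."""
    return [i for i, score in enumerate(change_scores) if score >= threshold]

def split(frames, split_times):
    """Divide the frame list into segments at the given split indices."""
    bounds = [0] + split_times + [len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:]) if a < b]

frames = list(range(10))                      # 10 frames, indexed 0..9
scores = [0, 0, 9, 0, 0, 0, 8, 0, 0, 0]       # large change before frames 2 and 6
cuts = find_split_times(scores, threshold=5)
segments = split(frames, cuts)
```

The same splitting function serves any division manner; only the way `change_scores` is produced (object, scene, or background recognition, or sound intensity) differs.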
In one example, the first display module 13 may include:
and the fifth display module is used for displaying that the first thumbnail is exploded into a plurality of fragments, and the fragments are scattered to the second area to form a dynamic special effect of the thumbnails.
In an optional example, the first receiving module 11 may include:
a seventh receiving module for receiving a seventh input for the first thumbnail;
a sixth display module, configured to pop up and display a plurality of options for representing a plurality of segmentation modes in response to a seventh input;
an eighth receiving module, configured to receive a first input for a first option of the multiple options.
The thumbnail described in the embodiment of the present application may be a dynamic image generated according to a plurality of frames of images extracted from a corresponding video or video clip.
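One plausible way to pick the "plurality of frames of images extracted from a corresponding video" for such a dynamic thumbnail is even sampling across the clip. The text does not specify the extraction strategy, so the function below is an assumption for illustration; it only computes frame indices, leaving decoding and animation assembly to the platform's media APIs.

```python
# Sketch of evenly spaced frame sampling for building a dynamic thumbnail
# (e.g. an animated preview) from a video or video segment.

def sample_frame_indices(total_frames, num_samples):
    """Return num_samples frame indices spread evenly across the clip."""
    if total_frames <= 0 or num_samples <= 0:
        return []
    step = total_frames / num_samples
    return [min(int(i * step), total_frames - 1) for i in range(num_samples)]

indices = sample_frame_indices(total_frames=120, num_samples=4)
```

For a 120-frame clip and 4 samples this yields frames 0, 30, 60, and 90, giving a preview that spans the whole clip rather than only its beginning.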
In the embodiment of the application, the video editing page is divided into three areas: the first area is used for displaying thumbnails of the original video material, the second area is used for displaying thumbnails of the divided video clips, and the third area is used for displaying thumbnails of the candidate video clips of the spliced video, so that the dividing and splicing inputs for a video can be integrated into the same video editing page. When a user edits the video, video clips divided from the original material in the first area can be added to the second area, and video clips in the second area can be added to the third area.
The video editing apparatus in the embodiment of the present application may be an apparatus, or may be a component, an integrated circuit, or a chip in a terminal. The device can be mobile electronic equipment or non-mobile electronic equipment. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palm top computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook or a Personal Digital Assistant (PDA), and the like, and the non-mobile electronic device may be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine or a self-service machine, and the like, and the embodiments of the present application are not particularly limited.
The video editing apparatus in the embodiment of the present application may be an apparatus having an input system. The input system may be an Android (Android) input system, an iOS input system, or another possible input system, and the embodiment of the present application is not particularly limited.
The video editing apparatus provided in the embodiment of the present application can implement each process implemented by the video editing apparatus in the method embodiments of fig. 1 to fig. 3, and is not described here again to avoid repetition.
Optionally, an embodiment of the present application further provides an electronic device, which includes a processor, a memory, and a program or an instruction stored in the memory and capable of running on the processor, where the program or the instruction, when executed by the processor, implements each process of the video editing method embodiment, and can achieve the same technical effect, and details are not repeated here to avoid repetition.
It should be noted that the electronic devices in the embodiments of the present application include the mobile electronic devices and the non-mobile electronic devices described above.
Fig. 5 is a schematic diagram of a hardware structure of an electronic device implementing an embodiment of the present application.
The electronic device 400 includes, but is not limited to: a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, and a processor 410.
Those skilled in the art will appreciate that the electronic device 400 may further include a power source (e.g., a battery) for supplying power to the various components; the power source may be logically connected to the processor 410 through a power management system, so as to manage charging, discharging, and power consumption through the power management system. The input unit 404 may include a graphics processor, a microphone, and the like. The display unit 406 may include a display panel. The user input unit 407 may include a touch panel and other input devices. The memory 409 may store application programs, an input system, and the like. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, which may include more or fewer components than shown, combine some components, or arrange the components differently; details are omitted here.
Wherein the user input unit 407 is configured to receive a first input for a first thumbnail within a first region in the video editing page; the first area is used for displaying at least one thumbnail corresponding to at least one video one by one;
the processor 410 is configured to, in response to a first input for a first thumbnail, segment a first video corresponding to the first thumbnail into a plurality of video segments;
the display unit 406 is configured to display a plurality of thumbnails corresponding to the plurality of video segments one to one in a second area in the video editing page;
the user input unit 407 is further configured to receive a second input for a second thumbnail within the second area;
the display unit 406 is further configured to display a second thumbnail in a third area in the video editing page in response to a second input for the second thumbnail in the second area; and the third area is used for displaying thumbnails of the candidate video clips of the spliced video.
In the embodiment of the application, the video editing page is divided into three areas: the first area is used for displaying thumbnails of the original video material, the second area is used for displaying thumbnails of the divided video clips, and the third area is used for displaying thumbnails of the candidate video clips of the spliced video, so that the dividing and splicing inputs for a video can be integrated into the same video editing page. When a user edits the video, video clips divided from the original material in the first area can be added to the second area, and video clips in the second area can be added to the third area.
Optionally, the user input unit 407 is further configured to receive a third input;
the processor 410 is further configured to, in response to a third input, splice the candidate video segments corresponding to all the thumbnails displayed in the third area to obtain a target video.
Optionally, the user input unit 407 is further configured to receive a fourth input before receiving the third input; the processor 410 is further configured to, in response to a fourth input, sort the display order of the thumbnails within the third area according to an indication of the fourth input; and splice the candidate video clips corresponding to all the thumbnails into the target video according to the display order of the thumbnails in the third area.
Optionally, the user input unit 407 is further configured to receive a fifth input for a third thumbnail within the first area before receiving the third input;
the display unit 406 is further configured to display a third thumbnail in the third area in response to a fifth input.
Optionally, the user input unit 407 is further configured to receive a sixth input for a third thumbnail before receiving a second input for a second thumbnail within the second area; the third thumbnail is one of the thumbnails displayed in the second area or the third area;
the processor 410 is further configured to divide the video or video segment corresponding to the third thumbnail into a plurality of video segments in response to a sixth input;
the display unit 406 is further configured to display, in the second area, a plurality of thumbnails corresponding one-to-one to the plurality of video segments obtained by dividing the video or video segment corresponding to the third thumbnail.
Optionally, the processor 410 is further configured to determine, in the first video, a time when a change of an image picture and/or a sound intensity of an adjacent frame reaches a preset condition, so as to obtain a segmentation time; the first video is divided into a plurality of video segments based on the division time.
Optionally, the processor 410 is further configured to perform a preset image recognition process on the first video, where the preset image recognition process includes at least one of the following processing manners: object identification, scene identification and background identification; determining adjacent image frames with changes of objects and/or scenes and/or backgrounds according to the recognition result of preset image recognition processing; the segmentation instants are set according to the instants of adjacent image frames at which there is a change.
Optionally, the display unit 406 is further configured to display that the first thumbnail is exploded into a plurality of fragments, and the plurality of fragments are scattered to the second area to form a dynamic special effect of the plurality of thumbnails.
Optionally, the user input unit 407 is further configured to receive a seventh input for the first thumbnail;
the display unit 406 is further configured to pop up a plurality of options for representing a plurality of division manners in response to a seventh input;
the user input unit 40 is further configured to receive a first input for a first option of the plurality of options.
Optionally, the processor 410 is further configured to generate a dynamic image according to a plurality of frames of images extracted from the corresponding video or video segment, so as to obtain a thumbnail.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the video editing method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer-readable storage medium, and the computer-readable storage medium may include a nonvolatile Memory, such as a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above video editing method embodiment, and can achieve the same technical effect, and the details are not repeated here to avoid repetition.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order based on the functions involved, e.g., the methods described may be performed in an order different than that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Aspects of the present application are described above with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the application. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, enable the implementation of the functions/acts specified in the flowchart and/or block diagram block or blocks. Such a processor may be, but is not limited to, a general purpose processor, a special purpose processor, an application specific processor, or a field programmable logic circuit. It will also be understood that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware for performing the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (12)

1. A method of video editing, the method comprising:
receiving a first input for a first thumbnail within a first region in a video editing page; the first area is used for displaying at least one thumbnail corresponding to at least one video one by one;
in response to the first input, dividing a first video corresponding to the first thumbnail into a plurality of video segments;
displaying a plurality of thumbnails corresponding to the plurality of video segments one to one in a second area in the video editing page;
receiving a second input for a second thumbnail within the second area;
in response to the second input, displaying the second thumbnail in a third area in the video editing page, wherein the third area is used for displaying thumbnails of candidate video segments for the spliced video.
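The three-area flow of claim 1 can be illustrated with a minimal sketch. All names here (`EditorState`, `first_input`, `split_video`, the `#seg` labels) are illustrative assumptions, not identifiers from the patent:

```python
class EditorState:
    """Toy model of the editing page in claim 1: three thumbnail areas."""

    def __init__(self, videos):
        # First area: one thumbnail per source video.
        self.first_area = list(videos)
        # Second area: thumbnails of segments of the last-divided video.
        self.second_area = []
        # Third area: candidate segments for the spliced target video.
        self.third_area = []

    def first_input(self, index, split_video):
        """Divide the selected video and show its segments in the second area."""
        self.second_area = split_video(self.first_area[index])

    def second_input(self, index):
        """Move a segment thumbnail from the second area into the third area."""
        self.third_area.append(self.second_area[index])


state = EditorState(["vacation.mp4"])
# A stand-in divider that pretends to cut the video into three segments.
state.first_input(0, split_video=lambda v: [f"{v}#seg{i}" for i in range(3)])
state.second_input(1)
print(state.third_area)  # ['vacation.mp4#seg1']
```

The point of the sketch is only the data flow: a first input populates the second area, and a second input copies a chosen segment thumbnail into the third area.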
2. The video editing method of claim 1, wherein after displaying the second thumbnail in a third area in the video editing page, the method further comprises:
receiving a third input;
in response to the third input, splicing the candidate video clips corresponding to all the thumbnails displayed in the third area to obtain a target video.
3. The video editing method according to claim 2,
wherein prior to receiving the third input, the method further comprises: receiving a fourth input; and in response to the fourth input, sorting the display order of the thumbnails within the third area according to an indication of the fourth input;
and wherein the splicing of the candidate video clips corresponding to all the thumbnails displayed in the third area to obtain the target video comprises: splicing the candidate video clips corresponding to all the thumbnails into the target video according to the display order of the thumbnails within the third area.
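The reorder-then-splice behavior of claim 3 can be sketched as two small functions. The names `reorder`, `splice`, and `segment_of` are illustrative assumptions; the patent only specifies the behavior (a fourth input re-sorts the third area, and splicing follows the resulting display order):

```python
def reorder(thumbnails, src, dst):
    """Fourth input: move the thumbnail at position `src` to position `dst`."""
    items = list(thumbnails)
    items.insert(dst, items.pop(src))
    return items


def splice(thumbnails, segment_of):
    """Third input: concatenate candidate segments in display order."""
    target = []
    for thumb in thumbnails:
        target.extend(segment_of(thumb))
    return target


third_area = ["clip_a", "clip_b", "clip_c"]
third_area = reorder(third_area, src=2, dst=0)      # drag "clip_c" to the front
video = splice(third_area, segment_of=lambda t: [t])
print(video)  # ['clip_c', 'clip_a', 'clip_b']
```

In a real implementation `segment_of` would resolve a thumbnail to decoded frames or a media file range; here it is stubbed to keep the ordering logic visible.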
4. The video editing method of claim 2, wherein prior to receiving the third input, the method further comprises:
receiving a fifth input for a third thumbnail within the first region;
in response to the fifth input, displaying the third thumbnail within the third area.
5. The video editing method of claim 4, wherein prior to receiving the second input for the second thumbnail within the second area, the method further comprises:
receiving a sixth input for a third thumbnail, wherein the third thumbnail is one of the thumbnails displayed in the second area or the third area;
in response to the sixth input, dividing the video or video segment corresponding to the third thumbnail into a plurality of video segments;
displaying, in the second area, a plurality of thumbnails corresponding to the plurality of video segments obtained by dividing the video or video segment corresponding to the third thumbnail.
6. The video editing method according to claim 1, wherein the dividing the first video corresponding to the first thumbnail into a plurality of video segments comprises:
in the first video, determining a time at which a change in image content and/or sound intensity between adjacent frames meets a preset condition, to obtain a segmentation time;
dividing the first video into the plurality of video segments based on the segmentation time.
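Claim 6's segmentation criterion (split where the change between adjacent frames meets a preset condition) can be sketched with a simple mean-absolute-difference threshold. This is one possible reading of "preset condition," chosen for illustration; the function names, the threshold value, and the use of NumPy arrays as frames are all assumptions:

```python
import numpy as np


def segmentation_times(frames, threshold=30.0):
    """Indices where the adjacent-frame change exceeds the preset
    condition (here: mean absolute pixel difference above `threshold`)."""
    cuts = []
    for t in range(1, len(frames)):
        diff = np.abs(frames[t].astype(float) - frames[t - 1].astype(float)).mean()
        if diff > threshold:
            cuts.append(t)
    return cuts


def split_at(frames, cuts):
    """Divide the frame list into segments at the segmentation times."""
    bounds = [0] + cuts + [len(frames)]
    return [frames[a:b] for a, b in zip(bounds, bounds[1:])]


# Two synthetic "scenes": three dark frames followed by three bright frames.
frames = [np.zeros((4, 4), np.uint8)] * 3 + [np.full((4, 4), 200, np.uint8)] * 3
cuts = segmentation_times(frames)
print(cuts)                    # [3]
print(len(split_at(frames, cuts)))  # 2
```

A sound-intensity variant would apply the same thresholding to per-frame audio energy instead of pixel differences.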
7. A video editing apparatus, characterized in that the apparatus comprises:
a first receiving module, configured to receive a first input for a first thumbnail within a first area in a video editing page, wherein the first area is used for displaying at least one thumbnail in one-to-one correspondence with at least one video;
a first dividing module, configured to divide a first video corresponding to the first thumbnail into a plurality of video segments in response to the first input;
a first display module, configured to display, in a second area of the video editing page, a plurality of thumbnails in one-to-one correspondence with the plurality of video segments;
a second receiving module, configured to receive a second input for a second thumbnail within the second area after the plurality of thumbnails in one-to-one correspondence with the plurality of video segments are displayed in the second area;
a second display module, configured to display the second thumbnail in a third area of the video editing page in response to the second input, wherein the third area is used for displaying thumbnails of candidate video segments for the spliced video.
8. The video editing apparatus according to claim 7, said apparatus further comprising:
a third receiving module, configured to receive a third input after the second thumbnail is displayed in a third area in the video editing page;
a splicing module, configured to splice, in response to the third input, the candidate video clips corresponding to all the thumbnails displayed in the third area to obtain the target video.
9. The video editing apparatus according to claim 8,
wherein the apparatus further comprises: a fourth receiving module, configured to receive a fourth input before receiving the third input; and a sorting module, configured to sort, in response to the fourth input, the display order of the thumbnails within the third area according to an indication of the fourth input;
and wherein the splicing module is further configured to splice the candidate video segments corresponding to all the thumbnails into the target video according to the display order of the thumbnails within the third area.
10. The video editing apparatus according to claim 9, said apparatus further comprising:
a fifth receiving module, configured to receive a fifth input for a third thumbnail within the first area before receiving a third input;
a third display module to display the third thumbnail in the third area in response to the fifth input.
11. The video editing apparatus according to claim 10, said apparatus further comprising:
a sixth receiving module, configured to receive a sixth input for a third thumbnail before receiving the second input for the second thumbnail within the second area, wherein the third thumbnail is one of the thumbnails displayed in the second area or the third area;
a second dividing module, configured to, in response to the sixth input, divide the video or the video segment corresponding to the third thumbnail into a plurality of video segments;
a fourth display module, configured to display, in the second area, a plurality of thumbnails corresponding to the plurality of video segments obtained by dividing the video or video segment corresponding to the third thumbnail.
12. The video editing apparatus according to claim 7, wherein the first dividing module comprises:
a first determining module, configured to determine, in the first video, a time at which a change in image content and/or sound intensity between adjacent frames meets a preset condition, to obtain a segmentation time;
a third dividing module, configured to divide the first video into the plurality of video segments based on the segmentation time.
CN202110116746.9A 2021-01-28 2021-01-28 Video editing method and device Pending CN113242464A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110116746.9A CN113242464A (en) 2021-01-28 2021-01-28 Video editing method and device


Publications (1)

Publication Number Publication Date
CN113242464A true CN113242464A (en) 2021-08-10

Family

ID=77130196

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110116746.9A Pending CN113242464A (en) 2021-01-28 2021-01-28 Video editing method and device

Country Status (1)

Country Link
CN (1) CN113242464A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745736A (en) * 2013-12-27 2014-04-23 宇龙计算机通信科技(深圳)有限公司 Method of video editing and mobile terminal thereof
CN110798752A (en) * 2018-08-03 2020-02-14 北京京东尚科信息技术有限公司 Method and system for generating video summary
CN111432138A (en) * 2020-03-16 2020-07-17 Oppo广东移动通信有限公司 Video splicing method and device, computer readable medium and electronic equipment
CN111818390A (en) * 2020-06-30 2020-10-23 维沃移动通信有限公司 Video capturing method and device and electronic equipment


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023061414A1 (en) * 2021-10-15 2023-04-20 维沃移动通信有限公司 File generation method and apparatus, and electronic device
CN114845171A (en) * 2022-03-21 2022-08-02 维沃移动通信有限公司 Video editing method and device and electronic equipment
WO2023179539A1 (en) * 2022-03-21 2023-09-28 维沃移动通信有限公司 Video editing method and apparatus, and electronic device

Similar Documents

Publication Publication Date Title
KR102028198B1 (en) Device for authoring video scene and metadata
US8856656B2 (en) Systems and methods for customizing photo presentations
CN108334371B (en) Method and device for editing object
CN111612873A (en) GIF picture generation method and device and electronic equipment
US20220174237A1 (en) Video special effect generation method and terminal
CN109388506B (en) Data processing method and electronic equipment
CN111526427B (en) Video generation method and device and electronic equipment
CN113242464A (en) Video editing method and device
CN112004138A (en) Intelligent video material searching and matching method and device
WO2023061414A1 (en) File generation method and apparatus, and electronic device
CN112954046A (en) Information sending method, information sending device and electronic equipment
JP2023529219A (en) Picture processing method, apparatus and electronic equipment
CN116017043A (en) Video generation method, device, electronic equipment and storage medium
WO2022247830A1 (en) Picture management method and apparatus, and electronic device
CN106021322A (en) Multifunctional image input method
CN113163256B (en) Method and device for generating operation flow file based on video
CN115344159A (en) File processing method and device, electronic equipment and readable storage medium
CN112578959B (en) Content publishing method and device
CN113840099B (en) Video processing method, device, equipment and computer readable storage medium
CN114443567A (en) Multimedia file management method, device, electronic equipment and medium
CN109992697B (en) Information processing method and electronic equipment
CN114416664A (en) Information display method, information display device, electronic apparatus, and readable storage medium
CN111782309A (en) Method and device for displaying information and computer readable storage medium
CN112764601B (en) Information display method and device and electronic equipment
WO2023217122A1 (en) Video clipping template search method and apparatus, and electronic device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20210810)