CN112506412A - Video editing method and device and electronic equipment

Info

Publication number
CN112506412A
Authority
CN
China
Prior art keywords
video
track
objects
editing
object track
Legal status
Granted
Application number
CN202011438852.0A
Other languages
Chinese (zh)
Other versions
CN112506412B (en)
Inventor
吴丹 (Wu Dan)
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202011438852.0A priority Critical patent/CN112506412B/en
Publication of CN112506412A publication Critical patent/CN112506412A/en
Application granted granted Critical
Publication of CN112506412B publication Critical patent/CN112506412B/en
Status: Active

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 - Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 - Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 - Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04845 - Interaction techniques for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • G06F 3/0486 - Drag-and-drop
    • G - PHYSICS
    • G11 - INFORMATION STORAGE
    • G11B - INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B 27/00 - Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B 27/02 - Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers

Abstract

The disclosure relates to a video editing method and apparatus, and an electronic device. The method includes: displaying target information on an editing interface of a first video, where the target information includes any one of the following: an object combination control, and Q first object tracks, where the Q first object tracks bind M objects; and, in response to a received input for the target information, displaying a second object track on the editing interface, where the second object track binds the M objects; Q is an integer greater than 1, and M is an integer greater than or equal to Q. An object track of the present disclosure may thus bind at least two objects, so a user who wants to adjust how at least two objects bound to the same object track are applied in a video needs to operate only one object track, which improves operation efficiency.

Description

Video editing method and device and electronic equipment
Technical Field
The present disclosure relates to video processing technologies, and in particular, to a video editing method and apparatus, and an electronic device.
Background
In the related art, each time a piece of content is added to a video, the client displays a track corresponding to that content in the video's editing area. A user who wants to adjust how several pieces of content are applied in the video must therefore adjust each content's track separately, which makes the operation inefficient.
Disclosure of Invention
The present disclosure provides a video editing method, a video editing device and an electronic device, so as to at least solve the problem of low operation efficiency of video editing in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a video editing method, including:
displaying target information on an editing interface of a first video, where the target information includes any one of the following: an object combination control, and Q first object tracks, where the Q first object tracks bind M objects;
in response to the received input for the target information, displaying a second object track on the editing interface, wherein the second object track binds M objects;
wherein Q is an integer greater than 1; m is an integer greater than or equal to Q.
According to a second aspect of the embodiments of the present disclosure, there is provided a video editing apparatus including:
a first display module configured to display target information on an editing interface of a first video, the target information including any one of the following: an object combination control, and Q first object tracks, where the Q first object tracks bind M objects;
a second display module configured to display a second object track on the editing interface in response to the received input for the target information, the second object track binding M objects;
wherein Q is an integer greater than 1; m is an integer greater than or equal to Q.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus including:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video editing method of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having instructions that, when executed by an electronic device, enable the electronic device to perform the video editing method according to the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising:
executable instructions which, when run on a computer, enable the computer to perform the video editing method of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
in an embodiment of the present disclosure, target information is displayed on an editing interface of a first video, where the target information includes any one of the following: an object combination control, and Q first object tracks, where the Q first object tracks bind M objects; in response to a received input for the target information, a second object track is displayed on the editing interface, where the second object track binds the M objects; Q is an integer greater than 1, and M is an integer greater than or equal to Q. It can be seen that, in embodiments of the present disclosure, an object track may bind at least two objects. In this way, when a user wants to adjust how at least two objects bound to the same object track are applied in a video, only one object track needs to be operated, so the operation efficiency can be improved.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flow diagram illustrating a video editing method according to an example embodiment.
Fig. 2 is a block diagram illustrating a video editing apparatus according to an exemplary embodiment.
FIG. 3 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
For ease of understanding, some terms used in the embodiments of the present disclosure are explained below.
I. Objects.
An object may be an animation, a sticker, a sound clip, text, a video clip, or the like.
II. Object tracks.
In an embodiment of the present disclosure, an object track may bind at least one object.
Based on the number of bound objects, the client can classify object tracks into two broad categories: first-type object tracks, each of which binds exactly one object, and second-type object tracks, each of which binds at least two objects. It should be understood that different second-type object tracks may bind the same or different numbers of objects. For example: the second type includes object track one and object track two, where object track one binds two objects and object track two binds four objects.
In some embodiments, the client may give first-type and second-type object tracks different visual display effects or binding identifiers, for example, at least one of the display color, the display style, and the binding identifier may differ, so that the user can easily tell how many objects a track binds. Of course, in other embodiments, the client may display the two types identically.
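For concreteness, the track/object relationship described above can be modeled as follows. This is a minimal sketch, not taken from the patent; all class and field names are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class MediaObject:
    """An object that can be applied to a video: a sticker, sound clip, text, etc."""
    name: str
    kind: str  # e.g. "sticker", "sound", "text", "animation"

@dataclass
class ObjectTrack:
    """A track on the editing timeline that binds at least one object."""
    label: str
    start_cm: float                  # display position of the track's start point
    length_cm: float                 # display length of the track
    objects: list = field(default_factory=list)

    @property
    def track_type(self) -> str:
        # First-type tracks bind exactly one object; second-type bind at least two.
        return "first-type" if len(self.objects) == 1 else "second-type"

# Two second-type tracks may bind different numbers of objects:
track_one = ObjectTrack("object track one", 2.0, 6.0,
                        [MediaObject("a", "sticker"), MediaObject("b", "sound")])
print(track_one.track_type)  # -> second-type
```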
In the embodiments of the present disclosure:
1) The client may change at least one of the (display) length and the display position of an object track based on the user's input. In a specific implementation, the user may drag at least one of the start point and the end point of the object track to trigger the client to change the track's length, and may move the object track to trigger the client to change its display position. It is understood that dragging the start or end point may also change the display position, but moving the whole track never changes its length.
In the embodiment of the present disclosure, the length and display position of an object track determine the total effective time range, in the video, of all objects bound to that track. Therefore, for at least two objects bound to the same object track, a user who wants to change their total effective time range in the video can operate that one track. For example: assuming object track one binds object a and object b, a user who wants to change the total effective time range of object a and object b in the video can operate object track one.
It should be noted that, for an object track binding at least two objects, when the user changes the track's length or display position, only the total effective time range changes; the correspondence between each bound object's sub effective time range and the total effective time range may remain unchanged (the code sketch after this list illustrates this).
2) For an object track binding at least two objects, the client of the embodiment of the present disclosure may release the binding relationship between the track and its objects based on the user's input; after release, the at least two previously bound objects may be re-bound to at least two separate object tracks. The user may then operate any one of those tracks to trigger the client to change only the effective time range, in the video, of the object currently bound to it, which improves the flexibility of adjusting the effective time ranges of the at least two objects.
For example: assuming object track one binds object a and object b, after the client releases the binding relationship, object a may be re-bound to object track 1 and object b to object track 2. When the user operates object track 1, for example by changing its length or display position, the client changes the effective time range of object a in the video alone, without affecting the effective time range of object b.
3) The client may copy an object track based on the user's input; the copy binds the same objects as the original. For example: assuming object track one binds object a and object b, and the client obtains object track two by copying object track one, then object track two binds object a and object b.
4) The client may delete an object track based on the user's input; when a track is deleted, the client removes all objects bound to that track from the video. For example: assuming object track one binds object a and object b, and object track two binds object a and object d, if the user triggers the client to delete object track one, the client will no longer add the object a and object b bound by object track one to the video, but will keep the object a and object d bound by object track two.
5) The client may change the objects bound by an object track based on the user's input. For example: assuming object track one binds object a and object b, the user can interact with the client to update the objects bound by object track one to object a, object c, and object d.
6) The client may change the label or name of an object track based on the user's input.
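The track operations 1) to 6) above can be sketched in code. The snippet below continues the ObjectTrack sketch from earlier and is an illustrative assumption, not the patent's implementation; note how storing each object's sub effective time range as fractions of the track makes resizing and moving preserve the correspondence described in 1).

```python
import copy

# Sub effective time ranges stored as fractions (0.0 to 1.0) of the track, so
# changing the track's length or position changes only the total effective
# range; each object's relative sub range keeps its correspondence (item 1).
sub_ranges = {"a": (0.0, 1.0), "b": (0.25, 0.75)}

def resize(track, new_length_cm):
    track.length_cm = new_length_cm   # sub_ranges need no update: they are relative

def move(track, new_start_cm):
    track.start_cm = new_start_cm     # moving never changes the length (item 1)

def unbind(track):
    # Item 2: dissolve one multi-object track into one track per bound object.
    return [ObjectTrack(o.name, track.start_cm, track.length_cm, [o])
            for o in track.objects]

def duplicate(track):
    # Item 3: the copy binds exactly the objects the original currently binds.
    return copy.deepcopy(track)

def delete(tracks, track):
    # Item 4: deleting a track removes all of its bound objects from the video;
    # objects bound by other tracks are unaffected.
    tracks.remove(track)

def set_objects(track, new_objects):
    track.objects = list(new_objects) # item 5

def rename(track, new_label):
    track.label = new_label           # item 6
```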
III. Object combinations.
In an embodiment of the present disclosure, an object combination may include at least two objects. The client may obtain an object combination by newly creating one, or by receiving one shared by another device, but is not limited thereto.
Fig. 1 is a flow diagram illustrating a video editing method according to an example embodiment. The video editing method of the embodiment of the present disclosure may be implemented by any client capable of editing video, where the client may be understood as an electronic device or an application installed on the electronic device. In practical applications, the electronic device may be a mobile phone, a tablet computer, a notebook computer, an ultra-mobile personal computer (UMPC), a personal digital assistant (PDA), an augmented reality (AR) or virtual reality (VR) device, and the like.
As shown in fig. 1, the video editing method of the embodiment of the present disclosure may include the following steps.
Step S11: displaying target information on an editing interface of a first video, where the target information includes any one of the following: an object combination control, and Q first object tracks, where the Q first object tracks bind M objects.
Wherein Q is an integer greater than 1; m is an integer greater than or equal to Q. It is to be understood that, in the case where M is equal to Q, each of the Q first object tracks binds to one object; and in the case that M is larger than Q, at least one first object track in the Q first object tracks binds at least two objects.
In this embodiment of the disclosure, a user may trigger a client to display an object track bound with M objects, that is, a second object track described below, on an editing interface of the first video through interaction with the target information.
In practical application, the first video may be a video imported from the album by the client, or may be a video obtained by shooting after the client obtains the right to open the camera, but is not limited thereto.
Step S12: in response to the received input for the target information, displaying a second object track on the editing interface, the second object track binding M objects.
In a specific implementation, when the target information is the object combination control, the second object track may be an object track bound with M objects and created in advance by the client before receiving the input for the target information, or may be an object track bound with M objects and created newly by the client after receiving the input for the target information, which may be specifically determined according to an actual situation, and this is not limited in this embodiment of the present disclosure.
In a case where the target information is the Q first object tracks, an object to which the second object track is bound is an object to which the Q first object tracks are bound, and the second object track may be understood as being obtained by combining the Q first object tracks.
In embodiments of the present disclosure, the client may change the objects bound by an object track based on user input. For example: assuming object track one initially binds object a and object b, the user can interact with the client to update the objects bound by object track one to object a, object c, and object d. In addition, when a user wants to adjust how all objects bound by the second object track are applied in the video, the user can operate the second object track, so the operation efficiency can be improved.
In the video editing method of this embodiment, target information is displayed on an editing interface of a first video, where the target information includes any one of the following: an object combination control, and Q first object tracks, where the Q first object tracks bind M objects; in response to a received input for the target information, a second object track is displayed on the editing interface, where the second object track binds the M objects; Q is an integer greater than 1, and M is an integer greater than or equal to Q. It can be seen that, in embodiments of the present disclosure, an object track may bind at least two objects. In this way, when a user wants to adjust how at least two objects bound to the same object track are applied in a video, only one object track needs to be operated, so the operation efficiency can be improved.
In this embodiment of the present disclosure, the manner of triggering the client to display the second object track on the editing interface of the first video may be different for the target information in different presentation forms, and the specific description is as follows:
in a first embodiment, the target information includes the object composition control.
Optionally, the step of displaying a second object track on the editing interface in response to the received input for the target information includes:
in response to the received input for the object combination control, displaying a first window, where the first window includes at least one of: K object combinations, and a creation control for triggering creation of a new object combination, K being a positive integer;
displaying a second object track on the editing interface in response to the received input of the selected target object combination, wherein the second object track binds all objects included in the target object combination, and the target object combination comprises M objects.
In a specific implementation, the object combination control may be always displayed in the editing interface of the first video, or may be called and displayed in the editing interface of the first video through an input of a user, and the embodiment of the present disclosure does not limit an expression form of the input for calling the object combination control.
In this embodiment, the user touches the object combination control to trigger the client to display a first window, where the first window may display only the K object combinations, only the creation control, or both. In a specific implementation, the client may display the first window in any of the following manners: floating over the editing interface of the first video; jumping from the editing interface of the first video to the first window; or expanding the first window after the editing interface of the first video fades out, but is not limited thereto.
The target object combination may be any one of the K object combinations, or an object combination newly created by the user touching the creation control, which may be determined according to the actual situation; this is not limited in the embodiments of the present disclosure.
In the case that the target object combination is newly created by the user touching the creation control, before the step of responding to the received input of selecting the target object combination, the method further includes: in response to the received input for the creation control, displaying a third window, where the third window is used to create the target object combination. The user may interact with the third window to determine all objects included in the target object combination.
It is understood that, after the target object combination is newly created, the client may display the target object combination in the first window, so that the user may select the target object combination.
In this embodiment, after the user selects the target object combination, the client is triggered to display a second object track on the editing interface, where the second object track binds all objects included in the target object combination, and the target object combination includes the M objects. For example: assume the user triggers the client to display the second object track by double-clicking object combination one, and that object combination one includes object a and object b; then the objects initially bound by the second object track include object a and object b.
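As a sketch of this first embodiment (an illustrative assumption with hypothetical names, reusing the ObjectTrack sketch from earlier): selecting a combination in the first window yields a second object track bound to all of the combination's objects.

```python
from dataclasses import dataclass, field

@dataclass
class ObjectCombination:
    """A named set of at least two objects, newly created or shared."""
    name: str
    objects: list = field(default_factory=list)

def on_combination_selected(tracks, combination, start_cm=0.0, length_cm=4.0):
    # Selecting the target object combination in the first window displays a
    # second object track binding all objects the combination includes.
    second_track = ObjectTrack(combination.name, start_cm, length_cm,
                               list(combination.objects))
    tracks.append(second_track)
    return second_track
```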
In a second embodiment, the target information includes the Q first object tracks.
Optionally, the step of displaying a second object track on the editing interface in response to the received input for the target information includes:
in response to receiving input selecting the Q first object tracks, displaying a second object track on the editing interface, the second object track binding the M objects bound by the Q first object tracks.
In this embodiment, the second object track may be understood as being combined from the Q first object tracks, and the objects it initially binds are all objects bound by the Q first object tracks. For example: suppose the user triggers the client to display the second object track by selecting object track three and object track four, where object track three binds object c and object track four binds object d and object e; then the objects initially bound by the second object track include object c, object d, and object e.
It should be noted that, in this embodiment, after the client displays the second object track, the client may delete the Q first object tracks, or may reserve the Q first object tracks, which may be determined according to actual situations, and this is not limited in this disclosure.
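A sketch of this second embodiment (again an illustrative assumption building on the ObjectTrack sketch): the second object track spans the selected tracks and binds the union of their objects; whether the Q originals are deleted or kept is the policy choice noted above.

```python
def combine_tracks(tracks, selected, keep_originals=False):
    # The merged track covers the full extent of the selected first object
    # tracks and binds every object they bind.
    start = min(t.start_cm for t in selected)
    end = max(t.start_cm + t.length_cm for t in selected)
    merged = ObjectTrack("second object track", start, end - start,
                         [o for t in selected for o in t.objects])
    if not keep_originals:
        for t in selected:
            tracks.remove(t)
    tracks.append(merged)
    return merged
```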
In this embodiment of the present disclosure, optionally, the method further includes:
displaying a video track of a first video on an editing interface of the first video;
in response to an input received at a first time and used for finishing editing the first video, acquiring a first length of the second object track at the first time, a first position relationship between the second object track and the video track at the first time, and N objects bound to the second object track at the first time, wherein N is an integer greater than 1;
determining a first total effective time range of the N objects in the first video according to the first length and the first position relation;
and adding the N objects into the first video according to the first total effective time range.
In specific implementation, after a client acquires a first video, an editing interface of the first video can be displayed, and a video track of the first video is displayed on the editing interface.
In this optional embodiment, in response to an input to end editing the first video, the client may add, to the first video, the objects bound by all object tracks included in the editing interface of the first video. In a specific implementation, the input for ending editing the first video may be, but is not limited to, any one of the following: an input for exporting the first video; an input for exiting the editing interface of the first video.
In this embodiment of the present disclosure, after the client displays the second object track, the user may operate the second object track to trigger the client to change at least one of the following: an object to which the second object track is bound; a length of the second object track; a display position of the second object track. It will be appreciated that a change in the display position or length of the second object track will result in a change in the positional relationship of the second object track to the video track. Therefore, after receiving an input to end editing the first video at a first time, the client needs to obtain the first length of the second object track at the first time, the first positional relationship between the second object track and the video track at the first time, and the N objects bound to the second object track at the first time. In this way, by adding the object to the first video through the acquired information, the first video finally exported by the client can better conform to the final expected effect of the user, and the reliability of the first video editing can be further improved.
It can be understood that, in the embodiment of the present disclosure, the objects bound by the second object track at the first time may be the same as or different from the objects it initially binds, that is, the N objects may be the same as or different from the M objects, depending on whether the user changed the objects bound by the second object track. The objects initially bound by the second object track are the objects it binds at the target time, where the target time is the time at which the second object track is displayed.
For ease of understanding, examples are illustrated below:
assume that the object to which the second object track is initially bound includes object a and object b.
In a first implementation manner, after displaying the second object track and before receiving an input for finishing editing the first video, the client does not receive an input for editing an object bound by the second object track. Then in this implementation, the N objects bound by the second object track at the first time instance include object a and object b.
In a second implementation manner, after displaying the second object track and before receiving an input for ending editing of the first video, the client receives an input for editing the object bound to the second object track, and the input updates the object bound to the second object track into an object a, an object c, and an object d. Then in this implementation, the N objects bound by the second object track at the first time comprise object a, object c, and object d.
Similarly, the length of the second object track at the first time may be the same as or different from the initial length of the second object track, depending on whether the user changes the length of the second object track, where the initial length of the second object track is the length of the second object track at the target time. The position relationship of the second object track and the video track at the first time may be the same as or different from the initial position relationship of the second object track and the video track at the target time depending on whether the user changes the display position or the length of the second object track.
In an embodiment of the present disclosure, the positional relationship of the second object track and the video track may be used to determine at least one of: a first point in the video track corresponding to a starting point of the second object track; a second point in the video track corresponding to an end point of the second object track.
In a specific implementation, the client may determine, but is not limited to, a total effective time range of the N objects in the first video by:
in a first mode, the client may determine the total effective time length of the N objects in the first video according to the first length; and then, determining the total effective time range of the N objects in the first video according to the first position relation and the total effective time length of the N objects in the first video.
In a specific implementation, the client may store in advance a first correspondence between the length of an object track and the total effective duration, in the video, of all objects bound to that track. In this way, after acquiring the first length, the client may determine the total effective duration of the N objects in the first video by looking up the first correspondence.
The client may also store in advance a second correspondence between the length of the video track and the playing time of the video. For example, the client may determine, according to the first positional relationship, the first point in the video track corresponding to the start point of the second object track, and then look up the second correspondence to determine the playing time point corresponding to the first point as the start time point of the total effective time range of the N objects in the first video. The total effective time range of the N objects in the first video is then determined from this start time point and the total effective duration.
For ease of understanding, examples are illustrated below:
Assume that the playing time of the first video is 00:00 to 20:00, i.e., 20 minutes (min); in the first correspondence, the total effective duration corresponding to a unit length (1 centimeter, cm) of an object track is 2 min; in the second correspondence, the playing duration corresponding to a unit length (1 cm) of the video track is 2 min; the length of the video track is 10 cm; the first length is 6 cm; and the first point determined according to the first positional relationship is at the 2 cm mark of the video track.
Then, in mode one, the client may determine: the total effective duration of the N objects in the first video is 6 × 2 = 12 min, and the start time point of their total effective time range is 2 × 2 = 4 min, i.e., 04:00 of the first video. From the start time point and the total effective duration, the total effective time range of the N objects in the first video is determined to be 04:00 to 16:00 of the first video.
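The mode-one arithmetic above can be checked with a few lines (a self-contained sketch; the per-centimeter rates are the example's assumed correspondences, not fixed values):

```python
MIN_PER_CM_TRACK = 2  # first correspondence: effective minutes per cm of object track
MIN_PER_CM_VIDEO = 2  # second correspondence: playing minutes per cm of video track

def total_effective_range_mode_one(first_length_cm, first_point_cm):
    duration = first_length_cm * MIN_PER_CM_TRACK  # 6 cm -> 12 min
    start = first_point_cm * MIN_PER_CM_VIDEO      # 2 cm -> 4 min, i.e. 04:00
    return start, start + duration

print(total_effective_range_mode_one(6, 2))  # (4, 16), i.e. 04:00 to 16:00
```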
In a second mode, the client may determine, according to the first length and the first positional relationship, a target length in the video track that matches the first length, and then determine the total effective time range of the N objects in the first video according to the target length.
In a specific implementation, the client may determine, according to the first positional relationship, the first point in the video track corresponding to the start point of the second object track, and then determine the target length according to the first point and the first length.
The client may also store in advance the second correspondence between the length of the video track and the playing time of the video. In this way, the client may determine, by looking up the second correspondence, the video playing time period corresponding to the target length as the total effective time range of the N objects in the first video.
For ease of understanding, examples are illustrated below:
Assume that the playing time of the first video is 00:00 to 20:00, i.e., 20 minutes (min); in the second correspondence, the playing duration corresponding to a unit length (1 cm) of the video track is 2 min; the length of the video track is 10 cm; the first length is 6 cm; and the first point determined according to the first positional relationship is at the 2 cm mark of the video track.
Then, in mode two, the client may determine that the target length spans from the 2 cm mark to the 8 cm mark of the video track. By looking up the second correspondence, the total effective time range of the N objects in the first video is determined to be 04:00 to 16:00 of the first video.
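Mode two can be sketched the same way (self-contained, with the same assumed 2 min/cm second correspondence): the track is first projected onto the video track as the 2 cm to 8 cm target segment, and both endpoints are then converted to playing time. With these example numbers it necessarily agrees with mode one.

```python
MIN_PER_CM_VIDEO = 2  # second correspondence: playing minutes per cm of video track

def total_effective_range_mode_two(first_length_cm, first_point_cm):
    # Target segment of the video track matched by the second object track.
    seg_start_cm = first_point_cm
    seg_end_cm = first_point_cm + first_length_cm      # 2 cm to 8 cm
    return seg_start_cm * MIN_PER_CM_VIDEO, seg_end_cm * MIN_PER_CM_VIDEO

print(total_effective_range_mode_two(6, 2))  # (4, 16), i.e. 04:00 to 16:00
```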
After determining the first total effective time range, the client may add the N objects to the first video according to the first total effective time range.
In one implementation, the client may determine, according to the first total effective time range, a first frame image set corresponding to that range in the first video, where the first frame image set includes at least one frame image. For example: assuming the total effective time range of the N objects in the first video is 04:00 to 16:00, all frame images of the first video from 04:00 to 16:00 can be determined as the frame images included in the first frame image set. Thereafter, the client may add each of the N objects to every frame image included in the first frame image set. As can be seen, in this implementation the client defaults the effective time range of each of the N objects in the first video to the first total effective time range.
In another implementation manner, optionally, the step of adding the N objects to the first video according to the total effective time range of the N objects in the first video includes:
acquiring a first sub-effective time range of each object in the N objects in the first total effective time range to obtain N first sub-effective time ranges;
and adding the N objects into the first video according to the first total effective time range and the N first sub effective time ranges.
In this implementation manner, in a specific implementation, the client may store, in advance, a third corresponding relationship between the sub effective time range of each object in the N objects and the total effective time range of the N objects. In this way, after determining the first total effective time range, the client may determine N first sub effective time ranges by searching the third correspondence.
In a specific implementation, the third corresponding relationship may be set by a user or set by a client.
Such as: the N objects are assumed to be the M objects.
In the case that the target information includes the object combination control, the sub effective time range corresponding to each of the M objects within the total effective time range of the M objects may be set by the user. For example: the user may set the sub effective time range of a certain object among the M objects to the first 1/3 of the total effective time range of the M objects.
In the case that the target information includes the Q first object tracks, the sub effective time range corresponding to each of the M objects within the total effective time range of the M objects may be determined based on the projected length of each first object track relative to the total projected length of the Q first object tracks. For example: if the projection of a certain first object track occupies the 1/3 to 2/3 segment of the total projected length of the Q first object tracks, the sub effective time range of the objects bound by that first object track is the 1/3 to 2/3 portion of the total effective time range of the M objects. Of course, the sub effective time range of each of the M objects may also be set by the user.
It is understood that the sub effective time range of each of the N objects in the first video falls within the total effective time range of the N objects in the first video. The sub effective time ranges of different objects among the N objects may be the same or different, and for at least two objects whose sub effective time ranges differ, those ranges may or may not overlap.
In this implementation, the client may first determine, according to the first total effective time range, a first frame image set corresponding to the N objects in the first video, where the first frame image set includes at least one frame image. Then, for any one of the N objects, denoted the target object for convenience of description, a target frame image set corresponding to the target object within the first frame image set may be determined according to the target object's first sub effective time range, and the target object is added to each frame image included in that target frame image set. It can be seen that, in this implementation, the effective time range of each of the N objects in the first video is determined by its first sub effective time range within the first total effective time range.
For ease of understanding, examples are illustrated below:
Assume that the total effective time range of the N objects in the first video, as set by the user, is 04:00 to 16:00, and that the N objects bound by the second object track at the first time include object a, object c, object d, and object e. The sub effective time range of object a and object c in the first video may be 04:00 to 08:00, that of object d may be 06:00 to 10:00, and that of object e may be 09:00 to 12:00. In this case, the sub effective time ranges of object a and object c are the same, while the sub effective time ranges of object a (or object c), object d, and object e differ from one another. Further, the sub effective time ranges of object a (or object c) and object e do not overlap, while the sub effective time range of object d overlaps those of object a (or object c) and object e.
The client can then add object a and object c to each frame image of the first video in the period 04:00 to 08:00, add object d to each frame image in the period 06:00 to 10:00, and add object e to each frame image in the period 09:00 to 12:00.
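The per-object application in this example can be sketched as follows (self-contained and illustrative; the sub ranges are the assumed third correspondence expressed in absolute minutes, and the frame rate is an assumption):

```python
FPS = 30  # assumed frame rate of the first video

# Third correspondence for the example: each object's sub effective time
# range, in minutes, inside the 04:00 to 16:00 total effective range.
sub_ranges_min = {"a": (4, 8), "c": (4, 8), "d": (6, 10), "e": (9, 12)}

def frame_range(obj_name):
    start_min, end_min = sub_ranges_min[obj_name]
    return range(start_min * 60 * FPS, end_min * 60 * FPS)

# The client adds each object to every frame image in its sub range; note that
# the d range overlaps both the a/c range and the e range.
for name in ("a", "c", "d", "e"):
    frames = frame_range(name)
    print(f"object {name}: frames {frames.start} to {frames.stop - 1}")
```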
Through the method, the client can determine the total effective time range of the at least two objects in the first video directly according to the length of the second object track and the position relationship between the second object track and the video track of the first video, and add the at least two objects to the first video. In this way, when a user wants to adjust the application of at least two objects bound to the same object track in a video, only one object track needs to be operated, so that the operation efficiency can be improved.
In the embodiment of the present disclosure, the client may change the object bound by the object track in the following manner:
optionally, before the step of editing the input of the first video in response to the end received at the first time, the method further includes:
in response to receiving input for the second object track, displaying a second window, the second window being a window editing an object to which the second object track is bound;
in response to the input, received at the second window, for editing the objects bound by the second object track, updating the objects bound by the second object track to the N objects.
In this optional embodiment, the user may touch the second object track to trigger the client to display the second window.
In a specific implementation, the client may display the second window in any of the following manners: floating over the editing interface of the first video; jumping from the editing interface of the first video to the second window; or expanding the second window after the editing interface of the first video fades out, but is not limited thereto.
The second window may contain all objects currently bound by the second object track, an add control for objects, and a delete control for objects. The user can trigger the client to add an object bound by the second object track by touching the add control, and trigger the client to delete an object bound by the second object track by touching the delete control.
Further, all objects currently bound by the second object track may be displayed by category in the second window. For example: the stickers bound by the second object track form one category, the special effects another, and the sound clips another.
Through the embodiment, the client can change the object bound by the object track based on the input of the user, so that the flexibility of the first video editing can be improved, the editing result of the first video meets the expectation of the user, and the reliability of the first video editing is improved.
It should be noted that, in other embodiments, the client may change the objects bound by an object track in other ways. For example: assume the second object track initially binds M objects, where M is an integer greater than 1. After receiving the input for the second object track, the client may display, within the second object track, M object tracks, a track add control, and a track delete control, where each of the M object tracks binds one of the M objects and different object tracks bind different objects. The user can trigger the client to add an object bound by the second object track by touching the track add control, and trigger the client to delete an object bound by the second object track by touching the track delete control.
In the embodiment of the present disclosure, the client may copy the object track based on the input of the user, and the client may respond to the input of the user copying the object track by:
optionally, before the step of editing the input of the first video in response to the end received at the first time, the method further includes:
in response to an input received at a second time to copy the second object track, displaying a replica object track on the editing interface, wherein the object bound by the replica object track at the second time is the same as the object bound by the second object track at the second time;
after the step of editing the first video in response to the input received at the first time to end, the method further comprises:
acquiring a second length of the copy object track at the first moment, a second position relation between the copy object track and the video track at the first moment, and P objects bound to the copy object track at the first moment, wherein P is an integer greater than 1;
determining a second total effective time range of the P objects in the first video according to the second length and the second position relation;
adding the P objects to the first video according to the second total effective time range.
In this optional embodiment, when an object track (for example, the second object track) is copied, the replica object track binds the same objects as the track currently binds. For example: suppose the client copies object track one to obtain object track two, and that object track one binds object a and object b; then object track two binds object a and object b.
It is contemplated that a user may change the objects bound by the second object track before the first time, and may also change the objects bound by the replica object track, resulting in the objects bound by the second object track at the first time being different from the objects bound by its counterpart replica object track. Therefore, in this optional embodiment, the objects bound by the second object track at the first time are denoted as N objects, and the objects bound by the duplicate object track at the first time are denoted as P objects. It is understood that if the user does not change the objects bound by the second object track and the replica object track before the first time, then the N objects are the same as the P objects.
The implementation principle of adding the P objects to the first video by the client is the same as the implementation principle of adding the N objects to the first video, and reference may be specifically made to the foregoing description of adding the N objects to the first video, which is not repeated herein.
In the embodiment of the present disclosure, for an object track to which at least two objects are bound, a client may release a binding relationship between the object track and the object based on an input of a user, and the client may respond to the input of the user to release the binding relationship between the object track and the object by:
optionally, before the step of editing the input of the first video in response to the end received at the first time, the method further includes:
displaying a third object track on the editing interface, wherein the third object track is bound with L objects, and L is an integer greater than 1;
in response to a received input for releasing the binding relationship between the third object track and the objects, replacing the third object track with T fourth object tracks, where the T fourth object tracks bind the L objects, and T is an integer greater than 1 and less than or equal to L;
after the step of editing the first video in response to the input received at the first time to end, the method further comprises:
acquiring T third lengths of the T fourth object tracks at the first moment, T third position relations of the T fourth object tracks and the video track at the first moment, and U objects bound to the T fourth object tracks at the first moment, wherein U is an integer greater than 1;
determining a third total effective time range of the U objects in the first video according to the T third lengths and the T third position relations;
adding the U objects to the first video according to the third total effective time range.
In this optional implementation manner, the step of replacing the third object track with T fourth object tracks in response to the received input of releasing the binding relationship between the third object track and the object may include the following implementation manners:
in a first implementation manner, in response to a received input for releasing a binding relationship between a third object track and an object, the third object track is updated to L fourth object tracks, where each fourth object track in the L fourth object tracks binds one object in the L objects, and the objects bound by different fourth object tracks are different;
in a second implementation manner, when the third object track is obtained by combining T fourth object tracks, the third object track is restored to the T fourth object tracks in response to a received input for releasing the binding relationship between the third object track and the objects, where the objects bound to the T fourth object tracks are the L objects.
It should be noted that, when the third object track is obtained by combining T fourth object tracks, the client may execute either the first or the second implementation. In the second implementation, the third object track is restored to the T fourth object tracks, and the objects bound by each of the T fourth object tracks may be the same as or different from the objects that track bound before being combined into the third object track.
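The two implementations for releasing the binding relationship can be sketched together (an assumption building on the earlier unbind helper): restore the pre-merge tracks when the third object track was obtained by combining, otherwise split it into one track per object.

```python
def release_binding(third_track, pre_merge_tracks=None):
    # Implementation two: if the track came from combining T fourth object
    # tracks, restore those tracks (their bound objects may since have changed).
    if pre_merge_tracks:
        return list(pre_merge_tracks)
    # Implementation one: one fourth object track per bound object.
    return unbind(third_track)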
After the binding relationship between the third object track and the objects is released to obtain T fourth object tracks, an implementation principle of adding the U objects to the first video by the client is the same as an implementation principle of adding the N objects to the first video, and reference may be specifically made to the foregoing description of adding the N objects to the first video, which is not repeated here.
In the embodiment of the present disclosure, the client may share an object track based on the user's input; in a specific implementation, the client may respond to an input for sharing an object track in the following manner:
optionally, after the step of editing the first video in response to the input received at the first time to end, the method further includes:
responding to the received input of sharing the second object track, and acquiring a target sharing end;
and sending a second object combination to the target sharing end, wherein the second object combination comprises the N objects.
In this optional embodiment, the client may detect whether a second object combination including the N objects is stored, and if so, may directly send the second object combination to the target sharing end; if the second object combination is not stored, the second object combination can be generated first, and then the second object combination is sent to the target sharing end.
In this way, after receiving the second object combination, the target sharing end can directly respond to the received input for selecting the second object combination and display the object tracks bound with the N objects, so that the efficiency of video editing can be improved.
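A sketch of the sharing flow (illustrative only; `send` and `target_peer` stand in for whatever transport the client uses and are not a real API), reusing the ObjectCombination sketch from earlier:

```python
def share_track(track, target_peer, stored_combinations, send):
    # Reuse a stored combination that contains exactly the track's N objects...
    wanted = {o.name for o in track.objects}
    combo = next((c for c in stored_combinations
                  if {o.name for o in c.objects} == wanted), None)
    # ...otherwise generate the second object combination first.
    if combo is None:
        combo = ObjectCombination("shared combination", list(track.objects))
        stored_combinations.append(combo)
    send(target_peer, combo)  # the peer can then display a track binding the N objects
```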
It should be noted that the various optional implementations described in the embodiments of the present disclosure may be implemented in combination with each other or separately, provided they do not conflict; the embodiments of the present disclosure are not limited in this respect.
For ease of understanding, examples are illustrated below:
the embodiment of the present disclosure provides an "object binding" function, and an author can pack and place favorite objects into one track, and then operate the track, and the following technical content module introduces this function in detail.
1.1 Add a clip track style named "object combination track". The "object combination track" stands alone as one track; it and other tracks do not affect each other, and a user can add multiple combination tracks to one video.
1.2 Add an entry to the "object combination track": an "object combination" control (icon) may be added. After the user clicks the "object combination" icon, a control for creating a new object combination pops up, together with the historical object combinations.
1.3 The user can operate on the "object combination track", including but not limited to: edit, set start point, copy, move, delete, and set label.
i. Editing (clicking the "object combination track" pops up a window) may include at least one of: copying at least one object bound by the object combination track; adding a sticker; importing a sticker; magic expressions; special effects; music; sound effects; dubbing; dubbing only; subtitles; text stickers; deleting at least one object bound by the object combination track.
Setting a starting point: the user can set the "object combination track" appearance time according to the cursor position.
Replication: the user can copy the entire combined track, and when the "object combined track 2" is copied from the "object combined track 1", the "combined track 2" will inherit all the object elements in the "combined track 1".
Moving: the user may move the entire "object combination track".
v. delete: the user may delete the "object combination track".
Setting a label: when a user edits different object combination tracks, the user can customize labels/names for the object combination tracks, and then the user can search in a label/name mode conveniently in the next operation.
vii. Sharing a label: the user can share a self-made object combination by way of its label.
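By way of illustration only, a minimal Python sketch of an "object combination track" supporting the operations listed above follows; all names (ObjectCombinationTrack, Timeline, and their members) are assumptions made for the sketch, not the disclosed implementation:

import copy
from dataclasses import dataclass, field

@dataclass
class ObjectCombinationTrack:
    objects: list = field(default_factory=list)  # stickers, effects, music, ...
    start_time: float = 0.0  # appearance time on the timeline
    label: str = ""          # user-defined label/name

    def set_start_point(self, cursor_time: float) -> None:
        # ii. Set the track's appearance time from the cursor position.
        self.start_time = cursor_time

    def duplicate(self) -> "ObjectCombinationTrack":
        # iii. A copied track inherits all object elements of the original.
        return copy.deepcopy(self)

    def move(self, delta: float) -> None:
        # iv. Move the entire track along the timeline.
        self.start_time += delta

    def set_label(self, label: str) -> None:
        # vi. Assign a custom label/name for later lookup.
        self.label = label

class Timeline:
    def __init__(self) -> None:
        # Multiple combination tracks may coexist in one video.
        self.combination_tracks = []

    def add_track(self, track: ObjectCombinationTrack) -> None:
        self.combination_tracks.append(track)

    def delete_track(self, track: ObjectCombinationTrack) -> None:
        # v. Delete the entire track.
        self.combination_tracks.remove(track)

    def find_by_label(self, label: str) -> list:
        # vi./vii. Look tracks up by label/name.
        return [t for t in self.combination_tracks if t.label == label]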
On the one hand, the whole video editing area becomes more concise rather than heavy and redundant, providing the user with a good editing experience; on the other hand, the user's editing efficiency can be improved, saving a great deal of editing time.
Fig. 2 is a block diagram illustrating a video editing apparatus according to an example embodiment. Referring to fig. 2, the apparatus includes:
a first display module 21 configured to display target information on an editing interface of the first video, where the target information includes any one of: an object combination control, and Q first object tracks, wherein the Q first object tracks bind M objects;
a second display module 22 configured to display a second object track on the editing interface in response to the received input for the target information, the second object track binding M objects;
wherein Q is an integer greater than 1; m is an integer greater than or equal to Q.
Optionally, in a case that the target information includes the object combination control, the second display module 22 is configured to:
responding to the received input for the object combination control, and displaying a first window, wherein the first window comprises at least one of K object combinations and a creation control for triggering the creation of a new object combination, and K is a positive integer;
displaying a second object track on the editing interface in response to the received input for selecting a target object combination, wherein the second object track binds all objects included in the target object combination, and the target object combination comprises M objects.
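By way of illustration only, and reusing the hypothetical ObjectCombination and ObjectCombinationTrack types sketched above, responding to the selection of a target object combination could look as follows; the helper name is likewise an assumption:

def track_from_combination(combo: ObjectCombination) -> ObjectCombinationTrack:
    """Create a second object track binding all objects of the chosen combination."""
    return ObjectCombinationTrack(objects=list(combo.objects))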
Optionally, in a case that the target information includes the Q first object tracks, the second display module 22 is configured to:
in response to receiving input selecting the Q first object tracks, displaying a second object track on the editing interface, the second object track binding the M objects bound by the Q first object tracks.
Optionally, the video editing apparatus further includes:
a third display module configured to display a video track of a first video on an editing interface of the first video;
a first obtaining module configured to obtain, in response to an input received at a first time to end editing the first video, a first length of the second object track at the first time, a first positional relationship between the second object track and the video track at the first time, and N objects bound to the second object track at the first time, where N is an integer greater than 1;
a first determining module configured to determine a first total effective time range of the N objects in the first video according to the first length and the first position relationship;
a first adding module configured to add the N objects to the first video according to the first total effective time range.
Optionally, the first adding module is configured to:
acquiring a first sub-effective time range of each object in the N objects in the first total effective time range to obtain N first sub-effective time ranges;
and adding the N objects into the first video according to the first total effective time range and the N first sub effective time ranges.
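By way of illustration only, the determination of the effective time ranges from the track geometry can be sketched as follows, assuming the track offset and length are measured in seconds on the video's timeline and each object carries a (start, end) offset pair; the function names are assumptions made for the sketch:

def total_effective_range(track_offset: float, track_length: float,
                          video_duration: float) -> tuple:
    """Map the object track's length and position onto the video track."""
    start = max(0.0, track_offset)  # from the positional relationship
    end = min(video_duration, track_offset + track_length)  # from the track length
    return (start, end)

def sub_effective_ranges(object_ranges, total_range):
    """Clip each object's (start, end) pair to the total effective range."""
    t0, t1 = total_range
    return [(max(t0, s), min(t1, e)) for (s, e) in object_ranges]

# Example: a 4-second object track beginning 2 seconds into a 10-second video.
total = total_effective_range(track_offset=2.0, track_length=4.0,
                              video_duration=10.0)  # -> (2.0, 6.0)
subs = sub_effective_ranges([(2.0, 3.5), (3.0, 7.0)], total)
# -> [(2.0, 3.5), (3.0, 6.0)]: the second object is clipped to the track's end.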
Optionally, the video editing apparatus further includes:
a fourth display module configured to display a second window in response to the received input for the second object track, the second window being a window for editing the objects bound to the second object track;
an update module configured to update the objects bound to the second object track to the N objects in response to an input, received in the second window, for editing the objects bound to the second object track.
Optionally, the video editing apparatus further includes:
a fifth display module configured to display a replica object track on the editing interface in response to an input received at a second time to replicate the second object track, wherein the objects bound by the replica object track at the second time are the same as the objects bound by the second object track at the second time;
a second obtaining module, configured to obtain a second length of the replica object track at the first time, a second positional relationship between the replica object track and the video track at the first time, and P objects bound to the replica object track at the first time, where P is an integer greater than 1;
a second determining module configured to determine a second total effective time range of the P objects in the first video according to the second length and the second position relationship;
a second adding module configured to add the P objects to the first video according to the second total effective time range.
Optionally, the video editing apparatus further includes:
a sixth display module, configured to display a third object track on the editing interface, where the third object track binds L objects, where L is an integer greater than 1;
a replacement module configured to replace the third object track with T fourth object tracks binding the L objects in response to a received input to release the binding relationship between the third object track and the objects, where T is an integer greater than 1 and less than or equal to L;
a third obtaining module, configured to obtain T third lengths of the T fourth object tracks at the first time, T third positional relationships between the T fourth object tracks and the video track at the first time, and U objects bound to the T fourth object tracks at the first time, where U is an integer greater than 1;
a third determining module configured to determine a third total effective time range of the L objects in the first video according to the T third lengths and the T third positional relationships;
a third adding module configured to add the L objects to the first video according to the third total effective time range.
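By way of illustration only, releasing the binding relationship can be sketched as follows, reusing the hypothetical ObjectCombinationTrack above and producing one individual track per bound object (so that T equals L in this sketch); this is an assumption for illustration, not the disclosed implementation:

def unbind(track: ObjectCombinationTrack) -> list:
    """Replace a combination track with individual tracks, one per bound object."""
    return [
        ObjectCombinationTrack(objects=[obj],
                               start_time=track.start_time,
                               label=track.label)
        for obj in track.objects
    ]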
Optionally, the video editing apparatus further includes:
a fourth obtaining module configured to obtain a target sharing end in response to the received input of sharing the second object track;
a sending module configured to send a second object combination to the target sharing end, where the second object combination includes the N objects.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
FIG. 3 is a block diagram illustrating an electronic device in accordance with an example embodiment. As shown in fig. 3, the electronic device 30 includes: a processor 31, a memory 32, a user interface 33, a bus interface 34 and a transceiver 35.
In fig. 3, the bus architecture may include any number of interconnected buses and bridges, linking together various circuits including one or more processors represented by the processor 31 and memories represented by the memory 32. The bus architecture may also link together various other circuits such as peripherals, voltage regulators, power management circuits, and the like, which are well known in the art and therefore are not described further herein. The bus interface 34 provides an interface. For different user devices, the user interface 33 may also be an interface capable of connecting to desired devices externally or internally, including but not limited to a keypad, a display, a speaker, a microphone, a joystick, and the like. The processor 31 is responsible for managing the bus architecture and general processing, and the memory 32 may store data used by the processor 31 when performing operations.
The processor 31 is configured to read the program in the memory 32 and execute the following processes:
displaying target information on an editing interface of a first video, wherein the target information comprises any one of the following items: an object combination control, and Q first object tracks, wherein the Q first object tracks bind M objects;
displaying a second object track on the editing interface in response to the input for the target information received by the user interface 33, wherein the second object track binds M objects;
wherein Q is an integer greater than 1; m is an integer greater than or equal to Q.
Optionally, in a case that the target information includes the object combination control, the processor 31 is configured to read a program in the memory 32, and execute the following processes:
responding to the input for the object combination control received by the user interface 33, and displaying a first window, wherein the first window comprises at least one of K object combinations and a creation control for triggering the creation of a new object combination, and K is a positive integer;
in response to input received by the user interface 33 selecting a target object combination, a second object track is displayed on the editing interface, the second object track binding all objects included in the target object combination, the target object combination including M objects.
Optionally, in a case that the target information includes the Q first object tracks, the processor 31 is configured to read a program in the memory 32, and execute the following processes:
in response to an input received by the user interface 33 selecting the Q first object tracks, displaying a second object track on the editing interface, the second object track binding the M objects bound by the Q first object tracks.
Optionally, the processor 31 is configured to read the program in the memory 32 and execute the following processes:
displaying a video track of a first video on an editing interface of the first video;
in response to an input received at a first time and used for finishing editing the first video, acquiring a first length of the second object track at the first time, a first position relationship between the second object track and the video track at the first time, and N objects bound to the second object track at the first time, wherein N is an integer greater than 1;
determining a first total effective time range of the N objects in the first video according to the first length and the first position relation;
and adding the N objects into the first video according to the first total effective time range.
Optionally, the processor 31 is configured to read the program in the memory 32 and execute the following processes:
acquiring a first sub-effective time range of each object in the N objects in the first total effective time range to obtain N first sub-effective time ranges;
and adding the N objects into the first video according to the first total effective time range and the N first sub effective time ranges.
Optionally, the processor 31 is configured to read the program in the memory 32 and execute the following processes:
in response to input received by the user interface 33 for the second object track, displaying a second window, the second window being a window for editing an object to which the second object track is bound;
updating the objects bound to the second object track to the N objects in response to an input, received in the second window, for editing the objects bound to the second object track.
Optionally, the processor 31 is configured to read the program in the memory 32 and execute the following processes:
in response to an input received at a second time to copy the second object track, displaying a replica object track on the editing interface, wherein the objects bound by the replica object track at the second time are the same as the objects bound by the second object track at the second time;
acquiring a second length of the copy object track at the first moment, a second position relation between the copy object track and the video track at the first moment, and P objects bound to the copy object track at the first moment, wherein P is an integer greater than 1;
determining a second total effective time range of the P objects in the first video according to the second length and the second position relation;
adding the P objects to the first video according to the second total effective time range.
Optionally, the processor 31 is configured to read the program in the memory 32 and execute the following processes:
displaying a third object track on the editing interface, wherein the third object track is bound with L objects, and L is an integer greater than 1;
in response to an input received by the user interface 33 to release the binding relationship between the third object track and the object, replacing the third object track with T fourth object tracks, where the T fourth object tracks bind the L objects, and T is an integer greater than 1 and less than or equal to L;
acquiring T third lengths of the T fourth object tracks at the first moment, T third position relations of the T fourth object tracks and the video track at the first moment, and U objects bound to the T fourth object tracks at the first moment, wherein U is an integer greater than 1;
determining a third total effective time range of the L objects in the first video according to the T third lengths and the T third position relations;
adding the L objects to the first video according to the third total effective time range.
Optionally, the processor 31 is configured to read the program in the memory 32 and execute the following processes:
acquiring a target sharing end in response to an input for sharing the second object track received by the user interface 33;
and sending a second object combination to the target sharing end, wherein the second object combination comprises the N objects.
In an exemplary embodiment, a storage medium comprising instructions, such as the memory 32 comprising instructions, executable by the processor 31 of the electronic device to perform the method described above is also provided. Alternatively, the storage medium may be a non-transitory computer-readable storage medium, which may be, for example, a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, there is also provided a computer program product comprising: executable instructions which, when run on a computer, enable the computer to perform the above-described method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method of video editing, the method comprising:
displaying target information on an editing interface of a first video, wherein the target information comprises any one of the following items: an object combination control, and Q first object tracks, wherein the Q first object tracks bind M objects;
in response to the received input for the target information, displaying a second object track on the editing interface, wherein the second object track binds M objects;
wherein Q is an integer greater than 1; m is an integer greater than or equal to Q.
2. The method of claim 1, wherein in the case that the target information includes the object combination control, the step of displaying a second object track on the editing interface in response to the received input for the target information comprises:
responding to the received input for the object combination control, and displaying a first window, wherein the first window comprises at least one of K object combinations and a creation control for triggering the creation of a new object combination, and K is a positive integer;
displaying a second object track on the editing interface in response to the received input for selecting a target object combination, wherein the second object track binds all objects included in the target object combination, and the target object combination comprises M objects.
3. The method of claim 1, further comprising:
displaying a video track of a first video on an editing interface of the first video;
in response to an input received at a first time and used for finishing editing the first video, acquiring a first length of the second object track at the first time, a first position relationship between the second object track and the video track at the first time, and N objects bound to the second object track at the first time, wherein N is an integer greater than 1;
determining a first total effective time range of the N objects in the first video according to the first length and the first position relation;
and adding the N objects into the first video according to the first total effective time range.
4. The method of claim 3, wherein before the step of responding to the input, received at the first time, for ending editing of the first video, the method further comprises:
in response to receiving an input for the second object track, displaying a second window, the second window being a window for editing the objects bound to the second object track;
updating the objects bound to the second object track to the N objects in response to an input, received in the second window, for editing the objects bound to the second object track.
5. The method of claim 3, wherein before the step of responding to the input, received at the first time, for ending editing of the first video, the method further comprises:
in response to an input received at a second time to copy the second object track, displaying a replica object track on the editing interface, wherein the objects bound by the replica object track at the second time are the same as the objects bound by the second object track at the second time;
after the step of responding to the input, received at the first time, for ending editing of the first video, the method further comprises:
acquiring a second length of the copy object track at the first moment, a second position relation between the copy object track and the video track at the first moment, and P objects bound to the copy object track at the first moment, wherein P is an integer greater than 1;
determining a second total effective time range of the P objects in the first video according to the second length and the second position relation;
adding the P objects to the first video according to the second total effective time range.
6. The method of claim 3, wherein before the step of responding to the input, received at the first time, for ending editing of the first video, the method further comprises:
displaying a third object track on the editing interface, wherein the third object track is bound with L objects, and L is an integer greater than 1;
in response to a received input for releasing the binding relationship between the third object track and the objects, replacing the third object track with T fourth object tracks, where the T fourth object tracks bind the L objects, and T is an integer greater than 1 and less than or equal to L;
after the step of responding to the input, received at the first time, for ending editing of the first video, the method further comprises:
acquiring T third lengths of the T fourth object tracks at the first moment, T third position relations of the T fourth object tracks and the video track at the first moment, and U objects bound to the T fourth object tracks at the first moment, wherein U is an integer greater than 1;
determining a third total effective time range of the L objects in the first video according to the T third lengths and the T third position relations;
adding the L objects to the first video according to the third total effective time range.
7. The method of claim 3, wherein after the step of responding to the input, received at the first time, for ending editing of the first video, the method further comprises:
in response to the received input for sharing the second object track, acquiring a target sharing end;
and sending a second object combination to the target sharing end, wherein the second object combination comprises the N objects.
8. A video editing apparatus, characterized in that the video editing apparatus comprises:
a first display module configured to display target information on an editing interface of a first video, the target information including any one of: an object combination control, and Q first object tracks, wherein the Q first object tracks bind M objects;
a second display module configured to display a second object track on the editing interface in response to the received input for the target information, the second object track binding M objects;
wherein Q is an integer greater than 1; m is an integer greater than or equal to Q.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the video editing method of any one of claims 1 to 7.
10. A storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform a video editing method as claimed in any one of claims 1 to 7.
CN202011438852.0A 2020-12-07 2020-12-07 Video editing method and device and electronic equipment Active CN112506412B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011438852.0A CN112506412B (en) 2020-12-07 2020-12-07 Video editing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011438852.0A CN112506412B (en) 2020-12-07 2020-12-07 Video editing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN112506412A true CN112506412A (en) 2021-03-16
CN112506412B CN112506412B (en) 2022-09-30

Family

ID=74970734

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011438852.0A Active CN112506412B (en) 2020-12-07 2020-12-07 Video editing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN112506412B (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5099337A (en) * 1989-10-31 1992-03-24 Cury Brian L Method and apparatus for producing customized video recordings
US5682326A (en) * 1992-08-03 1997-10-28 Radius Inc. Desktop digital video processing system
US20070165998A1 (en) * 2003-09-09 2007-07-19 Sony Corporation File recording device, file reproducing device, file recording method, program of file recording method, recording medium containing therein program of file recording method, file reproducing method, program of file reproducing method, and recording medium containing therein program of file reproducing method
US20050206751A1 (en) * 2004-03-19 2005-09-22 Eastman Kodak Company Digital video system for assembling video sequences
CN1710507A (en) * 2004-06-17 2005-12-21 索尼株式会社 Content reproduction apparatus, content reproduction method, content management apparatus, content management method and computer program
US20070274683A1 (en) * 2006-05-24 2007-11-29 Michael Wayne Shore Method and apparatus for creating a custom track
US20140300811A1 (en) * 2007-05-25 2014-10-09 Google Inc. Methods and Systems for Providing and Playing Videos Having Multiple Tracks of Timed Text Over A Network
US20130263003A1 (en) * 2012-03-29 2013-10-03 Adobe Systems Inc. Method and apparatus for grouping video tracks in a video editing timeline
CN104751869A (en) * 2013-12-31 2015-07-01 广州励丰文化科技股份有限公司 Variable rail sound image-based panoramic multi-channel audio control method
US20150213275A1 (en) * 2014-01-30 2015-07-30 Raytheon Pikewerks Corporation Context aware integrated display keyboard video mouse controller
US20170162228A1 (en) * 2015-12-07 2017-06-08 Cyberlink Corp. Systems and methods for media track management in a media editing tool
CN111357046A (en) * 2017-11-24 2020-06-30 索尼公司 Information processing apparatus, information processing method, and program
CN109698941A (en) * 2018-12-29 2019-04-30 北京强氧新科信息技术有限公司 Multiple-camera trajectory control system and method
CN110198486A (en) * 2019-05-28 2019-09-03 上海哔哩哔哩科技有限公司 A kind of method, computer equipment and the readable storage medium storing program for executing of preview video material
CN111526242A (en) * 2020-04-30 2020-08-11 维沃移动通信有限公司 Audio processing method and device and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113037618A (en) * 2021-04-16 2021-06-25 维沃移动通信有限公司 Image sharing method and device
CN113037618B (en) * 2021-04-16 2022-11-01 维沃移动通信有限公司 Image sharing method and device
CN113347479A (en) * 2021-05-31 2021-09-03 网易(杭州)网络有限公司 Multimedia material editing method, device, equipment and storage medium
CN113473204A (en) * 2021-05-31 2021-10-01 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
CN113473204B (en) * 2021-05-31 2023-10-13 北京达佳互联信息技术有限公司 Information display method and device, electronic equipment and storage medium
WO2023093687A1 (en) * 2021-11-25 2023-06-01 北京字跳网络技术有限公司 Video processing method and device

Also Published As

Publication number Publication date
CN112506412B (en) 2022-09-30

Similar Documents

Publication Publication Date Title
CN112506412B (en) Video editing method and device and electronic equipment
US11893052B2 (en) Management of local and remote media items
US11169685B2 (en) Methods and apparatuses to control application programs
JP7150830B2 (en) Content management system workflow functionality enforced by the client device
US20070130541A1 (en) Synchronization of widgets and dashboards
US10817472B2 (en) Storage organization system with associated storage utilization values
CN105359133A (en) Interaction of web content with an electronic application document
CN110209735A (en) Database backup method, calculates equipment and storage medium at DB Backup device
KR20080086265A (en) System and method for scrolling display screen, mobile terminal including the system and recording medium storing program for performing the method thereof
US20230251763A1 (en) Display of a plurality of files from multiple devices
CN114047892A (en) Screen projection control method and device, storage medium and electronic equipment
CN110476162A (en) Use the action message of navigation memonic symbol control display
US10839143B2 (en) Referential gestures within content items
JP6474728B2 (en) Enhanced information gathering environment
US8965940B2 (en) Imitation of file embedding in a document
JP7254842B2 (en) A method, system, and computer-readable recording medium for creating notes for audio files through interaction between an app and a website
CN114979743B (en) Method, device, equipment and medium for displaying audiovisual works
US20240126804A1 (en) Management of local and remote media items
US11861139B1 (en) Deferring and accessing deferred content from multiple applications
CN117651198A (en) Method, device, equipment and storage medium for authoring media content
CN117319728A (en) Method, apparatus, device and storage medium for audio-visual content sharing
CN117435311A (en) Data processing method, device, apparatus, storage medium, and program
CN112423109A (en) Interactive video generation method and system, electronic equipment and storage medium
TW201013529A (en) Method and system for establishing multi-level tool sets

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant