CN116634233B - Media editing method, device, equipment and storage medium - Google Patents

Media editing method, device, equipment and storage medium

Info

Publication number
CN116634233B
Authority
CN
China
Prior art keywords
media
target
frame
full
editing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202310389160.9A
Other languages
Chinese (zh)
Other versions
CN116634233A (en)
Inventor
尹玉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Qicai Xingyun Digital Technology Co ltd
Original Assignee
Beijing Qicai Xingyun Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Qicai Xingyun Digital Technology Co ltd filed Critical Beijing Qicai Xingyun Digital Technology Co ltd
Priority to CN202311648150.9A (CN117499745A)
Priority to CN202310389160.9A (CN116634233B)
Publication of CN116634233A
Application granted
Publication of CN116634233B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47205End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for manipulating displayed content, e.g. interacting with MPEG-4 objects, editing locally
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/85Assembly of content; Generation of multimedia applications
    • H04N21/858Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot
    • H04N21/8586Linking data to content, e.g. by linking an URL to a video object, by creating a hotspot by using a URL

Abstract

An embodiment of the invention provides a media editing method comprising the following steps: in response to determining that a target full-frame command at any frame position of a target media segment on the media frame editing line is a non-time-series full-frame command, and to obtaining an indication that the target full-frame command is to be applied to the media segments homologous to the target media segment, acquiring, from the set of media segments to be edited and according to a first source identifier of the target media segment, a target media segment set carrying that first source identifier; acquiring, on the media frame editing line and according to the arrangement order of the target media segment set within the set of media segments to be edited, the target positions on which the target full-frame command acts; and adding the target full-frame command in the editing area at the target positions. The scheme provided by the embodiment of the invention simplifies the editing operation and saves the user's editing time.

Description

Media editing method, device, equipment and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a media editing method, apparatus, device, and storage medium.
Background
In existing media editing software, when a user edits media (taking video as an example), a segmentation command is typically added to the media in the editing area to remove unwanted video clips, and the retained video clips are then edited, or some new video clips are added and combined with the retained clips before editing. However, once the media to be edited has been split into several media segments, a non-time-series full-frame control command (such as a speed change or a filter) that the user intends to apply to the retained clips as a whole will, if added directly to one video clip, act only on that selected clip rather than on all retained clips.
If a non-time-series full-frame control command needs to be added to several video clips, the user must either export those clips, merge them into a new video and re-import it into the editing area, or add the non-time-series full-frame command to each clip separately. Both approaches are cumbersome, waste the user's editing time, and are very unfriendly to the user.
Disclosure of Invention
The embodiment of the invention provides a method for efficiently editing media content.
In a first aspect, an embodiment of the present invention provides a media editing method applied to an editing terminal, where the content displayed in a graphical user interface of the editing terminal includes at least an object display area and an editing area, the object display area is used to display the content of each frame of each media segment to be edited, the editing area is used to edit the media frames of each media segment to be edited, and the editing area includes at least a media frame editing line used to load each media segment to be edited. The method includes:
in response to determining that a target full-frame command at any frame position of a target media segment on the media frame editing line is a non-time-series full-frame command, and to obtaining an indication that the target full-frame command is to be applied to the media segments homologous to the target media segment, where the non-time-series full-frame command is a control command that acts on every frame of the target media segment and is independent of timing, the target media segment is any one media segment in a set of media segments to be edited, the set of media segments to be edited comprises all media segments loaded on the media frame editing line, and the set of media segments to be edited has an arrangement order on the media frame editing line;
acquiring, from the set of media segments to be edited and according to a first source identifier of the target media segment, a target media segment set carrying that first source identifier, where the first source identifier identifies the media segments homologous to the target media segment;
acquiring, on the media frame editing line and according to the arrangement order of the target media segment set within the set of media segments to be edited, the target positions on which the target full-frame command acts;
and adding the target full-frame command in the editing area at the target positions.
In a second aspect, an embodiment of the present invention provides a media editing apparatus, where the content displayed in a graphical user interface of an editing terminal includes at least an object display area and an editing area, the object display area is used to display the content of each frame of each media segment to be edited, the editing area is used to edit the media frames of each media segment to be edited, and the editing area includes at least a media frame editing line used to load each media segment to be edited. The apparatus includes:
an acquisition module, configured to respond to determining that a target full-frame command at any frame position of a target media segment on the media frame editing line is a non-time-series full-frame command and to obtaining an indication that the target full-frame command is to be applied to the media segments homologous to the target media segment, where the non-time-series full-frame command is a control command that acts on every frame of the target media segment and is independent of timing, the target media segment is any one media segment in a set of media segments to be edited, the set of media segments to be edited comprises all media segments loaded on the media frame editing line, and the set of media segments to be edited has an arrangement order on the media frame editing line,
and to acquire, from the set of media segments to be edited and according to a first source identifier of the target media segment, a target media segment set carrying that first source identifier, where the first source identifier identifies the media segments homologous to the target media segment;
an adding module, configured to acquire, on the media frame editing line and according to the arrangement order of the target media segment set within the set of media segments to be edited, the target positions on which the target full-frame command acts,
and to add the target full-frame command in the editing area at the target positions.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory, a processor, a communication interface; wherein the memory has executable code stored thereon, which when executed by the processor, causes the processor to at least implement the media editing method according to the first aspect.
In a fourth aspect, embodiments of the present invention provide a non-transitory machine-readable storage medium having stored thereon executable code, which when executed by a processor of an electronic device, causes the processor to at least implement a media editing method as described in the first aspect.
In the scheme provided by the embodiment of the invention, a target full-frame command at any frame position of a target media segment on the media frame editing line is determined to be a non-time-series full-frame command, and an indication is obtained that the target full-frame command is to be applied to the media segments homologous to the target media segment; a target media segment set carrying the first source identifier of the target media segment is acquired from the set of media segments to be edited according to that first source identifier; the target positions on which the target full-frame command acts are acquired on the media frame editing line according to the arrangement order of the target media segment set within the set of media segments to be edited; and the target full-frame command is added in the editing area at the target positions. The scheme provided by the embodiment of the invention greatly simplifies the editing operation and saves the user's editing time.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required for the description of the embodiments will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1A is a schematic diagram of a graphical user interface, provided in an embodiment of the present invention, of a media terminal applying the method flows shown in Figs. 3A to 3C;
Fig. 1B is a schematic diagram of another graphical user interface, provided in an embodiment of the present invention, of a media terminal applying the method flows shown in Figs. 3A to 3C;
FIG. 2 is a schematic representation of media segments in examples provided by embodiments of the present invention;
Fig. 3A is a flowchart of a media editing method according to an embodiment of the present invention;
Fig. 3B is a flowchart of another media editing method according to an embodiment of the present invention;
Fig. 3C is a flowchart of another media editing method according to an embodiment of the present invention;
Fig. 3D is a flowchart of another media editing method according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a media editing apparatus according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of an electronic device corresponding to the media editing apparatus provided in the embodiment shown in fig. 4.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the embodiments of the present invention will be clearly and completely described below with reference to the accompanying drawings in the embodiments of the present invention, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this application and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise, the "plurality" generally includes at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" as used herein merely describes an association between associated objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. In addition, the character "/" herein generally indicates an "or" relationship between the associated objects.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if (a stated condition or event) is detected" may be interpreted, depending on the context, as "when it is determined", "in response to determining", "when (the stated condition or event) is detected" or "in response to detecting (the stated condition or event)".
It should also be noted that the terms "comprises", "comprising", or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a product or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a product or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the product or system that comprises that element.
In addition, the sequence of steps in the method embodiments described below is only an example and is not strictly limited.
The media editing method provided by the embodiment of the invention can be executed by an electronic device, and the electronic device can be a terminal device such as a PC, a notebook computer, a smart phone and the like, and can also be a server. The server may be a physical server comprising a separate host, or may be a virtual server, or may be a cloud server or a server cluster.
Interface 1M in Fig. 1A is a graphical user interface of a media terminal provided in an embodiment of the present invention. The content displayed in the graphical user interface of the editing terminal includes at least an object display area 101 and an editing area 102. The object display area 101 is used to display the content of each frame of each media segment to be edited, and the editing area 102 is used to edit the media frames of each media segment to be edited. In the media editing methods of this embodiment, shown in Figs. 3A to 3D, the object display area 101 dynamically displays the content of each frame of each media segment to be edited, and the editing area provides a corresponding media frame editing line 103 on which each frame of each media segment to be edited can be edited; the effect of editing a media segment to be edited can then be displayed in the object display area 101.
In an embodiment, the method applied to the media terminal may include the following steps; a specific flowchart is shown in Fig. 3A.
Step 301: in response to determining that a target full-frame command at any frame position of a target media segment on the media frame editing line is a non-time-series full-frame command, and to obtaining an indication that the target full-frame command is to be applied to the media segments homologous to the target media segment,
acquiring, from the set of media segments to be edited and according to a first source identifier of the target media segment, a target media segment set carrying that first source identifier, where the first source identifier identifies the media segments homologous to the target media segment.
the full frame commands in this embodiment include a time-sequential full frame command and a non-time-sequential full frame command. Wherein a timing full frame command is a control command that acts on each frame of the target media segment and that is related to timing (e.g., a control command to add specific music); the non-time sequence full frame command is a control command which acts on each frame of the target media segment and is irrelevant to time sequence; exemplary, e.g., shift commands, filter commands for video streams, etc.; the target media fragment is any one media fragment in a media fragment set to be edited, the media fragment set to be edited is all media fragments loaded on the media frame editing line, and the media fragment set to be edited has an arrangement sequence on the media frame editing line; in this embodiment, the media segment set to be edited is changed in real time along with the real-time segmentation of the media segments by the user or the addition of new media segments, and the media segment set to be edited may be a combination of multiple homologous media segments obtained by adding a segmentation command to a complete source media segment, where the media segments obtained after segmentation are arranged in the editing line of the media frame according to a default segmentation sequence; or a combination of multiple media segments of different sources sequentially loaded into the editing area from the local map library, after the media segments are newly added, the newly added media segments are arranged backwards in the editing line of the media frames according to a default newly added sequence, and the arrangement sequence can be manually adjusted by a user.
Where the media segments to be edited are a combination of several homologous media segments, then in response to receiving a segmentation command for a source media segment, a first source identifier is set for each of the homologous media segments when the segmentation command is executed to cut the source media segment.
Illustratively, in a first example, as shown in frame 2a in Fig. 2, after a segmentation command for a source media segment A is received, a plurality of sequentially arranged media segments A1, A2, A3 are obtained; a first source identifier a may then be set for each of A1, A2, A3, identifying the homologous media segments as A1a, A2a, A3a.
Where the media segments to be edited are a combination of several media segments of different sources, then in response to receiving an addition command for adding a new media segment to any source media segment, a second source identifier is set for the different-source media segments consisting of the source media segment and the newly added media segment when the addition command is executed.
Illustratively, in a second example, as shown in frame 2b in Fig. 2, an addition command for adding a new media segment B and a new media segment C to a source media segment A is received, and a plurality of sequentially arranged media segments A, B, C are obtained; a second source identifier b may then be set for each of A, B, C, identifying the different-source media segments as Ab, Bb, Cb.
When the media segments to be edited include both a combination of homologous media segments and a combination of different-source media segments, then in response to receiving a segmentation command for a source media segment, a first source identifier is set for each of the resulting media segments when the segmentation command is executed to cut the source media segment; and in response to receiving an addition command for adding a new media segment to any source media segment, a second source identifier is set for the different-source media segments consisting of the source media segment and the newly added media segment when the addition command is executed.
In a third example, as shown in frame 2c in Fig. 2, after a segmentation command for a source media segment A is received, a plurality of sequentially arranged media segments A1, A2, A3 are obtained and a first source identifier a is set for each of them, giving the homologous media segments A1a, A2a, A3a; an addition command for a new media segment B and a new media segment C is then received, so the sequentially arranged media segments are A1a, A2a, A3a, B, C, and a second source identifier b is set for each of A1a, A2a, A3a, B, C, giving the media segments A1ab, A2ab, A3ab, Bb, Cb.
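A brief sketch, reusing the hypothetical MediaSegment class above, of how the first and second source identifiers could be attached when the segmentation and addition commands are executed; the helper names, cut points and frame counts are illustrative assumptions rather than details taken from the patent:
```python
def split_segment(source, cut_points, first_id):
    """Cut the source segment and tag every homologous part with the same first source identifier."""
    bounds = [0, *cut_points, source.frame_count]
    return [
        MediaSegment(name=f"{source.name}{i + 1}",
                     frame_count=bounds[i + 1] - bounds[i],
                     source_ids=set(source.source_ids) | {first_id})
        for i in range(len(bounds) - 1)
    ]

def add_segments(existing, new_segments, second_id):
    """Append the newly added segments and tag the whole different-source combination with one second source identifier."""
    combined = [*existing, *new_segments]
    for seg in combined:
        seg.source_ids.add(second_id)
    return combined

# Third example (frame 2c of Fig. 2): split A, then add B and C.
A = MediaSegment("A", frame_count=18)
parts = split_segment(A, cut_points=[6, 12], first_id="a")     # A1a, A2a, A3a
B, C = MediaSegment("B", frame_count=10), MediaSegment("C", frame_count=8)
edit_line = add_segments(parts, [B, C], second_id="b")         # A1ab, A2ab, A3ab, Bb, Cb
```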
Specifically, when all the media segments to be edited are homologous media segments, they all carry the first source identifier; the other media segments to be edited are therefore the media segments carrying the first source identifier of the target media segment and can be extracted according to that first source identifier.
Illustratively, following the first example above, as shown in frame 2a of Fig. 2, after a segmentation command for the source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a first source identifier a is set for each of them, giving the homologous media segments A1a, A2a, A3a; when A1a is the target media segment, the other media segments A2a, A3a to be edited can be extracted according to the first source identifier a.
When all the media segments to be edited are different-source media segments, they all carry the second source identifier; the other media segments to be edited are therefore the media segments carrying the second source identifier of the target media segment and can be extracted according to that second source identifier.
Illustratively, following the second example above, as shown in frame 2b of Fig. 2, an addition command for adding a new media segment B and a new media segment C to the source media segment A is received, giving the sequentially arranged media segments A, B, C; a second source identifier b is set for each of them, giving the different-source media segments Ab, Bb, Cb, and when Ab is the target media segment, the other media segments Bb, Cb to be edited can be extracted according to the second source identifier b.
When the media segments to be edited include both homologous media segments and different-source media segments, the homologous media segments carry a first source identifier and the different-source media segments carry a second source identifier; the other media segments to be edited are the media segments carrying the first source identifier and/or the second source identifier of the target media segment and can be extracted according to the first source identifier and/or the second source identifier.
For example, following the third example above, as shown in frame 2c in Fig. 2, after a segmentation command for the source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a first source identifier a is set for each of them, giving A1a, A2a, A3a; an addition command for a new media segment B and a new media segment C is then received, so the sequentially arranged media segments are A1a, A2a, A3a, B, C, and a second source identifier b is set for each of them, giving A1ab, A2ab, A3ab, Bb, Cb.
Then, when A1ab is the target media segment, the other media segments A2ab, A3ab, Bb, Cb to be edited can be extracted according to the first source identifier a and/or the second source identifier b.
In a fourth example, as shown in frame 2ea in Fig. 2, after a segmentation command for a source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a third source identifier c is set for each of them, giving the homologous media segments A1c, A2c, A3c; an addition command for a new media segment B and a new media segment C is then received, so the sequentially arranged media segments are A1c, A2c, A3c, B, C, and the third source identifier c is also set for the newly added media segments B, C, giving the sequentially arranged media segments A1c, A2c, A3c, Bc, Cc.
Then, when A1c is the target media segment, the other media segments A2c, A3c, Bc, Cc to be edited can be extracted according to the third source identifier c.
In a fifth example, as shown in frame 2eb in Fig. 2, an addition command for adding a new media segment B and a new media segment C to a source media segment A is received, giving the sequentially arranged media segments A, B, C, and a third source identifier c is set for each of them, giving the media segments Ac, Bc, Cc; after a segmentation command for the media segment B is then received, the parts B1, B2, B3 of media segment B keep only the third source identifier c, giving the sequentially arranged media segments Ac, B1c, B2c, B3c, Cc.
Then, when Ac is the target media segment, the other media segments B1c, B2c, B3c, Cc to be edited can be extracted according to the third source identifier c.
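In each of these cases the extraction step reduces to collecting every segment on the editing line that shares at least one source identifier with the target segment. A minimal sketch continuing the hypothetical structures above (the "any shared identifier" rule mirrors the "first source identifier and/or second source identifier" wording and is otherwise an assumption):
```python
def segments_with_same_source(edit_line, target):
    """Return, in arrangement order, every segment sharing at least one
    source identifier with the target segment (the target itself included)."""
    return [seg for seg in edit_line if seg.source_ids & target.source_ids]

# Continuing the third example above: with A1ab as the target segment this yields
# A1ab, A2ab, A3ab, Bb, Cb, i.e. every segment carrying identifier a and/or b.
```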
Step 302: on the media frame editing line, acquiring the target positions on which the target full-frame command acts according to the arrangement order of the target media segment set within the set of media segments to be edited,
and adding the target full-frame command in the editing area at the target positions.
The target positions on which the target full-frame command acts are acquired on the media frame editing line according to the arrangement order of the target media segment set within the set of media segments to be edited. Specifically, the position of every frame of every media segment in the target media segment set may be taken as the target positions; or the position of any frame of the target media segment may be taken as a start frame, and the positions of the start frame and of every following frame of every media segment in the target media segment set, in arrangement order, may be taken as the target positions on the media frame editing line.
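Both ways of determining the target positions can be sketched with one helper; the (segment, frame index) representation and the helper name are assumptions, and the start frame, when given, is included among the target positions, as in the examples that follow:
```python
def target_positions(edit_line, target_set, start=None):
    """Return the (segment, frame index) positions the full-frame command acts on.

    With start=None, every frame of every segment in the target set is a target
    position; with start=(segment, frame index), only that frame and the frames
    after it, in arrangement order, are included."""
    positions, started = [], start is None
    for seg in edit_line:                     # walk the editing line in arrangement order
        if seg not in target_set:
            continue
        for frame in range(seg.frame_count):
            if not started and (seg, frame) == start:
                started = True
            if started:
                positions.append((seg, frame))
    return positions
```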
According to this embodiment of the invention, when several homologous media segments and several different-source media segments are being edited, the homologous media segments can be obtained flexibly and automatically, and the non-time-series full-frame command can be added to all of them at once, which saves the user's editing time and improves media editing efficiency.
Illustratively, following the first example above, after a segmentation command for the source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a first source identifier a is set for each of them, giving the homologous media segments A1a, A2a, A3a; when A1a is the target media segment, its homologous media segments A2a, A3a can be extracted according to the first source identifier a.
Illustratively, the target media segment A1a comprises the media frames A1_1, A1_2, A1_3, A1_4, A1_5, A1_6; the media frames and segments carrying the first source identifier a are arranged in order as A1_1a, A1_2a, A1_3a, A1_4a, A1_5a, A1_6a, A2a, A3a. Suppose the full-frame command is received at the position of frame A1_4a: the target media segment to which frame A1_4a belongs is A1a, and the target media segment set carrying the first source identifier a of the target media segment A1a is acquired as A1a, A2a, A3a.
Illustratively, as shown in frame 2fa of Fig. 2, the position of every frame of every media segment in the target media segment set (here the media segments A1a, A2a, A3a) may be taken as the target positions, i.e. the frames A1_1a, A1_2a, A1_3a, A1_4a, A1_5a, A1_6a of media segment A1a, every frame of media segment A2a, and every frame of media segment A3a, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
Illustratively, as shown in frame 2fb of Fig. 2, the position of any frame of the target media segment A1a, for example A1_4a, may be taken as a start frame; according to the arrangement order of the target media segment set (A1a, A2a, A3a) within the set of media segments to be edited, the positions of the start frame and of every following frame (the frames A1_4a, A1_5a, A1_6a of media segment A1a, every frame of media segment A2a, and every frame of media segment A3a) are acquired on the media frame editing line as the target positions, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
Illustratively, following the second example above, an addition command for adding a new media segment B and a new media segment C to the source media segment A is received, giving the sequentially arranged media segments A, B, C, and a second source identifier b is set for each of them, giving the different-source media segments Ab, Bb, Cb; when Ab is the target media segment, the other media segments Bb, Cb to be edited can be extracted according to the second source identifier b.
Illustratively, the target media segment Ab comprises the media frames A_1b, A_2b, A_3b, A_4b, A_5b, A_6b; the media frames and segments carrying the second source identifier b are arranged in order as A_1b, A_2b, A_3b, A_4b, A_5b, A_6b, Bb, Cb. Suppose the full-frame command is received at the position of frame A_4b: the target media segment to which frame A_4b belongs is Ab, and the target media segment set carrying the second source identifier b of the target media segment Ab is acquired as Ab, Bb, Cb.
Illustratively, as shown in frame 2ga of Fig. 2, the position of every frame of every media segment in the target media segment set (here the media segments Ab, Bb, Cb) may be taken as the target positions, i.e. the frames A_1b, A_2b, A_3b, A_4b, A_5b, A_6b of media segment Ab, every frame of media segment Bb, and every frame of media segment Cb, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
Illustratively, as shown in frame 2gb in Fig. 2, the frame A_4b may be taken as a start frame; according to the arrangement order of the target media segment set (Ab, Bb, Cb) within the set of media segments to be edited, the positions of the start frame and of every following frame (the frames A_4b, A_5b, A_6b of media segment Ab, every frame of media segment Bb, and every frame of media segment Cb) are acquired on the media frame editing line as the target positions, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
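Either placement of the command (directly on the media frame editing line, or on a separate editing line dedicated to full-frame commands) can be sketched the same way; the command_track mapping below is an illustrative assumption, not a structure named by the patent:
```python
def add_full_frame_command(command, positions, command_track):
    """Record the command at every target position.

    command_track maps (segment name, frame index) -> list of commands; the same
    structure could live on the media frame editing line itself or on a dedicated
    command editing line in the editing area."""
    for seg, frame in positions:
        command_track.setdefault((seg.name, frame), []).append(command)
    return command_track

# Usage: add a non-time-series filter to the homologous segments A1a, A2a, A3a.
# track = add_full_frame_command(FullFrameCommand("filter", time_series=False),
#                                target_positions(edit_line, target_set), {})
```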
In another embodiment, the method applied to the media terminal may include the following steps; a specific flowchart is shown in Fig. 3B:
Step 401: in response to determining that the target full-frame command is a non-time-series full-frame command, and to obtaining an indication that the target full-frame command is to be applied to the media segments to be edited in addition to the target media segment, acquiring the set of media segments to be edited;
Step 402: on the media frame editing line, acquiring the target positions on which the target full-frame command acts according to the arrangement order of the set of media segments to be edited, and adding the target full-frame command in the editing area at the target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added, at positions corresponding to the target positions, on a dedicated editing line for the target full-frame command in the editing area.
Specifically, according to the arrangement order of the set of media segments to be edited, the position of every frame of every media segment in the set of media segments to be edited is acquired on the media frame editing line as the target positions; or
the position of any frame of the target media segment is taken as a start frame, and the positions of the start frame and of every following frame of every media segment in the set of media segments to be edited, in arrangement order, are acquired on the media frame editing line as the target positions.
The set of media segments to be edited is obtained according to the first source identifier and/or the second source identifier of the target media segment, or according to a third source identifier of the target media segment, where the first source identifier identifies the media segments homologous to the target media segment, the second source identifier identifies the media segments of a source different from the target media segment, and the third source identifier identifies the media segments homologous to the target media segment and/or of a different source. For example, following the third example, after a segmentation command for the source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a first source identifier a is set for each of them, giving the homologous media segments A1a, A2a, A3a; an addition command for a new media segment B and a new media segment C is then received, so the sequentially arranged media segments are A1a, A2a, A3a, B, C, and a second source identifier b is set for each of them, giving the homologous and different-source media segments A1ab, A2ab, A3ab, Bb, Cb. Then, when A1ab is the target media segment, the other media segments A2ab, A3ab, Bb, Cb to be edited can be extracted according to the first source identifier a and/or the second source identifier b, i.e. the set of media segments to be edited is obtained as A1ab, A2ab, A3ab, Bb, Cb.
Illustratively, the target media segment A1ab comprises the media frames A1_1, A1_2, A1_3, A1_4, A1_5, A1_6; the media frames and segments carrying the first source identifier a and/or the second source identifier b are arranged in order as A1_1ab, A1_2ab, A1_3ab, A1_4ab, A1_5ab, A1_6ab, A2ab, A3ab, Bb, Cb. Suppose the full-frame command is received at the position of frame A1_4ab: the target media segment to which frame A1_4ab belongs is A1ab, and the set of media segments to be edited carrying the first source identifier a and/or the second source identifier b of the target media segment A1ab is acquired as A1ab, A2ab, A3ab, Bb, Cb.
Illustratively, as shown in frame 2ha of Fig. 2, according to the arrangement order of the set of media segments to be edited (A1ab, A2ab, A3ab, Bb, Cb), the position of every frame of every media segment in that set may be acquired on the media frame editing line as the target positions, i.e. the frames A1_1ab, A1_2ab, A1_3ab, A1_4ab, A1_5ab, A1_6ab of media segment A1ab, every frame of media segment A2ab, every frame of media segment A3ab, every frame of media segment Bb, and every frame of media segment Cb, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
Illustratively, as shown in frame 2hb of Fig. 2, the position of any frame of the target media segment, for example A1_4ab, may be taken as a start frame; according to the arrangement order of the set of media segments to be edited (A1ab, A2ab, A3ab, Bb, Cb), the positions of the start frame and of every following frame of every media segment in that set (the frames A1_4ab, A1_5ab, A1_6ab of media segment A1ab, every frame of media segment A2ab, every frame of media segment A3ab, every frame of media segment Bb, and every frame of media segment Cb) are acquired on the media frame editing line as the target positions, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
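In terms of the hypothetical target_positions helper sketched earlier, this embodiment differs from the previous one only in that the whole set of media segments to be edited is passed as the target set:
```python
# Continuing the sketches above: the command now covers the whole editing line.
positions_all = target_positions(edit_line, target_set=edit_line)

# Or only from the frame where the command was received onward, here frame index 3
# of the first segment standing in for the start frame.
positions_from_start = target_positions(edit_line, target_set=edit_line, start=(edit_line[0], 3))
```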
In one embodiment, in response to receiving a segmentation command for a source media segment, a third source identifier is set for each of the resulting media segments when the segmentation command is executed to cut the source media segment; and when an addition command for adding a new media segment to any source media segment is received, the same third source identifier is also set for the newly added media segment when the addition command is executed.
When the media segments to be edited include both homologous media segments and different-source media segments, the set of media segments to be edited can then be extracted according to the third source identifier.
For example, following the fourth example, after a segmentation command for the source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a third source identifier c is set for each of them, giving the homologous media segments A1c, A2c, A3c; an addition command for a new media segment B and a new media segment C is then received, so the sequentially arranged media segments are A1c, A2c, A3c, B, C, and the third source identifier c is set for the newly added media segments B, C, giving the sequentially arranged media segments A1c, A2c, A3c, Bc, Cc.
Then, when A1c is the target media segment, the other media segments A2c, A3c, Bc, Cc to be edited can be extracted according to the third source identifier c, i.e. the set of media segments to be edited is obtained as A1c, A2c, A3c, Bc, Cc.
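In terms of the hypothetical helpers sketched earlier, the third-source-identifier variant simply passes one and the same identifier to both the segmentation and the addition operations, so a single lookup later returns the whole editing line:
```python
# Fourth example (frame 2ea of Fig. 2), reusing the hypothetical helpers above.
A = MediaSegment("A", frame_count=18)
parts = split_segment(A, cut_points=[6, 12], first_id="c")    # A1c, A2c, A3c share identifier c
B, C = MediaSegment("B", frame_count=10), MediaSegment("C", frame_count=8)
edit_line = add_segments(parts, [B, C], second_id="c")        # the newly added B, C also receive c

# With A1c (edit_line[0]) as the target segment, segments_with_same_source(edit_line, edit_line[0])
# now returns the entire set of media segments to be edited: A1c, A2c, A3c, Bc, Cc.
```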
Illustratively, the target media segment A1c comprises the media frames A1_1, A1_2, A1_3, A1_4, A1_5, A1_6; the media frames and segments carrying the third source identifier c are arranged in order as A1_1c, A1_2c, A1_3c, A1_4c, A1_5c, A1_6c, A2c, A3c, Bc, Cc.
Illustratively, suppose the full-frame command is received at the position of frame A1_4c: the target media segment to which frame A1_4c belongs is A1c, and the set of media segments to be edited carrying the third source identifier c of the target media segment A1c is acquired as A1c, A2c, A3c, Bc, Cc.
Illustratively, as shown in frame 2ka of Fig. 2, according to the arrangement order of the set of media segments to be edited (A1c, A2c, A3c, Bc, Cc), the position of every frame of every media segment in that set may be acquired on the media frame editing line as the target positions, i.e. the frames A1_1c, A1_2c, A1_3c, A1_4c, A1_5c, A1_6c of media segment A1c, every frame of media segment A2c, every frame of media segment A3c, every frame of media segment Bc, and every frame of media segment Cc, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
Illustratively, as shown in frame 2kb in Fig. 2, the position of any frame of the target media segment, for example A1_4c, may be taken as a start frame; according to the arrangement order of the set of media segments to be edited (A1c, A2c, A3c, Bc, Cc), the positions of the start frame and of every following frame (the frames A1_4c, A1_5c, A1_6c of media segment A1c, every frame of media segment A2c, every frame of media segment A3c, every frame of media segment Bc, and every frame of media segment Cc) are acquired on the media frame editing line as the target positions, and the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
According to this embodiment of the invention, when several homologous media segments and several different-source media segments are being edited, all the media segments to be edited can be obtained flexibly and automatically, and the non-time-series full-frame command can be added to all of them at once, which saves the user's editing time and improves media editing efficiency.
In another embodiment, the method applied to the media terminal may include the following steps; a specific flowchart is shown in Fig. 3C:
Step 501: in response to determining that the target full-frame command is a non-time-series full-frame command, while neither an indication that the target full-frame command is to be applied to the media segments homologous to the target media segment nor an indication that it is to be applied to the media segments to be edited in addition to the target media segment is obtained,
Step 502: on the media frame editing line, acquiring the target positions on which the target full-frame command acts according to the target media segment, and adding the target full-frame command in the editing area at the target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added, at positions corresponding to the target positions, on a dedicated editing line for the target full-frame command in the editing area.
Specifically, the position of every frame of the target media segment may be acquired as the target positions; or
the position of any frame of the target media segment is taken as a start frame, and the positions of the start frame and of every following frame within the target media segment are acquired on the media frame editing line as the target positions.
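With the hypothetical target_positions helper sketched earlier, this embodiment is simply the degenerate case in which the target set contains only the target media segment:
```python
# Continuing the sketches above, with `target` the target media segment on the editing line.
target = edit_line[0]

# Either every frame of the target segment is a target position...
positions_all = target_positions(edit_line, target_set=[target])

# ...or only the frames from the start frame onward within the target segment,
# here frame index 3 standing in for the frame where the command was received.
positions_from_start = target_positions(edit_line, target_set=[target], start=(target, 3))
```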
Illustratively, following the first example above, after a segmentation command for the source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a first source identifier a is set for each of them, giving the homologous media segments A1a, A2a, A3a.
Illustratively, the target media segment A1a comprises the media frames A1_1, A1_2, A1_3, A1_4, A1_5, A1_6; the media frames and segments carrying the first source identifier a are arranged in order as A1_1a, A1_2a, A1_3a, A1_4a, A1_5a, A1_6a, A2a, A3a.
Illustratively, as shown in frame 2ma of Fig. 2, the full-frame command is received at the position of frame A1_4a; the target media segment to which frame A1_4a belongs is A1a, and the positions of all its frames A1_1a, A1_2a, A1_3a, A1_4a, A1_5a, A1_6a can be acquired as the target positions; the target full-frame command is then added, at positions corresponding to the target positions, on a dedicated editing line for the target full-frame command.
Illustratively, as shown in frame 2mb of Fig. 2, the full-frame command is received at the position of frame A1_4a; that frame position of the target media segment A1a can be taken as a start frame, and the positions of the frames A1_4a, A1_5a, A1_6a from the start frame onward within the target media segment are acquired on the media frame editing line as the target positions; the target full-frame command is then added, at positions corresponding to the target positions, on a dedicated editing line for the target full-frame command.
Illustratively, following the second example above, an addition command for adding a new media segment B and a new media segment C to the source media segment A is received, giving the sequentially arranged media segments A, B, C, and a second source identifier b is set for each of them, giving the different-source media segments Ab, Bb, Cb; when Ab is the target media segment, the other media segments Bb, Cb to be edited could be extracted according to the second source identifier b. Illustratively, the target media segment Ab comprises the media frames A_1, A_2, A_3, A_4, A_5, A_6; the media frames and segments carrying the second source identifier b are arranged in order as A_1b, A_2b, A_3b, A_4b, A_5b, A_6b, Bb, Cb.
Illustratively, as shown in frame 2na of Fig. 2, the full-frame command is received at the position of frame A_4b; the target media segment to which frame A_4b belongs is Ab, and the positions of all its frames A_1b, A_2b, A_3b, A_4b, A_5b, A_6b can be acquired as the target positions; the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
Illustratively, as shown in frame 2nb of Fig. 2, the full-frame command is received at the position of frame A_4b; that frame position of the target media segment Ab can be taken as a start frame, and the positions of the frames A_4b, A_5b, A_6b from the start frame onward within the target media segment are acquired on the media frame editing line as the target positions; the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
For example, following the third example, after a segmentation command for the source media segment A is received, the sequentially arranged media segments A1, A2, A3 are obtained and a first source identifier a is set for each of them, giving the homologous media segments A1a, A2a, A3a; an addition command for a new media segment B and a new media segment C is then received, so the sequentially arranged media segments are A1a, A2a, A3a, B, C, and a second source identifier b is set for each of them, giving the homologous and different-source media segments A1ab, A2ab, A3ab, Bb, Cb. Then, when A1ab is the target media segment, the other media segments A2ab, A3ab, Bb, Cb to be edited could be extracted according to the first source identifier a and/or the second source identifier b, i.e. the set of media segments to be edited would be A1ab, A2ab, A3ab, Bb, Cb.
Illustratively, the target media segment A1ab comprises the media frames A1_1, A1_2, A1_3, A1_4, A1_5, A1_6; the media frames and segments carrying the first source identifier a and/or the second source identifier b are arranged in order as A1_1ab, A1_2ab, A1_3ab, A1_4ab, A1_5ab, A1_6ab, A2ab, A3ab, Bb, Cb.
Illustratively, as shown in frame 2pa of Fig. 2, the full-frame command is received at the position of frame A1_4ab; the target media segment to which frame A1_4ab belongs is A1ab, and the positions of all its frames A1_1ab, A1_2ab, A1_3ab, A1_4ab, A1_5ab, A1_6ab can be acquired as the target positions; the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
Illustratively, as shown in frame 2pb of Fig. 2, the full-frame command is received at the position of frame A1_4ab; that frame position of the target media segment A1ab can be taken as a start frame, and the positions of the frames A1_4ab, A1_5ab, A1_6ab from the start frame onward within the target media segment are acquired on the media frame editing line as the target positions; the target full-frame command is added in the editing area at those target positions. Specifically, the target full-frame command may be added directly on the media frame editing line, directly at the target positions, or it may be added on a dedicated editing line for the target full-frame command, at positions corresponding to the target positions.
For example, following the fourth example, after receiving the segmentation command for the source media segment A, a plurality of sequentially arranged media segments A1, A2, A3 are obtained; at this time, a third source identifier c may be set for each of A1, A2, A3 to identify the homologous media segments A1c, A2c, A3c. An addition command then adds the new media segment B and the new media segment C, so the sequentially arranged media segments are A1c, A2c, A3c, B, C; at this time, by also setting the third source identifier c for the newly added media segments B and C, the sequentially arranged media segments A1c, A2c, A3c, Bc, Cc are obtained.
Illustratively, A1c is taken as the target media segment, and the other media segments A2c, A3c, Bc, Cc to be edited can be extracted according to the third source identifier c. The target media segment A1 comprises a plurality of media frames A1₁, A1₂, A1₃, A1₄, A1₅, A1₆; the media segments carrying the third source identifier c are arranged in order as A1₁c, A1₂c, A1₃c, A1₄c, A1₅c, A1₆c, A2c, A3c, Bc, Cc.
Illustratively, as shown in frame 2ra of FIG. 2, when the full-frame command is acquired at the position of any frame A1₄c, the target media segment to which that frame belongs is directly obtained as A1c, the position of each frame A1₁c, A1₂c, A1₃c, A1₄c, A1₅c, A1₆c of the target media segment A1c can be acquired as the target position, and the target full-frame command is added in the editing area corresponding to the target position. Specifically, the target full-frame command may be added directly in the editing line of the media frame, directly corresponding to the target position, or it may be added in the editing line of the target full-frame command, corresponding to the target position.
Illustratively, as shown in frame 2rb of FIG. 2, when the full-frame command is acquired at the position of any frame A1₄c, the position of that frame A1₄c of the target media segment A1c can be used as the starting frame, the position of each frame A1₄c, A1₅c, A1₆c after the position of the starting frame in the target media segment is acquired in the media frame editing line as the target position, and the target full-frame command is added in the editing area corresponding to the target position. Specifically, the target full-frame command may be added directly in the editing line of the media frame, directly corresponding to the target position, or it may be added in the editing line of the target full-frame command, corresponding to the target position.
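The two ways of deriving the target position shown in frames 2ra and 2rb can be summarized in a brief sketch. The frame labels and helper names (all_frame_positions, positions_from_start_frame) are assumptions made only for demonstration, not part of the claimed method.

```python
# Minimal sketch of the two target-position modes described above; names are assumptions.
def all_frame_positions(segment_frames):
    """Frame 2ra behaviour: every frame position of the target media segment is a target position."""
    return list(segment_frames)

def positions_from_start_frame(segment_frames, start_frame):
    """Frame 2rb behaviour: the selected frame is the starting frame, and only it and the
    frames after it in the target media segment become target positions."""
    start = segment_frames.index(start_frame)
    return segment_frames[start:]

frames_of_A1c = ["A1_1c", "A1_2c", "A1_3c", "A1_4c", "A1_5c", "A1_6c"]
print(all_frame_positions(frames_of_A1c))
# ['A1_1c', 'A1_2c', 'A1_3c', 'A1_4c', 'A1_5c', 'A1_6c']
print(positions_from_start_frame(frames_of_A1c, "A1_4c"))
# ['A1_4c', 'A1_5c', 'A1_6c']
```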
According to the embodiment of the invention, when multiple homologous media segments and multiple different-source media segments are edited, the media segments being edited can be obtained flexibly and automatically according to the method of this embodiment, and the non-time-sequence full-frame command is added only to the media segments being edited, which saves the user's editing time and improves media editing efficiency.
In an embodiment, a prompt control may be designed in the graphical user interface. The prompt control receives a trigger signal from a manual user operation, so as to obtain an indication of applying the target full-frame command to the media segments homologous to the target media segment, an indication of applying the target full-frame command to the media segments to be edited after the target media segment, or an indication of applying the target full-frame command only to the target media segment. The various manually triggered indications received through the prompt control are described below in connection with the graphical user interface.
As in the graphical user interface shown in interface 1M of fig. 1A, the editing line 103 of the media frame displays the set of media segments to be edited in sequential arrangement, where the shaded media segments are homologous media segments. After the full-frame command 105 is acquired at any frame position of the target media segment in the editing line 103 of the media frame, a prompt control 106 can be displayed in the editing area; the graphical user interface shown as interface 1Aa in fig. 1A prompts the user whether to apply the full-frame command to all media segments.
Further, as shown in interface 1Ab in fig. 1A, the prompt control 107 prompts whether to apply the full-frame command to other media segments homologous to the media segment acted on by the full-frame command 105; in the interface shown in interface 1Ab in fig. 1A, the prompt item is set to "apply to homologous media", and by default the prompt control 107 is not triggered by the user. This means that, when the target full-frame command is a non-time-sequence full-frame command, neither an indication of applying the target full-frame command to the media segments homologous to the target media segment nor an indication of applying it to the media segments to be edited after the target media segment is obtained, so the media editing method described in steps 501-502 of this embodiment is executed: the target position acted on by the target full-frame command is acquired, and the target full-frame command is added in the editing area corresponding to the target position. The example given in this embodiment is the newly created editing line 104 of the full-frame command, in which the target full-frame command is added corresponding to the target position, that is, in the full-frame command editing line 104 the full-frame command is applied only to the target media segment acted on by the full-frame command 105. Furthermore, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
If the user triggers the prompt control 107 of interface 1Ab in fig. 1A, the full-frame command is applied to the other media segments homologous to the media segment acted on by the full-frame command 105. This indicates that the target full-frame command acquired at any frame position of the target media segment of the media frame editing line is a non-time-sequence full-frame command and that an indication of applying the target full-frame command to the media segments homologous to the target media segment is obtained, so the media editing method described in steps 301-302 of this embodiment is executed and the editing interface shown as interface 1N in fig. 1A is switched to.
Further, as shown in interface 1N in fig. 1A, in the editing line 103 of the media frame, the target position acted on by the target full-frame command is acquired according to the arrangement order of the target media segment set in the media segment set to be edited, and the target full-frame command is added in the editing area corresponding to the target position. The example given in this embodiment adds the target full-frame command corresponding to the target position in the editing line 104 of the target full-frame command of the editing area. In addition, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
The process of jumping from interface 1M in fig. 1B to interface 1Ea in fig. 1B, and from interface 1Eb in fig. 1B to interface 1K in fig. 1B, is similar to the process of jumping from interface 1M in fig. 1A to interface 1Aa in fig. 1A and from interface 1Ab in fig. 1A to interface 1N in fig. 1A; the only difference lies in the starting position of the full-frame command, and how to locate the starting frame of the full-frame command has already been described in other embodiments, so it is not repeated here.
In one embodiment, as shown in interface 1M of fig. 1A, the editing line 103 of the media frame displays the set of media segments to be edited in sequential arrangement, where the shaded media segments are homologous media segments. After the full-frame command 105 is acquired at any frame position of the target media segment in the editing line 103 of the media frame, the graphical user interface switches to the one shown as interface 1Ba in fig. 1A, in which a prompt control 106 can be displayed in the editing area; the prompt control 106 is used to prompt the user as to whether the full-frame command is to be applied to all media segments.
Interface 1Ba in fig. 1A shows an implementation in which the prompt control 106 is not triggered. When the prompt control 106 in interface 1Ba in fig. 1A is not triggered, the target full-frame command is a non-time-sequence full-frame command and, by default, is applied to all media segments to be edited. In this case, after the full-frame command 105 is acquired at any frame position of the target media segment in the editing line 103 of the media frame, that frame position is directly used as the target position, and the target full-frame command can be added in the editing area corresponding to the target position. The example given in this embodiment is the newly created editing line 104 of the full-frame command: in the editing line 104 of the target full-frame command in the editing area, the target full-frame command is added without limit at the target position and the positions after it, as shown in the editing line 104 of the target full-frame command in interface 1Ba in fig. 1A. Furthermore, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
When the prompt control 106 in interface 1Ba in fig. 1A is triggered, it indicates that the full-frame command is not applied to all media segments. In this case, the graphical user interface switches to the one shown in interface 1Bb in fig. 1A, in which a prompt control 107 may be displayed in the editing area; the prompt control 107 prompts the user whether to apply the full-frame command to the homologous media segments.
When the prompt control 107 in interface 1Bb in fig. 1A is not triggered, it indicates that the target full-frame command is a non-time-sequence full-frame command and that neither an indication of applying the target full-frame command to the media segments homologous to the target media segment nor an indication of applying it to the media segments to be edited after the target media segment is obtained. The media editing method described in steps 501-502 of this embodiment may then be executed: in the editing line 103 of the media frame, the target position acted on by the target full-frame command is acquired according to the target media segment (the first shaded segment in interface 1Bb in fig. 1A), and the target full-frame command is added in the editing area corresponding to the target position. The example given in this embodiment adds the target full-frame command corresponding to the target position in the editing line 104 of the target full-frame command of the editing area. Furthermore, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
When the prompt control 107 in interface 1Bb in fig. 1A is triggered, the full-frame command is applied to the other media segments homologous to the media segment acted on by the full-frame command 105. This indicates that the target full-frame command acquired at any frame position of the target media segment of the media frame editing line is a non-time-sequence full-frame command and that an indication of applying the target full-frame command to the media segments homologous to the target media segment is obtained, so the media editing method described in steps 301-302 of this embodiment is executed and the editing interface shown as interface 1N in fig. 1A is switched to.

Further, as shown in interface 1N in fig. 1A, in the editing line 103 of the media frame, the target position acted on by the target full-frame command is acquired according to the arrangement order of the target media segment set in the media segment set to be edited, and the target full-frame command is added in the editing area corresponding to the target position. The example given in this embodiment adds the target full-frame command corresponding to the target position in the editing line 104 of the target full-frame command of the editing area. Furthermore, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
The process of jumping from interface 1M to interface 1Fa in fig. 1B, and from interface 1Fb in fig. 1B to interface 1K in fig. 1B, is similar to the process of jumping from interface 1M to interface 1Ba in fig. 1A and from interface 1Bb in fig. 1A to interface 1N in fig. 1A; the only difference lies in the starting position of the full-frame command, and how to locate the starting frame of the full-frame command has already been described in other embodiments, so it is not repeated here.
Interface 1Ca in fig. 1A shows another implementation in which the prompt control 106 is not triggered. When the prompt control 106 in interface 1Ca in fig. 1A is not triggered, the full-frame command is applied to all media segments by default. As shown in interface 1Ca in fig. 1A, after the full-frame command 105 is acquired at any frame position of the target media segment in the editing line 103 of the media frame, the media editing method described in steps 401-402 of this embodiment may be executed: the target position acted on by the target full-frame command is acquired, and the target full-frame command is added in the editing area corresponding to the target position. The example given in this embodiment is the newly created full-frame command editing line 104, in which the target full-frame command is added corresponding to the target position, that is, in the full-frame command editing line 104 the full-frame command is applied to the media segments to be edited after the target media segment, as shown in the editing line 104 of the target full-frame command in interface 1Ca in fig. 1A. Furthermore, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
When the prompt control 106 in interface 1Ca in fig. 1A is triggered, it indicates that the full-frame command is not applied to all media segments. In this case, the graphical user interface switches to the one shown in interface 1Cb in fig. 1A, in which a prompt control 107 may be displayed in the editing area; the prompt control 107 prompts the user whether to apply the full-frame command to the homologous media segments.
If the prompt control 107 in interface 1Cb in fig. 1A is not triggered, it indicates that the target full-frame command is a non-time-sequence full-frame command and that neither an indication of applying the target full-frame command to the media segments homologous to the target media segment nor an indication of applying it to the media segments to be edited after the target media segment is obtained. The media editing method described in steps 501-502 of this embodiment may then be executed: in the editing line 103 of the media frame, the target position acted on by the target full-frame command is acquired according to the target media segment (the first shaded segment in interface 1Cb in fig. 1A), and the target full-frame command is added in the editing area corresponding to the target position. The example given in this embodiment adds the target full-frame command corresponding to the target position in the editing line 104 of the target full-frame command of the editing area. Furthermore, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
When the prompt control 107 in interface 1Cb in fig. 1A is triggered, the full-frame command is applied to the other media segments homologous to the media segment acted on by the full-frame command 105. This indicates that the target full-frame command acquired at any frame position of the target media segment of the media frame editing line is a non-time-sequence full-frame command and that an indication of applying the target full-frame command to the media segments homologous to the target media segment is obtained, so the media editing method described in steps 301-302 of this embodiment is executed and the editing interface shown as interface 1N in fig. 1A is switched to.

Further, as shown in interface 1N in fig. 1A, in the editing line 103 of the media frame, the target position acted on by the target full-frame command is acquired according to the arrangement order of the target media segment set in the media segment set to be edited, and the target full-frame command is added in the editing area corresponding to the target position. The example given in this embodiment adds the target full-frame command corresponding to the target position in the editing line 104 of the target full-frame command of the editing area. Furthermore, the target full-frame command may also be added directly in the editing line of the media frame, directly corresponding to the target position, which is not shown in this example.
The process of jumping from interface 1M to interface 1Ga in fig. 1B, and from interface 1Gb in fig. 1B to interface 1K in fig. 1B, is similar to the process of jumping from interface 1M to interface 1Ca in fig. 1A and from interface 1Cb in fig. 1A to interface 1N in fig. 1A; the only difference lies in the starting position of the full-frame command, and how to locate the starting frame of the full-frame command has already been described in other embodiments, so it is not repeated here.
By adopting the scheme of the embodiment of the invention, when the media segments to be edited come from different source media, for example when a user clips a piece of self-shot material in the editor into multiple homologous media segments and at the same time adds a media segment from a television series for synthesis, and a filter effect (an exemplary non-time-sequence full-frame command) needs to be added to the homologous media segments, the non-time-sequence full-frame command can be flexibly added to the homologous media segments by means of the scheme of this embodiment, which saves the user's editing time and improves editing efficiency.
Meanwhile, in the scheme of the embodiment of the invention, the prompt controls are designed as single controls (such as the prompt control 106 in interface 1Ba in fig. 1A and the prompt control 107 in interface 1Bc in fig. 1A), and a default result is given when the user does not trigger the prompt control (such as the full-frame command editing line 104 in interface 1Ba in fig. 1A and the full-frame command editing line 104 in interface 1Bc in fig. 1A), which saves the user a trigger-selection operation before obtaining a result.
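One possible way to map the two prompt controls onto the scope of a non-time-sequence full-frame command is sketched below. The function name, the scope labels and the exact defaults are assumptions chosen to follow the interface 1Ca/1Cb flow described above; other embodiments give different default results.

```python
# Minimal sketch, assuming the prompt controls 106 and 107 described above; the scope
# labels are placeholders, not names used by the patent.
def resolve_scope(ctrl_106_triggered: bool, ctrl_107_triggered: bool) -> str:
    """Interface 1Ca / 1Cb style flow: control 106 opts out of applying the command to
    all media segments, and control 107 then opts in to the homologous segments."""
    if not ctrl_106_triggered:
        return "segments_to_edit"        # default result, steps 401-402 behaviour
    if ctrl_107_triggered:
        return "homologous_segments"     # steps 301-302 behaviour
    return "target_segment_only"         # steps 501-502 behaviour

print(resolve_scope(False, False))  # 'segments_to_edit'
print(resolve_scope(True, True))    # 'homologous_segments'
```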
In another embodiment, no prompt control for receiving a manually operated trigger signal needs to be designed in the graphical user interface. Instead, the media segment set to be edited is automatically recognized directly by means of artificial intelligence techniques, and based on various characteristics of the media segment set to be edited it is determined whether the target full-frame command is applied to the media segments homologous to the target media segment, to the media segments to be edited after the target media segment, or only to the current target media segment, so that the corresponding indication is given automatically.
Illustratively, when it is recognized that the media segment set to be edited contains both media segments homologous to the target media segment and media segments non-homologous to the target media segment, that the homologous media segments are all original media segments, and that the non-homologous media segments are non-original media segments, an indication of applying the target full-frame command to the media segments homologous to the target media segment may be triggered automatically. Here, an original media segment may be a media segment for which no identical or associated media segment (for example, a certain unpublished self-shot segment) exists in the media library, and a non-original media segment may be a media segment for which an identical or associated media segment (for example, a clip of a certain television series) exists in the media library.
For example, when it is recognized that the media segment set to be edited contains both media segments homologous to the target media segment and media segments non-homologous to the target media segment, and that all of them are original media segments, an indication of applying the target full-frame command to the media segments to be edited after the target media segment may be triggered automatically.
For example, when it is recognized that the media segment set to be edited contains no media segments homologous to the target media segment, an indication of applying the target full-frame command only to the target media segment may be triggered automatically.
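The three recognition rules above can be collected into one decision function. This is a sketch only: the originality check (for example a media-library lookup) and all names are assumptions, since the embodiment leaves the recognition technique open.

```python
# Minimal sketch of the automatic scope decision; is_original is an assumed predicate,
# e.g. "no identical or associated media segment exists in the media library".
def auto_scope(homologous, heterologous, is_original):
    """homologous / heterologous: segments that do / do not share a source with the target."""
    if not homologous:
        return "target_segment_only"                # no homologous segments in the set
    if heterologous and all(is_original(s) for s in homologous):
        if not any(is_original(s) for s in heterologous):
            return "homologous_segments"            # homologous original, heterologous not
        if all(is_original(s) for s in heterologous):
            return "segments_to_edit_after_target"  # everything in the set is original
    return "target_segment_only"                    # conservative fallback (an assumption)
```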
By adopting the scheme of the embodiment of the invention, when the media segments to be edited include multiple homologous media segments and multiple different-source media segments, the non-time-sequence full-frame command can be flexibly added to the homologous media segments, to all media segments, or only to the media segment currently being edited, which improves the flexibility and usability of the editing software, saves the user's editing time and improves editing efficiency.

In an embodiment, the method applied to the media terminal may include the following steps, with the specific flowchart shown in fig. 3D:
Step 601, in response to the target full-frame command being a time-sequence full-frame command, where the time-sequence full-frame command is a control command that acts on each frame of the target media segment and is related to time sequence,
taking any frame position of the target media fragment as a starting frame or taking a first frame position of the target media fragment as a starting frame; and
according to the arrangement sequence of the media fragment set to be edited, the position of each frame of each media fragment after the position of the initial frame in the media fragment set to be edited is obtained in the media frame editing line to be used as a target position;
step 602, adding the full-frame command corresponding to the target position and the position subsequent to the target position in the editing line of the full-frame command of the editing area. The embodiment of the invention not only can flexibly select the action range of the non-time sequence full-frame command, but also can be matched with the time sequence full-frame command to be used for editing, thereby improving the flexible usability of editing software, saving the editing time of a user and improving the editing efficiency.
Illustratively, following the first example described above, after receiving the segmentation command for the source media segment A, a plurality of sequentially arranged media segments A1, A2, A3 are obtained; at this time, a first source identifier a may be set for each of A1, A2, A3 to identify the homologous media segments A1a, A2a, A3a, and when A1a is the target media segment, the homologous media segments A2a, A3a of A1a can be extracted according to the first source identifier a.
Illustratively, A1 is the target media segment and comprises a plurality of media frames A1₁, A1₂, A1₃, A1₄, A1₅, A1₆; the media segments carrying the first source identifier a are arranged in order as A1₁a, A1₂a, A1₃a, A1₄a, A1₅a, A1₆a, A2a, A3a. Illustratively, when the full-frame command is acquired at the position of any frame A1₄a, the position of that frame A1₄a of the target media segment can be used as the starting frame, and, according to the arrangement order of the media segment set to be edited, the positions of the frames A1₄a, A1₅a, A1₆a after the position of the starting frame in the media segment A1a, each frame position of the media segment A2a and each frame position of the media segment A3a are acquired in the media frame editing line as the target positions.
Illustratively, as shown in frame 2ta of fig. 2, in the editing line of the full-frame command, the full-frame command is added without limit corresponding to the target position and the positions subsequent to the target position.
Illustratively, when the full-frame command is acquired at the position of any frame A1₄a, the position of the first frame A1₁a of the target media segment can be used as the starting frame, and, according to the arrangement order of the media segment set to be edited, the positions of the frames A1₁a, A1₂a, A1₃a, A1₄a, A1₅a, A1₆a after the position of the starting frame in the media segment A1a, each frame position of the media segment A2a and each frame position of the media segment A3a are acquired in the media frame editing line as the target positions.
Illustratively, as shown in frame 2tb of fig. 2, in the editing line of the full-frame command, the full-frame command is added without limit corresponding to the target position and the positions subsequent to the target position.
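For the time-sequence full-frame command of steps 601-602, the target positions run from the starting frame to the end of the set to be edited. The sketch below illustrates this under an assumed list-based timeline with placeholder frame labels; it is not the claimed implementation.

```python
# Minimal sketch, assuming a list-of-dicts timeline; names and frame labels are placeholders.
def target_positions_for_time_sequence(segments_to_edit, start_frame):
    """Flatten the set to be edited in arrangement order and keep every frame position
    from the starting frame onward (steps 601-602)."""
    frames = [f for seg in segments_to_edit for f in seg["frames"]]
    return frames[frames.index(start_frame):]

to_edit = [
    {"name": "A1a", "frames": ["A1_1a", "A1_2a", "A1_3a", "A1_4a", "A1_5a", "A1_6a"]},
    {"name": "A2a", "frames": ["A2a_f1", "A2a_f2"]},
    {"name": "A3a", "frames": ["A3a_f1", "A3a_f2"]},
]
print(target_positions_for_time_sequence(to_edit, "A1_4a"))  # frame 2ta: from the selected frame
print(target_positions_for_time_sequence(to_edit, "A1_1a"))  # frame 2tb: from the first frame of A1a
```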
Media editing apparatuses of one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these apparatuses can be constructed from commercially available hardware components configured according to the steps taught in the present solution.
Fig. 4 is a schematic structural diagram of a media editing apparatus according to an embodiment of the present invention. As shown in fig. 4, the apparatus comprises an acquisition module 11 and an addition module 12:
an acquisition module 11 for responding to acquisition of a target full-frame command of any frame position of a target media segment of the media frame editing line as a non-time sequence full-frame command, and acquisition of an instruction of applying the target full-frame command to a media segment homologous to the target media segment; the non-time sequence full-frame command is a control command which acts on each frame of the target media fragment and is irrelevant to time sequence, the target media fragment is any one of a media fragment set to be edited, the media fragment set to be edited is all media fragments loaded on the media frame editing line, and the media fragment set to be edited has an arrangement sequence on the media frame editing line;
Acquiring a target media fragment set with a first source identifier from the media fragment set to be edited according to the first source identifier of the target media fragment, wherein the first source identifier is an identifier for identifying a media fragment homologous to the target media fragment;
an adding module 12, configured to obtain, in an editing line of the media frame, a target position acted by the target full-frame command according to an arrangement sequence of the target media segment set in the media segment set to be edited;
and adding the target full-frame command corresponding to the target position in an editing line of the target full-frame command of the editing area.
The adding module 12 is specifically further configured to obtain, as the target position, a position of each frame of each media segment of the target media segment set; or
And taking any frame position of the target media fragment as a starting frame, and acquiring the position of each frame of each media fragment after the position of the starting frame in the target media fragment set as the target position in the media frame editing row according to the arrangement sequence of the target media fragment set in the media fragment set to be edited.
The obtaining module 11 is further configured to respond to the target full-frame command being a non-time-sequence full-frame command, and obtain an indication of a media segment to be edited after the target full-frame command is applied to the target media segment,
acquiring the media fragment set to be edited, wherein the media fragment set to be edited is acquired according to a first source identifier and/or a second source identifier of the target media fragment or is acquired according to a third source identifier of the target media fragment, the first source identifier is an identifier for identifying a media fragment homologous to the target media fragment, the second source identifier is an identifier for identifying a media fragment different from the target media fragment, and the third source identifier is an identifier for identifying a media fragment homologous to the target media fragment and/or a media fragment different from the target media fragment;
the adding module 12 is specifically further configured to obtain, in the editing line of the media frame, a target position acted by the target full-frame command according to the arrangement sequence of the media segment set to be edited,
and adding the target full-frame command corresponding to the target position in an editing line of the target full-frame command of the editing area.
The adding module 12 is specifically further configured to obtain, in the media frame editing line, a position of each frame of each media segment of the media segment set to be edited as the target position according to an arrangement sequence of the media segment set to be edited; or
And taking any frame position of the target media fragment as a starting frame, and acquiring the position of each frame of each media fragment after the position of the starting frame in the media fragment set to be edited in the media frame editing line as the target position according to the arrangement sequence of the media fragment set to be edited.
The obtaining module 11 is specifically further configured to respond to the target full-frame command being a non-time-sequence full-frame command, and not obtain an indication of applying the target full-frame command to a media segment homologous to the target media segment, and not obtain an indication of applying the target full-frame command to a media segment to be edited after the target media segment,
the adding module 12 is specifically further configured to obtain, in an editing line of the media frame, a target position acted by the target full-frame command according to the target media segment;
And adding the target full-frame command corresponding to the target position in an editing line of the target full-frame command of the editing area.
The adding module 12 is specifically further configured to obtain a position of each frame of the target media segment as the target position; or
And taking any frame position of the target media fragment as a starting frame, and acquiring the position of each frame after the position of the starting frame in the target media fragment in the media frame editing line as the target position.
The media editing apparatus further comprises a setting module 13,
the setting module 13 is specifically configured to, in response to executing a segmentation command on any source media segment to segment the source media segment, obtain homologous media segments, set the first source identifier for each homologous media segment,
responding to an adding command executed on any source media segment to add a new media segment, obtaining different source media segments formed by the source media segment and the new media segment, and setting a second source identifier for each different source media segment; or
In response to executing a split command on any source media segment to split the source media segment, obtaining homologous media segments, setting the third source identifier for each homologous media segment, and
And responding to the addition command executed on any source media fragment to add a new media fragment, obtaining different source media fragments consisting of the source media fragment and the new media fragment, and setting the third source identifier for each different source media fragment.
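The behaviour of the setting module for the first and second source identifiers can be sketched as follows; the dictionary representation and the identifier strings are assumptions used only to make the rule concrete, not the patented implementation.

```python
# Minimal sketch of the setting module's identifier rules under assumed names.
def on_split(parts, first_id):
    """Segmentation command: every homologous segment produced by the split receives the
    same first source identifier (e.g. 'a')."""
    for seg in parts:
        seg.setdefault("source_ids", set()).add(first_id)

def on_add(segments_on_line, new_segments, second_id):
    """Addition command: the source segments and the newly added segments now form a set of
    different-source segments, each receiving the same second source identifier (e.g. 'b')."""
    for seg in segments_on_line + new_segments:
        seg.setdefault("source_ids", set()).add(second_id)

# Example following the third example in the description above.
a_parts = [{"name": n} for n in ("A1", "A2", "A3")]
on_split(a_parts, "a")
new_parts = [{"name": n} for n in ("B", "C")]
on_add(a_parts, new_parts, "b")
print([(s["name"], sorted(s["source_ids"])) for s in a_parts + new_parts])
# [('A1', ['a', 'b']), ('A2', ['a', 'b']), ('A3', ['a', 'b']), ('B', ['b']), ('C', ['b'])]
```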
The obtaining module 11 is specifically further configured to respond to the target full-frame command as a time sequence full-frame command, where the time sequence full-frame command is a control command that acts on each frame of the target media segment and is related to time sequence,
taking any frame position of the target media fragment as a starting frame or taking a first frame position of the target media fragment as a starting frame; and
according to the arrangement sequence of the media fragment set to be edited, the position of each frame of each media fragment after the position of the initial frame in the media fragment set to be edited is obtained in the media frame editing line to be used as a target position;
the adding module 12 is specifically further configured to add the full-frame command infinitely corresponding to the target position and a position subsequent to the target position in an editing line of the full-frame command in the editing area.
The apparatus shown in fig. 4 may perform the steps described in the foregoing embodiments, and detailed execution and technical effects are referred to in the foregoing embodiments and are not described herein.
In one possible design, the structure of the media editing apparatus shown in fig. 4 may be implemented as an electronic device, as shown in fig. 5, where the electronic device may include: memory 21, processor 22, communication interface 23. Wherein the memory 21 has stored thereon executable code which, when executed by the processor 22, causes the processor 22 to at least implement the media editing method as provided in the previous embodiments.
Additionally, embodiments of the present invention provide a non-transitory machine-readable storage medium having stored thereon executable code that, when executed by a processor of an electronic device, causes the processor to at least implement a media editing method as provided in the previous embodiments.
The apparatus embodiments described above are merely illustrative, wherein the units described as separate components may or may not be physically separate. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art will understand and implement the present invention without undue burden.
From the above description of the embodiments, it will be apparent to those skilled in the art that the embodiments may be implemented by adding necessary general purpose hardware platforms, or may be implemented by a combination of hardware and software. Based on such understanding, the foregoing aspects, in essence and portions contributing to the art, may be embodied in the form of a computer program product, which may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and are not limiting; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention.

Claims (10)

1. A media editing method applied to an editing terminal, wherein content displayed in an image user interface of the editing terminal at least comprises an object display area and an editing area, the object display area is used for displaying content of each frame of each media segment to be edited, the editing area is used for editing the media frames of each media segment to be edited, and the editing area at least comprises an editing row used for loading the media frames of each media segment to be edited, the method comprises:
in response to obtaining a non-sequential full frame command for a target full frame command for any frame position of a target media segment of an edit line of the media frame, and obtaining an indication of application of the target full frame command to a media segment homologous to the target media segment; the non-time sequence full-frame command is a control command which acts on each frame of the target media fragment and is irrelevant to time sequence, the target media fragment is any one of a media fragment set to be edited, the media fragment set to be edited is all media fragments loaded on the media frame editing line, and the media fragment set to be edited has an arrangement sequence on the media frame editing line;
Acquiring a target media fragment set with a first source identifier from the media fragment set to be edited according to the first source identifier of the target media fragment, wherein the first source identifier is an identifier for identifying a media fragment homologous to the target media fragment;
in the editing line of the media frame, according to the arrangement sequence of the target media fragment set in the media fragment set to be edited, acquiring a target position acted by the target full-frame command;
and adding the target full-frame command in the editing area corresponding to the target position.
2. The media editing method of claim 1, wherein the obtaining the target location of the full frame command action comprises:
acquiring the position of each frame of each media fragment of the target media fragment set as the target position; or
And taking any frame position of the target media fragment as a starting frame, and acquiring the position of each frame of each media fragment after the position of the starting frame in the target media fragment set as the target position in the media frame editing row according to the arrangement sequence of the target media fragment set in the media fragment set to be edited.
3. The media editing method of claim 1, wherein,
in response to the target full-frame command being a non-sequential full-frame command, and obtaining an indication of a media segment to be edited after the target full-frame command is applied to the target media segment,
acquiring the media fragment set to be edited, wherein the media fragment set to be edited is acquired according to a first source identifier and/or a second source identifier of the target media fragment or is acquired according to a third source identifier of the target media fragment, the first source identifier is an identifier for identifying a media fragment homologous to the target media fragment, the second source identifier is an identifier for identifying a media fragment different from the target media fragment, and the third source identifier is an identifier for identifying a media fragment homologous to the target media fragment and/or a media fragment different from the target media fragment;
in the editing line of the media frame, according to the arrangement sequence of the media fragment set to be edited, the target position acted by the target full-frame command is obtained,
and adding the target full-frame command in the editing area corresponding to the target position.
4. A media editing method as claimed in claim 3, wherein the obtaining the target position of the target full frame command action comprises:
according to the arrangement sequence of the media fragment set to be edited, acquiring the position of each frame of each media fragment of the media fragment set to be edited in the media frame editing line as the target position; or
And taking any frame position of the target media fragment as a starting frame, and acquiring the position of each frame of each media fragment after the position of the starting frame in the media fragment set to be edited in the media frame editing line as the target position according to the arrangement sequence of the media fragment set to be edited.
5. The media editing method of claim 1, wherein,
in response to the target full-frame command being a non-time-sequential full-frame command, and not obtaining an indication of the target full-frame command being applied to a media segment that is homologous to the target media segment, and not obtaining an indication of a media segment to be edited after the target full-frame command is applied to the target media segment,
in the editing line of the media frame, acquiring a target position acted by the target full-frame command according to the target media fragment;
And adding the target full-frame command in the editing area corresponding to the target position.
6. The media editing method of claim 5, wherein obtaining the target position for the target full frame command action from the target media segment comprises:
acquiring the position of each frame of the target media fragment as the target position; or
And taking any frame position of the target media fragment as a starting frame, and acquiring the position of each frame after the position of the starting frame in the target media fragment in the media frame editing line as the target position.
7. The media editing method of claim 1 or 2, wherein,
in response to executing a split command on any source media segment to split the source media segment, obtaining homologous media segments, setting the first source identifier for each homologous media segment,
responding to an adding command executed on any source media segment to add a new media segment, obtaining different source media segments formed by the source media segment and the new media segment, and setting a second source identifier for each different source media segment; or
In response to executing the split command on any source media segment to split the source media segment, obtaining the homologous media segments, setting a third source identifier for each homologous media segment, and
And responding to the addition command executed on any source media fragment to add a new media fragment, obtaining different source media fragments consisting of the source media fragment and the new media fragment, and setting the third source identifier for each different source media fragment.
8. A media editing apparatus applied to an editing terminal, the content displayed in an image user interface of the editing terminal at least comprising an object display area for displaying the content of each frame of each media segment to be edited and an editing area for editing the media frames of each media segment to be edited, the editing area at least comprising a media frame editing line for loading the media segments to be edited, the apparatus comprising:
an acquisition module for responding to the acquisition of a target full-frame command of any frame position of a target media segment of the media frame editing line as a non-time sequence full-frame command and the acquisition of an indication of the application of the target full-frame command to the media segment homologous to the target media segment; the non-time sequence full-frame command is a control command which acts on each frame of the target media fragment and is irrelevant to time sequence, the target media fragment is any one of a media fragment set to be edited, the media fragment set to be edited is all media fragments loaded on the media frame editing line, and the media fragment set to be edited has an arrangement sequence on the media frame editing line;
Acquiring a target media fragment set with a first source identifier from the media fragment set to be edited according to the first source identifier of the target media fragment, wherein the first source identifier is an identifier for identifying a media fragment homologous to the target media fragment;
the adding module is used for acquiring a target position acted by the target full-frame command in the editing area according to the arrangement sequence of the target media fragment set in the media fragment set to be edited;
and adding the target full-frame command in the editing area corresponding to the target position.
9. An electronic device, comprising: a memory, a processor, a communication interface; wherein the memory has stored thereon executable code which, when executed by the processor, causes the processor to perform the media editing method of any of claims 1 to 4.
10. A non-transitory machine-readable storage medium having stored thereon executable code which, when executed by a processor of an electronic device, causes the processor to perform the media editing method of any of claims 1 to 4.
CN202310389160.9A 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium Active CN116634233B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202311648150.9A CN117499745A (en) 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium
CN202310389160.9A CN116634233B (en) 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310389160.9A CN116634233B (en) 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium

Related Child Applications (1)

Application Number Title Priority Date Filing Date
CN202311648150.9A Division CN117499745A (en) 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116634233A CN116634233A (en) 2023-08-22
CN116634233B true CN116634233B (en) 2024-02-09

Family

ID=87635525

Family Applications (2)

Application Number Title Priority Date Filing Date
CN202310389160.9A Active CN116634233B (en) 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium
CN202311648150.9A Pending CN117499745A (en) 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202311648150.9A Pending CN117499745A (en) 2023-04-12 2023-04-12 Media editing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (2) CN116634233B (en)

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001101838A (en) * 1999-09-30 2001-04-13 Sony Corp Editing method and digital recording and reproducing device
WO2013001537A1 (en) * 2011-06-30 2013-01-03 Human Monitoring Ltd. Methods and systems of editing and decoding a video file
WO2017176940A1 (en) * 2016-04-08 2017-10-12 Nightlight Systems Llc Digital media messages and files
CN107438839A (en) * 2016-10-25 2017-12-05 深圳市大疆创新科技有限公司 A kind of multimedia editing method, device and intelligent terminal
CN110737435A (en) * 2019-10-18 2020-01-31 网易(杭州)网络有限公司 Multimedia editing method and device in game, terminal equipment and storage medium
CN112541353A (en) * 2020-12-24 2021-03-23 北京百度网讯科技有限公司 Video generation method, device, equipment and medium
WO2021259322A1 (en) * 2020-06-23 2021-12-30 广州筷子信息科技有限公司 System and method for generating video
US11289127B1 (en) * 2021-02-11 2022-03-29 Loom, Inc. Instant video editing and associated methods and systems
CN115460448A (en) * 2022-08-19 2022-12-09 北京达佳互联信息技术有限公司 Media resource editing method and device, electronic equipment and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10281893B2 (en) * 2009-06-25 2019-05-07 Universal Electronics Inc. System and method for configuration of macro commands in a controlling device
US9111579B2 (en) * 2011-11-14 2015-08-18 Apple Inc. Media editing with multi-camera media clips


Also Published As

Publication number Publication date
CN117499745A (en) 2024-02-02
CN116634233A (en) 2023-08-22

Similar Documents

Publication Publication Date Title
CN109672922B (en) Game video editing method and device
CN110263650B (en) Behavior class detection method and device, electronic equipment and computer readable medium
EP2132624B1 (en) Automatically generating audiovisual works
AU2007345938B2 (en) Method and system for video indexing and video synopsis
US8542982B2 (en) Image/video data editing apparatus and method for generating image or video soundtracks
KR102161080B1 (en) Device, method and program of generating background music of video
JP2011234226A (en) Video editing apparatus, and video editing method and program
JP2018535499A (en) Integrating audio into a multi-view interactive digital media representation
WO2018076174A1 (en) Multimedia editing method and device, and smart terminal
US11164604B2 (en) Video editing method and apparatus, computer device and readable storage medium
CN109429093B (en) Video editing method and terminal
CN112827172A (en) Shooting method, shooting device, electronic equipment and storage medium
CN113918522A (en) File generation method and device and electronic equipment
CN116634233B (en) Media editing method, device, equipment and storage medium
CN114286169B (en) Video generation method, device, terminal, server and storage medium
CN109885799A (en) Content delivery method and device
CN109889883A (en) A kind of Wonderful time video recording method and device
JP2001119666A (en) Method of interactive processing of video sequence, storage medium thereof and system
CN113747233B (en) Music replacement method and device, electronic equipment and storage medium
CN116708945B (en) Media editing method, device, equipment and storage medium
CN113012723B (en) Multimedia file playing method and device and electronic equipment
CN115567660A (en) Video processing method and electronic equipment
CN105739957B (en) user interface data processing method and system
CN109495786B (en) Pre-configuration method and device of video processing parameter information and electronic equipment
WO2004081940A1 (en) A method and apparatus for generating an output video sequence

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240119

Address after: Building 7, 11th District, No. 188 South Fourth Ring West Road, Fengtai District, Beijing, 100000, all 7 floors, including floors 1 to 8

Applicant after: Beijing Qicai Xingyun Digital Technology Co.,Ltd.

Address before: 102100 North 708 Dayushu Village, Dayushu Town, Yanqing District, Beijing

Applicant before: Beijing Youbeika Technology Co.,Ltd.

GR01 Patent grant
GR01 Patent grant