CN114666637A - Video editing method, audio editing method and electronic equipment - Google Patents


Info

Publication number
CN114666637A
Authority
CN
China
Prior art keywords
sentences
video
deleted
user
sentence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210234347.7A
Other languages
Chinese (zh)
Other versions
CN114666637B (en)
Inventor
周凡皙
张晟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba China Co Ltd
Original Assignee
Alibaba China Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba China Co Ltd
Priority to CN202210234347.7A
Publication of CN114666637A
Application granted
Publication of CN114666637B
Legal status: Active (current)
Anticipated expiration

Classifications

    • H04N21/4307: Synchronising the rendering of multiple content streams or additional data on devices, e.g. synchronisation of audio on a mobile phone with the video output on the TV screen
    • G10L15/26: Speech to text systems
    • H04N21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4394: Processing of audio elementary streams involving operations for analysing the audio stream, e.g. detecting features or characteristics in audio streams
    • H04N21/44008: Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • H04N21/44016: Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip

Abstract

The embodiments of the present application provide a video clipping method, an audio clipping method, and an electronic device. The video clipping method includes: displaying a plurality of sentences and clipping operation identifiers respectively corresponding to the plurality of sentences, where the audio text synchronized with the video to be clipped includes the plurality of sentences, the sentences are respectively associated with corresponding video segments in the video to be clipped, and each clipping operation identifier represents the type of the user's clipping operation and assists the user in deleting or restoring, at will, the video segment associated with a sentence; and in response to a restore operation triggered after the user selects a deleted sentence with reference to its clipping operation identifier, restoring the video segment associated with the deleted sentence. With the technical solution provided by the embodiments of the present application, the user can restore exactly the deleted sentence that needs to be restored; the clipping operation is separated from other operations (such as sentence editing), so deleted sentences are restored efficiently and the user experience is good.

Description

Video editing method, audio editing method and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video editing method, an audio editing method, and an electronic device.
Background
Nowadays video is everywhere in daily life: people can shoot video anytime and anywhere and then clip the footage to share it on a social platform or with friends. For example, in live-streaming and short-video scenarios, a large number of practitioners need to produce live and recorded videos containing a great deal of explanatory content.
For such videos, a common approach is to clip the video in a text-editing manner via subtitle recognition: the user deletes a subtitle sentence, and the corresponding video segment is deleted with it. However, existing clipping methods offer little flexibility. For example, suppose the user performs three clipping operations and then finds that the subtitle sentence deleted in the first operation should actually be restored: the user must undo all three operations and clip again.
Disclosure of Invention
In view of the problems in the prior art, embodiments of the present application provide a video clipping method, an audio clipping method, and an electronic device with high clipping flexibility.
In one embodiment of the present application, a video clipping method is provided. The method comprises the following steps:
displaying a plurality of sentences and clipping operation identifiers respectively corresponding to the plurality of sentences, where the audio text synchronized with the video to be clipped includes the plurality of sentences, the sentences are respectively associated with corresponding video segments in the video to be clipped, and each clipping operation identifier represents the type of the user's clipping operation and assists the user in deleting or restoring, at will, the video segment associated with a sentence;
and in response to a restore operation triggered after the user selects a deleted sentence with reference to its clipping operation identifier, restoring the video segment associated with the deleted sentence.
In another embodiment of the present application, a video clipping method is provided. The method comprises the following steps:
displaying a plurality of deleted sentences and restore identifiers respectively corresponding to the deleted sentences, where the audio text synchronized with the video to be clipped includes the plurality of deleted sentences, and the deleted sentences are respectively associated with corresponding video segments in the video to be clipped;
and in response to an operation triggered by the user on the restore identifier corresponding to any target sentence among the plurality of deleted sentences, restoring the video segment associated with the target sentence.
In yet another embodiment of the present application, a video clipping method is provided. The method comprises the following steps:
determining display modes respectively corresponding to a plurality of sentences based on the clipping operation attributes respectively corresponding to the sentences, where the audio text synchronized with the video to be clipped includes the plurality of sentences, the sentences are respectively associated with corresponding video segments in the video to be clipped, and each clipping operation attribute represents the type of the user's clipping operation;
and in response to a restore operation triggered after the user selects a deleted sentence according to its display mode, restoring the video segment associated with the deleted sentence and adjusting the clipping operation attribute corresponding to the deleted sentence so as to change its display mode.
In yet another embodiment of the present application, an audio clipping method is provided. The method comprises the following steps:
displaying a plurality of sentences and clipping operation identifiers respectively corresponding to the plurality of sentences, where the audio text synchronized with the audio to be clipped includes the plurality of sentences, the sentences are respectively associated with corresponding audio segments in the audio to be clipped, and each clipping operation identifier represents the type of the user's clipping operation and assists the user in deleting or restoring, at will, the audio segment associated with a sentence;
and in response to a restore operation triggered after the user selects a deleted sentence with reference to its clipping operation identifier, restoring the audio segment associated with the deleted sentence.
In yet another embodiment of the present application, an electronic device is provided. The electronic device comprises a processor and a memory, wherein the memory is used for storing one or more computer instructions; the processor, coupled with the memory, is configured to execute the one or more computer instructions to implement the steps in the above-described method embodiments.
In an embodiment of the present application, a computer program product is provided. The computer program product comprises computer programs or instructions which, when executed by a processor, cause the processor to carry out the steps in the above-described method embodiments.
Embodiments of the present application further provide a computer-readable storage medium storing a computer program which, when executed by a computer, implements the method steps or functions provided by the above embodiments.
In the technical solution provided by the embodiments of the present application, a clipping operation identifier is displayed for each sentence; it represents the type of the user's clipping operation and assists the user in deleting or restoring, at will, the video segment associated with the sentence. The user can restore exactly the deleted sentence that needs to be restored; the clipping operation is separated from other operations (such as sentence editing), so deleted sentences are restored efficiently and the user experience is good.
In another technical solution provided by the embodiments of the present application, a corresponding clipping operation attribute is configured for each sentence, and different attributes have different display modes, so the user can distinguish sentences in different states (such as deleted sentences and retained sentences) by their display modes. The user can restore exactly the deleted sentence that needs to be restored; the clipping operation is separated from other operations (such as sentence editing), so deleted sentences are restored efficiently and the user experience is good.
Drawings
To describe the technical solutions of the embodiments of the present application or the prior art more clearly, the drawings required for the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below illustrate only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
FIG. 1 is a flowchart illustrating a video editing method according to an embodiment of the present application;
FIG. 2a is a schematic diagram of a first state of a clipping interface implemented by the video clipping method provided in the embodiment of FIG. 1;
FIG. 2b is a diagram illustrating a second state of a clipping interface implemented by the video clipping method according to the embodiment shown in FIG. 1;
FIG. 2c is a schematic diagram of a third state of a clipping interface implemented by the video clipping method provided in the embodiment of FIG. 1;
FIG. 3 is a flowchart illustrating a video editing method according to another embodiment of the present application;
FIG. 4a is a schematic diagram of a first state of a clipping interface implemented by the video clipping method provided in the embodiment of FIG. 3;
FIG. 4b is a diagram illustrating a second state of a clipping interface implemented by the video clipping method according to the embodiment shown in FIG. 3;
FIG. 5 is a flowchart illustrating a video clipping method according to another embodiment of the present application;
FIG. 6a is a schematic diagram of a first state of a clipping interface implemented by the video clipping method provided in the embodiment of FIG. 5;
FIG. 6b is a diagram illustrating a second state of a clipping interface implemented by the video clipping method provided in the embodiment of FIG. 5;
FIG. 7 is a flowchart illustrating an audio editing method according to an embodiment of the present application;
FIG. 8 is a schematic diagram of a clipping scheme provided by an embodiment of the present application;
FIG. 9 is a schematic diagram illustrating an exemplary configuration of a video editing apparatus according to an embodiment of the present application;
FIG. 10 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In a video clipping scheme based on subtitle recognition, the user can clip a video the way a document is edited: deleting text deletes the corresponding video segment. This approach mainly suits live-broadcast and narrated (talking-head) content with a large amount of explanatory material. Existing video clipping schemes based on subtitle recognition have the following disadvantages:
1. Low operating efficiency: after deleting sentences several times, a user who wants to restore one particular deleted sentence must undo the operations one by one, in reverse order, until that sentence is restored.
2. The segment to restore cannot be selected precisely in one step: undo applies to both deletion operations and subtitle-editing operations, so to restore one piece of deleted content the user must undo, in order, every deletion and subtitle edit performed after it; all the edits the user made in the meantime are thereby lost.
3. Little room for adjustment: content the user has deleted is no longer visible.
In view of at least some of the above problems, the following embodiments of the present application are proposed. In order to make the technical solutions better understood by those skilled in the art, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application.
Some of the flows described in the specification, claims, and drawings of the present application include operations that occur in a particular order, but these operations may be executed out of the order in which they appear herein, or in parallel. The sequence numbers of the operations, e.g., 101 and 102, merely distinguish the operations from one another and do not by themselves represent any execution order. The flows may also include more or fewer operations, and these operations may be executed sequentially or in parallel. Note that the terms "first", "second", etc. herein distinguish different messages, devices, modules, and so on; they represent no sequential order, nor do they require that the "first" and "second" items be of different types. In addition, the embodiments described below are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments herein without creative effort fall within the protection scope of the present application.
Fig. 1 shows a flowchart of a video clipping method according to an embodiment of the present application. The execution subject of the method provided in this embodiment may be a client device, such as a mobile phone, a desktop computer, a notebook computer, a tablet computer, or a smart wearable device, which this embodiment does not specifically limit. The client device has installed on it a computer program product (e.g., a clipping application) that implements the method steps described below, or a functional module added to an existing application that implements those steps. Specifically, as shown in fig. 1, the method includes:
101. and displaying a plurality of sentences and the clipping operation identifiers corresponding to the sentences respectively.
The audio text synchronous with the video to be clipped comprises the sentences, the sentences are respectively associated with corresponding video segments in the video to be clipped, and the clipping operation identifier represents the type of the clipping operation of the user and is used for assisting the user to randomly delete or restore the video segments associated with the sentences.
102. And restoring the video segments associated with the deleted sentences in response to the restoring operation triggered after the user selects one deleted sentence by referring to the clipping operation identification.
In the foregoing 101, the clipping operation identifier has two display icons: a first icon indicating that the sentence has not been deleted and prompting the user that it can be deleted, and a second icon indicating that the sentence has been deleted and prompting the user that it can be restored at any time. Accordingly, the method provided by this embodiment may further include the following step:
in response to a restore operation triggered after the user selects a deleted sentence with reference to its clipping operation identifier, switching the display of the clipping operation identifier corresponding to the deleted sentence from the second icon to the first icon.
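By way of illustration only, the per-sentence clipping state described in 101 and 102 can be sketched in Python as follows; the class and function names (Sentence, delete_sentence, restore_sentence) and the icon constants are hypothetical and not taken from the patent:

```python
from dataclasses import dataclass

# First icon: sentence not deleted, may be deleted.
# Second icon: sentence deleted, may be restored at any time.
FIRST_ICON = "first"
SECOND_ICON = "second"

@dataclass
class Sentence:
    text: str
    segment: tuple          # (start_sec, end_sec) of the associated video segment
    icon: str = FIRST_ICON  # the clipping operation identifier

def delete_sentence(s: Sentence) -> None:
    # Deleting a sentence switches its identifier to the second icon;
    # no global undo stack is involved.
    s.icon = SECOND_ICON

def restore_sentence(s: Sentence) -> None:
    # Restoring targets this one sentence directly and switches its
    # identifier back to the first icon.
    s.icon = FIRST_ICON
```

Because each sentence owns its own identifier, restoring sentence 1 after later edits to sentence 5 touches only sentence 1's state.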
As shown in fig. 2a, 2b and 2c, the first icon and the second icon may take the graphic forms shown in the drawings. Of course, the icons may also differ from those shown; this embodiment does not specifically limit them.
In another implementation, the clipping operation identifier may have no specific visual form but instead correspond to an attribute. For example, the clipping operation identifier has two display attributes: a first attribute indicating that the sentence has not been deleted and prompting the user that it can be deleted, and a second attribute indicating that the sentence has been deleted and prompting the user that it can be restored at any time. Each attribute has a different display mode, so sentences with different attributes are displayed distinguishably. For example, a sentence with the first attribute is displayed normally while a sentence with the second attribute is displayed in grayscale; or a sentence with the first attribute is displayed normally while a sentence with the second attribute is displayed with a strikethrough, such as "hello" rendered with a line through it; or a sentence with the first attribute is displayed normally while a sentence with the second attribute blinks; and so on. This scheme is described in a corresponding embodiment below, where the clipping operation identifier is referred to as the "clipping operation attribute".
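As an illustrative sketch of this attribute-based variant (the attribute names and style values are assumptions, not from the patent), the mapping from clipping operation attribute to display mode might look like:

```python
# Hypothetical display attributes for the clipping operation identifier.
FIRST_ATTR = "not_deleted"   # displayed normally
SECOND_ATTR = "deleted"      # displayed distinguishably, e.g. gray with strikethrough

STYLE_BY_ATTR = {
    FIRST_ATTR: {"color": "black", "strikethrough": False, "blink": False},
    SECOND_ATTR: {"color": "gray", "strikethrough": True, "blink": False},
}

def display_style(attr: str) -> dict:
    # Sentences with different attributes get different display modes.
    return STYLE_BY_ATTR[attr]
```

A grayscale-plus-strikethrough style is one of several options the text mentions; blinking could be enabled instead by flipping the "blink" flag.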
In 102, the restore operation triggered by the user, based on the clipping operation identifier, for a deleted sentence may be: a click by the user on a clipping operation identifier that has a specific icon form (as shown in fig. 2b); a right-click by the user on a sentence whose clipping operation identifier has no specific form but is a display attribute, followed by a click on the corresponding option in the floating window (as shown in fig. 6b); or the user, having selected such a sentence, directly completing the restore by pressing a corresponding key on the keyboard (such as backspace or enter). Of course, the keyboard key may be configured manually, which this embodiment does not limit.
It should be added here that video clipping refers to the process of generating a target video by processing video material: intercepting segments, arranging their positions, adjusting playback speed, adjusting segment-transition effects, and so on. The technical solution provided by this embodiment concerns only clipping operations such as retaining and deleting segments; it clips the video through deletion, retention, and restoration, and does not involve reordering video segments, adjusting playback speed, or segment-transition effects.
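For illustration, when clipping reduces to retain/delete decisions alone (no reordering or speed changes), generating the output amounts to splicing the kept intervals in playing order. The following helper is a hypothetical sketch, not part of the patent:

```python
def kept_intervals(sentences):
    """Return the (start, end) intervals of video segments whose sentences
    are not deleted, merging contiguous intervals so the spliced output has
    no zero-length cuts. `sentences` is a list of (start, end, deleted)
    tuples in playing order. Illustrative only."""
    out = []
    for start, end, deleted in sentences:
        if deleted:
            continue
        if out and abs(out[-1][1] - start) < 1e-9:
            # Contiguous with the previous kept segment: extend it.
            out[-1] = (out[-1][0], end)
        else:
            out.append((start, end))
    return out
```

The resulting interval list could then be fed to any media pipeline that cuts and concatenates the source video.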
In the technical solution provided by this embodiment, a clipping operation identifier is displayed for each sentence; it represents the type of the user's clipping operation and assists the user in deleting or restoring, at will, the video segment associated with the sentence. The user can restore exactly the deleted sentence that needs to be restored; the clipping operation is separated from other operations (such as sentence editing), so deleted sentences are restored efficiently and the user experience is good.
Further, in this embodiment, the step 101 of displaying the plurality of sentences and the clipping operation identifiers corresponding to the plurality of sentences may include:
1011. sequentially displaying the plurality of sentences according to the playing sequence of the video to be clipped;
1012. and respectively displaying the corresponding clipping operation identifiers on the lines of the corresponding sentences.
In this embodiment, the clipping interface may display only the sentences of the audio text, several at a time; the user can browse sentences not currently displayed by a slide operation. Alternatively, besides displaying the sentences, the clipping interface may also display and play the video to be clipped. That is, in one implementable embodiment, the method may further comprise:
103. displaying a clipping interface, where the clipping interface comprises a video playing area and a sentence display area for displaying the plurality of sentences;
104. in the video playing area, playing the video to be edited;
105. and determining the plurality of sentences based on the playing progress of the video to be edited.
As shown in the drawings, the video playing area and the sentence display area of the clipping interface may be arranged one above the other; for example, the upper part of the clipping interface serves as the video playing area and the lower part as the sentence display area. Alternatively, the upper 1/3 of the clipping interface is the video playing area and the lower 2/3 is the sentence display area.
Accordingly, the step 1011 "sequentially displaying the sentences in the playing order of the video to be edited" may include:
and in the sentence display area, scrolling the display of the sentence matching the current playing progress together with at least one sentence associated with a video segment yet to be played.
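A minimal sketch, under the assumption that sentences are stored in playing order together with their segment start times, of locating the sentence matching the current playing progress:

```python
import bisect

def sentence_index_at(progress, starts):
    """Index of the sentence whose segment contains the current playing
    progress. `starts` is the sorted list of segment start times, one per
    sentence, in playing order. Illustrative assumption, not patent text."""
    i = bisect.bisect_right(starts, progress) - 1
    return max(i, 0)
```

The display could then scroll so that this sentence and at least one following sentence are visible.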
Further, the method provided by this embodiment may further include the following steps:
106. determining a sliding track in response to a sliding operation of a user;
107. determining a plurality of sentences to be operated according to the sliding track;
108. and respectively carrying out switching display on the cutting operation identifications corresponding to the sentences to be operated so as to switch from the first icon to the second icon or switch from the second icon to the first icon.
109. And respectively carrying out clipping operation on the video segments associated with the statements to be operated so as to delete or restore the corresponding video segments.
The step 107 may specifically include the following three ways:
and in the first mode, according to the starting position and the ending position of the sliding track, all the sentences marked as the second icons by the cutting operation between the starting position and the ending position are determined to be the plurality of sentences to be operated.
And secondly, determining all sentences marked as first icons by the cutting operation between the starting position and the ending position as the plurality of sentences to be operated according to the starting position and the ending position of the sliding track.
And determining all sentences between the starting position and the ending position as the plurality of sentences to be operated according to the starting position and the ending position of the sliding track.
In the first mode, when step 108 is executed, the clip operation identifier corresponding to each sentence can be switched from the second icon to the first icon in a unified manner, and when step 109 is executed, the video segments associated with each sentence can be recovered in a unified manner.
In the second mode, when step 108 is executed, the clip operation identifier corresponding to each sentence can be switched from the first icon to the second icon in a unified manner, and when step 109 is executed, the video segments associated with each sentence can be deleted in a unified manner.
In the third way, when step 108 is executed, the editing operation identifier corresponding to the first type of statement may be switched from the first icon to the second icon, and the editing operation identifier corresponding to the second type of statement may be switched from the second icon to the first icon (i.e., the opposite operation is performed in a unified manner); wherein, the first category of statements refers to: the statement that the original clip operation identified as the first icon (i.e., not pruned); the second category of statements refers to: the original clip operation is identified as the statement of the second icon (i.e., pruned). And when step 109 is executed, deleting the video segment associated with the first type of statement and restoring the video segment associated with the second type of statement.
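The three slide-selection modes above can be sketched as follows; the data layout (dicts with a boolean "deleted" key) and function names are illustrative assumptions:

```python
def sentences_to_operate(sentences, start_idx, end_idx, mode):
    """Select sentences between the start and end of the slide track.
    mode 1: only already-deleted sentences (to be restored in a batch);
    mode 2: only not-yet-deleted sentences (to be deleted in a batch);
    mode 3: all sentences in the range (each toggled to the opposite state).
    `sentences` is a list of dicts with a boolean 'deleted' key. Sketch only."""
    span = sentences[start_idx:end_idx + 1]
    if mode == 1:
        return [s for s in span if s["deleted"]]
    if mode == 2:
        return [s for s in span if not s["deleted"]]
    return list(span)

def apply_toggle(selected):
    """Mode-3 behaviour: delete the kept ones, restore the deleted ones."""
    for s in selected:
        s["deleted"] = not s["deleted"]
```

Switching each sentence's icon would accompany the state change, mirroring steps 108 and 109.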
Further, the method provided by this embodiment may further include the following steps:
110. performing voice recognition and/or subtitle recognition on the audio synchronized with the video to be clipped to obtain the audio text;
111. in response to a user's editing operation on a sentence to be modified among the displayed sentences, modifying that sentence based on the user's edited content;
where the modification to the sentence comprises at least one of: modifying text content, modifying text style, and modifying punctuation or sentence segmentation.
In addition, the video to be clipped in this embodiment may come from a live television program, an internet video, a video captured by an image capture device, and so on. In this embodiment, speech recognition may be performed on the audio using automatic speech recognition (ASR) technology, deep-learning-based speech recognition technology, and the like, to determine the text corresponding to the speech. And/or text recognition may be performed on the image content of the video frames in a video segment, for example using optical character recognition (OCR) technology.
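As a simplified stand-in for step 110 (not a real ASR or OCR API), the following sketch groups timed word results into punctuated sentences, each carrying the (start, end) boundaries of its associated segment:

```python
def words_to_sentences(timed_words):
    """Group recognized words into sentences with segment boundaries.
    `timed_words` is a list of (word, start_sec, end_sec) tuples; a word
    ending in sentence-final punctuation closes a sentence. A hypothetical
    simplification of the recognition step, not an actual recognizer."""
    sentences, buf, seg_start = [], [], None
    for word, start, end in timed_words:
        if seg_start is None:
            seg_start = start
        buf.append(word)
        if word and word[-1] in ".!?。！？":
            sentences.append({"text": " ".join(buf), "segment": (seg_start, end)})
            buf, seg_start = [], None
    if buf:
        # Trailing words without final punctuation form a last sentence.
        sentences.append({"text": " ".join(buf), "segment": (seg_start, end)})
    return sentences
```

Each resulting sentence carries the interval of audio/video it came from, which is exactly the association the clipping interface relies on.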
In this embodiment, the clipping operation and the editing operation are separated: the type of the clipping operation is represented by a clipping operation identifier, and each sentence corresponds to one clipping operation identifier. By referring to the clipping operation identifier of a sentence, the user performs only clip-related operations on it, such as deleting and restoring. Text editing, i.e., an editing operation on a sentence, is independent of the clipping operation. Even if the user deletes the first sentence (thereby deleting its associated video segment), then edits the fifth sentence, and then performs other editing operations, the user can still restore the deleted first sentence without withdrawing the edit to the fifth sentence or any other operations; the deleted first sentence can be accurately restored by the technical solution provided by this embodiment.
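This separation can be sketched as independent per-sentence clip state, so that restoring one deletion never touches later edits. A minimal sketch under assumed names, not the actual implementation:

```python
class SentenceClipEditor:
    """Each sentence's clip state is stored independently of its text,
    so any single deletion can be undone out of order."""
    def __init__(self, sentences):
        self.sentences = list(sentences)  # editable text, per sentence
        self.deleted = set()              # indices whose segment is deleted

    def delete(self, i):
        self.deleted.add(i)

    def restore(self, i):
        self.deleted.discard(i)           # undoes exactly this deletion

    def edit_text(self, i, new_text):
        self.sentences[i] = new_text      # text edit; clip state untouched

    def kept(self):
        return [s for i, s in enumerate(self.sentences)
                if i not in self.deleted]

ed = SentenceClipEditor(["s1", "s2", "s3", "s4", "s5"])
ed.delete(0)              # delete the first sentence
ed.edit_text(4, "s5'")    # then edit the fifth sentence
ed.restore(0)             # restoring s1 does not undo the edit to s5
```

Because there is no global undo stack for deletions, no intervening operations need to be rolled back.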
Referring to the example shown in fig. 2a, 2b and 2c, after the user imports the video to be clipped into the application (APP), the clipping interface of the application has two areas. The upper area is a video playing area, used for playing the video to be clipped and displaying the playing progress. The lower area is a sentence display area, used for displaying a plurality of sentences and the clipping operation identifiers 1 corresponding to the sentences. As shown in fig. 2a, when the video to be clipped has just been imported and before the user performs any operation, the clipping operation identifiers 1 corresponding to the respective sentences are all the first icon (indicating that the sentence has not been deleted and prompting the user that it can be deleted). The user may trigger a deletion operation on some of the sentences, once or several times. As shown in fig. 2b, the user deletes the second, third, fourth, seventh and eighth sentences; correspondingly, the video segments associated with the second, third, fourth, seventh and eighth sentences are all deleted from the video to be clipped. The second, third and fourth sentences can be deleted at once by the sliding operation mentioned in this embodiment: for example, the user slides from the second sentence to the fourth sentence, and the second, third and fourth sentences are deleted at once. Of course, the second, third and fourth sentences may also be deleted in multiple operations, which is not specifically limited in this embodiment.
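The slide can be resolved to an inclusive index range regardless of slide direction. A sketch only; the index-based selection model is an assumption:

```python
def slide_selection(start_index, end_index):
    """Indices of all sentences covered by a slide from start to end,
    inclusive, in either direction."""
    lo, hi = sorted((start_index, end_index))
    return list(range(lo, hi + 1))

# Sliding from the second sentence (index 1) to the fourth (index 3)
# selects sentences 2, 3 and 4 so they can be deleted at once.
selected = slide_selection(1, 3)
```

Sorting the endpoints first means an upward slide (fourth to second sentence) yields the same selection.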
Suppose that the seventh and eighth sentences are deleted first, and then the second, third and fourth sentences. At this point the user feels that, with the fourth sentence deleted, the context loses continuity and the clipped video becomes abrupt, and wants to withdraw that deletion. With the technical solution provided by this embodiment, as shown in fig. 2b, the user only needs to click the clipping operation identifier corresponding to the fourth sentence, and the deleted fourth sentence is restored, as shown in fig. 2c. In fig. 2c, the clipping operation identifier corresponding to the fourth sentence is switched from the second icon (icon image omitted) to the first icon (icon image omitted). This is what the user can perceive on the interface; what cannot be perceived directly is that, after the above operation, the deleted fourth sentence has been restored back into the video to be clipped.
Fig. 3 is a schematic flowchart illustrating a video clipping method according to another embodiment of the present application. Similarly, an execution subject of the method provided in this embodiment may be a client device, such as a mobile phone, a desktop computer, a notebook computer, a tablet computer, an intelligent wearable device, and the like, which is not specifically limited in this embodiment. The client device has installed thereon a computer program product (e.g., a clipping application) that implements the method steps described below, or a functional module added to an existing application that implements the method steps described below, or the like. Specifically, as shown in fig. 3, the method includes:
201. displaying a plurality of deleted sentences and recovery identifiers corresponding to the plurality of deleted sentences respectively; wherein the audio text synchronized with the video to be clipped comprises the plurality of deleted sentences, and the plurality of deleted sentences are respectively associated with corresponding video segments in the video to be clipped.
202. restoring the video segment associated with a target sentence in response to an operation triggered by the user on the recovery identifier corresponding to the target sentence, the target sentence being any one of the plurality of deleted sentences.
The "displaying a plurality of deleted sentences" in step 201 above may include:
sequentially displaying the plurality of deleted sentences according to the time sequence of the user's deletion operations; or
sequentially displaying the plurality of deleted sentences according to the playing sequence of the video to be clipped.
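The two display orders can be sketched as two sort keys over the deleted sentences. The field names (`start` for segment position, `deleted_at` for an assumed recorded operation timestamp) are illustrative assumptions:

```python
def order_deleted(deleted_sentences, by="playback"):
    """Order deleted sentences either by their position in the video
    ('playback', using the segment start time) or by when the user
    deleted them ('deletion', using the operation timestamp)."""
    key = "start" if by == "playback" else "deleted_at"
    return sorted(deleted_sentences, key=lambda s: s[key])

deleted = [{"text": "s7", "start": 30.0, "deleted_at": 1},
           {"text": "s2", "start": 5.0, "deleted_at": 2}]
by_play = order_deleted(deleted, by="playback")   # s2 before s7
by_time = order_deleted(deleted, by="deletion")   # s7 before s2
```

The example mirrors the scenario above: the seventh sentence was deleted before the second, so the two orders differ.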
Further, in an implementable technical solution, the method provided by this embodiment may further include the following steps:
203. displaying a clipping interface; the clipping interface comprises a video playing area and a sentence display area;
204. playing the video to be clipped in the video playing area;
205. displaying a plurality of undeleted sentences in the sentence display area; wherein the plurality of undeleted sentences include the sentence adapted to the playing progress of the video to be clipped;
206. if it is determined, according to the playing sequence of the video to be clipped, that at least one deleted sentence exists after an undeleted sentence, displaying prompt information at the position corresponding to that undeleted sentence to prompt the user that deleted sentences exist after it.
Based on the above solution, in this embodiment, step 201 of displaying the plurality of deleted sentences and the recovery identifiers corresponding to the plurality of deleted sentences may include:
1011. in response to the user's operation on the prompt information, acquiring the plurality of deleted sentences that follow the undeleted sentence corresponding to the prompt information;
1012. displaying, in a pop-up window, the plurality of deleted sentences and the recovery identifiers corresponding to them; or, shifting down the sentences displayed after the undeleted sentence to reserve a space, and displaying the plurality of deleted sentences and the corresponding recovery identifiers in the reserved space.
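Acquiring the hidden deleted sentences in step 1011 can be sketched as collecting the consecutive deleted run that follows the undeleted sentence in playback order. A sketch; the `deleted` flag and list layout are assumptions:

```python
def deleted_after(sentences, index):
    """Consecutive deleted sentences immediately following
    sentences[index] in playback order; these are what the prompt's
    pop-up (or reserved space) would show."""
    run = []
    for s in sentences[index + 1:]:
        if not s["deleted"]:
            break  # the run ends at the next undeleted sentence
        run.append(s)
    return run

doc = [{"text": "s1", "deleted": False},
       {"text": "s2", "deleted": True},
       {"text": "s3", "deleted": True},
       {"text": "s4", "deleted": True},
       {"text": "s5", "deleted": False}]
hidden = deleted_after(doc, 0)  # the three deleted sentences after s1
```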
As shown in fig. 4a, three sentences have been deleted after the first sentence, and prompt information (icon image omitted) can be displayed next to the first sentence on the interface. After the user clicks the prompt information, an interface as shown in fig. 4b may be presented; for example, a pop-up window displays the deleted sentences and the recovery identifiers (icon image omitted) corresponding to them.
The drawings of the present application do not show the other mode, namely the mode of "shifting down the sentences displayed after the undeleted sentence to reserve a space, and displaying the plurality of deleted sentences and the corresponding recovery identifiers in the reserved space". The display effect of this mode is similar to that of fig. 2b described above.
In the technical solution provided by this embodiment, the sentence display area displays only the undeleted sentences, and prompt information (icon image omitted) is displayed at the positions where sentences have been deleted, so that the user knows that deleted sentences exist there and can click to view them. The advantage of this is that more undeleted sentences are displayed in the limited sentence display area, the displayed text is more complete and coherent, and the user does not need to keep scrolling back and forth, which makes it easier to find the places where the clipping is wrong.
Fig. 5 is a flowchart illustrating a video clipping method according to another embodiment of the present application. As shown in fig. 5, the method provided in this embodiment includes:
301. determining display modes corresponding to a plurality of sentences respectively based on the clipping operation attributes corresponding to the sentences respectively; the audio text synchronous with the video to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding video segments in the video to be clipped, and the clipping operation attribute represents the clipping operation type of a user;
302. and in response to a recovery operation triggered after a deleted sentence is selected by a user according to the display mode, recovering the video segment associated with the deleted sentence, and adjusting the attribute of the clipping operation corresponding to the deleted sentence to change the display mode.
The method provided by this embodiment is a scheme in which the above-mentioned "clipping operation identifier" has no specific visual form, but instead corresponds to an attribute of each sentence.
The clipping operation attribute in this embodiment may include: a first attribute, representing that the sentence has not been deleted and prompting the user that it can be deleted; and a second attribute, representing that the sentence has been deleted and prompting the user that it can be restored at any time. Different attributes may correspond to different display modes, so that sentences with different attributes are displayed in a distinguishable manner. For example, a sentence with the first attribute is displayed normally, while a sentence with the second attribute is displayed in gray; or a sentence with the first attribute is displayed normally, while a sentence with the second attribute is displayed with a strikethrough, such as "hello" shown struck through; or a sentence with the first attribute is displayed normally, while a sentence with the second attribute is displayed blinking; and so on.
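The attribute-to-display mapping can be sketched as a simple lookup. The attribute names and style flags are assumptions illustrating one possible scheme, not the patented one:

```python
KEPT, DELETED = "first_attribute", "second_attribute"

# One possible scheme: kept sentences display normally; deleted ones
# are grayed out with a strikethrough so they remain visible.
STYLE = {
    KEPT:    {"gray": False, "strikethrough": False},
    DELETED: {"gray": True,  "strikethrough": True},
}

def display_style(attr):
    """Resolve a sentence's clipping operation attribute to the style
    it should be rendered with."""
    return STYLE[attr]
```

Restoring a sentence then only requires flipping its attribute; the renderer re-derives the style from the table.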
For example, as shown in fig. 6a, the user selects the second sentence and then right-clicks; an operation selection box as shown in fig. 6b is displayed on the clipping interface, and by clicking the "restore" option in the operation selection box the user can restore the sentence and the video segment associated with it; the operation can also be cancelled by clicking the "cancel" option in the box.
For example, in an implementable technical solution, the clipping operation attribute in this embodiment may include: a deleted-and-restorable attribute and an undeleted-and-deletable attribute, which is not specifically limited in this embodiment.
Fig. 7 shows a flowchart of an audio clipping method provided by an embodiment of the present application. As shown, the method includes:
401. and displaying a plurality of sentences and the clipping operation identifications corresponding to the plurality of sentences respectively.
The audio text synchronous with the audio to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding audio segments in the audio to be clipped, and the clipping operation identifier represents the type of the clipping operation of the user and is used for assisting the user to randomly delete or restore the audio segments associated with the sentences.
402. And restoring the audio segment associated with the deleted sentence in response to a restoring operation triggered after the user selects a deleted sentence according to the clipping operation identification.
Further, the clipping operation identifier has two display icons, which are respectively: a first icon, representing that the sentence has not been deleted and prompting the user that it can be deleted; and a second icon, representing that the sentence has been deleted and prompting the user that it can be restored at any time. Correspondingly, the method provided by this embodiment may further include the following steps:
403. in response to a recovery operation triggered after the user selects a deleted sentence with reference to its clipping operation identifier, switching the display of the clipping operation identifier corresponding to the deleted sentence from the second icon to the first icon;
404. in response to a deletion operation on an undeleted sentence triggered by the user based on its clipping operation identifier, deleting the audio segment associated with the undeleted sentence, and switching the display of the corresponding clipping operation identifier from the first icon to the second icon.
The clipping object in this embodiment differs from that of the above embodiments, and at least some of the steps in the above embodiments may be reused with the clipping object replaced by the audio to be clipped in this embodiment. That is, in addition to the steps described above, this embodiment may include at least some of the steps in the above embodiments. For example, the clipping interface may still have a video playing area, but no content may be displayed in it, or only a playing identifier or a single picture may be displayed, and the like, which is not specifically limited in this embodiment.
The execution subject of the technical solutions provided by the embodiments of the present application may be clipping software, or a newly added functional module of an existing application. For example, software with functions corresponding to the technical solutions provided by the embodiments of the present application is installed on a client device. As shown in fig. 8, the user may import a video to be clipped, and the execution subject of the method provided by an embodiment of the present application may perform voice recognition and/or subtitle recognition on the audio synchronized with the video to be clipped to obtain the audio text (i.e., subtitles), and then display the sentences in the audio text. The user may perform the operations shown in fig. 8 on each sentence: clipping operations (deleting unwanted sentences in a text-editing manner, thereby deleting the video segments associated with them), editing the sentences in the audio text, adding effects (such as character styles, background sounds, stickers, etc.), and the like. As can be seen from fig. 8, with the solution provided by the embodiments of the present application, the deletion of sentences and the clipping of video segments are separated from the other two types of operations; even when these types of operations are interleaved, the user can accurately withdraw any particular operation when desired.
In summary, the embodiments of the present application provide a scheme for out-of-order withdrawal, through which the user can flexibly restore deleted content, improving the flexibility and efficiency of clipping a video. This is embodied in the following points:
1. High operation efficiency: after the user has deleted sentences several times, any previously deleted sentence can be restored directly.
2. Accurate restoration of deleted sentences: during out-of-order withdrawal, the user can directly withdraw a specific deletion operation, because deletion operations are separated from editing operations and other adding operations.
3. Large adjustment space: the content deleted by the user remains visible, and the user can flexibly restore it, delete it again, and preview the result in real time.
Fig. 9 shows a schematic structural diagram of a video clipping device according to an embodiment of the present application. As shown in fig. 9, the video clipping device includes: a display module 11 and a clipping module 12. The display module 11 is configured to display a plurality of sentences and the clipping operation identifiers corresponding to the plurality of sentences; the audio text synchronized with the video to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding video segments in the video to be clipped, and the clipping operation identifier represents the type of the user's clipping operation and is used to assist the user in deleting or restoring, at will, the video segments associated with the sentences. The clipping module 12 is configured to restore the video segment associated with a deleted sentence in response to a recovery operation triggered after the user selects the deleted sentence with reference to the clipping operation identifier.
Further, the clipping operation identifier has two display icons, which are respectively: a first icon, representing that the sentence has not been deleted and prompting the user that it can be deleted; and a second icon, representing that the sentence has been deleted and prompting the user that it can be restored at any time. Correspondingly, the display module 11 in the device provided by this embodiment is further configured to:
in response to a recovery operation triggered after the user selects a deleted sentence with reference to the clipping operation identifier, switch the display of the clipping operation identifier corresponding to the deleted sentence from the second icon to the first icon.
Further, when displaying a plurality of sentences and the clipping operation identifiers corresponding to the plurality of sentences, the display module 11 is specifically configured to:
sequentially display the plurality of sentences according to the playing sequence of the video to be clipped;
and display the corresponding clipping operation identifiers on the lines where the respective sentences are located.
Further, the display module 11 is further configured to display a clipping interface; the clipping interface comprises a video playing area and a sentence display area for displaying the plurality of sentences; and to play the video to be clipped in the video playing area. Accordingly, the video clipping device provided by this embodiment may further include a determining module, configured to determine the plurality of sentences based on the playing progress of the video to be clipped.
Further, when sequentially displaying the plurality of sentences according to the playing sequence of the video to be clipped, the display module 11 is specifically configured to:
scroll-display, in the sentence display area, the sentence adapted to the current playing progress and at least one sentence associated with a video segment to be played subsequently.
Further, the determining module in the video clipping device provided by this embodiment may be further configured to: determining a sliding track in response to a sliding operation of a user; and determining a plurality of sentences to be operated according to the sliding track. Correspondingly, the display module 11 is further configured to respectively switch and display the clipping operation identifiers corresponding to the multiple statements to be operated, so that the first icon is switched to the second icon, or the second icon is switched to the first icon. The clipping module 12 is further configured to: and respectively carrying out clipping operation on the video segments associated with the statements to be operated so as to delete or restore the corresponding video segments.
Further, the video clipping device provided by the embodiment may further include an identification module and an editing module. The recognition module is used for carrying out voice recognition and/or subtitle recognition on the audio synchronized with the video to be clipped so as to obtain the audio text. The editing module is used for responding to the editing operation of a user on one displayed sentence needing to be modified in the plurality of sentences, and modifying the sentence needing to be modified based on the editing content of the user; wherein the modification to the statement comprises at least one of: modifying text content, modifying text style and modifying punctuated sentences.
Here, it should be noted that: the video editing apparatus provided in the above embodiments may implement the technical solutions described in the above method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the above method embodiments, and is not described herein again.
Another embodiment of the present application provides a video clipping device. The structure of the video clipping device is similar to the embodiment shown in fig. 9 described above. Specifically, the video clipping device includes: a display module and a clipping module. The display module is configured to display a plurality of deleted sentences and the recovery identifiers corresponding to the plurality of deleted sentences; the audio text synchronized with the video to be clipped comprises the plurality of deleted sentences, and the plurality of deleted sentences are respectively associated with corresponding video segments in the video to be clipped. The clipping module is configured to, in response to an operation triggered by the user on the recovery identifier corresponding to any target sentence among the plurality of deleted sentences, restore the video segment associated with the target sentence.
Further, when displaying the plurality of deleted sentences, the display module is specifically configured to:
sequentially display the plurality of deleted sentences according to the time sequence of the user's deletion operations; or
sequentially display the plurality of deleted sentences according to the playing sequence of the video to be clipped.
Further, the display module is further configured to:
display a clipping interface; the clipping interface comprises a video playing area and a sentence display area;
play the video to be clipped in the video playing area;
display a plurality of undeleted sentences in the sentence display area; wherein the plurality of undeleted sentences include the sentence adapted to the playing progress of the video to be clipped;
and if it is determined, according to the playing sequence of the video to be clipped, that at least one deleted sentence exists after an undeleted sentence, display prompt information at the position corresponding to that undeleted sentence to prompt the user that deleted sentences exist after it.
Further, when displaying the plurality of deleted sentences and the recovery identifiers corresponding to the plurality of deleted sentences, the display module is specifically configured to:
in response to the user's operation on the prompt information, acquire the plurality of deleted sentences that follow the undeleted sentence corresponding to the prompt information;
display, in a pop-up window, the plurality of deleted sentences and the recovery identifiers corresponding to them; or, shift down the sentences displayed after the undeleted sentence to reserve a space, and display the plurality of deleted sentences and the corresponding recovery identifiers in the reserved space.
Here, it should be noted that: the video editing apparatus provided in the foregoing embodiments may implement the technical solutions described in the foregoing method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the foregoing method embodiments, and details are not described here again.
The application further provides a video clipping device. The structure of the video clipping device is similar to the embodiment shown in FIG. 9 and described above. Specifically, the video clipping device includes: a display module and a clipping module. The display module is used for determining display modes corresponding to a plurality of sentences respectively based on the clipping operation attributes corresponding to the plurality of sentences respectively; the audio text synchronized with the video to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding video segments in the video to be clipped, and the clipping operation attribute represents the clipping operation type of a user. The clipping module is used for responding to a recovery operation triggered after a user selects a deleted statement according to a display mode, recovering a video segment associated with the deleted statement, and adjusting the clipping operation attribute corresponding to the deleted statement to change the display mode.
Here, it should be noted that: the video editing apparatus provided in the above embodiments may implement the technical solutions described in the above method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the above method embodiments, and is not described herein again.
The application further provides an audio clipping device. The structure of the audio clipping device is similar to the embodiment shown in fig. 9 described above. Specifically, the audio clipping device includes: a display module and a clipping module. The display module is used for displaying a plurality of sentences and the cutting operation identifiers corresponding to the plurality of sentences respectively; the audio text synchronous with the audio to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding audio segments in the audio to be clipped, and the clipping operation identifier represents the type of the clipping operation of the user and is used for assisting the user to randomly delete or restore the audio segments associated with the sentences. And the clipping module is used for responding to a recovery operation triggered after the user selects a deleted sentence by referring to the clipping operation identification, and recovering the audio segment associated with the deleted sentence.
Further, the clipping operation identifier has two display icons, which are respectively: a first icon, representing that the sentence has not been deleted and prompting the user that it can be deleted; and a second icon, representing that the sentence has been deleted and prompting the user that it can be restored at any time. Correspondingly, the display module is further configured to:
in response to a recovery operation triggered after the user selects a deleted sentence with reference to its clipping operation identifier, switch the display of the clipping operation identifier corresponding to the deleted sentence from the second icon to the first icon;
in response to a deletion operation on an undeleted sentence triggered by the user based on its clipping operation identifier, delete the audio segment associated with the undeleted sentence, and switch the display of the corresponding clipping operation identifier from the first icon to the second icon.
Here, it should be noted that: the audio editing apparatus provided in the above embodiments may implement the technical solutions described in the above method embodiments, and the specific implementation principle of each module or unit may refer to the corresponding content in the above method embodiments, and is not described herein again.
Fig. 10 shows a schematic structural diagram of an electronic device provided by an embodiment of the present application. The electronic device includes a processor 32 and a memory 31, wherein the memory 31 is configured to store one or more computer instructions, and the processor 32, coupled to the memory 31, is configured to execute the one or more computer instructions (e.g., computer instructions implementing data storage logic) so as to implement the steps in the above-described video clipping method embodiments or the steps in the above-described audio clipping method embodiments.
The memory 31 may be implemented by any type or combination of volatile or non-volatile memory devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Further, as shown in fig. 10, the electronic apparatus further includes: communication components 33, power components 35, display 34, and audio components 36. Only some of the components are schematically shown in fig. 10, and the electronic device is not meant to include only the components shown in fig. 10.
Yet another embodiment of the present application provides a computer program product (not shown in any figure of the drawings). The computer program product comprises computer programs or instructions which, when executed by a processor, cause the processor to carry out the steps in the above-described method embodiments.
Accordingly, the present application further provides a computer-readable storage medium storing a computer program, where the computer program can implement the method steps or functions provided by the foregoing embodiments when executed by a computer.
The above-described embodiments of the apparatus are merely illustrative, and the units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. Based on the understanding, the above technical solutions substantially or otherwise contributing to the prior art may be embodied in the form of a software product, which may be stored in a computer-readable storage medium, such as ROM/RAM, magnetic disk, optical disk, etc., and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the method according to the various embodiments or some parts of the embodiments.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be replaced by equivalents, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present application.

Claims (14)

1. A video clipping method, comprising:
displaying a plurality of sentences and clipping operation identifiers respectively corresponding to the plurality of sentences; wherein an audio text synchronized with a video to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding video segments in the video to be clipped, and each clipping operation identifier indicates the type of the user's clipping operation and assists the user in deleting or restoring, at any time, the video segment associated with the corresponding sentence;
and restoring the video segment associated with a deleted sentence, in response to a restore operation triggered after the user selects the deleted sentence with reference to the clipping operation identifier.
2. The method of claim 1, wherein the clipping operation identifier has two display icons: a first icon indicating that a sentence has not been deleted and prompting the user that it can be deleted, and a second icon indicating that a sentence has been deleted and prompting the user that it can be restored at any time; and
the method further comprises:
in response to a restore operation triggered after the user selects a deleted sentence with reference to the clipping operation identifier, switching the display of the clipping operation identifier corresponding to the deleted sentence from the second icon to the first icon.
3. The method of claim 1, wherein displaying a plurality of sentences and the clipping operation identifiers respectively corresponding to the plurality of sentences comprises:
sequentially displaying the plurality of sentences in the playing order of the video to be clipped;
and respectively displaying the corresponding clipping operation identifiers in the lines where the sentences are located.
4. The method of claim 3, further comprising:
displaying a clipping interface, wherein the clipping interface comprises a video playing area and a sentence display area for displaying the plurality of sentences;
playing, in the video playing area, the video to be clipped;
and determining the plurality of sentences based on the playing progress of the video to be clipped.
5. The method of claim 4, wherein sequentially displaying the plurality of sentences in the playing order of the video to be clipped comprises:
displaying, in a scrolling manner in the sentence display area, the sentence adapted to the current playing progress and at least one sentence associated with a video segment to be played subsequently.
6. The method of any of claims 2 to 5, further comprising:
determining a sliding track in response to a sliding operation of a user;
determining, according to the sliding track, a plurality of sentences to be operated on;
switching the display of the clipping operation identifiers respectively corresponding to the sentences to be operated on, from the first icon to the second icon or from the second icon to the first icon;
and performing clipping operations on the video segments respectively associated with the sentences to be operated on, so as to delete or restore the corresponding video segments.
7. The method of any one of claims 1 to 5, further comprising:
performing speech recognition and/or subtitle recognition on the audio synchronized with the video to be clipped to obtain the audio text;
in response to a user's editing operation on a sentence to be modified among the displayed sentences, modifying the sentence based on the user's edited content;
wherein the modification of the sentence comprises at least one of: modifying the text content, modifying the text style, and modifying the sentence segmentation.
8. A video clipping method, comprising:
displaying a plurality of deleted sentences and recovery identifiers respectively corresponding to the plurality of deleted sentences; wherein an audio text synchronized with a video to be clipped comprises the plurality of deleted sentences, and the plurality of deleted sentences are respectively associated with corresponding video segments in the video to be clipped;
and restoring the video segment associated with any target sentence among the plurality of deleted sentences, in response to an operation triggered by a user on the recovery identifier corresponding to the target sentence.
9. The method of claim 8, wherein displaying a plurality of deleted sentences comprises:
sequentially displaying the plurality of deleted sentences in the time order of the user's deletion operations; or
sequentially displaying the plurality of deleted sentences in the playing order of the video to be clipped.
10. The method of claim 8, further comprising:
displaying a clipping interface, wherein the clipping interface comprises a video playing area and a sentence display area;
playing, in the video playing area, the video to be clipped;
displaying a plurality of undeleted sentences in the sentence display area, wherein the plurality of undeleted sentences include a sentence adapted to the playing progress of the video to be clipped;
and if, according to the playing order of the video to be clipped, at least one deleted sentence exists after an undeleted sentence, displaying prompt information at a position corresponding to that undeleted sentence, so as to prompt the user that a deleted sentence follows it.
11. The method of claim 10, wherein displaying a plurality of deleted sentences and the recovery identifiers respectively corresponding to the plurality of deleted sentences comprises:
in response to a user's operation on the prompt information, obtaining the plurality of deleted sentences that follow the undeleted sentence corresponding to the prompt information;
and displaying, in a pop-up window, the plurality of deleted sentences and their corresponding recovery identifiers; or shifting down the sentences displayed after the undeleted sentence to reserve a space, and displaying the plurality of deleted sentences and their corresponding recovery identifiers in the reserved space.
12. A video clipping method, comprising:
determining display modes respectively corresponding to a plurality of sentences based on the clipping operation attributes respectively corresponding to the sentences; wherein an audio text synchronized with a video to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding video segments in the video to be clipped, and the clipping operation attribute indicates the type of the user's clipping operation;
and in response to a restore operation triggered after a user selects a deleted sentence according to its display mode, restoring the video segment associated with the deleted sentence, and adjusting the clipping operation attribute corresponding to the deleted sentence so as to change its display mode.
13. An audio clipping method, comprising:
displaying a plurality of sentences and clipping operation identifiers respectively corresponding to the plurality of sentences; wherein an audio text synchronized with an audio to be clipped comprises the plurality of sentences, the plurality of sentences are respectively associated with corresponding audio segments in the audio to be clipped, and each clipping operation identifier indicates the type of the user's clipping operation and assists the user in deleting or restoring, at any time, the audio segment associated with the corresponding sentence;
and restoring the audio segment associated with a deleted sentence, in response to a restore operation triggered after the user selects the deleted sentence with reference to the clipping operation identifier.
14. An electronic device, comprising a memory and a processor; wherein:
the memory stores one or more computer instructions; and
the processor, coupled to the memory, is configured to execute the one or more computer instructions to perform the steps of the method of any one of claims 1 to 7, or of any one of claims 8 to 11, or of claim 12 or 13.
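The transcript-driven delete/restore model recited in claims 1, 2, 8 and 13 can be illustrated with a minimal sketch. All class, method, and field names below are hypothetical and introduced only for illustration; the patent does not prescribe any particular implementation.

```python
from dataclasses import dataclass

@dataclass
class Sentence:
    # One recognized sentence of the audio text, associated with a
    # [start, end) segment of the video to be clipped (seconds).
    text: str
    start: float
    end: float
    deleted: bool = False

    @property
    def icon(self) -> str:
        # Claim 2: the first icon marks an undeleted sentence (it can be
        # deleted); the second marks a deleted one (restorable at any time).
        return "second" if self.deleted else "first"

class TranscriptClipper:
    def __init__(self, sentences):
        self.sentences = list(sentences)

    def delete(self, i: int) -> None:
        # Deleting a sentence removes its associated video segment.
        self.sentences[i].deleted = True

    def restore(self, i: int) -> None:
        # Claim 1: a restore operation brings the segment back.
        self.sentences[i].deleted = False

    def kept_segments(self):
        # Segments remaining in the clipped output, in playing order.
        return [(s.start, s.end) for s in self.sentences if not s.deleted]

clipper = TranscriptClipper([
    Sentence("Hello and welcome.", 0.0, 2.5),
    Sentence("Um, let me restart.", 2.5, 4.0),
    Sentence("Today we cover editing.", 4.0, 7.0),
])
clipper.delete(1)
print(clipper.kept_segments())   # [(0.0, 2.5), (4.0, 7.0)]
clipper.restore(1)
print(clipper.kept_segments())   # [(0.0, 2.5), (2.5, 4.0), (4.0, 7.0)]
```

Note that flipping a sentence's `deleted` flag also flips its displayed icon, which is the switching behavior described in claim 2.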
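Claim 6's slide-gesture batch operation can likewise be sketched: the sliding track is mapped to a contiguous run of sentence rows, and each covered sentence's state (and hence its icon and associated segment) is toggled. The function name and dict keys are illustrative assumptions, not taken from the patent.

```python
def apply_slide(sentences, start_row, end_row):
    # Claim 6: one slide gesture selects the sentences whose rows lie
    # on the sliding track; each selected sentence is toggled, which
    # switches its identifier between the first and second icons and
    # deletes or restores the associated video segment.
    lo, hi = sorted((start_row, end_row))
    for s in sentences[lo:hi + 1]:
        s["deleted"] = not s["deleted"]

rows = [
    {"text": "keep me", "deleted": False},
    {"text": "already cut", "deleted": True},
    {"text": "cut me too", "deleted": False},
]
apply_slide(rows, 2, 0)  # a slide may run upward; the range is normalized
print([s["deleted"] for s in rows])  # [True, False, True]
```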
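The prompt behavior of claims 10 and 11 amounts to finding, after an undeleted sentence, the contiguous run of deleted sentences that follows it in playing order. The helper below is a hypothetical sketch of that lookup; names and data layout are assumptions.

```python
def deleted_after(sentences, idx):
    # Claims 10-11: collect the contiguous deleted sentences that
    # immediately follow sentence `idx` in playing order, so the UI
    # can show prompt information there and reveal them on demand
    # (in a pop-up window, or by shifting later sentences down).
    run = []
    for s in sentences[idx + 1:]:
        if not s["deleted"]:
            break
        run.append(s)
    return run

timeline = [
    {"text": "intro", "deleted": False},
    {"text": "um...", "deleted": True},
    {"text": "uh, wait", "deleted": True},
    {"text": "main point", "deleted": False},
]
print([s["text"] for s in deleted_after(timeline, 0)])  # ['um...', 'uh, wait']
print(deleted_after(timeline, 3))                       # []
```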
CN202210234347.7A 2022-03-10 2022-03-10 Video editing method, audio editing method and electronic equipment Active CN114666637B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210234347.7A CN114666637B (en) 2022-03-10 2022-03-10 Video editing method, audio editing method and electronic equipment


Publications (2)

Publication Number Publication Date
CN114666637A true CN114666637A (en) 2022-06-24
CN114666637B CN114666637B (en) 2024-02-02

Family

ID=82029294


Country Status (1)

Country Link
CN (1) CN114666637B (en)


Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0404399A2 (en) * 1989-06-19 1990-12-27 International Business Machines Corporation Audio editing system
GB9521688D0 (en) * 1995-10-23 1996-01-03 Quantel Ltd An audio editing system
WO2006025833A1 (en) * 2003-07-15 2006-03-09 Kaleidescape, Inc. Displaying and presenting multiple media streams for multiple dvd sets
US20080155421A1 (en) * 2006-12-22 2008-06-26 Apple Inc. Fast Creation of Video Segments
CN101740082A (en) * 2009-11-30 2010-06-16 孟智平 Method and system for clipping video based on browser
WO2015088196A1 (en) * 2013-12-09 2015-06-18 넥스트리밍(주) Subtitle editing apparatus and subtitle editing method
CN105142029A (en) * 2015-08-10 2015-12-09 北京彩云动力教育科技有限公司 Interactive video clipping system and interactive video clipping method
US20190355337A1 (en) * 2018-05-21 2019-11-21 Smule, Inc. Non-linear media segment capture and edit platform
CN111050209A (en) * 2018-10-11 2020-04-21 阿里巴巴集团控股有限公司 Multimedia resource playing method and device
WO2020125365A1 (en) * 2018-12-21 2020-06-25 广州酷狗计算机科技有限公司 Audio and video processing method and apparatus, terminal and storage medium
CN111447505A (en) * 2020-03-09 2020-07-24 咪咕文化科技有限公司 Video clipping method, network device, and computer-readable storage medium
CN111666446A (en) * 2020-05-26 2020-09-15 珠海九松科技有限公司 Method and system for judging AI automatic editing video material
WO2021073315A1 (en) * 2019-10-14 2021-04-22 北京字节跳动网络技术有限公司 Video file generation method and device, terminal and storage medium
CN113204668A (en) * 2021-05-21 2021-08-03 广州博冠信息科技有限公司 Audio clipping method and device, storage medium and electronic equipment
CN113225618A (en) * 2021-05-06 2021-08-06 阿里巴巴新加坡控股有限公司 Video editing method and device
CN113891151A (en) * 2021-09-28 2022-01-04 北京字跳网络技术有限公司 Audio processing method and device, electronic equipment and storage medium
CN114157823A (en) * 2020-08-17 2022-03-08 富士胶片商业创新有限公司 Information processing apparatus, information processing method, and computer-readable medium


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
牛嵩峰; 唐炜: "Design of an artificial-intelligence-based intelligent editing system for Chinese speech and text", 广播与电视技术 (Radio and Television Broadcast Engineering), no. 04 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115460455A (en) * 2022-09-06 2022-12-09 上海硬通网络科技有限公司 Video editing method, device, equipment and storage medium
CN115460455B (en) * 2022-09-06 2024-02-09 上海硬通网络科技有限公司 Video editing method, device, equipment and storage medium
CN117278802A (en) * 2023-11-23 2023-12-22 湖南快乐阳光互动娱乐传媒有限公司 Video clip trace comparison method and device
CN117278802B (en) * 2023-11-23 2024-02-13 湖南快乐阳光互动娱乐传媒有限公司 Video clip trace comparison method and device



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code (Ref country code: HK; Ref legal event code: DE; Ref document number: 40074565)
GR01 Patent grant