CN116614655A - Video script creation method and device, electronic equipment and storage medium - Google Patents

Video script creation method and device, electronic equipment and storage medium Download PDF

Info

Publication number
CN116614655A
Authority
CN
China
Prior art keywords
video
script
segment
index
video frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210133409.5A
Other languages
Chinese (zh)
Inventor
邱健予
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202210133409.5A priority Critical patent/CN116614655A/en
Publication of CN116614655A publication Critical patent/CN116614655A/en
Pending legal-status Critical Current

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N21/23424Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs involving splicing one content stream with another content stream, e.g. for inserting or substituting an advertisement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Abstract

The application relates to the field of computer technology and provides a video script authoring method and apparatus, an electronic device, and a storage medium, which can be applied to vehicle-mounted scenarios. The method divides a first interface into a video playing area and a script configuration area, so that when a target video script is authored for a target video, each time a script authoring instruction triggered for a video clip is responded to during playing of the target video in the video playing area, a script index associated with the content of the corresponding video clip is presented in the script configuration area. This achieves the purpose of authoring the video script while watching the target video and improves the authoring efficiency of the video script. After each obtained script index is segmented, each segment script index is automatically associated with a video frame set extracted from the corresponding video clip to generate the target video script corresponding to the target video. No manual participation is needed in the whole authoring process, which saves authoring cost.

Description

Video script creation method and device, electronic equipment and storage medium
Technical Field
The application relates to the field of computer technology and discloses a video script authoring method and apparatus, an electronic device, and a storage medium.
Background
With the development of the short video industry, fan creations and the secondary creation of film and television episodes have become an indispensable source of short videos. Such short videos can raise the attention and click-through rate of the original video, which is of great significance for film and television promotion.
In the process of producing short videos, a video script usually needs to be produced in advance; the short video is then edited, clipped, and shot according to the video script.
In the related art, when the video script of a short video is authored, it is often necessary to refer to the content of the original video (e.g., film and television plays, variety shows, etc.). Therefore, the entire content of the original video generally has to be watched completely before the video script can be authored based on the viewed content; as a result, the whole authoring process takes a long time and authoring efficiency is low. Moreover, when the playing duration of the original video is long, the authoring object may partially forget the watched content and can only author the video script through repeated viewing, which further reduces authoring efficiency, increases authoring complexity, and causes unnecessary waste of manpower and material resources.
Disclosure of Invention
The embodiment of the application provides a video script authoring method and apparatus, an electronic device, and a storage medium, which are used for improving the authoring efficiency of video scripts and saving authoring cost.
In one aspect, an embodiment of the present application provides a method for authoring a video script, including:
responding to triggering operation for target video, and playing the target video in a video playing area of a first interface;
during playing of the target video, each time a script authoring instruction triggered for one video clip is responded to, performing the following operation: based on the currently received script authoring instruction, presenting, in a script configuration area of the first interface, a script index whose content is associated with the video clip targeted by the script authoring instruction;
obtaining each script index associated with the target video, segmenting each script index, and, for each segment script index, associating the segment script index with a video frame set extracted from the corresponding video clip;
and generating a target video script corresponding to the target video based on the segment script indexes and the associated video frame sets.
In another aspect, an embodiment of the present application provides an authoring apparatus for video scripts, including:
the playing module is used for responding to the triggering operation for the target video and playing the target video in the video playing area of the first interface;
the script configuration module is used for performing the following operation each time a script authoring instruction triggered for one video clip is responded to during playing of the target video: based on the currently received script authoring instruction, presenting, in a script configuration area of the first interface, a script index whose content is associated with the video clip targeted by the script authoring instruction;
the association module is used for obtaining each script index associated with the target video, segmenting each script index, and, for each segment script index, associating the segment script index with a video frame set extracted from the corresponding video clip;
and the generation module is used for generating a target video script corresponding to the target video based on the segment script indexes and the associated video frame sets.
Optionally, the association module is specifically configured to:
obtaining an audio file generated based on the segment script index;
extracting, based on the dubbing duration of the audio file, a video frame set conforming to the dubbing duration from the video clip associated with the segment script index;
and associating the video frame set with the segment script index.
Optionally, the association module is specifically configured to:
and determining the dubbing duration of the audio file based on at least one of: the word count of the segment script index, the selected dubbing role, and the selected speech speed.
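The duration determination above might be sketched as follows. This is an illustrative Python sketch, not the patent's implementation: the per-role speaking rates, the role names, and the character-count heuristic are all assumptions introduced for the example.

```python
# Illustrative sketch: estimate dubbing duration from the character count
# of a segment script index, a selected dubbing role, and a speech speed.
# The rates below are assumed values, not figures from the patent.
BASE_CHARS_PER_SECOND = {
    "narrator": 4.0,  # assumed base rate for a narrator voice
    "child": 3.0,     # assumed slower rate for a child voice
}

def estimate_dubbing_duration(script_index: str,
                              role: str = "narrator",
                              speed: float = 1.0) -> float:
    """Return an estimated dubbing duration in seconds."""
    char_count = len(script_index.replace(" ", ""))  # ignore spaces
    rate = BASE_CHARS_PER_SECOND.get(role, 4.0) * speed
    return char_count / rate
```

Doubling the speech speed halves the estimate, and a slower role lengthens it, which matches the intuition that duration depends jointly on the three factors named above.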
Optionally, the apparatus includes a duration determining module configured to:
for each segment script index, performing the following operation: determining a target duration of the segment script index based on the larger of the dubbing duration of the audio file corresponding to the segment script index and the total playing duration of the video frame set corresponding to the segment script index;
and determining the total preview time length of all the segment script indexes based on the respective target time lengths of all the segment script indexes.
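The two steps above (a per-segment target duration, then a total preview duration over all segments) reduce to a max followed by a sum; a minimal illustrative sketch:

```python
def target_duration(dubbing_duration: float, play_duration: float) -> float:
    """Target duration of one segment script index: the larger of its
    audio file's dubbing duration and its video frame set's total
    playing duration."""
    return max(dubbing_duration, play_duration)

def total_preview_duration(segments: list[tuple[float, float]]) -> float:
    """Total preview duration over all segment script indexes, given
    (dubbing_duration, play_duration) pairs, in seconds."""
    return sum(target_duration(d, p) for d, p in segments)
```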
Optionally, the device further includes a preview module, configured to:
for each segment script index, the following operations are respectively executed:
in response to a preview operation for the segment script index, popping up a preview sub-window in the first interface, the preview sub-window overlaying the video play area and the script configuration area;
and in the preview sub-window, playing the video frame set associated with the segment script index.
Optionally, the preview module is further configured to:
when the dubbing duration of the audio file corresponding to the segment script index is longer than the total playing duration of the video frame set, filling the part exceeding the total playing duration with preset video frames;
and when the dubbing duration of the audio file corresponding to the segment script index is less than or equal to the total playing duration of the video frame set, performing no dubbing processing on the part exceeding the dubbing duration.
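A minimal sketch of the padding step, assuming the case illustrated by fig. 15, where the dubbing duration exceeds the total playing duration of the frame set. Representing the frame set as a list and using a `preset_frame` placeholder are assumptions made for illustration.

```python
def pad_frames(frames: list, fps: float,
               dubbing_duration: float, preset_frame=None) -> list:
    """If the dubbing outlasts the video frame set, append copies of a
    preset video frame so the set's playing duration matches the
    dubbing duration; otherwise return the frames unchanged (the tail
    of the set simply plays without dubbing)."""
    play_duration = len(frames) / fps
    if dubbing_duration > play_duration:
        missing = int(round((dubbing_duration - play_duration) * fps))
        frames = frames + [preset_frame] * missing
    return frames
```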
Optionally, the preview module is further configured to:
reading each video frame in the video frame set in sequence to preview, wherein each time a video frame is read, the following operations are executed:
determining whether the video frame has a corresponding audio text in an audio file corresponding to a segment script index associated with the video frame set;
when corresponding audio text exists, reducing the original volume of the video frame and playing the audio text;
and when no corresponding audio text exists, playing the video frame at its original volume.
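The per-frame volume logic above can be sketched as follows. The dictionary lookup from frame to audio text and the ducking factor are illustrative assumptions, not details from the patent.

```python
def preview_frame(frame_id, audio_index: dict, duck_factor: float = 0.3):
    """For one video frame: if the segment's audio file contains a
    corresponding audio text, lower the frame's original volume (duck
    it) and play the audio text; otherwise keep the original volume.

    Returns (volume_multiplier, audio_text_or_None). duck_factor is an
    assumed value for illustration."""
    audio_text = audio_index.get(frame_id)
    if audio_text is not None:
        return duck_factor, audio_text  # duck original audio, play text
    return 1.0, None                    # keep original volume
```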
Optionally, the preview module is further configured to:
determining the subtitle type corresponding to the segment script index;
and based on the subtitle type, performing play effect processing on the subtitles of the video frame set played in the preview sub-window.
Optionally, the preview module is specifically configured to:
determining whether subtitles in the set of video frames are individually controllable based on the subtitle type;
when the subtitles in the video frame set can be controlled independently, directly displaying audio texts of the video frames in the video frame set in corresponding audio files;
when the subtitles in the video frame set cannot be controlled independently, identifying the subtitles contained in the video frames for each video frame in the video frame set, and displaying the audio text of the video frames in the corresponding audio files as the subtitles after blurring the subtitles contained in the video frames.
Optionally, the apparatus further includes an import module configured to:
importing the segment script indexes and the audio files and the video frame sets corresponding to the segment script indexes into an editor;
and generating the short video corresponding to the target video script through the editor.
In another aspect, an embodiment of the present application provides an electronic device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the program, implements the steps of the video script authoring method described above.
In another aspect, an embodiment of the present application provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of a method of authoring a video script as described above.
In another aspect, an embodiment of the present application provides a computer program product, including a computer program, which when executed by a processor implements the steps of a method for authoring a video script as described above.
In the embodiment of the application, when authoring a script for the target video, the first interface is divided into a video playing area and a script configuration area. While the target video plays in the video playing area, each time a script authoring instruction triggered for one video clip is responded to, a script index whose content is associated with that video clip is presented in the script configuration area based on the currently received instruction; this achieves viewing the target video and authoring the video script at the same time, improving the authoring efficiency of the video script. After the target video finishes playing, each script index can be obtained and segmented, each segment script index is associated with a video frame set extracted from the corresponding video clip, and finally the target video script corresponding to the target video is generated; no manual participation is needed in this process, which saves authoring cost.
Drawings
In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application; a person skilled in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic diagram of a method for importing an original video according to an embodiment of the present application;
FIG. 2 is a schematic diagram illustrating creation of a video script according to an embodiment of the present application;
FIG. 3 is an overall frame diagram of a method for authoring video scripts provided by an embodiment of the present application;
FIG. 4 is a flowchart of a method for authoring video scripts provided by an embodiment of the present application;
FIG. 5 is a schematic diagram of a second interface according to an embodiment of the present application;
FIG. 6 is a schematic diagram of a first interface according to an embodiment of the present application;
FIG. 7 is an interface comparison diagram between a text mode and a split-mirror mode according to an embodiment of the present application;
FIG. 8 is a flowchart of a method for associating a script index with a set of video frames according to an embodiment of the present application;
FIG. 9 is a schematic diagram of determining a target duration of each script index according to an embodiment of the present application;
FIG. 10 is an interface diagram of associating a script index with a set of video frames provided by an embodiment of the present application;
FIG. 11 is a schematic diagram of determining preview time of each script index according to an embodiment of the present application;
FIG. 12 is a flowchart of a method for previewing a set of video frames corresponding to a script index according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a preview sub-window according to an embodiment of the present application;
fig. 14 is a preview effect diagram of a video frame set according to an embodiment of the present application;
fig. 15 is a preview effect diagram of a dubbing time length longer than a total playing time length according to an embodiment of the present application;
fig. 16 is a flowchart of a method for adjusting volume when previewing a video frame set according to an embodiment of the present application;
fig. 17 is a schematic diagram of volume adjustment when previewing a video frame set according to an embodiment of the present application;
fig. 18 is a flowchart of a subtitle adjusting method when previewing a video frame set according to an embodiment of the present application;
fig. 19 is a schematic diagram of subtitle adjustment when previewing a video frame set according to an embodiment of the present application;
FIG. 20 is a flowchart of a method for importing a target video script according to an embodiment of the present application;
FIG. 21 is a schematic diagram of importing a target video script according to an embodiment of the present application;
FIG. 22 is a schematic diagram illustrating sharing of a target video script according to an embodiment of the present application;
FIG. 23 is a frame diagram of a target video script authored by multiple object collaboration provided by an embodiment of the present application;
FIG. 24 is a block diagram of an authoring apparatus for video scripts provided by an embodiment of the present application;
fig. 25 is a block diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The following clearly and completely describes the technical solutions in the embodiments of the present application with reference to the accompanying drawings. It is apparent that the described embodiments are only some, not all, of the embodiments of the present application. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
The terms "first," "second," and the like in the description of the present application, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the application described herein may be implemented in other sequences than those illustrated or otherwise described herein.
To facilitate an understanding of embodiments of the present application, several concepts will be briefly described as follows:
Video script: an outline of a story's development, drafted in the early stage of video/film production; it may include time, place, characters, lines, and the like.
Short video: video formed through secondary creation, such as editing and re-splicing, of popular, highlight, or same-topic original videos; it covers video content that is played on various new-media platforms, suitable for viewing while on the move or during short breaks, and pushed at high frequency, with playing durations ranging from a few seconds to a few minutes.
The following is a summary of the concepts of embodiments of the application.
Currently, when a video script is authored offline for an original video (e.g., a film or television play, a variety program) through planning and editing, the captured original video, or an original video selected from an album, is uploaded to a video interface, as shown in fig. 1; the video script is then authored in the authoring interface according to the viewed content, after the entire content of the original video has been watched completely, as shown in fig. 2. This authoring mode cannot meet the requirement of refining the video script while watching the original video, so the whole authoring process takes a long time and authoring efficiency is low. Moreover, when the playing duration of the original video is long, the authoring object may need to watch repeatedly to author a satisfactory video script, which further reduces authoring efficiency and increases authoring complexity. Meanwhile, because the script cannot be refined while the original video is being watched, after the video script is authored, the authoring object has to check its accuracy; when the script content is wrong, secondary editing is required, which brings inconvenience to the authoring object and causes unnecessary waste of manpower and material resources.
In view of this, the embodiment of the application provides a video script authoring method and apparatus, an electronic device, and a storage medium. The method divides a first interface into a video playing area and a script configuration area, so that when a target video script is authored for a target video, each time a script authoring instruction triggered for one video clip is responded to during playing of the target video in the video playing area, a script index associated with the content of the corresponding video clip is presented in the script configuration area; this realizes authoring the video script while watching the target video and improves the authoring efficiency of the video script. After each obtained script index is segmented, each segment script index is automatically associated with a video frame set extracted from the corresponding video clip, and finally the target video script corresponding to the target video is generated; no manual participation is needed in the whole authoring process, which saves authoring cost.
Meanwhile, in the authoring process of the target video script, the embodiment of the application supports multi-object online co-authoring, so that the authoring efficiency of the video script is further improved.
The method for creating the video script provided by the embodiment of the application can be executed by one or more clients, and the clients can be devices with video playing functions such as smart phones, notebook computers, desktops, tablets and the like.
Referring to fig. 3, which is an overall framework diagram of video script authoring provided by an embodiment of the present application: during playing of the target video, the client receives each script index input by the authoring object in a text mode (denoted as mode 1), and different script indexes are separated by specific characters (such as a line feed). When a mode switching operation triggered by the authoring object is received, the client identifies the specific characters between the script indexes and, according to the identified specific characters, splits the script indexes into segment script indexes in a split-mirror mode (denoted as mode 2). Each segment script index in mode 2 occupies its own storyboard segment. In mode 2, for each segment script index, the client obtains an audio file generated based on the segment script index and, based on the audio file, extracts from the target video a video frame set associated with the content of the segment script index.
As shown in fig. 3, the embodiment of the present application further provides a preview function of a video script, when each segment script index is associated with a corresponding video frame set, the video frame set is previewed, and after the previewing is correct, each segment script index, each audio file, and each video frame set are imported into an editor, and the client obtains a short video corresponding to the target video through the editor.
The following describes in detail the authoring process of the video script provided by the embodiment of the present application with reference to the accompanying drawings.
Referring to fig. 4, a flow of an authoring method of a video script provided by an embodiment of the present application is executed by a client, and mainly includes the following steps:
s401: and the client responds to the triggering operation for the target video, and plays the target video in the video playing area of the first interface.
In an alternative embodiment, the client has installed an application (APP) for authoring video scripts, and the authoring object may author video scripts through the APP. In a specific implementation, the client presents a second interface in response to a click operation on the APP, as shown in fig. 5.
The second interface mainly includes a video search area 501, a video resource information area 502, and a script configuration area 503. The video search area 501 is used for quickly acquiring authoring resources, the video resource information area 502 is used for displaying the authoring resources available in the video library, and the authoring object can select a target video to be authored through the video search area 501 or the video resource information area 502.
For example, the authoring object may input the name of the target video in a search box of the video search area 501 to acquire the target video; alternatively, the authoring object uploads the target video to be played locally from the client by clicking on the video search area 501 "My materials" function option.
For another example, the authoring object may select a target video to be played by browsing the assets presented by the video asset information area 502 and by clicking on the asset tag.
In S401, after the authoring object selects the target video, the client jumps from the second interface to the first interface in response to the trigger operation for the target video, and plays the target video in the video playing area 504 of the first interface, as shown in fig. 6. The first interface further includes a script configuration area 503, configured to perform authoring of the video script based on the content played by the target video, so as to meet the authoring requirements of watching, writing and cutting simultaneously.
S402: During playing of the target video, each time the client responds to a script authoring instruction triggered for one video clip, the client performs the following operation: based on the currently received script authoring instruction, presenting, in the script configuration area of the first interface, a script index whose content is associated with the video clip targeted by the script authoring instruction.
Optionally, in order to improve the authoring efficiency of the video script, as shown in fig. 6, in the process of playing the target video by the client, the authoring object may adjust the playing position of the target video by sliding a progress bar (currentTime) 5041 in the video playing area 504, so as to quickly obtain the video content to be browsed.
In the embodiment of the present application, since the first interface includes both the script configuration area 503 and the video playing area 504, the script configuration area 503 may receive a script authoring instruction triggered based on the currently played video clip while the video playing area 504 plays the target video. Specifically, each time the client responds to a script authoring instruction triggered for one video clip, it presents, in the script configuration area 503, a script index whose content is associated with the video clip targeted by that instruction, based on the currently received instruction.
Considering that different authoring objects input script indexes at different speeds, when an authoring object triggers a script authoring instruction in the script configuration area 503, the playing speed of the target video in the video playing area 504 can be adjusted to match the input speed of the script index.
For example, as shown in fig. 6, when the authoring object authors a video script based on the currently playing video content, the "play/pause" option 5042 in the video play area 504 may be clicked to pause the target video, so that the authoring object has sufficient time to input a script index in the script configuration area, ensuring the accuracy of the script index content.
For another example, still referring to FIG. 6, while the target video plays in the video playing area 504: when it is not necessary to author a video script for the currently playing video content, the authoring object may select a high speed (e.g., 2.0x) to play the target video; when it is necessary to author a video script for the currently playing video content, the authoring object may select a low speed (e.g., 0.5x) to play the target video, so that the authoring object has enough time to input a script index associated with the content of the video clip and present it in the script configuration area 503.
S403: The client obtains each script index associated with the target video, segments each script index, and, for each segment script index, associates it with a video frame set extracted from the corresponding video clip.
In embodiments of the present application, a target video may be associated with one or more script indexes, and adjacent script indexes are separated by specific characters. Optionally, the specific characters include, but are not limited to, a line feed, a semicolon, a serial number, and the like.
Optionally, after the client obtains each script index associated with the target video, in order to facilitate capturing video frames associated with each script index, the script mode may be switched from mode 1 to mode 2 in the script configuration area 503.
In particular implementations, referring to FIG. 7, in response to a script mode switching operation, a client identifies specific characters (e.g., line feed) between script indexes in mode 1, and based on the identified specific characters, segments each script index into a plurality of paragraphs, each paragraph containing a segment script index, and sequentially presents each segment script index in a script configuration area. Through the switching of script modes, the client can identify specific characters among script indexes, so that automatic switching from text to a sub-mirror format is completed, the process of manual participation is reduced, and the creation efficiency is improved.
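The specific-character segmentation described above might be sketched as follows. The exact separator set (line feeds and semicolons) and the serial-number pattern are assumptions for illustration; the patent only names these characters as examples.

```python
import re

def segment_script_indexes(raw: str) -> list[str]:
    """Split mode-1 script text into segment script indexes using
    specific separator characters (line feeds, semicolons), stripping
    any leading serial numbers from each resulting segment."""
    parts = re.split(r"[\n;；]+", raw)  # split on assumed separators
    cleaned = []
    for part in parts:
        # strip an optional leading serial number such as "1." or "2)"
        part = re.sub(r"^\s*\d+[.)、]?\s*", "", part).strip()
        if part:
            cleaned.append(part)
    return cleaned
```

Each returned string would then occupy its own storyboard segment in mode 2, one segment script index per paragraph.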
Note that when a script index is input in mode 1, a single long script index may wrap across multiple lines because of the limited width of the script configuration area; such a multi-line script index is still regarded as one segment script index.
As the comparison of the two script modes in fig. 7 shows, each segment script index in mode 2 occupies its own segment, which, compared with mode 1, makes it easier for the authoring object to distinguish the script indexes and to add a corresponding video frame set for each segment script index.
In executing S403, for each segment script index in mode 2, the client associates the segment script index with the set of video frames extracted from the corresponding video segment. Alternatively, see fig. 8 for a specific association:
s4031: the client obtains an audio file generated based on the segment script index.
In an alternative embodiment, the client sends an asynchronous request carrying the segment script index to the back-end server; after receiving the asynchronous request, the back-end server uses artificial intelligence (AI) technology to generate an audio file based on the segment script index carried by the request, and returns the generated audio file to the client.
In another alternative embodiment, the client itself uses AI technology to generate the audio file based on the obtained segment script index; or, after obtaining a segment script index, the client retrieves the prestored complete audio file corresponding to the target video from local storage, matches the segment script index against the complete audio file, and extracts from it the portion of audio corresponding to the segment script index.
S4032: the client side extracts a video frame set conforming to the dubbing duration from the video segments associated with the segment script index based on the dubbing duration of the audio file.
In an alternative embodiment, when S4032 is executed, as shown in fig. 9, in mode 2 and with the "dubbing duration" option 5031 in the script configuration area 503 turned on, the client determines, for each segment script index, the dubbing duration of the audio file based on at least one of the character count of the segment script index, the selected dubbing role, and the selected speech rate, and displays that dubbing duration at a preset position of the sub-area occupied by the segment script index.
For example, taking the first segment script index in fig. 9, "Winslet plays a police officer named Mel Sien": the segment script index occupies sub-area 5032, and at a preset position of 5032 (shown by a dashed line in fig. 9), the dubbing duration of the corresponding audio file is displayed as 00:06.
For another example, taking the second segment script index in fig. 9, "Mel made a name in youth, winning championships in both junior high school and high school": the segment script index occupies sub-area 5033, and at a preset position of 5033 (shown by a dashed line in fig. 9), the dubbing duration of the corresponding audio file is displayed as 00:10.
And so on, the preset positions of the sub-regions occupied by the other segment script indexes (e.g., 5034 and 5035 in fig. 9) are similar and will not be repeated here.
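A minimal sketch of how such a dubbing duration might be derived and rendered in the mm:ss form shown in fig. 9 follows; the base speech rate of 2.5 characters/second and the per-role factor are illustrative assumptions, not values from the embodiment:

```python
def dubbing_duration(script_index, chars_per_second=2.5, role_factor=1.0):
    """Estimate the dubbing duration (in seconds) of the audio file for
    one segment script index from its character count, a base speech
    rate, and a factor for the selected dubbing role; all three inputs
    here are assumed values."""
    n_chars = len(script_index.replace(" ", ""))
    return n_chars / (chars_per_second * role_factor)

def format_duration(seconds):
    """Render a duration as the mm:ss string displayed in the sub-area,
    e.g. 6 seconds -> "00:06"."""
    total = round(seconds)
    return f"{total // 60:02d}:{total % 60:02d}"
```

A faster speech rate or dubbing role would simply scale the divisor, shortening the displayed duration.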
Further, the client extracts a video frame set conforming to the dubbing duration from the video segments associated with the segment script index based on the dubbing duration of the audio file. Wherein one or more video clips to which the video frames in the video frame set belong are determined by the client based on the editing operation of the authoring object.
For example, referring to fig. 10, take the extraction of the video frame set associated with the third segment script index, "a legendary figure in the town", as an example. Video segment 1 covers 10:15-11:20 of the target video, video segment 2 covers 30:21-33:08, and both show the protagonist Mel helping neighbors. Therefore, when playback reaches video segment 1, the client extracts the partial video frames associated with the third segment script index from video segment 1 in response to a clipping operation for video segment 1; when playback reaches video segment 2, the client likewise extracts partial video frames from video segment 2 in response to a clipping operation for it; and based on the video frames extracted from video segments 1 and 2, the client obtains a video frame set conforming to the dubbing duration of the audio file corresponding to the third segment script index.
When extracting video frames from a video clip, in an alternative embodiment, referring to fig. 10: while the current video clip of the target video plays in the video playing area 504, the client starts intercepting video frames in response to a click by the authoring object on the "start intercept" option 5043, and stops intercepting in response to a click on the "end intercept" option 5044 once the intercepted frames or duration meets the requirement, thereby obtaining a video frame set conforming to the dubbing duration of the audio file.
In another alternative embodiment, when the video frames in the video frame set come from a single video clip, the client starts intercepting video frames in response to a click by the authoring object on the "start intercept" option 5043 in the video playing area 504, and automatically calculates, based on the dubbing duration of the audio file, the position at which interception should end, thereby obtaining a video frame set conforming to the duration of the audio file.
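The accumulation of frames across one or more clips until the dubbing duration is met can be sketched as follows; representing each clip and each captured span as a `(clip_id, start, end)` tuple is an assumption made for illustration:

```python
def collect_frame_set(clips, dubbing_seconds):
    """Accumulate (clip_id, start, end) spans from the video segments
    selected by the authoring object's clipping operations until the
    total duration matches the dubbing duration of the audio file.
    `clips` is a list of (clip_id, start_s, end_s) tuples, e.g. video
    segments 1 and 2 in fig. 10."""
    spans, remaining = [], dubbing_seconds
    for clip_id, start, end in clips:
        if remaining <= 0:
            break
        take = min(end - start, remaining)  # auto-computed end position
        spans.append((clip_id, start, start + take))
        remaining -= take
    return spans
```

With a single clip this reproduces the second embodiment (the end position is computed automatically); with several clips it reproduces the first.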
S4033: the client associates the set of video frames with the segment script index.
Still referring to fig. 10 for example, after capturing a set of video frames, the authoring object may select a segment script index associated with the set of video frames in the video play area 504, and after selecting the associated segment script index, the client automatically associates the set of video frames with the selected segment script index in response to a click operation of the authoring object on the "export" option 5045 in the video play area 504.
S404: the client generates a target video script corresponding to the target video based on each segment script index and each associated video frame set.
In the embodiment of the present application, after each segment script index is associated with its corresponding video frame set, the respective target duration of each segment script index is determined. Specifically, for each segment script index, the client determines the total playing duration of the associated video frame set, compares it with the dubbing duration of the corresponding audio file, and takes the maximum of the two as the target duration of the segment script index.
For example, referring to fig. 11, take the first segment script index, "Winslet plays a police officer named Mel Sien": the total playing duration of the associated video frame set (circled with a bold solid line in fig. 11) is 00:06, and the dubbing duration of the corresponding audio file (circled with a bold dashed line in fig. 11) is also 00:06; since the two are equal, the target duration of the first segment script index is determined to be 00:06.
For another example, take the second segment script index, "Mel made a name in youth, winning championships in both junior high school and high school": the total playing duration of the associated video frame set (circled with a bold solid line in fig. 11) is 00:08, and the dubbing duration of the corresponding audio file (circled with a bold dashed line in fig. 11) is 00:10; since the total playing duration is less than the dubbing duration, the target duration of the second segment script index is determined to be 00:10.
Further, after determining the respective target durations of all the segment script indexes, the client determines a total preview duration of all the segment script indexes based on the respective target durations, and displays the total preview duration in the video script configuration area 503.
For example, referring to FIG. 11, the client determines a total preview duration of 00:23 based on the target durations of all segment script indexes and displays it in the video script configuration area 503 (circled with a double dotted line).
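The target-duration and total-preview-duration computations described above reduce to a maximum and a sum, and can be expressed directly:

```python
def target_duration(play_total, dubbing):
    """Target duration of one segment script index: the maximum of the
    total playing duration of its video frame set and the dubbing
    duration of its audio file."""
    return max(play_total, dubbing)

def total_preview_duration(pairs):
    """Total preview duration shown in the script configuration area:
    the sum of the target durations of all segment script indexes.
    `pairs` is a list of (play_total, dubbing) tuples in seconds."""
    return sum(target_duration(p, d) for p, d in pairs)
```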
In the embodiment of the application, displaying the total preview duration of all segment script indexes lets the authoring object conveniently know how much time the authored segment script indexes occupy in total, and preliminarily estimate the duration of the short video to be generated from the segment script indexes and their associated video frame sets and audio files.
In the above embodiment of the present application, after the client provides the script mode switching function and switches mode 1 to mode 2, content such as audio files and script subtitles can be generated automatically for each script index input in mode 1, without relying on other software; this reduces switching back and forth between authoring tools, so the authoring of a video script can be completed independently. In addition, during authoring, each segment script index in mode 2 can be automatically bound to its video frame set, audio file, subtitles, and so on, with no manual alignment of sound, text, and picture required, which reduces authoring cost and improves authoring efficiency.
In the embodiment of the present application, after the client associates each segment script index with a corresponding video frame set, the following operations are performed for each segment script index, referring to fig. 12:
s4041: and the client side ejects a preview sub-window in the first interface in response to the preview operation aiming at the segmented script index, wherein the preview sub-window is covered on the video playing area and the script configuration area.
For example, as shown in FIG. 13, taking the first segment script index as an example, the client pops up the preview sub-window 505 in the first interface in response to a click operation of the authored object on the "preview segment" option 5046 in the video playback area 504.
S4042: and the client plays the video frame set associated with the segment script index in the preview sub-window.
For example, as shown in FIG. 14, the video frame set associated with the first segment script index, "Winslet plays a police officer named Mel Sien", is played in the preview sub-window 505.
After the video frame sets associated with the segment script indexes are previewed without errors, the client generates a target video script corresponding to the target video based on the segment script indexes and the associated video frame sets.
In some embodiments, when the total playing duration of the video frame set corresponding to a segment script index is inconsistent with the dubbing duration of the audio file, in order to ensure that the script starting point of each segment script index is aligned with the starting video frame in the associated video frame set, the client performs the following operations for the different cases:
case one
When the total playing duration of the video frame set is longer than the dubbing duration of the audio file corresponding to the segment script index, the part exceeding the dubbing duration is filled with a preset video frame. The part exceeding the dubbing duration is the total playing duration minus the dubbing duration.
For example, referring to fig. 15, when the total playing duration of the video frame set is greater than the dubbing duration of the audio file corresponding to the segment script index, the client fills the portion exceeding the dubbing duration with black images.
It should be noted that fig. 15 is only an example, and alternatively, the preset video frame may be the last frame in the video frame set.
Case two
When the total playing duration of the video frame set is less than or equal to the dubbing duration of the audio file corresponding to the segment script index, the client does not perform dubbing processing on the part exceeding the total playing duration. The part exceeding the total playing duration is the dubbing duration minus the total playing duration.
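The two cases can be sketched as a single alignment helper; the 25 fps figure and the returned `(padding_frames, undubbed_seconds)` encoding are illustrative assumptions:

```python
def align_frames_to_audio(play_total, dubbing, fps=25):
    """Resolve the two cases above for one segment script index. In
    case one, the part of the frame set exceeding the dubbing duration
    is covered by a preset frame (e.g. a black image or the last frame
    of the set); in case two, the tail of the dubbing simply has no
    accompanying processing. Durations are in seconds."""
    if play_total > dubbing:
        # case one: number of preset frames covering the excess
        return int(round((play_total - dubbing) * fps)), 0.0
    # case two: seconds of dubbing left beyond the frame set
    return 0, dubbing - play_total
```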
In some embodiments, when the client plays a set of video frames associated with a segment script index in the preview sub-window, referring to fig. 16, the following operations are further performed:
s4043: and the client side sequentially reads each video frame in the video frame set to preview and play.
In S4043, the video frames in a video frame set may come from the same video segment of the target video or from different video segments. For each video frame read, the following operations are performed:
s4044: the client determines whether the video frame has a corresponding audio text in an audio file corresponding to a segment script index associated with the current video frame set.
In the embodiment of the application, the total playing duration of the video frame set associated with a segment script index and the dubbing duration of the corresponding audio file may differ; that is, a video frame in the video frame set may or may not have corresponding audio text in the audio file. Therefore, each time the client reads a video frame from the video frame set associated with a segment script index, it determines whether that video frame has corresponding audio text in the audio file corresponding to the segment script index: when the corresponding audio text exists, S4045 is executed; when it does not, S4046 is executed. In this way the client can adjust the preview effect of different video frames, improving the fit between the audio file and the video frame set corresponding to the segment script index, and thus the preview effect of both.
S4045: when the corresponding audio text exists, the client reduces the original volume of the video frame and plays the corresponding audio text existing in the audio file.
When a read video frame has corresponding audio text in the audio file, the client automatically reduces the original volume of the video frame based on a preset configuration, so as to highlight the audio text and improve the dubbing effect. The percentage by which the original volume is reduced can be set according to actual requirements; the embodiment of the application imposes no limitation on it.
For example, as shown in fig. 17, when the read video frame has corresponding audio text in the audio file, the client reduces the original volume of the video frame to 30% (shown as "video original sound") and sets the script volume of the audio text to 100% (shown as "script dubbing") to highlight the dubbing content of the segment script index.
It should be noted that, in the embodiment of the present application, for a read video frame, the client may also set the original volume of the video frame and the size of the script dubbing in response to the volume adjustment operation for the video frame, so as to meet the personalized demand setting.
For example, still taking fig. 17 as an example: in the process of previewing a video frame, the client pops up a volume adjustment window in response to a click by the authoring object on the "volume" option, and the authoring object sets the volumes of the video frame and the audio text by sliding the volume adjustment bars for the video original sound and the script dubbing shown in the figure.
S4046: when the corresponding audio text does not exist, the client plays according to the original volume of the video frame.
When the read video frame does not have the corresponding audio text in the audio file, the client plays according to the original volume of the video frame so as to keep the original volume effect of the video frame.
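The per-frame volume decision of S4044-S4046 reduces to a small rule. The 30% figure follows fig. 17, while the returned `(video_volume, dubbing_volume)` tuple is an assumed encoding for illustration:

```python
def preview_volumes(has_audio_text, reduced=0.3):
    """Per-frame preview volume rule: when the video frame has
    corresponding audio text in the audio file, the original volume is
    reduced (to 30% here) and the script dubbing plays at full volume;
    otherwise the frame keeps its original volume and no dubbing
    plays. Returns (video_volume, dubbing_volume) as fractions."""
    if has_audio_text:
        return reduced, 1.0
    return 1.0, 0.0
```

A volume adjustment window, as in fig. 17, would simply override these defaults per frame.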
In some embodiments, when the client plays a set of video frames associated with a segment script index in the preview sub-window, referring to fig. 18, the following operations may also be performed:
s4047: the client determines the subtitle type corresponding to the segment script index.
Optionally, the client provides different subtitle types based on the video format of the target video, to make previewing convenient. Accordingly, in executing S4047, the client determines the subtitle type corresponding to the segment script index based on the target video to which it corresponds.
For example, as shown in fig. 19, the client provides three subtitle types: none, script, and original. The "none" type means that when the video frame set is played, neither the subtitles contained in the video frames nor the audio text in the audio file corresponding to the segment script index is displayed; the "script" type means that the audio text in the audio file corresponding to the segment script index is displayed; the "original" type means that the subtitles contained in each video frame are displayed.
S4048: and the client terminal processes the playing effect on the subtitles of the video frame set played in the preview sub-window based on the subtitle type.
In implementation, the client determines, based on the subtitle type corresponding to the segment script index, whether the subtitles of the video frame set can be controlled independently. If so, the audio text of the video frames in the corresponding audio file is displayed directly; otherwise, for each video frame in the video frame set, the client identifies the subtitle contained in the video frame, blurs it, and then displays the audio text of the video frame in the corresponding audio file as the subtitle.
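The subtitle-type dispatch of S4047-S4048 can be sketched as follows; the string labels and the `None`-for-hidden encoding are illustrative assumptions:

```python
def render_subtitle(subtitle_type, frame_subtitle, audio_text):
    """Subtitle dispatch following fig. 19: "none" hides both the
    frame's own subtitle and the audio text, "script" shows the audio
    text of the segment script index, and "original" shows the
    subtitle contained in the video frame itself."""
    if subtitle_type == "none":
        return None
    if subtitle_type == "script":
        return audio_text
    return frame_subtitle  # "original" type
```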
It should be noted that, in the embodiment of the present application, when previewing the video frame set, the executed flow in fig. 16 (S4043-S4046) and fig. 18 (S4047-S4048) is not limited to the strict execution sequence, and the subtitle may be adjusted first and then the volume may be adjusted.
In the embodiment of the application, after each segment script index is respectively associated with a corresponding video frame set, based on the preview function provided by the client, the authoring object can watch the authoring result of the video script in the process of authoring the video script, thereby avoiding unnecessary reworking operation after authoring, reducing reworking cost and improving the authoring efficiency of the video script.
In the embodiment of the application, after each segment script index has been previewed, the target video script is generated and stored in memory in the manner shown in Table 1.
Table 1: Storage of the target video script in mode 2

Script ID | Script index content | Associated video frame set | Start time and duration of video clip | Audio file
ID_1 | Content 1 | Video frame set 1 | (t1, t2) | Dubbing 1
ID_2 | Content 2 | Video frame set 2 | (t3, t4) | Dubbing 2
ID_3 | Content 3 | Video frame set 3 | (t5, t6) | Dubbing 3
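One possible in-memory shape for the rows of Table 1 is sketched below; the field names and the use of a dataclass are assumptions, since the embodiment only specifies the columns:

```python
from dataclasses import dataclass

@dataclass
class ScriptRecord:
    """One row of Table 1: how a generated target video script could
    be kept in memory in mode 2."""
    script_id: str
    index_content: str
    frame_set: str
    clip_span: tuple  # (start time, duration) of the video clip
    audio_file: str

script_store = [
    ScriptRecord("ID_1", "Content 1", "Video frame set 1", ("t1", "t2"), "Dubbing 1"),
    ScriptRecord("ID_2", "Content 2", "Video frame set 2", ("t3", "t4"), "Dubbing 2"),
    ScriptRecord("ID_3", "Content 3", "Video frame set 3", ("t5", "t6"), "Dubbing 3"),
]
```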
When target video scripts need to be authored for a plurality of target videos that are associated with one another, the script indexes corresponding to each target video script are stored as shown in Table 2.
Table 2: Storage of the video scripts of a plurality of target videos

Target video ID | Script ID | Sequence number of target video
1 | ID_1 |
2 | ID_2 |
3 | ID_3 |
It should be noted that, when the target video is a television series, one segment script index may be applied to a plurality of target videos. For example, the target video of the 1 st set contains pictures of basketball games of women, and the target video of the 2 nd set also contains pictures of basketball games of women, so that a segment script index in the target video of the 1 st set can be applied to the target video of the 2 nd set.
In some embodiments, after previewing the video frame sets associated with each segment script index and generating the target video script corresponding to the target video, referring to fig. 20, the client further performs the following operations:
s405: and the client side imports each segment script index and the audio file and video frame set corresponding to each segment script index into the editor.
In S405, after the video frame sets associated with the segment script indexes are previewed without errors, a target video script corresponding to the target video is generated, and further, the client side imports the segment script indexes, and the audio files and the video frame sets corresponding to the segment script indexes into the editor.
In particular, as shown in fig. 21, in response to clicking operation of the authoring object on the "clip" option (shown in bold in fig. 21) in the script configuration area 503, the client imports each segment script index, and the audio file and video frame set corresponding to each segment script index, into the editor in different tracks.
S406: and the client generates a short video corresponding to the target video script through the editor.
And after the segment script indexes and the audio files and video frame sets corresponding to the segment script indexes are imported into the editor, the client generates the short video corresponding to the target video script through the editor.
It should be noted that, after each segment script index and the audio file and video frame set corresponding to each segment script index are imported into the editor, the authoring object may perform a secondary editing operation in the editor. For example, operations such as adding special effects to video frames, performing special effect display on keywords in subtitles, and the like are performed, so that the viewing quality of short videos is improved.
In the embodiment of the application, the client can realize seamless connection of creation and editing, namely, after the target video script is created, the creation of the short video can be performed; in addition, the preview function is supported in the creation process, so that the detail adjustment (such as subtitle adjustment and volume adjustment) in the creation of the short video is simplified, and the production efficiency and quality of the short video are improved.
In some embodiments, the target video script may be implemented by one authoring object or by a plurality of authoring objects in cooperation, which requires the client to support the multi-object authoring function.
As shown in fig. 22, in response to the clicking operation of the "share" option (circled in fig. 22) in the script configuration area 503 by the authoring object a, the client transmits the target video, the authored script index, and at least one of the audio file and the video frame set corresponding to the authored script index to the clients corresponding to the other authoring objects, and the other clients complete the authoring of the remaining script indexes.
Referring to fig. 23, which is an overall framework diagram of a plurality of objects collaboratively authoring a target video script: the sharing object clicks the "share" option in the script configuration area 503 of the client and copies a link to the video script, where the link includes information such as the target video, the authored script indexes, and the audio files and video frame sets corresponding to them; the client then sends the link to the clients corresponding to the receiving objects using instant messaging (IM) technology. After another client receives the link, in response to an opening operation on the link, it determines whether the receiving object is logged in to the video script authoring software. If no login is detected, a login window pops up so that the receiving object can continue authoring the video script after logging in. If a login is detected, the client determines whether the link is still valid: if so, a permission popup is displayed to inform the receiving object of its authoring permissions or of the content to be authored, and script authoring continues after the receiving object closes the popup; if not, a popup with a failure prompt is displayed and the client returns to the home page. Authoring the video script through multi-person online collaboration further improves authoring efficiency.
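The branching when a receiving client opens a shared link can be sketched as follows; the returned action labels are illustrative, not UI identifiers from the embodiment:

```python
def open_shared_link(logged_in, link_valid):
    """Decide what the receiving client shows next when a shared
    video-script link is opened (flow of fig. 23)."""
    if not logged_in:
        return "show_login_window"      # log in, then continue authoring
    if link_valid:
        return "show_permission_popup"  # inform of authoring permissions
    return "show_failure_prompt"        # link expired: prompt, return home
```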
The clients in the above embodiments of the present application include, but are not limited to, mobile phones, computers, intelligent voice interaction devices, intelligent home appliances, vehicle terminals, aircrafts, and the like.
Based on the same inventive concept, the embodiment of the application also provides an authoring device of a video script, which can be the client in the previous embodiment. Based on the above embodiments, referring to fig. 24, an authoring apparatus for video scripts according to an embodiment of the present application specifically includes:
a playing module 2401, configured to respond to a triggering operation for a target video, and play the target video in a video playing area of a first interface;
the script configuration module 2402 is configured to, during the playing of the target video, perform the following operations in response to a script authoring instruction triggered for one video clip: based on the currently received script creation indication, presenting a script index associated with the existence of content of the video clip targeted by the script creation indication in a script configuration area of the first interface;
the association module 2403 is configured to obtain each script index associated with the target video, segment each script index, and associate, for each segment script index, the segment script index with a video frame set extracted from a corresponding video segment;
The generating module 2404 is configured to generate a target video script corresponding to the target video based on the segment script indexes and the associated video frame sets.
Optionally, the association module 2403 is specifically configured to:
obtaining an audio file generated based on the segment script index;
extracting a video frame set conforming to the dubbing duration from the video segments associated with the segment script index based on the dubbing duration of the audio file;
and associating the video frame set with the segment script index.
Optionally, the association module 2403 is specifically configured to:
and determining the dubbing duration of the audio file based on at least one of the word number of the segment script index, the selected dubbing role and the selected speech speed.
Optionally, the apparatus further includes a duration determination module 2405 configured to:
for each segment script index, the following operations are respectively executed: determining a target duration of the segment script index based on the maximum value of the dubbing duration of the audio file corresponding to the segment script index and the play total duration of the video frame set corresponding to the segment script index;
and determining the total preview time length of all the segment script indexes based on the respective target time lengths of all the segment script indexes.
Optionally, the apparatus further includes a preview module 2406 for:
for each segment script index, the following operations are respectively executed:
in response to a preview operation for the segment script index, popping up a preview sub-window in the first interface, the preview sub-window overlaying the video play area and the script configuration area;
and in the preview sub-window, playing the video frame set associated with the segment script index.
Optionally, the preview module 2406 is further configured to:
when the total playing duration of the video frame set is longer than the dubbing duration of the audio file corresponding to the segment script index, filling the part exceeding the dubbing duration with a preset video frame;
and when the total playing duration of the video frame set is less than or equal to the dubbing duration of the audio file corresponding to the segment script index, not performing dubbing processing on the part exceeding the total playing duration.
Optionally, the preview module 2406 is further configured to:
reading each video frame in the video frame set in sequence to preview, wherein each time a video frame is read, the following operations are executed:
determining whether the video frame has corresponding audio text in the audio file corresponding to the associated segment script index;
When the corresponding audio text exists, the original volume of the video frame is reduced and played;
and when the corresponding audio text does not exist, playing according to the original volume of the video frame.
Optionally, the preview module 2406 is further configured to:
determining the subtitle type corresponding to the segment script index;
and based on the subtitle type, performing play effect processing on the subtitles of the video frame set played in the preview sub-window.
Optionally, the preview module 2406 is specifically configured to:
determining whether subtitles in the set of video frames are individually controllable based on the subtitle type;
when the subtitles in the video frame set can be controlled independently, directly displaying audio texts of the video frames in the video frame set in corresponding audio files;
when the subtitles in the video frame set cannot be controlled independently, identifying the subtitles contained in the video frames for each video frame in the video frame set, and displaying the audio text of the video frames in the corresponding audio files as the subtitles after blurring the subtitles contained in the video frames.
Optionally, the apparatus further includes an import module 2407 for:
importing the segment script indexes and the audio files and the video frame sets corresponding to the segment script indexes into an editor;
and generating the short video corresponding to the target video script through the editor.
As an embodiment, the apparatus in fig. 24 may be used in the method for authoring a video script provided in the embodiment of the present application, and may achieve the same technical effects, which are not described herein again.
As a hardware entity, the apparatus described above may be implemented as the electronic device shown in fig. 25, which includes a processor 2501, a storage medium 2502, and a display 2503; the processor 2501, the storage medium 2502, and the display 2503 are all connected by a bus 2504.
The storage medium 2502 has stored therein a computer program;
the processor 2501, when executing the computer program, implements a video script authoring method as previously discussed.
Fig. 25 illustrates one processor 2501 as an example; in practice, the number of processors 2501 is not limited.
The storage medium 2502 may be a volatile memory, such as a random-access memory (RAM); the storage medium 2502 may also be a non-volatile memory, such as a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The storage medium 2502 may also be a combination of the above memories.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, system, or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to the application. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
It will be apparent to those skilled in the art that various modifications and variations can be made to the present application without departing from the spirit or scope of the application. Thus, it is intended that the present application also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (14)

1. A method of authoring a video script, the method comprising:
responding to triggering operation for target video, and playing the target video in a video playing area of a first interface;
in the playing process of the target video, each time a script creation indication triggered for one video clip is responded to, the following operations are executed: based on the currently received script creation indication, presenting, in a script configuration area of the first interface, a script index content-associated with the video clip targeted by the script creation indication;
obtaining each script index associated with the target video, segmenting each script index, and associating the segmentation script index with a video frame set extracted from a corresponding video segment for each segmentation script index;
and generating a target video script corresponding to the target video based on the segment script indexes and the associated video frame sets.
2. The method of claim 1, wherein associating the segment script index with the set of extracted video frames in the corresponding video segment comprises:
Obtaining an audio file generated based on the segment script index;
extracting a video frame set conforming to the dubbing duration from the video segments associated with the segment script index based on the dubbing duration of the audio file;
and associating the video frame set with the segment script index.
3. The method of claim 2, wherein the dubbing duration of the audio file is determined by:
and determining the dubbing duration of the audio file based on at least one of: the word count of the segment script index, the selected dubbing role, and the selected speech speed.
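One plausible realization of this claim is a character-rate model, shown below. The base rates, role names, and formula are assumptions for illustration; the patent leaves the concrete mapping unspecified.

```python
ROLE_BASE_RATE = {"narrator": 4.0, "fast_host": 6.0}  # assumed chars/words per second

def dubbing_duration(word_count, role, speed_factor):
    """Duration grows with the script's word count and shrinks as the
    selected speech speed increases; the dubbing role sets the base rate."""
    return word_count / (ROLE_BASE_RATE[role] * speed_factor)
```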
4. The method of claim 2, wherein after associating the segment script index with the set of extracted video frames in the corresponding video segment, the method further comprises:
for each segment script index, the following operations are respectively executed: determining a target duration of the segment script index based on the maximum of the dubbing duration of the audio file corresponding to the segment script index and the total play duration of the video frame set corresponding to the segment script index;
and determining the total preview duration of all the segment script indexes based on the respective target durations of all the segment script indexes.
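The duration rule in this claim reduces to a max-then-sum computation, sketched below with hypothetical names (the claim itself names no functions).

```python
def target_duration(dub_seconds, play_seconds):
    # target duration of one segment script index: the larger of its
    # dubbing duration and the total play duration of its frame set
    return max(dub_seconds, play_seconds)

def total_preview_duration(segments):
    # total preview duration: sum of per-segment target durations,
    # where each segment is a (dubbing, play) duration pair in seconds
    return sum(target_duration(dub, play) for dub, play in segments)
```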
5. The method of any of claims 2-4, wherein after associating the segment script index with the set of extracted video frames in the corresponding video segment, the method further comprises:
for each segment script index, the following operations are respectively executed:
in response to a preview operation for the segment script index, popping up a preview sub-window in the first interface, the preview sub-window overlaying the video play area and the script configuration area;
and in the preview sub-window, playing the video frame set associated with the segment script index.
6. The method of claim 5, wherein in playing the set of video frames associated with the segment script index in the preview sub-window, the method further comprises:
when the total playing duration of the video frame set is longer than the dubbing duration of the audio file corresponding to the segment script index, filling the part exceeding the dubbing duration with a preset video frame;
and when the total playing duration of the video frame set is less than or equal to the dubbing duration of the audio file corresponding to the segment script index, not performing dubbing processing on the part exceeding the total playing duration.
7. The method of claim 5, wherein in playing the set of video frames associated with the segment script index in the preview sub-window, the method further comprises:
reading each video frame in the video frame set in sequence to preview, wherein each time a video frame is read, the following operations are executed:
determining whether the video frame has a corresponding audio text in an audio file corresponding to a segment script index associated with the video frame set;
when corresponding audio text exists, the original volume of the video frame is reduced, and the audio text is played;
and when the corresponding audio text does not exist, playing according to the original volume of the video frame.
8. The method of claim 5, wherein in playing the set of video frames associated with the segment script index in the preview sub-window, the method further comprises:
determining the subtitle type corresponding to the segment script index;
and based on the subtitle type, performing play effect processing on the subtitles of the video frame set played in the preview sub-window.
9. The method of claim 8, wherein the performing playback effect processing on subtitles of the set of video frames played in the preview sub-window based on the subtitle type comprises:
Determining whether subtitles in the set of video frames are individually controllable based on the subtitle type;
when the subtitles in the video frame set can be controlled independently, directly displaying, as subtitles, the audio texts of the video frames in the corresponding audio files;
when the subtitles in the video frame set cannot be controlled independently, identifying, for each video frame in the video frame set, the subtitles contained in the video frame, blurring the identified subtitles, and then displaying the audio text of the video frame in the corresponding audio file as the subtitle.
10. The method of any of claims 6-9, wherein after previewing the respective associated set of video frames for the respective segment script index, the method further comprises:
importing the segment script indexes and the audio files and the video frame sets corresponding to the segment script indexes into an editor;
and generating the short video corresponding to the target video script through the editor.
11. An authoring apparatus for video scripts, comprising:
the playing module is used for responding to the triggering operation for the target video and playing the target video in the video playing area of the first interface;
The script configuration module is used for executing the following operations in response to each script creation indication triggered for one video clip in the playing process of the target video: based on the currently received script creation indication, presenting, in a script configuration area of the first interface, a script index content-associated with the video clip targeted by the script creation indication;
the association module is used for obtaining each script index associated with the target video, segmenting each script index, and associating the segmented script index with a video frame set extracted from a corresponding video segment for each segmented script index;
and the generation module is used for generating a target video script corresponding to the target video based on the segment script indexes and the associated video frame sets.
12. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the method of any of claims 1-10 when executing the program.
13. A computer-readable storage medium having stored thereon a computer program, characterized by: the computer program, when executed by a processor, implements the method of any of claims 1-10.
14. A computer program product comprising a computer program, which, when executed by a processor, implements the method of any one of claims 1-10.
CN202210133409.5A 2022-02-08 2022-02-08 Video script creation method and device, electronic equipment and storage medium Pending CN116614655A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210133409.5A CN116614655A (en) 2022-02-08 2022-02-08 Video script creation method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210133409.5A CN116614655A (en) 2022-02-08 2022-02-08 Video script creation method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN116614655A 2023-08-18

Family

ID=87673433

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210133409.5A Pending CN116614655A (en) 2022-02-08 2022-02-08 Video script creation method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116614655A (en)

Similar Documents

Publication Publication Date Title
CN110198486B (en) Method for previewing video material, computer equipment and readable storage medium
US11546667B2 (en) Synchronizing video content with extrinsic data
US11600301B2 (en) Method and device of editing a video
CN109889882B (en) Video clip synthesis method and system
US9237322B2 (en) Systems and methods for performing selective video rendering
KR20080090218A (en) Method for uploading an edited file automatically and apparatus thereof
EP2136370B1 (en) Systems and methods for identifying scenes in a video to be edited and for performing playback
US20130083036A1 (en) Method of rendering a set of correlated events and computerized system thereof
CN113613065B (en) Video editing method and device, electronic equipment and storage medium
CN109194887B (en) Cloud shear video recording and editing method and plug-in
CN103546698B (en) A kind of mobile terminal recorded video store method and device
WO2017062961A1 (en) Methods and systems for interactive multimedia creation
CN112004137A (en) Intelligent video creation method and device
CN111797061B (en) Multimedia file processing method and device, electronic equipment and storage medium
CN112153307A (en) Method and device for adding lyrics in short video, electronic equipment and storage medium
CN112004138A (en) Intelligent video material searching and matching method and device
CN111246289A (en) Video generation method and device, electronic equipment and storage medium
CN112839258A (en) Video note generation method, video note playing method, video note generation device, video note playing device and related equipment
CN114286169B (en) Video generation method, device, terminal, server and storage medium
WO2019042217A1 (en) Video editing method and terminal
CN112214678A (en) Method and device for recommending short video information
US10284883B2 (en) Real-time data updates from a run down system for a video broadcast
CN107241618B (en) Recording method and recording apparatus
CN113518187A (en) Video editing method and device
CN111787188B (en) Video playing method and device, terminal equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40092623

Country of ref document: HK