CN117880601A - Video generation method, device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN117880601A
CN117880601A
Authority
CN
China
Prior art keywords: sub, video, subtitles, subtitle, splitting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410032933.2A
Other languages
Chinese (zh)
Inventor
谢守涛
李志飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mobvoi Information Technology Co Ltd
Original Assignee
Mobvoi Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Mobvoi Information Technology Co Ltd filed Critical Mobvoi Information Technology Co Ltd
Priority to CN202410032933.2A priority Critical patent/CN117880601A/en
Publication of CN117880601A publication Critical patent/CN117880601A/en
Pending legal-status Critical Current


Landscapes

  • Studio Circuits (AREA)

Abstract

The embodiment of the invention discloses a video generation method, a video generation device, an electronic device, and a storage medium. Input data including a subtitle file is acquired; the subtitle file is split into a plurality of sub-subtitles; sub-videos corresponding to the sub-subtitles are generated; and a target video is acquired from the sub-videos. In this way, the subtitles can be synthesized in parallel, reducing the time required to generate the video.

Description

Video generation method, device, electronic equipment and storage medium
Technical Field
The present invention relates to the field of computer vision, and in particular, to a video generating method, apparatus, electronic device, and storage medium.
Background
Video generation technology uses computer vision and video processing techniques to convert background audio and subtitles into video. It is widely applied in film production, advertising, online education, and other fields, enabling fast, efficient, and low-cost video production.
End-to-end latency is an important indicator in the video generation process; it refers to the time required from providing the background audio, subtitles, and video resolution parameters to obtaining the generated video. The longer the subtitles, the longer the end-to-end latency, and hence the longer it takes to generate the video.
Disclosure of Invention
In view of this, embodiments of the present invention provide a video generating method, apparatus, electronic device, and storage medium, which can implement parallel synthesis of subtitles and reduce the time for generating video.
In a first aspect, an embodiment of the present invention provides a video generating method, where the method includes:
acquiring input data, wherein the input data comprises a subtitle file;
splitting the subtitle file into a plurality of sub subtitles;
generating sub videos corresponding to the sub subtitles;
and acquiring a target video according to the sub video.
In some embodiments, the input data further includes an audio file and video resolution parameters.
In some embodiments, the generating the sub video corresponding to each sub subtitle specifically includes:
and generating sub videos corresponding to the sub subtitles according to the video resolution parameters.
In some embodiments, the obtaining the target video from the sub-video comprises:
merging the sub-videos to obtain an intermediate video;
and merging the intermediate video and the audio file to acquire a target video.
In some embodiments, the splitting the subtitle file into a plurality of sub-subtitles includes:
determining the number of sub-subtitles;
splitting the subtitle file into a plurality of sub-subtitles according to the duration of the subtitle file and the number of the sub-subtitles.
In some embodiments, the splitting the subtitle file into a plurality of sub-subtitles includes:
determining the duration of sub-subtitles;
splitting the subtitle file into a plurality of sub-subtitles according to the duration of the subtitle file and the duration of the sub-subtitles.
In some embodiments, the obtaining the target video according to the sub-video specifically includes:
and merging all the sub-videos in sequence to acquire the target video.
In a second aspect, an embodiment of the present invention provides a video generating apparatus, including:
the input unit is used for acquiring input data, wherein the input data comprises a subtitle file;
a splitting unit for splitting the subtitle file into a plurality of sub subtitles;
the sub video generation unit is used for generating sub videos corresponding to the sub subtitles;
and the target video acquisition unit is used for acquiring the target video according to the sub video.
In a third aspect, embodiments of the present invention provide a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement a method according to the first aspect.
In a fourth aspect, an embodiment of the present invention provides an electronic device comprising a memory and a processor, the memory storing one or more computer program instructions, wherein the one or more computer program instructions are executed by the processor to implement the method as described in the first aspect.
According to the technical scheme of the embodiments of the present invention, input data including a subtitle file is acquired, the subtitle file is split into a plurality of sub-subtitles, sub-videos corresponding to the sub-subtitles are generated, and a target video is acquired from the sub-videos. Thus, the subtitles can be synthesized in parallel, reducing the time required to generate the video.
Drawings
The above and other objects, features and advantages of the present invention will become more apparent from the following description of embodiments of the present invention with reference to the accompanying drawings, in which:
FIG. 1 is a flow chart of a video generation method of one embodiment of the present invention;
fig. 2 is a flowchart of splitting a subtitle file into a plurality of sub-subtitles according to the number of sub-subtitles, in an embodiment of the present invention;
fig. 3 is a flowchart of splitting a subtitle file into a plurality of sub-subtitles according to the duration of the sub-subtitles, in an embodiment of the present invention;
FIG. 4 is a flow chart of a method for acquiring a target video from the sub-videos and the audio file according to an embodiment of the invention;
FIG. 5 is a flow chart of a video generation method according to another embodiment of the present invention;
fig. 6 is a schematic diagram of a video generating apparatus according to an embodiment of the present invention;
fig. 7 is a schematic diagram of an electronic device according to an embodiment of the invention.
Detailed Description
The present application is described below based on examples, but the present application is not limited to only these examples. In the following detailed description of the present application, certain specific details are set forth. Those skilled in the art will fully understand the present application even without the description of these details. Well-known methods, procedures, flows, components, and circuits have not been described in detail so as not to obscure the essence of the present application.
Moreover, those of ordinary skill in the art will appreciate that the drawings are provided herein for illustrative purposes and that the drawings are not necessarily drawn to scale.
Unless the context clearly requires otherwise, the words "comprise," "comprising," and the like throughout the application are to be construed in an inclusive rather than an exclusive or exhaustive sense; that is, in the sense of "including, but not limited to."
In the description of the present application, it should be understood that the terms "first," "second," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Furthermore, in the description of the present application, unless otherwise indicated, the meaning of "a plurality" is two or more.
In the video generation process, after files such as the subtitle file and the audio are provided, they are generally processed by a multimedia processing framework to obtain the video. However, it takes some time from providing the background audio, subtitle, and other files to generating the complete green-screen video; this time is the end-to-end latency.
End-to-end delay refers to the time required from the video source end to the final display end. This time includes a series of processing and transmission links such as video encoding, transmission, decoding, and display. In practical applications, end-to-end delay is a very important parameter, which directly affects the viewing experience and interaction effect of the user.
When a multimedia processing framework such as FFmpeg is used to convert background audio and subtitles into video, a long input audio track or subtitle file leads to a correspondingly long end-to-end latency. Moreover, FFmpeg's CPU utilization for a single task is at most about 3 cores, so performance hits a bottleneck and the user experience deteriorates.
Fig. 1 is a flowchart of a video generation method according to an embodiment of the present invention. The video generating method shown in fig. 1 may be executed by a server, and specifically includes the following steps:
step S100, input data is acquired, wherein the input data comprises a subtitle file.
Specifically, the server receives a subtitle file sent by the client. A subtitle file is loaded when a video is played and is used to record and display subtitles; it is typically used together with a video file so that the corresponding subtitle content is displayed in the video. Here, the subtitle file is in the srt (SubRip Text) format. The production specification of an srt subtitle file is simple: it consists mainly of numeric time codes and subtitle text, one time code per subtitle. When a user needs to view or edit an srt file, a text editor or a dedicated srt subtitle tool can be used. During playback, the subtitles can be opened and loaded normally only in combination with video playing software and the corresponding video file. The input data may be provided in various ways, such as file import or command-line parameters, which is not limited by the embodiments of the present invention.
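The cue layout described above (a numeric index, a time-code line, then the subtitle text) can be illustrated with a minimal parser sketch. The sample cues, regular expression, and helper names below are illustrative assumptions, not part of the patent:

```python
import re

# Two hypothetical SRT cues: index, "HH:MM:SS,mmm --> HH:MM:SS,mmm", text.
SAMPLE_SRT = """1
00:00:01,000 --> 00:00:03,500
Hello, world.

2
00:00:04,000 --> 00:00:06,000
Second subtitle line.
"""

CUE_RE = re.compile(
    r"(\d+)\s*\n"
    r"(\d{2}:\d{2}:\d{2},\d{3}) --> (\d{2}:\d{2}:\d{2},\d{3})\s*\n"
    r"(.*?)(?:\n\n|\Z)",
    re.S,
)

def to_seconds(ts: str) -> float:
    """Convert an SRT time code 'HH:MM:SS,mmm' to seconds."""
    h, m, rest = ts.split(":")
    s, ms = rest.split(",")
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def parse_srt(text: str):
    """Extract (start, end, content) tuples from an SRT document."""
    return [
        (to_seconds(m.group(2)), to_seconds(m.group(3)), m.group(4).strip())
        for m in CUE_RE.finditer(text)
    ]
```

In later steps, the (start, end, content) tuples are what the splitter would operate on.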
Step S200, splitting the subtitle file into a plurality of sub-subtitles.
That is, the srt subtitle file is split to obtain a plurality of sub-subtitles, each sub-subtitle being one segment of the subtitle file. Specifically, the splitting may be performed according to a predetermined duration or a predetermined number.
Fig. 2 is a flowchart of splitting a subtitle file into a plurality of sub-subtitles according to the number of sub-subtitles, in an embodiment of the present invention. As shown in fig. 2, splitting the subtitle file into a plurality of sub-subtitles according to the number comprises the following steps:
step S210, determining the number of sub-subtitles.
That is, the number of sub-subtitles after splitting is determined. Specifically, the number n of sub-subtitles can be selected according to actual requirements, where n ≤ m and m denotes the number of tasks that can be processed in parallel. For example, for the multimedia processing framework FFmpeg, processing one task consumes 2-3 cores; taking 3 cores as an example, m = ⌊number of server cores / 3⌋, where ⌊x⌋ denotes the largest integer not exceeding x.
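The arithmetic above can be written as a small helper. The 3-cores-per-task figure is the example given in the text, not a fixed property of FFmpeg, and the function names are illustrative:

```python
def max_parallel_tasks(server_cores: int, cores_per_task: int = 3) -> int:
    """m = floor(server cores / cores consumed per FFmpeg task)."""
    return server_cores // cores_per_task

def choose_split_count(desired_n: int, server_cores: int) -> int:
    """Cap the number of sub-subtitles n at the parallelism bound m (n <= m)."""
    return max(1, min(desired_n, max_parallel_tasks(server_cores)))
```

For example, a 16-core server yields m = ⌊16/3⌋ = 5, so at most 5 sub-subtitles would be processed in parallel.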
Step S220, splitting the subtitle file into a plurality of sub-subtitles according to the duration of the subtitle file and the number of the sub-subtitles.
Assume the total duration of the subtitle file is T. Since the subtitle file is to be split into n sub-subtitles, the duration of each segment should be as close as possible to the others in order to minimize the overall execution time. If some sub-subtitles were much longer than others, then during subsequent parallel processing the shorter sub-subtitles would finish first while the longer ones were still being processed; the total working time is therefore determined by the longest sub-subtitle. To let all sub-subtitles finish at roughly the same time, their durations should be as equal as possible; since the n sub-subtitles together span the duration T of the subtitle file, the duration of each sub-subtitle should be close to T/n.
However, since the subtitles must remain coherent, the file cannot simply be cut into segments of exactly T/n. If it were, a single subtitle might be split across two segments, which would hinder subsequent synthesis. Therefore, the duration of each sub-subtitle will not be exactly T/n; it only needs to be as close to T/n as possible.
Specifically, the subtitle file is parsed first, i.e., the time stamp and content of each subtitle are extracted using a programming language or a dedicated software tool. Split points are then determined according to the number of sub-subtitles and the duration of the subtitle file. For example, if the file is to be split into 5 sub-subtitles, 4 split points must be found in the whole subtitle file, chosen so that the durations of the sub-subtitles are as close as possible: a 200-second subtitle file split into 5 sub-subtitles may yield durations of 38, 43, 41, 39, and 39 seconds in order, or 40, 41, 38, 39, and 42 seconds, which the embodiment of the present invention does not limit. The subtitle file is then split into the sub-subtitles at these split points; each sub-subtitle comprises its start and end time stamps and the corresponding text content. Finally, the sub-subtitle files are output, i.e., each sub-subtitle is stored as an independent file containing its start time stamp, end time stamp, and all subtitle text within that period.
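One way to realize the split-point search described above is a greedy pass over cue boundaries: walk the parsed cues in time order and cut whenever the accumulated duration reaches the running T/n target. This is only a sketch of one possible strategy, not the patent's prescribed algorithm; cues are assumed to be (start, end, text) tuples:

```python
def split_by_count(cues, n):
    """Greedily partition cues into n groups whose durations are close to T/n.

    `cues` is a list of (start, end, text) tuples sorted by time. Cuts only
    happen at cue boundaries, so group durations are approximate, not exact.
    """
    if not cues:
        return []
    total = cues[-1][1] - cues[0][0]   # total duration T
    target = total / n                 # ideal duration per group, T/n
    groups, current = [], []
    boundary = cues[0][0] + target     # next desired cut position
    for cue in cues:
        current.append(cue)
        # Cut after this cue once its end passes the running target; cap the
        # number of cuts at n - 1 so exactly n groups result.
        if cue[1] >= boundary and len(groups) < n - 1:
            groups.append(current)
            current = []
            boundary += target
    if current:
        groups.append(current)
    return groups
```

Each resulting group would then be written back out as an independent sub-subtitle file with its own start and end time stamps.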
Fig. 3 is a flowchart of splitting a subtitle file into a plurality of sub-subtitles according to the duration of the sub-subtitles, in an embodiment of the present invention. As shown in fig. 3, splitting the subtitle file into a plurality of sub-subtitles according to duration comprises the following steps:
step S230, determining the duration of the sub-subtitles.
That is, the duration of each split sub-subtitle is determined. Specifically, a duration may be chosen according to actual requirements; for example, the duration of one sub-subtitle may be set to 30 seconds, or to 100 seconds. The embodiments of the present invention are not limited in this regard.
Step S240, splitting the subtitle file into a plurality of sub-subtitles according to the duration of the subtitle file and the duration of the sub-subtitles.
Specifically, assume the duration of the subtitle file is T and the intended duration of each sub-subtitle is t; then the number of sub-subtitles is T/t. However, T/t is not necessarily an integer, so a rounding operation is required. The rounding may be implemented by various existing methods, such as rounding down, rounding up, or rounding to the nearest integer. Taking rounding to the nearest integer as an example, if the subtitle file is 200 seconds long and each sub-subtitle should be about 30 seconds, then 7 sub-subtitles are required, and the average sub-subtitle duration is therefore slightly under 30 seconds. The embodiments of the present invention are not limited in this regard.
As described in step S220, the durations of the sub-subtitles are not necessarily equal; the specific duration of each sub-subtitle is determined by the specific subtitle file, which is not repeated here.
Specifically, the subtitle file is parsed first, i.e., the time stamp and content of each subtitle are extracted using a programming language or a dedicated software tool. Split points are then determined according to the predetermined duration. For example, if each sub-subtitle should be about 30 seconds long, the time stamps bounding each roughly 30-second period must be found, and the exact duration of each sub-subtitle is determined by the specific subtitle file. For example, when a 200-second subtitle file is split into 7 segments, the sub-subtitle durations may be 28, 29, 26, 27, 31, 29, and 30 seconds in order, or 26, 28, 30, 29, 28, 29, and 30 seconds, which the embodiment of the present invention does not limit. After the time stamps are found, the subtitle file is split into the sub-subtitles at the split points; each sub-subtitle comprises its start and end time stamps and the corresponding text content. Finally, the sub-subtitle files are output, i.e., each sub-subtitle is stored as an independent file containing its start time stamp, end time stamp, and all subtitle text within that period.
After the splitting is completed, all sub-subtitle files are checked to ensure that they are correctly split according to the predetermined duration and that no subtitles are missing or duplicated.
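The duration-based variant can be sketched the same way: derive the part count by rounding T/t to the nearest integer (as in the example above), then distribute cues greedily over cue boundaries. Again a sketch under the assumption that cues are (start, end, text) tuples:

```python
def split_by_duration(cues, t):
    """Split cues into groups of roughly t seconds each.

    The part count is round(T / t); since cuts only fall on cue boundaries,
    each group's duration is near t rather than exactly t.
    """
    if not cues:
        return []
    total = cues[-1][1] - cues[0][0]   # total duration T
    n = max(1, round(total / t))       # rounding to the nearest integer
    target = total / n                 # effective per-group duration target
    groups, current = [], []
    boundary = cues[0][0] + target
    for cue in cues:
        current.append(cue)
        if cue[1] >= boundary and len(groups) < n - 1:
            groups.append(current)
            current = []
            boundary += target
    if current:
        groups.append(current)
    return groups
```

With a 200-second file and t = 30, round(200/30) gives 7 groups, matching the worked example in the text.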
Step S300, generating sub-videos corresponding to the sub-subtitles.
Video generation is performed on the obtained sub-subtitles, producing sub-videos that correspond to the sub-subtitles one to one. A sub-video is a video file generated from its corresponding sub-subtitle.
In particular, sub-subtitles can be directly converted into sub-videos by FFmpeg.
FFmpeg is driven through its command-line interface: its command-line tools perform various operations such as converting video files to other formats, adding subtitles, cropping video, and adjusting audio volume.
In the embodiment of the invention, the sub-subtitles are first supplied through the command-line tool, and FFmpeg reads and parses the subtitle data. FFmpeg then processes the subtitle data, performing the necessary settings and conversions to ensure that the subtitles are synchronized with the video content. Next, FFmpeg composites the processed sub-subtitles with a blank video background, superimposing them at the bottom or top of the video while keeping them synchronized in time. Finally, the composited video and sub-subtitles are output as a sub-video, in an output format selected by the user as needed. Through FFmpeg, video can thus be generated from subtitles, which provides flexible subtitle-processing capability for video production and makes the video content richer and easier to understand.
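A command of the kind described might be assembled as below: an FFmpeg invocation that generates a solid-color background of the requested resolution with the lavfi `color` source and burns one sub-subtitle onto it with the `subtitles` filter. The file names, default color, and filter options are illustrative assumptions, not parameters required by the patent (in particular, paths containing special characters would need escaping for the `subtitles` filter):

```python
def build_subvideo_cmd(srt_path, out_path, duration, width=1280, height=720,
                       color="green"):
    """Assemble (not run) an ffmpeg command: solid background + burned subtitles.

    The lavfi 'color' source provides the background for `duration` seconds,
    and the 'subtitles' video filter renders the sub-subtitle file on top.
    """
    return [
        "ffmpeg", "-y",
        "-f", "lavfi",
        "-i", f"color=c={color}:s={width}x{height}:d={duration}",
        "-vf", f"subtitles={srt_path}",
        out_path,
    ]

# The command would be executed with e.g. subprocess.run(cmd, check=True).
```

One such command would be issued per sub-subtitle, which is what makes the per-segment work independent and parallelizable.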
In some embodiments, the background image is a green screen, and the sub-videos acquired in this case are green-screen sub-videos.
Step S400, acquiring a target video according to the sub-videos.
Since the sub-subtitles are obtained by splitting the subtitle file, once the sub-videos corresponding to the sub-subtitles have been generated, they can be merged into a complete video, namely the target video. The target video may exclude audio, in which case the sub-videos are simply merged in order, or it may include audio, in which case the audio file is merged with the result of combining the sub-videos.
In some embodiments, the input data further includes an audio file; in this case, acquiring the target video from the sub-videos specifically includes merging the sub-videos to obtain an intermediate video, and merging the intermediate video with the audio file to obtain the target video. That is, the sub-videos are first combined into an intermediate video, and the intermediate video is then combined with the audio file to obtain the target video.
FIG. 4 is a flow chart of a method for acquiring a target video from the sub-videos and the audio file according to an embodiment of the invention. As shown in fig. 4, acquiring the target video from the sub-videos includes the following steps:
step S410, merging the sub-videos to obtain an intermediate video;
specifically, since each sub-video has a corresponding sub-subtitle, all the sub-videos are sequentially combined in order to obtain a target video, that is, the sub-videos are synthesized into an intermediate video according to the sequence of sub-subtitles when the subtitle file is split into sub-subtitles. For example, in an alternative embodiment, the subtitle file is split into 5 sub-subtitles, which are sub-subtitle 1, sub-subtitle 2, sub-subtitle 3, sub-subtitle 4, and sub-subtitle 5, respectively, in order of the subtitle file. After generating the sub-video 1, the sub-video 2, the sub-video 3, the sub-video 4 and the sub-video 5 corresponding to each sub-subtitle in step S300, merging the sub-videos according to the sequence, and thus obtaining the intermediate video.
Step S420, merging the intermediate video and the audio file to obtain a target video.
First, the intermediate video and the audio file are imported into the corresponding software or editor. Their timelines must then be aligned, i.e., the time stamps of the audio and video are adjusted so that they match at the correct positions and play seamlessly together. Once the video and audio are synchronized, they are combined into a single target video, i.e., the audio file and the intermediate video are mixed so that they are presented as a whole during playback. Finally, the combined target video is exported in a suitable format, with parameters such as output format, resolution, and bit rate selected, yielding the output video.
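The final mux step can likewise be sketched as an FFmpeg invocation that keeps the video stream and attaches the audio. The codec choices and the `-shortest` flag here are illustrative assumptions about one reasonable configuration:

```python
def build_mux_cmd(video_path, audio_path, out_path):
    """Merge the intermediate video with the audio file into the target video.

    '-c:v copy' keeps the video stream as-is; '-shortest' trims the output
    to the shorter of the two inputs so audio and video stay aligned.
    """
    return ["ffmpeg", "-y",
            "-i", video_path,
            "-i", audio_path,
            "-c:v", "copy", "-c:a", "aac",
            "-shortest", out_path]
```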
Fig. 5 is a flowchart of a video generating method according to another embodiment of the present invention. As shown in fig. 5, the video generating method is performed according to the following steps:
first, a subtitle file and an audio file are input, and the subtitle file is divided into sub-subtitle 1, sub-subtitle 2, … …, sub-subtitle n by a subtitle divider. And then for each sub-subtitle, the sub-subtitles are processed in parallel through a multimedia processing framework to obtain a sub-video 1 and a sub-video 2 … … sub-video n. And combining all the sub videos to obtain an intermediate video, and finally combining the audio file and the intermediate video through an audio-video combiner to obtain a target video.
Fig. 6 is a schematic diagram of a video generating apparatus according to an embodiment of the present invention. As shown in fig. 6, the apparatus includes an input unit 61, a splitting unit 62, a sub-video generating unit 63, and a target video acquiring unit 64. Wherein the input unit 61 is configured to obtain input data, where the input data includes a subtitle file. The splitting unit 62 is used for splitting the subtitle file into a plurality of sub-subtitles. The sub-video generation unit 63 is configured to generate sub-videos corresponding to the sub-subtitles. The target video acquisition unit 64 is configured to acquire a target video from the sub-video.
In some embodiments, the input data further includes an audio file and video resolution parameters.
In some embodiments, the sub-video generating unit specifically includes:
and generating sub videos corresponding to the sub subtitles according to the video resolution parameters.
In some embodiments, the target video acquisition unit comprises:
an intermediate video acquisition subunit, configured to combine the sub-videos to acquire an intermediate video;
and the merging subunit is used for merging the intermediate video and the audio file to acquire a target video.
In some embodiments, the splitting unit comprises:
a number determination subunit configured to determine the number of sub-subtitles;
and the first splitting subunit is used for splitting the subtitle file into a plurality of sub-subtitles according to the duration of the subtitle file and the number of the sub-subtitles.
In some embodiments, the splitting unit comprises:
a duration determining subunit, configured to determine a duration of the sub-subtitle;
and the second splitting subunit is used for splitting the subtitle file into a plurality of sub-subtitles according to the duration of the subtitle file and the duration of the sub-subtitles.
In some embodiments, the target video acquisition unit is specifically:
and merging all the sub-videos in sequence to acquire the target video.
According to the embodiments of the present invention, input data including a subtitle file is acquired, the subtitle file is split into a plurality of sub-subtitles, a sub-video corresponding to each sub-subtitle is generated, and the target video is acquired from the sub-videos. Thus, the subtitles can be synthesized in parallel, reducing the time required to generate the video.
Fig. 7 is a schematic diagram of an electronic device according to an embodiment of the invention. As shown in fig. 7, the electronic device has a general computer hardware structure that includes at least a processor 71 and a memory 72, connected by a bus 73. The memory 72 is adapted to store instructions or programs executable by the processor 71. The processor 71 may be a standalone microprocessor or a collection of one or more microprocessors. The processor 71 thus carries out the process flow of the embodiments of the present invention described above by executing the instructions stored in the memory 72, thereby processing data and controlling other devices. The bus 73 connects the above components together and also connects them to a display controller 74, display devices, and input/output (I/O) devices 75. The input/output (I/O) devices 75 may be a mouse, keyboard, modem, network interface, touch input device, somatosensory input device, printer, or other devices known in the art. Typically, the input/output devices 75 are connected to the system through an input/output (I/O) controller 76.
It will be appreciated by those skilled in the art that embodiments of the present application may be provided as a method, apparatus (device) or computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present application may employ a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, etc.) having computer-usable program code embodied therein.
The present application is described with reference to flowchart illustrations of methods, apparatus (devices) and computer program products according to embodiments of the application. It will be understood that each of the flows in the flowchart may be implemented by computer program instructions.
These computer program instructions may be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows.
These computer program instructions may also be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows.
Another embodiment of the present invention is directed to a non-volatile storage medium storing a computer readable program for causing a computer to perform some or all of the method embodiments described above.
That is, it will be understood by those skilled in the art that all or part of the steps of the method embodiments described above may be implemented by a program instructing the relevant hardware, the program being stored in a storage medium and including several instructions for causing a device (which may be a single-chip microcomputer, a chip, or the like) or a processor to perform all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modifications, equivalent substitutions, improvements, etc. that fall within the spirit and principles of the present application are intended to be included within the scope of the present application.

Claims (10)

1. A method of video generation, the method comprising:
acquiring input data, wherein the input data comprises a subtitle file;
splitting the subtitle file into a plurality of sub subtitles;
generating sub videos corresponding to the sub subtitles;
and acquiring a target video according to the sub video.
2. The method of claim 1, wherein the input data further comprises an audio file and video resolution parameters.
3. The method according to claim 2, wherein the generating the sub-video corresponding to each sub-subtitle comprises:
generating the sub-video corresponding to each sub-subtitle according to the video resolution parameters.
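One way to realize per-resolution sub-video generation is the ffmpeg CLI; the patent names no tool, so the helper below is a hypothetical command builder using ffmpeg's lavfi `color` source as a background and its `subtitles` filter to burn in the cues.

```python
def sub_video_cmd(srt_path, width, height, duration, out_path):
    # Build (but do not run) an ffmpeg command that renders one sub-subtitle
    # file onto a solid background at the requested resolution.
    return [
        "ffmpeg",
        "-f", "lavfi",
        "-i", f"color=c=black:s={width}x{height}:d={duration}",  # blank canvas
        "-vf", f"subtitles={srt_path}",                          # burn in the cues
        out_path,
    ]
```

The same resolution parameters would be passed to every parallel worker so the pieces can later be concatenated without re-encoding.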
4. The method of claim 2, wherein the acquiring the target video according to the sub-videos comprises:
merging the sub-videos to obtain an intermediate video;
and merging the intermediate video and the audio file to obtain the target video.
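The two merging steps of claim 4 map naturally onto two ffmpeg invocations. These command builders are an illustrative assumption, since the patent does not specify a tool: first the concat demuxer joins the sub-videos without re-encoding, then the audio file is muxed in.

```python
def concat_cmd(list_file, out_path):
    # ffmpeg concat demuxer: list_file contains lines like "file 'part0.mp4'".
    return ["ffmpeg", "-f", "concat", "-safe", "0",
            "-i", list_file, "-c", "copy", out_path]

def mux_audio_cmd(video_path, audio_path, out_path):
    # Copy the video stream, encode the audio, and stop at the shorter input.
    return ["ffmpeg", "-i", video_path, "-i", audio_path,
            "-c:v", "copy", "-c:a", "aac", "-shortest", out_path]
```

Stream-copying (`-c copy`, `-c:v copy`) keeps both merges cheap, which preserves the time saved by generating the sub-videos in parallel.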
5. The method of claim 1, wherein the splitting the subtitle file into a plurality of sub-subtitles comprises:
determining the number of sub-subtitles;
and splitting the subtitle file into the plurality of sub-subtitles according to the duration of the subtitle file and the number of sub-subtitles.
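A minimal sketch of the count-based split of claim 5, assuming each cue is a `(start_sec, end_sec, text)` tuple, a format the claim does not specify: the total duration is divided into equal windows and each cue is assigned by its start time.

```python
def split_by_count(cues, total_duration, num_parts):
    # Equal time windows; each cue lands in exactly one sub-subtitle.
    window = total_duration / num_parts
    parts = [[] for _ in range(num_parts)]
    for start, end, text in cues:
        idx = min(int(start // window), num_parts - 1)  # clamp the last window
        parts[idx].append((start, end, text))
    return parts
```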
6. The method of claim 1, wherein the splitting the subtitle file into a plurality of sub-subtitles comprises:
determining the duration of each sub-subtitle;
and splitting the subtitle file into the plurality of sub-subtitles according to the duration of the subtitle file and the duration of each sub-subtitle.
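The duration-based variant of claim 6 can be sketched the same way, again assuming `(start_sec, end_sec, text)` cues: the number of sub-subtitles follows from the two durations, and cues are bucketed by start time.

```python
import math

def split_by_duration(cues, total_duration, part_duration):
    # Derive the sub-subtitle count from the two durations, then bucket
    # each cue into the window covering its start time.
    num_parts = math.ceil(total_duration / part_duration)
    parts = [[] for _ in range(num_parts)]
    for start, end, text in cues:
        idx = min(int(start // part_duration), num_parts - 1)
        parts[idx].append((start, end, text))
    return parts
```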
7. The method according to claim 1, wherein the acquiring the target video according to the sub-videos comprises:
merging all the sub-videos in sequence to acquire the target video.
8. A video generation apparatus, the apparatus comprising:
an input unit configured to acquire input data, wherein the input data comprises a subtitle file;
a splitting unit configured to split the subtitle file into a plurality of sub-subtitles;
a sub-video generation unit configured to generate a sub-video corresponding to each sub-subtitle;
and a target video acquisition unit configured to acquire a target video according to the sub-videos.
9. A computer-readable storage medium having computer program instructions stored thereon which, when executed by a processor, implement the method of any one of claims 1-7.
10. An electronic device comprising a memory and a processor, wherein the memory is configured to store one or more computer program instructions, and wherein the one or more computer program instructions are executed by the processor to implement the method of any one of claims 1-7.
CN202410032933.2A 2024-01-09 2024-01-09 Video generation method, device, electronic equipment and storage medium Pending CN117880601A (en)

Publications (1)

Publication Number Publication Date
CN117880601A 2024-04-12

Family

ID=90594275



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination