CN109788335B - Video subtitle generating method and device

Info

Publication number: CN109788335B
Application number: CN201910167851.8A
Authority: CN (China)
Prior art keywords: subtitle, time, character, caption, total
Other languages: Chinese (zh)
Other versions: CN109788335A
Inventors: 李涛, 陈云贵
Current Assignee: Zhengzhou Apas Technology Co ltd
Original Assignee: Zhuhai Tianyan Technology Co ltd
Application filed by Zhuhai Tianyan Technology Co ltd
Priority: CN201910167851.8A
Publication of application: CN109788335A
Application granted; publication of grant: CN109788335B
Legal status: Active

Abstract

The embodiment of the application provides a video subtitle generating method and device, wherein the method includes: in response to a subtitle generation instruction of a user, acquiring each subtitle character input by the user, determining the subtitle template selected by the user, and determining the subtitle start time selected by the user on the video stream in which subtitles are to be drawn; acquiring the subtitle time attribute information and subtitle style attribute information corresponding to the subtitle template, as well as the timestamp information of the video stream; determining the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle start time; and drawing each subtitle character in the video stream according to the subtitle style attribute information, the time ranges of the animation stages of each subtitle character, and the timestamp information of the video stream. These embodiments improve the efficiency of generating video subtitles and reduce the workload required for generating subtitles.

Description

Video subtitle generating method and device
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method and an apparatus for generating video subtitles.
Background
With the development of Internet technology, users can add subtitles to a video on their own, for example introductory text at the beginning of a video or a cast list at the end. At present, subtitles are generally added to a video as follows: an operator manually locates the video frames to which subtitles are to be added, manually inputs the subtitle text in each frame of video, and adjusts the style of the subtitle text in each frame, such as the font size and color of the subtitles. The existing way of adding subtitles therefore requires a large amount of manual work and has low working efficiency.
Disclosure of Invention
The embodiments of the application aim to provide a video subtitle generating method and device, so as to improve the efficiency of generating video subtitles and reduce the workload required for generating subtitles.
In order to solve the above technical problem, the embodiment of the present application is implemented as follows:
in a first aspect, an embodiment of the present application provides a method for generating a video subtitle, including:
responding to a subtitle generation instruction of a user, acquiring each subtitle character input by the user, determining a subtitle template selected by the user, and determining the subtitle starting time selected by the user based on a video stream of subtitles to be drawn;
acquiring subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template;
respectively determining the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle starting time;
and drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream.
In a second aspect, an embodiment of the present application provides a video subtitle generating apparatus, including:
the first information acquisition module is used for responding to a subtitle generation instruction of a user, acquiring each subtitle character input by the user, determining a subtitle template selected by the user and determining the subtitle starting time selected by the user based on a video stream of subtitles to be drawn;
the second information acquisition module is used for acquiring the subtitle time attribute information, the subtitle style attribute information and the timestamp information of the video stream corresponding to the subtitle template;
a time range determining module, configured to determine a time range of each animation phase of each subtitle character according to the subtitle time attribute information and the subtitle start time;
and the subtitle character drawing module is used for drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream.
In a third aspect, an embodiment of the present application provides a video subtitle generating apparatus, including: a memory, a processor, and computer-executable instructions stored on the memory and executable on the processor; when the computer-executable instructions are executed by the processor, the steps of the video subtitle generating method according to the first aspect are implemented.
In a fourth aspect, the present application provides a computer-readable storage medium for storing computer-executable instructions, which when executed by a processor implement the steps of the video subtitle generating method according to the first aspect.
In the embodiment of the application, each subtitle character input by a user is obtained in response to a subtitle generating instruction of the user, a subtitle template selected by the user is determined, subtitle starting time selected by the user based on a video stream of a subtitle to be drawn is determined, subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template are obtained, the time range of each animation stage of each subtitle character is determined according to the subtitle time attribute information and the subtitle starting time, and each subtitle character is drawn in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream. In the embodiment, the user only needs to input the caption characters, select the caption template and determine the caption starting time, and each caption character can be automatically drawn in the video stream according to the caption characters input by the user, the selected caption template and the determined caption starting time, so that the efficiency of generating the video caption is improved, and the workload required by generating the caption is reduced.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings required in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments described in the present application, and those skilled in the art can obtain other drawings from these drawings without any creative effort.
Fig. 1 is a schematic flowchart of a video subtitle generating method according to an embodiment of the present application;
fig. 2 is a schematic diagram of an animation phase of subtitle characters according to an embodiment of the present application;
fig. 3 is a schematic diagram illustrating an effect of an animation style of a subtitle character according to an embodiment of the present application;
fig. 4 is a schematic diagram illustrating a module composition of a video subtitle generating apparatus according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of a video subtitle generating apparatus according to an embodiment of the present application.
Detailed Description
In order to make those skilled in the art better understand the technical solutions in the present application, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The method in the embodiment of the application can be applied to video subtitle generating equipment, and the video subtitle generating equipment can improve the efficiency of generating the video subtitle and reduce the workload required by generating the subtitle by executing the method in the embodiment of the application.
Fig. 1 is a schematic flowchart of a video subtitle generating method according to an embodiment of the present application, and as shown in fig. 1, the method includes the following steps:
s102, responding to a subtitle generation instruction of a user, acquiring each subtitle character input by the user, determining a subtitle template selected by the user, and determining the subtitle starting time selected by the user based on a video stream of subtitles to be drawn;
s104, acquiring subtitle time attribute information, subtitle style attribute information and timestamp information of a video stream corresponding to a subtitle template;
s106, respectively determining the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle starting time;
and S108, drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream.
In the embodiment of the application, each subtitle character input by a user is obtained in response to a subtitle generating instruction of the user, a subtitle template selected by the user is determined, subtitle starting time selected by the user based on a video stream of a subtitle to be drawn is determined, subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template are obtained, the time range of each animation stage of each subtitle character is determined according to the subtitle time attribute information and the subtitle starting time, and each subtitle character is drawn in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream. In the embodiment, the user only needs to input the caption characters, select the caption template and determine the caption starting time, and each caption character can be automatically drawn in the video stream according to the caption characters input by the user, the selected caption template and the determined caption starting time, so that the efficiency of generating the video caption is improved, and the workload required by generating the caption is reduced.
In step S102, the video subtitle generating device responds to the subtitle generation instruction of the user, obtains each subtitle character input by the user, determines the subtitle template selected by the user, and determines the subtitle start time selected by the user based on the video stream in which subtitles are to be drawn. The video subtitle generating device may preset a plurality of subtitle templates, in which the character size, font, alignment mode, subtitle style attribute information, subtitle time attribute information, and the like are defined; these are not specifically limited here. The user selects the subtitle start time on the video subtitle generating device based on the video stream in which the subtitles are to be drawn.
In step S104, the video subtitle generating device obtains the subtitle time attribute information and subtitle style attribute information corresponding to the subtitle template, as well as the timestamp information of the video stream. The subtitle style attribute information may include the offset of a character (which can implement a movement effect), the rotation value of a character (which can implement a rotation effect), the scaling value of a character (which can implement an effect of a character growing from small to large, or being stretched or flattened), the color of a character (which can implement a color-change effect), and the like; these are not specifically limited here.
In the above step S106, the video subtitle generating device determines the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle start time. The subtitle time attribute information may include the total subtitle duration, the total duration of the subtitle adding stage within the total subtitle duration, the adding duration of each subtitle character within the total duration of the adding stage, the total duration of the subtitle hiding stage within the total subtitle duration, the hiding duration of each subtitle character within the total duration of the hiding stage, and the like. The subtitle start time refers to the start time of the subtitle in the video stream in which the subtitle is to be drawn. The animation stages may include a subtitle adding stage in which a character goes from absent to shown, a full display stage, and a subtitle hiding stage in which a character goes from shown to absent; however, the animation stages may also include only the subtitle adding stage and the full display stage (e.g., a trailer subtitle), or only the full display stage and the subtitle hiding stage (e.g., an opening subtitle). The time range of an animation stage refers to the time range that the stage occupies in the video stream in which the subtitle is to be drawn.
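The structure described above can be captured in a small data model. The following Swift sketch is illustrative only; the type and field names are assumptions, not taken from the patent:

    // Animation stages a subtitle character may pass through.
    enum AnimationPhase {
        case adding       // from absent to fully shown
        case fullDisplay  // fully shown
        case hiding       // from shown to absent
    }

    // Subtitle time attribute information carried by a subtitle template.
    struct SubtitleTimeAttributes {
        let totalDuration: Double              // total subtitle duration, in seconds
        let addStageDuration: Double           // total duration of the subtitle adding stage
        let perCharacterAddDuration: Double    // adding duration of each subtitle character
        let hideStageDuration: Double          // total duration of the subtitle hiding stage
        let perCharacterHideDuration: Double   // hiding duration of each subtitle character
        let addOrder: [Int]                    // order in which characters appear
        let hideOrder: [Int]                   // order in which characters disappear
    }

In such a model, a template without a hiding stage (e.g., a trailer subtitle) would simply carry a zero hideStageDuration.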
Fig. 2 is a schematic diagram of the animation stages of a subtitle character according to an embodiment of the present application; it shows the character A moving from the subtitle adding stage (2a) to the full display stage (2b) and finally to the subtitle hiding stage (2c). In stage (2a), the character enters from the left side of the picture and moves to the middle of the picture; in stage (2b), the character stays displayed in the middle of the picture; in stage (2c), the character moves from the middle of the picture to the right side and out of the picture.
In the above step S108, the video subtitle generating device draws each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character, and the timestamp information of the video stream. In a specific example, the subtitle characters include a character A, the subtitle style attribute information includes the color information of character A, and the time range of character A's subtitle adding stage is 1 min 10 s to 2 min 10 s; the video frames corresponding to 1 min 10 s to 2 min 10 s are determined in the video stream according to the timestamp information of the video stream, and character A is drawn in the determined video frames according to its color information.
In the embodiment of the present application, determining the time range of each animation phase of each subtitle character according to the subtitle time attribute information and the subtitle start time includes:
(a1) determining the total time range of the caption and the total occupied time range of each caption character in the total time range according to the caption time attribute information and the caption starting time;
(a2) extracting the duration of each animation stage of each subtitle character and the time sequence among the animation stages from the subtitle time attribute information;
(a3) and determining the time range of each animation stage of each subtitle character in the total occupied time range of each subtitle character according to the duration of each animation stage of each subtitle character and the time sequence among all the animation stages.
In the above-described action (a1), the video subtitle generating device determines the total time range of the subtitle and the total occupied time range of each subtitle character within that total time range, according to the subtitle time attribute information and the subtitle start time. For example, the subtitle characters include A and B, and the total time range of the subtitle is 1 minute 10 seconds to 5 minutes 10 seconds; within this total time range, the total occupied time range of character A is 1 minute 20 seconds to 5 minutes 00 seconds, and the total occupied time range of character B is 2 minutes 10 seconds to 4 minutes 10 seconds.
In the above-described acts (a2) and (a3), the duration of each animation phase and the time sequence between the animation phases of each subtitle character are extracted from the subtitle time attribute information, and the time range of each animation phase of each subtitle character is determined within the total occupied time range of each subtitle character according to the duration of each animation phase of each subtitle character and the time sequence between the animation phases.
In one embodiment, the subtitle characters include A and B, and there are two animation stages: the first is a subtitle adding stage in which a character goes from absent to shown, and the second is a subtitle hiding stage in which a character goes from shown to absent. Through action (a1) it is determined that the total time range of the subtitle is the 1st second to the 8th second, that the total occupied time range of character A is the 1st to the 7th second, and that the total occupied time range of character B is the 2nd to the 8th second. Through action (a2), it is extracted from the subtitle time attribute information that the subtitle adding stage comes before the subtitle hiding stage, that the duration of character A's adding stage is 5 seconds, the duration of character A's hiding stage is 2 seconds, the duration of character B's adding stage is 5 seconds, and the duration of character B's hiding stage is 2 seconds. Through action (a3), it can be found that the time range of character A's adding stage is the 1st to the 5th second, the time range of character A's hiding stage is the 6th to the 7th second, the time range of character B's adding stage is the 2nd to the 6th second, and the time range of character B's hiding stage is the 7th to the 8th second.
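Since the extracted time sequence puts the adding stage before the hiding stage, action (a3) amounts to placing the adding stage at the front of a character's total occupied range and the hiding stage at the back, with the full display stage in between. A minimal Swift sketch, using continuous timestamps rather than the inclusive whole-second counting of the example above (the function name is an assumption):

    // Split a character's total occupied time range into its three stage ranges.
    // Assumes addDuration + hideDuration does not exceed the occupied range's length.
    func phaseRanges(occupied: ClosedRange<Double>,
                     addDuration: Double,
                     hideDuration: Double)
        -> (add: ClosedRange<Double>, display: ClosedRange<Double>, hide: ClosedRange<Double>) {
        let addEnd = occupied.lowerBound + addDuration      // adding stage ends here
        let hideStart = occupied.upperBound - hideDuration  // hiding stage starts here
        return (occupied.lowerBound...addEnd,               // subtitle adding stage
                addEnd...hideStart,                         // full display stage
                hideStart...occupied.upperBound)            // subtitle hiding stage
    }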
In the embodiment of the present application, determining the total time range of the subtitles and the total occupied time range of each subtitle character in the total time range according to the subtitle time attribute information and the subtitle start time includes:
(b1) extracting the total caption duration from the caption time attribute information, and determining the total time range of the caption according to the total caption duration and the caption initial time;
(b2) extracting the total duration of a caption adding stage in the total duration of the captions and the adding duration of each caption character in the total duration of the caption adding stage from the caption time attribute information, and determining the adding starting time interval between each caption character according to the total duration of the caption adding stage, the adding duration of each caption character and the obtained number of the caption characters;
(b3) extracting the total duration of the subtitle hiding stage in the total duration of the subtitles and the hidden duration of each subtitle character in the total duration of the subtitle hiding stage from the subtitle time attribute information, and determining the hiding completion time interval between each subtitle character according to the total duration of the subtitle hiding stage, the hidden duration of each subtitle character and the number of the acquired subtitle characters;
(b4) and determining the total occupied time range of each caption character in the total time range according to the adding starting time interval, the adding sequence of each caption character, the hiding completion time interval and the hiding sequence of each caption character.
In the above action (b1), the video subtitle generating apparatus obtains the subtitle ending time according to the subtitle starting time plus the subtitle total duration, and determines the total time range of the subtitle from the subtitle starting time and the subtitle ending time.
In the above action (b2), the subtitle adding stage is the stage in which subtitle characters appear one by one until all subtitle characters have appeared. The total duration of the subtitle adding stage and the adding duration of each subtitle character within that total duration are extracted from the subtitle time attribute information, and the adding start time interval between successive subtitle characters is determined as (a - b) / (c - 1), where a is the total duration of the subtitle adding stage, b is the adding duration of each subtitle character, and c is the number of acquired subtitle characters.
Taking the above characters A and B as an example: the total subtitle duration is 8 seconds, the total duration of the subtitle adding stage is 6 seconds (the adding of characters A and B completes only at the 6th second), the adding duration of each subtitle character is 5 seconds, and the number of characters is 2, so the adding start time interval between the subtitle characters is (6 - 5) / (2 - 1) = 1 second; that is, the second character starts to be added one second after the first character starts.
In the above action (b3), the subtitle hiding stage is the stage in which the subtitle characters disappear one by one until all the subtitle characters have disappeared. The total duration of the subtitle hiding stage within the total subtitle duration and the hiding duration of each subtitle character within that total duration are extracted from the subtitle time attribute information, and the hiding completion time interval between successive subtitle characters is determined as (d - e) / (c - 1), where d is the total duration of the subtitle hiding stage, e is the hiding duration of each subtitle character, and c is the number of acquired subtitle characters.
Taking the same characters A and B as an example: the total subtitle duration is 8 seconds, the total duration of the subtitle hiding stage is 3 seconds (character A begins to hide at the 6th second), the hiding duration of each subtitle character is 2 seconds, and the number of characters is 2, so the hiding completion time interval between the subtitle characters is (3 - 2) / (2 - 1) = 1 second; that is, the first character finishes hiding one second before the last character finishes hiding.
In the above-described action (b4), the total occupied time range of each subtitle character within the total time range is determined according to the adding start time interval, the adding order of the subtitle characters, the hiding completion time interval, and the hiding order of the subtitle characters. For example, the subtitle characters spell "nihao", the adding start time interval is 0.8 second with adding order n, i, h, a, o, the hiding completion time interval is 0.8 second with hiding order o, a, h, i, n, and the total time range of the subtitle is 0 to 10 seconds. Then the total occupied time range of character n within the total time range is 0 to 10 seconds, that of character i is 0.8 to 9.2 seconds, that of character h is 1.6 to 8.4 seconds, that of character a is 2.4 to 7.6 seconds, and that of character o is 3.2 to 6.8 seconds. In this embodiment, the adding order and the hiding order of the subtitle characters may be extracted from the subtitle time attribute information.
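The following Swift sketch implements actions (b1) to (b4) under the assumptions above (names are illustrative; the adding and hiding orders are arrays of character indices in the order extracted from the subtitle time attribute information, and the character count is assumed to be greater than one):

    // Total occupied time range of each subtitle character, per actions (b1)-(b4).
    func occupiedRanges(subtitleStart: Double,
                        totalDuration: Double,    // (b1): end = start + total duration
                        addInterval: Double,      // (b2): (a - b) / (c - 1)
                        hideInterval: Double,     // (b3): (d - e) / (c - 1)
                        addOrder: [Int],          // k-th entry: index of k-th character to appear
                        hideOrder: [Int])         // k-th entry: index of k-th character to finish hiding
        -> [ClosedRange<Double>] {
        let count = addOrder.count
        let subtitleEnd = subtitleStart + totalDuration
        var start = [Double](repeating: 0, count: count)
        var end = [Double](repeating: 0, count: count)
        for (k, ch) in addOrder.enumerated() {
            start[ch] = subtitleStart + Double(k) * addInterval          // k-th to start appearing
        }
        for (k, ch) in hideOrder.enumerated() {
            end[ch] = subtitleEnd - Double(count - 1 - k) * hideInterval // last one ends at subtitleEnd
        }
        return (0..<count).map { start[$0]...end[$0] }                   // (b4)
    }

With subtitleStart 0, totalDuration 10, both intervals 0.8, addOrder [0, 1, 2, 3, 4] (n, i, h, a, o), and hideOrder [4, 3, 2, 1, 0] (o, a, h, i, n), this reproduces the ranges above: n 0 to 10 s, i 0.8 to 9.2 s, h 1.6 to 8.4 s, a 2.4 to 7.6 s, and o 3.2 to 6.8 s.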
In the embodiment of the present application, drawing each subtitle character in a video stream according to subtitle style attribute information, a time range of each animation phase of each subtitle character, and timestamp information of the video stream includes:
(c1) for any subtitle character, extracting animation style information corresponding to each animation stage of the subtitle character from the subtitle style attribute information;
(c2) for any animation phase of the caption character, if the timestamp information of the first video frame in the video stream is within the time range of the animation phase of the caption character, the caption character is drawn in the first video frame according to the animation style information corresponding to the animation phase of the caption character.
In the above-mentioned actions (c1) and (c2), the video subtitle generating device extracts, for any subtitle character, the animation style information corresponding to each animation stage of that character from the subtitle style attribute information; if the timestamp information of the first video frame in the video stream is within the time range of an animation stage of the character, the character is drawn in the first video frame according to the animation style information corresponding to that stage. For example, fig. 3 is a schematic diagram of the effect of a subtitle character animation style provided by an embodiment of the present application. As shown in fig. 3, the character g has one animation effect at the 0th second and another at the 1st second, where the 0th to the 1st second is one animation stage of the character g. The animation style information of the character g includes a rotation value, from which the video subtitle generating device can implement the rotation effect of the subtitle character; if the time corresponding to the timestamp information of the first video frame falls between the 0th second and the 1st second, the character g is drawn in the first video frame with the rotation effect.
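A small Swift sketch of the per-frame decision in action (c2); the type and field names are assumptions, not from the patent:

    // One animation stage of one subtitle character, with its style information.
    struct CharacterPhase {
        let character: Character
        let range: ClosedRange<Double>   // time range of this animation stage
        let style: String                // placeholder for the stage's animation style info
    }

    // Characters (and the stage styles to apply) for a frame at the given timestamp.
    func drawables(at frameTimestamp: Double,
                   phases: [CharacterPhase]) -> [CharacterPhase] {
        phases.filter { $0.range.contains(frameTimestamp) }
    }

Each decoded frame is then rendered by drawing every returned character according to the style information of its matching stage.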
In an embodiment of the present application, drawing the subtitle character in a first video frame according to animation style information corresponding to the animation phase of the subtitle character includes:
(d1) if the timestamp information of the first video frame corresponds to the initial time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the initial style information corresponding to the animation stage of the subtitle character;
(d2) if the timestamp information of the first video frame corresponds to the ending time in the time range of the animation stage of the caption character, drawing the caption character in the first video frame according to the ending style information corresponding to the animation stage of the caption character;
(d3) if the timestamp information of the first video frame corresponds to a time other than the start time and the end time, determining target style information of the caption character in the first video frame according to the start time, the end time, the start style information, the end style information and the timestamp information of the first video frame, and drawing the caption character in the first video frame according to the target style information.
In the above-described actions (d1) to (d3), taking the subtitle adding stage as an example: suppose the start time in the time range of character A's subtitle adding stage is t1 = 0 s, with start style information of a 0% character display completion rate, and the end time is t2 = 10 s, with end style information of a 100% character display completion rate. If the timestamp information of the first video frame corresponds to t1 = 0 s, character A is drawn in the first video frame at a 0% display completion rate; if it corresponds to t2 = 10 s, character A is drawn at a 100% display completion rate; and if it corresponds to t = 5 s, the target style information of character A in the first video frame is determined from the start time, the end time, the start style information, the end style information, and the frame's timestamp information to be a 50% display completion rate, and character A is drawn in the first video frame accordingly. In this embodiment, the character display completion rate can be understood as the displayed proportion of the character; for example, a 50% completion rate means that the left half or the right half of the character is displayed.
In the embodiment of the present application, determining target style information of the subtitle character in a first video frame according to a start time, an end time, start style information, end style information, and timestamp information of the first video frame includes:
(e1) the start style information includes a start transparency value, and the end style information includes an end transparency value; calculating a transparency difference between the starting transparency value and the ending transparency value;
(e2) calculating a first time length between the starting time and the ending time, and calculating a second time length between the time corresponding to the timestamp information of the first video frame and the starting time;
(e3) and calculating the transparency value of the caption character in the first video frame according to the first time length, the second time length and the transparency difference value, and taking the calculated transparency value as target style information.
In one embodiment, corresponding to the above-mentioned actions (d1) through (d3): the starting transparency of the subtitle character is 0% and the ending transparency is 100%, so the transparency difference between the starting and ending transparency values is 100%. Suppose the start time of a subtitle character A is the 0th second and its end time is the 10th second; the first duration between the start time and the end time is then calculated to be 10 seconds. If the time corresponding to the timestamp information of the first video frame is 2 seconds, the second duration between that time and the start time is calculated to be 2 seconds. From the first duration of 10 seconds, the second duration of 2 seconds, and the transparency difference of 100%, the transparency of character A in the first video frame is calculated proportionally as 100% x 2/10 = 20%, and the calculated transparency value of 20% is used as the target style information.
In another embodiment, the start style information includes a starting position coordinate value and the end style information includes an ending position coordinate value. The position coordinate difference between the starting and ending coordinate values is calculated, along with the first duration between the start time and the end time and the second duration between the time corresponding to the timestamp information of the first video frame and the start time. The position coordinate value of the subtitle character in the first video frame is then calculated from the first duration, the second duration, and the position coordinate difference; the calculated position coordinate value is used as the target style information, and the subtitle character is drawn in the first video frame accordingly.
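Both the transparency embodiment and the position-coordinate embodiment reduce to the same linear interpolation over the animation stage. A minimal Swift sketch (the function name is an assumption):

    // Interpolate a scalar style value for the frame at frameTime,
    // per actions (e1)-(e3): value = start + difference * (second / first).
    func interpolateStyle(start: Double, end: Double,
                          startTime: Double, endTime: Double,
                          frameTime: Double) -> Double {
        let difference = end - start                // (e1) e.g. transparency difference
        let firstDuration = endTime - startTime     // (e2) first duration
        let secondDuration = frameTime - startTime  // (e2) second duration
        return start + difference * (secondDuration / firstDuration)  // (e3)
    }

For the transparency example above, interpolateStyle(start: 0, end: 100, startTime: 0, endTime: 10, frameTime: 2) yields 20, matching the 20% target style information; applied componentwise to coordinates, the same function covers the position embodiment.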
The method in the embodiment of the application further comprises the following steps: playing the video stream after the subtitle is drawn; and after the video stream after the subtitle is drawn is played, modifying the subtitle time attribute information and/or the subtitle style attribute information in the subtitle template in response to a template modification instruction of a user. Furthermore, new caption characters can be drawn in the video stream according to the modified caption template.
For example, the subtitle style attribute information includes color values of characters, the subtitle template can achieve an effect of changing colors of the characters, after a video stream with the subtitle being drawn is played, the video subtitle generating device responds to a template modification instruction of a user to modify the color values of the characters, and then the video subtitle generating device draws new subtitle characters in the video stream according to the modified subtitle template.
The modification of the subtitle time attribute information and the subtitle style attribute information in this embodiment may include modifying a duration of each animation phase, a total subtitle duration, a total subtitle adding phase duration, a total subtitle hiding phase duration, an adding duration of each subtitle character, a hiding duration of each subtitle character, start style information of a subtitle character, end style information of a subtitle character, and the like, which is not particularly limited herein.
In the embodiment of the application, after the video subtitle generating device responds to a subtitle generation instruction of a user, acquires each subtitle character input by the user, and determines the subtitle template selected by the user, the subtitle template determines parameters such as the font size, the font, the alignment mode, and the size of the text box according to the number of subtitle characters input by the user. For example, the font size of the subtitle characters input by the user is determined by binary search between a preset maximum font size and a preset minimum font size, so that the subtitle characters input by the user do not exceed the size of the text box corresponding to the subtitle template. Given the font size, the following information can be obtained using the CoreText framework:
1. TextFrame: the actual size of the text box as a whole;
2. TextLine: all information of a text line; through this object, the characters of the current line, the starting point position of the line, and other related information can be obtained;
3. TextRun: each line of text is divided into TextRuns according to the different subtitle styles within the line (including transparency, coordinate position, and the like); through this object, all characters, font information, and the starting position of the current TextRun can be obtained;
4. Character: the Glyph value (symbol, character, serial number, etc.) and Bounds (coordinate values) of a character can be obtained from the TextRun.
In addition, the position information obtained above by the video subtitle generating device is expressed in CoreText's Y-axis-up coordinate system and needs to be converted into the coordinate system of the current video subtitle generating device through a two-dimensional coordinate transformation matrix provided in the device.
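A hedged Swift sketch of the binary-search font sizing using CoreText (the helper name and the chosen font are assumptions; the patent does not give an implementation): find the largest font size in the preset range at which the laid-out text still fits the template's text box.

    import CoreGraphics
    import CoreText
    import Foundation

    func fittingFontSize(text: String, boxSize: CGSize,
                         minSize: CGFloat, maxSize: CGFloat) -> CGFloat {
        var lo = minSize, hi = maxSize
        while hi - lo > 0.5 {                       // binary search until the bracket is tight
            let mid = (lo + hi) / 2
            let font = CTFontCreateWithName("PingFangSC-Regular" as CFString, mid, nil)
            let attributed = NSAttributedString(
                string: text,
                attributes: [NSAttributedString.Key(kCTFontAttributeName as String): font])
            let framesetter = CTFramesetterCreateWithAttributedString(attributed as CFAttributedString)
            // Ask CoreText how much space the text needs at this font size
            // (the overall "TextFrame" size mentioned above).
            let needed = CTFramesetterSuggestFrameSizeWithConstraints(
                framesetter, CFRange(location: 0, length: 0), nil,
                CGSize(width: boxSize.width, height: .greatestFiniteMagnitude), nil)
            if needed.height <= boxSize.height { lo = mid } else { hi = mid }
        }
        return lo                                   // largest size known to fit the box
    }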
In the embodiment of the application, each subtitle character input by a user is obtained in response to a subtitle generating instruction of the user, a subtitle template selected by the user is determined, subtitle starting time selected by the user based on a video stream of a subtitle to be drawn is determined, subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template are obtained, the time range of each animation stage of each subtitle character is determined according to the subtitle time attribute information and the subtitle starting time, and each subtitle character is drawn in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream. In the embodiment, the user only needs to input the caption characters, select the caption template and determine the caption starting time, and each caption character can be automatically drawn in the video stream according to the caption characters input by the user, the selected caption template and the determined caption starting time, so that the efficiency of generating the video caption is improved, and the workload required by generating the caption is reduced.
Fig. 4 is a schematic diagram illustrating a module composition of a video subtitle generating apparatus according to an embodiment of the present application, as shown in fig. 4, the apparatus includes:
a first information acquisition module 41, configured to, in response to a subtitle generating instruction of a user, obtain each subtitle character input by the user, determine a subtitle template selected by the user, and determine a subtitle start time selected by the user based on a video stream of subtitles to be drawn;
a second information acquisition module 42, configured to obtain subtitle time attribute information, subtitle style attribute information, and timestamp information of the video stream corresponding to the subtitle template;
a time range determining module 43, configured to determine a time range of each animation phase of each subtitle character according to the subtitle time attribute information and the subtitle start time;
a caption character drawing module 44, configured to draw each caption character in the video stream according to the caption style attribute information, the time range of each animation phase of each caption character, and the timestamp information of the video stream.
Optionally, the time range determining module 43 is specifically configured to:
determining a total time range of the caption and a total occupied time range of each caption character in the total time range according to the caption time attribute information and the caption starting time;
extracting the duration of each animation stage of each subtitle character and the time sequence among the animation stages from the subtitle time attribute information;
and determining the time range of each animation stage of each subtitle character in the total occupied time range of each subtitle character according to the duration of each animation stage of each subtitle character and the time sequence among all animation stages.
Optionally, the time range determining module 43 is further specifically configured to:
extracting the total caption duration from the caption time attribute information, and determining the total caption time range according to the total caption duration and the caption starting time;
extracting the total time length of the subtitle adding stage in the total time length of the subtitles and the adding time length of each subtitle character in the total time length of the subtitle adding stage from the subtitle time attribute information, and determining the adding starting time interval between the subtitle characters according to the total time length of the subtitle adding stage, the adding time length of each subtitle character and the obtained number of the subtitle characters;
extracting the total duration of the subtitle hiding stage in the total duration of the subtitles and the hidden duration of each subtitle character in the total duration of the subtitle hiding stage from the subtitle time attribute information, and determining the hiding completion time interval between the subtitle characters according to the total duration of the subtitle hiding stage, the hidden duration of each subtitle character and the obtained number of the subtitle characters;
and determining the total occupied time range of each caption character in the total time range according to the adding starting time interval, the adding sequence of each caption character, the hiding completion time interval and the hiding sequence of each caption character.
Optionally, the subtitle character drawing module 44 is specifically configured to:
for any subtitle character, extracting animation style information corresponding to each animation stage of the subtitle character from the subtitle style attribute information;
for any animation phase of the caption character, if the timestamp information of the first video frame in the video stream is within the time range of the animation phase of the caption character, drawing the caption character in the first video frame according to the animation style information corresponding to the animation phase of the caption character.
Optionally, the subtitle character drawing module 44 is further specifically configured to:
if the timestamp information of the first video frame corresponds to the starting time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the starting style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to the ending time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the ending style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to a time other than the start time and the end time, determining target style information of the caption character in the first video frame according to the start time, the end time, the start style information, the end style information and the timestamp information of the first video frame, and drawing the caption character in the first video frame according to the target style information.
Optionally, the subtitle character drawing module 44 is further specifically configured to:
the start style information includes a start transparency value, and the end style information includes an end transparency value; calculating a transparency difference between the starting transparency value and the ending transparency value;
calculating a first time length between the starting time and the ending time, and calculating a second time length between the time corresponding to the timestamp information of the first video frame and the starting time;
and calculating the transparency value of the caption character in the first video frame according to the first time length, the second time length and the transparency difference value, and taking the calculated transparency value as the target style information.
In the embodiment of the application, each subtitle character input by a user is obtained in response to a subtitle generating instruction of the user, a subtitle template selected by the user is determined, subtitle starting time selected by the user based on a video stream of a subtitle to be drawn is determined, subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template are obtained, the time range of each animation stage of each subtitle character is determined according to the subtitle time attribute information and the subtitle starting time, and each subtitle character is drawn in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream. In the embodiment, the user only needs to input the caption characters, select the caption template and determine the caption starting time, and each caption character can be automatically drawn in the video stream according to the caption characters input by the user, the selected caption template and the determined caption starting time, so that the efficiency of generating the video caption is improved, and the workload required by generating the caption is reduced.
The video subtitle generating apparatus provided by the embodiment of the present application can implement each process in the foregoing method embodiments, and achieve the same function and effect, which are not repeated here.
Further, an embodiment of the present application also provides a video subtitle generating apparatus. Fig. 5 is a schematic structural diagram of the video subtitle generating apparatus provided in an embodiment of the present application. As shown in fig. 5, the apparatus includes: a memory 601, a processor 602, a bus 603, and a communication interface 604. The memory 601, the processor 602, and the communication interface 604 communicate via the bus 603; the communication interface 604 may include input and output interfaces, including but not limited to a keyboard, a mouse, a display, a microphone, and the like.
In fig. 5, the memory 601 stores thereon computer-executable instructions executable on the processor 602, and when executed by the processor 602, the computer-executable instructions implement the following processes:
responding to a subtitle generation instruction of a user, acquiring each subtitle character input by the user, determining a subtitle template selected by the user, and determining the subtitle starting time selected by the user based on a video stream of subtitles to be drawn;
acquiring subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template;
respectively determining the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle starting time;
and drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream.
Optionally, when executed by the processor, the computer-executable instructions respectively determine a time range of each animation phase of each subtitle character according to the subtitle time attribute information and the subtitle start time, including:
determining a total time range of the caption and a total occupied time range of each caption character in the total time range according to the caption time attribute information and the caption starting time;
extracting the duration of each animation stage of each subtitle character and the time sequence among the animation stages from the subtitle time attribute information;
and determining the time range of each animation stage of each subtitle character in the total occupied time range of each subtitle character according to the duration of each animation stage of each subtitle character and the time sequence among all animation stages.
Optionally, when executed by the processor, the determining, according to the subtitle time attribute information and the subtitle start time, a total time range of subtitles and a total occupied time range of each subtitle character in the total time range includes:
extracting the total caption duration from the caption time attribute information, and determining the total caption time range according to the total caption duration and the caption starting time;
extracting the total time length of the subtitle adding stage in the total time length of the subtitles and the adding time length of each subtitle character in the total time length of the subtitle adding stage from the subtitle time attribute information, and determining the adding starting time interval between the subtitle characters according to the total time length of the subtitle adding stage, the adding time length of each subtitle character and the obtained number of the subtitle characters;
extracting the total duration of the subtitle hiding stage in the total duration of the subtitles and the hidden duration of each subtitle character in the total duration of the subtitle hiding stage from the subtitle time attribute information, and determining the hiding completion time interval between the subtitle characters according to the total duration of the subtitle hiding stage, the hidden duration of each subtitle character and the obtained number of the subtitle characters;
and determining the total occupied time range of each caption character in the total time range according to the adding starting time interval, the adding sequence of each caption character, the hiding completion time interval and the hiding sequence of each caption character.
Optionally, when executed by the processor, the computer-executable instructions draw each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation phase of each subtitle character, and the timestamp information of the video stream, including:
for any subtitle character, extracting animation style information corresponding to each animation stage of the subtitle character from the subtitle style attribute information;
for any animation phase of the caption character, if the timestamp information of the first video frame in the video stream is within the time range of the animation phase of the caption character, drawing the caption character in the first video frame according to the animation style information corresponding to the animation phase of the caption character.
Optionally, when executed by the processor, the computer-executable instructions draw the subtitle character in the first video frame according to animation style information corresponding to the animation phase of the subtitle character, including:
if the timestamp information of the first video frame corresponds to the starting time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the starting style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to the ending time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the ending style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to a time other than the start time and the end time, determining target style information of the caption character in the first video frame according to the start time, the end time, the start style information, the end style information and the timestamp information of the first video frame, and drawing the caption character in the first video frame according to the target style information.
Optionally, when executed by the processor, the determining target style information of the subtitle character in the first video frame according to the start time, the end time, the start style information, the end style information, and timestamp information of the first video frame includes:
the start style information includes a start transparency value, and the end style information includes an end transparency value; calculating a transparency difference between the starting transparency value and the ending transparency value;
calculating a first time length between the starting time and the ending time, and calculating a second time length between the time corresponding to the timestamp information of the first video frame and the starting time;
and calculating the transparency value of the caption character in the first video frame according to the first time length, the second time length and the transparency difference value, and taking the calculated transparency value as the target style information.
In the embodiment of the application, each subtitle character input by a user is obtained in response to a subtitle generating instruction of the user, a subtitle template selected by the user is determined, subtitle starting time selected by the user based on a video stream of a subtitle to be drawn is determined, subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template are obtained, the time range of each animation stage of each subtitle character is determined according to the subtitle time attribute information and the subtitle starting time, and each subtitle character is drawn in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream. In the embodiment, the user only needs to input the caption characters, select the caption template and determine the caption starting time, and each caption character can be automatically drawn in the video stream according to the caption characters input by the user, the selected caption template and the determined caption starting time, so that the efficiency of generating the video caption is improved, and the workload required by generating the caption is reduced.
The video subtitle generating apparatus provided by the embodiment of the present application can implement each process in the foregoing method embodiments, and achieve the same function and effect, which are not repeated here.
Further, an embodiment of the present application also provides a computer-readable storage medium for storing computer-executable instructions, which when executed by a processor implement the following process:
responding to a subtitle generation instruction of a user, acquiring each subtitle character input by the user, determining the subtitle template selected by the user, and determining the subtitle start time selected by the user in the video stream in which subtitles are to be drawn;
acquiring subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template;
respectively determining the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle starting time;
and drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream.
Optionally, when the computer-executable instructions are executed by the processor, respectively determining the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle start time includes:
determining a total time range of the caption and a total occupied time range of each caption character in the total time range according to the caption time attribute information and the caption starting time;
extracting the duration of each animation stage of each subtitle character and the time sequence among the animation stages from the subtitle time attribute information;
and determining the time range of each animation stage of each subtitle character in the total occupied time range of each subtitle character according to the duration of each animation stage of each subtitle character and the time sequence among all animation stages.
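As a concrete illustration of the splitting described above, the sketch below divides one character's total occupied range into the three stages named in claim 1, assumed to run back to back in the order adding, complete display, hiding; the stage names and function signature are ours.

def stage_time_ranges(occupied_start: float, occupied_end: float,
                      add_duration: float, hide_duration: float) -> dict:
    # Split one character's occupied time range into contiguous stages,
    # assuming adding -> complete display -> hiding in that order.
    return {
        "adding": (occupied_start, occupied_start + add_duration),
        "display": (occupied_start + add_duration, occupied_end - hide_duration),
        "hiding": (occupied_end - hide_duration, occupied_end),
    }

# A character occupying seconds 1.0-6.0 that takes 0.5 s to appear and
# 0.5 s to vanish: adding 1.0-1.5, fully displayed 1.5-5.5, hiding 5.5-6.0.
print(stage_time_ranges(1.0, 6.0, 0.5, 0.5))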
Optionally, when the computer-executable instructions are executed by the processor, determining, according to the subtitle time attribute information and the subtitle start time, the total time range of the subtitles and the total occupied time range of each subtitle character in the total time range includes:
extracting the total caption duration from the caption time attribute information, and determining the total caption time range according to the total caption duration and the caption starting time;
extracting the total time length of the subtitle adding stage in the total time length of the subtitles and the adding time length of each subtitle character in the total time length of the subtitle adding stage from the subtitle time attribute information, and determining the adding starting time interval between the subtitle characters according to the total time length of the subtitle adding stage, the adding time length of each subtitle character and the obtained number of the subtitle characters;
extracting the total duration of the subtitle hiding stage in the total duration of the subtitles and the hidden duration of each subtitle character in the total duration of the subtitle hiding stage from the subtitle time attribute information, and determining the hiding completion time interval between the subtitle characters according to the total duration of the subtitle hiding stage, the hidden duration of each subtitle character and the obtained number of the subtitle characters;
and determining the total occupied time range of each caption character in the total time range according to the adding starting time interval, the adding sequence of each caption character, the hiding completion time interval and the hiding sequence of each caption character.
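One plausible reading of the interval computation above is that the characters are spaced evenly, so the last character begins appearing exactly at the end of the adding stage and finishes vanishing exactly at the end of the hiding stage. The sketch below encodes that reading; the interval formulas, and the assumption that characters hide in the same order they were added, are ours rather than stated in the text.

def occupied_time_ranges(num_chars: int, caption_start: float,
                         caption_total: float, add_stage_total: float,
                         add_duration: float, hide_stage_total: float,
                         hide_duration: float) -> list:
    # Total occupied time range of each character within the caption,
    # assuming evenly spaced add-start and hide-completion intervals and
    # identical add and hide ordering.
    caption_end = caption_start + caption_total
    gaps = max(num_chars - 1, 1)
    add_interval = (add_stage_total - add_duration) / gaps
    hide_interval = (hide_stage_total - hide_duration) / gaps
    return [(caption_start + i * add_interval,
             caption_end - (num_chars - 1 - i) * hide_interval)
            for i in range(num_chars)]

# Five characters in a 10 s caption starting at t = 2 s, with 2 s adding
# and hiding stages in which each character takes 0.4 s to appear or vanish.
for time_range in occupied_time_ranges(5, 2.0, 10.0, 2.0, 0.4, 2.0, 0.4):
    print(time_range)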
Optionally, when the computer-executable instructions are executed by the processor, drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character, and the timestamp information of the video stream includes:
for any subtitle character, extracting animation style information corresponding to each animation stage of the subtitle character from the subtitle style attribute information;
for any animation phase of the caption character, if the timestamp information of the first video frame in the video stream is within the time range of the animation phase of the caption character, drawing the caption character in the first video frame according to the animation style information corresponding to the animation phase of the caption character.
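At the frame level the check above reduces to finding which stage range, if any, covers the frame's timestamp. A minimal sketch with illustrative names:

def active_stage(frame_timestamp: float, stages: dict):
    # stages maps a stage name to its (start, end) time range; returns
    # None when the character is not drawn in this frame at all.
    for name, (start, end) in stages.items():
        if start <= frame_timestamp <= end:
            return name
    return None

stages = {"adding": (1.0, 1.5), "display": (1.5, 5.5), "hiding": (5.5, 6.0)}
print(active_stage(3.0, stages))  # "display": draw with the shown style
print(active_stage(9.0, stages))  # None: skip this character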
Optionally, when the computer-executable instructions are executed by the processor, drawing the subtitle character in the first video frame according to the animation style information corresponding to the animation stage of the subtitle character includes:
if the timestamp information of the first video frame corresponds to the starting time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the starting style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to the ending time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the ending style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to a time other than the start time and the end time, determining target style information of the caption character in the first video frame according to the start time, the end time, the start style information, the end style information and the timestamp information of the first video frame, and drawing the caption character in the first video frame according to the target style information.
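The three branches above form a dispatch on the frame's position within the stage: exactly at the start, exactly at the end, or somewhere between. Below is a compact sketch of that dispatch, taking a caller-supplied interpolator so it stays agnostic about which style property (transparency, size, position) is animated; the structure is our reading of the passage, not wording from it.

from typing import Callable

def style_for_frame(frame_time: float, start_time: float, end_time: float,
                    start_style: float, end_style: float,
                    interpolate: Callable[[float, float, float], float]) -> float:
    # Choose the style value to draw for one frame of one animation stage.
    if frame_time == start_time:   # frame sits exactly on the start time
        return start_style
    if frame_time == end_time:     # frame sits exactly on the end time
        return end_style
    fraction = (frame_time - start_time) / (end_time - start_time)
    return interpolate(start_style, end_style, fraction)

# Linear interpolation of transparency halfway through a stage:
lerp = lambda a, b, t: a + (b - a) * t
print(style_for_frame(3.0, 2.0, 4.0, 0.0, 1.0, lerp))  # 0.5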
Optionally, when the computer-executable instructions are executed by the processor, determining the target style information of the subtitle character in the first video frame according to the start time, the end time, the start style information, the end style information, and the timestamp information of the first video frame includes:
the start style information includes a start transparency value, and the end style information includes an end transparency value; calculating a transparency difference between the starting transparency value and the ending transparency value;
calculating a first time length between the starting time and the ending time, and calculating a second time length between the time corresponding to the timestamp information of the first video frame and the starting time;
and calculating the transparency value of the caption character in the first video frame according to the first time length, the second time length and the transparency difference value, and taking the calculated transparency value as the target style information.
In the embodiment of the application, in response to a user's subtitle generation instruction, each subtitle character input by the user is acquired, the subtitle template selected by the user is determined, and the subtitle start time selected by the user in the video stream in which subtitles are to be drawn is determined; the subtitle time attribute information and subtitle style attribute information corresponding to the subtitle template, together with the timestamp information of the video stream, are acquired; the time range of each animation stage of each subtitle character is determined from the subtitle time attribute information and the subtitle start time; and each subtitle character is drawn in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character, and the timestamp information of the video stream. The user therefore only needs to input the subtitle characters, select a subtitle template, and choose the subtitle start time; every subtitle character is then drawn in the video stream automatically, which improves the efficiency of generating video subtitles and reduces the workload that subtitle generation requires.
The computer-readable storage medium includes, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The computer-readable storage medium provided by the embodiment of the present application can implement the processes in the foregoing method embodiments, and achieve the same functions and effects, which are not repeated here.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is only an example of the present application and is not intended to limit the present application. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the spirit and principle of the present application should be included in the scope of the claims of the present application.

Claims (12)

1. A method for generating a video subtitle, comprising:
responding to a subtitle generating instruction of a user, acquiring each subtitle character input by the user, determining the subtitle template selected by the user, and determining the subtitle start time selected by the user in the video stream in which the subtitle is to be drawn, wherein the subtitle template is preset by the video subtitle generating device, and subtitle style attribute information and subtitle time attribute information are defined in the subtitle template;
acquiring subtitle time attribute information, subtitle style attribute information and timestamp information of the video stream corresponding to the subtitle template, wherein the subtitle time attribute information comprises total subtitle duration, total subtitle adding stage duration in the total subtitle duration, adding duration of each subtitle character in the total subtitle adding stage duration, total subtitle hiding stage duration in the total subtitle duration, and hidden duration of each subtitle character in the total subtitle hiding stage duration;
respectively determining the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle starting time, wherein the time range represents the playing time range of a video, and the animation stages comprise a subtitle adding stage from absence of the subtitle characters to appearance of the subtitle characters, a complete display stage and a subtitle hiding stage from display to disappearance of the subtitle characters;
and drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream.
2. The method of claim 1, wherein determining a time range for each animation phase of each subtitle character based on the subtitle time attribute information and the subtitle start time comprises:
determining a total time range of the caption and a total occupied time range of each caption character in the total time range according to the caption time attribute information and the caption starting time;
extracting the duration of each animation stage of each subtitle character and the time sequence among the animation stages from the subtitle time attribute information;
and determining the time range of each animation stage of each subtitle character in the total occupied time range of each subtitle character according to the duration of each animation stage of each subtitle character and the time sequence among all animation stages.
3. The method of claim 2, wherein determining a total time range of subtitles and a total occupied time range of each subtitle character within the total time range according to the subtitle time attribute information and the subtitle start time comprises:
extracting the total caption duration from the caption time attribute information, and determining the total caption time range according to the total caption duration and the caption starting time;
extracting the total time length of the subtitle adding stage in the total time length of the subtitles and the adding time length of each subtitle character in the total time length of the subtitle adding stage from the subtitle time attribute information, and determining the adding starting time interval between the subtitle characters according to the total time length of the subtitle adding stage, the adding time length of each subtitle character and the obtained number of the subtitle characters;
extracting the total duration of the subtitle hiding stage in the total duration of the subtitles and the hidden duration of each subtitle character in the total duration of the subtitle hiding stage from the subtitle time attribute information, and determining the hiding completion time interval between the subtitle characters according to the total duration of the subtitle hiding stage, the hidden duration of each subtitle character and the obtained number of the subtitle characters;
and determining the total occupied time range of each caption character in the total time range according to the adding starting time interval, the adding sequence of each caption character, the hiding completion time interval and the hiding sequence of each caption character.
4. The method of claim 1, wherein rendering each of the subtitle characters in the video stream according to the subtitle style attribute information, a time range of each animation phase for each subtitle character, and timestamp information for the video stream comprises:
for any subtitle character, extracting animation style information corresponding to each animation stage of the subtitle character from the subtitle style attribute information;
for any animation phase of the caption character, if the timestamp information of the first video frame in the video stream is within the time range of the animation phase of the caption character, drawing the caption character in the first video frame according to the animation style information corresponding to the animation phase of the caption character.
5. The method of claim 4, wherein rendering the caption character in the first video frame according to animation style information corresponding to the animation phase of the caption character comprises:
if the timestamp information of the first video frame corresponds to the starting time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the starting style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to the ending time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the ending style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to a time other than the start time and the end time, determining target style information of the caption character in the first video frame according to the start time, the end time, the start style information, the end style information and the timestamp information of the first video frame, and drawing the caption character in the first video frame according to the target style information.
6. The method of claim 5, wherein determining the target style information of the subtitle character in the first video frame according to the start time, the end time, the start style information, the end style information, and the timestamp information of the first video frame comprises:
the start style information includes a start transparency value, and the end style information includes an end transparency value; calculating a transparency difference between the starting transparency value and the ending transparency value;
calculating a first time length between the starting time and the ending time, and calculating a second time length between the time corresponding to the timestamp information of the first video frame and the starting time;
and calculating the transparency value of the caption character in the first video frame according to the first time length, the second time length and the transparency difference value, and taking the calculated transparency value as the target style information.
7. A video subtitle generating apparatus, comprising:
the system comprises a first information acquisition module, a first information acquisition module and a second information acquisition module, wherein the first information acquisition module is used for responding to a subtitle generation instruction of a user, acquiring each subtitle character input by the user, determining a subtitle template selected by the user and determining subtitle starting time selected by the user based on a video stream of a subtitle to be drawn, the subtitle template is preset by video subtitle generation equipment, and subtitle style attribute information and subtitle time attribute information are defined in the subtitle template;
a second information obtaining module, configured to obtain subtitle time attribute information, subtitle style attribute information, and timestamp information of the video stream corresponding to the subtitle template, where the subtitle time attribute information includes a total subtitle duration, a total subtitle adding stage duration in the total subtitle duration, an adding duration of each subtitle character in the total subtitle adding stage duration, a total subtitle hiding stage duration in the total subtitle duration, and a hiding duration of each subtitle character in the total subtitle hiding stage duration;
a time range determining module, configured to determine the time range of each animation stage of each subtitle character according to the subtitle time attribute information and the subtitle start time, wherein the time range represents the playing time range of the video, and the animation stages comprise a subtitle adding stage from absence to appearance of the subtitle character, a complete display stage, and a subtitle hiding stage from display to disappearance of the subtitle character;
and the subtitle character drawing module is used for drawing each subtitle character in the video stream according to the subtitle style attribute information, the time range of each animation stage of each subtitle character and the timestamp information of the video stream.
8. The apparatus of claim 7, wherein the time range determining module is specifically configured to:
determining a total time range of the caption and a total occupied time range of each caption character in the total time range according to the caption time attribute information and the caption starting time;
extracting the duration of each animation stage of each subtitle character and the time sequence among the animation stages from the subtitle time attribute information;
and determining the time range of each animation stage of each subtitle character in the total occupied time range of each subtitle character according to the duration of each animation stage of each subtitle character and the time sequence among all animation stages.
9. The apparatus of claim 8, wherein the time range determining module is further specifically configured to:
extracting the total caption duration from the caption time attribute information, and determining the total caption time range according to the total caption duration and the caption starting time;
extracting the total time length of the subtitle adding stage in the total time length of the subtitles and the adding time length of each subtitle character in the total time length of the subtitle adding stage from the subtitle time attribute information, and determining the adding starting time interval between the subtitle characters according to the total time length of the subtitle adding stage, the adding time length of each subtitle character and the obtained number of the subtitle characters;
extracting the total duration of the subtitle hiding stage in the total duration of the subtitles and the hidden duration of each subtitle character in the total duration of the subtitle hiding stage from the subtitle time attribute information, and determining the hiding completion time interval between the subtitle characters according to the total duration of the subtitle hiding stage, the hidden duration of each subtitle character and the obtained number of the subtitle characters;
and determining the total occupied time range of each caption character in the total time range according to the adding starting time interval, the adding sequence of each caption character, the hiding completion time interval and the hiding sequence of each caption character.
10. The apparatus of claim 7, wherein the subtitle character drawing module is specifically configured to:
for any subtitle character, extracting animation style information corresponding to each animation stage of the subtitle character from the subtitle style attribute information;
for any animation phase of the caption character, if the timestamp information of the first video frame in the video stream is within the time range of the animation phase of the caption character, drawing the caption character in the first video frame according to the animation style information corresponding to the animation phase of the caption character.
11. The apparatus of claim 10, wherein the subtitle character drawing module is further specifically configured to:
if the timestamp information of the first video frame corresponds to the starting time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the starting style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to the ending time in the time range of the animation stage of the subtitle character, drawing the subtitle character in the first video frame according to the ending style information corresponding to the animation stage of the subtitle character;
if the timestamp information of the first video frame corresponds to a time other than the start time and the end time, determining target style information of the caption character in the first video frame according to the start time, the end time, the start style information, the end style information and the timestamp information of the first video frame, and drawing the caption character in the first video frame according to the target style information.
12. The apparatus of claim 11, wherein the subtitle character drawing module is further specifically configured to:
the start style information includes a start transparency value, and the end style information includes an end transparency value; calculating a transparency difference between the starting transparency value and the ending transparency value;
calculating a first time length between the starting time and the ending time, and calculating a second time length between the time corresponding to the timestamp information of the first video frame and the starting time;
and calculating the transparency value of the caption character in the first video frame according to the first time length, the second time length and the transparency difference value, and taking the calculated transparency value as the target style information.
CN201910167851.8A 2019-03-06 2019-03-06 Video subtitle generating method and device Active CN109788335B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910167851.8A CN109788335B (en) 2019-03-06 2019-03-06 Video subtitle generating method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910167851.8A CN109788335B (en) 2019-03-06 2019-03-06 Video subtitle generating method and device

Publications (2)

Publication Number Publication Date
CN109788335A CN109788335A (en) 2019-05-21
CN109788335B true CN109788335B (en) 2021-08-17

Family

ID=66486546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910167851.8A Active CN109788335B (en) 2019-03-06 2019-03-06 Video subtitle generating method and device

Country Status (1)

Country Link
CN (1) CN109788335B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110377208B (en) * 2019-07-17 2023-04-07 广州酷狗计算机科技有限公司 Audio playing method, device, terminal and computer readable storage medium
CN112752164B (en) * 2019-10-30 2023-02-17 深圳Tcl数字技术有限公司 Closed caption display method, intelligent terminal and storage medium
CN111182361B (en) * 2020-01-13 2022-06-17 青岛海信移动通信技术股份有限公司 Communication terminal and video previewing method
CN114697573A (en) * 2020-12-30 2022-07-01 深圳Tcl新技术有限公司 Subtitle generating method, computer device, computer-readable storage medium

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8214214B2 (en) * 2004-12-03 2012-07-03 Phoenix Solutions, Inc. Emotion detection device and method for use in distributed systems
CN100364322C (en) * 2005-11-21 2008-01-23 创维数字技术(深圳)有限公司 Method for dynamically forming caption image data and caption data flow
JP4965980B2 (en) * 2006-11-30 2012-07-04 株式会社東芝 Subtitle detection device
CN101594477B (en) * 2008-05-30 2013-02-20 新奥特(北京)视频技术有限公司 Processing system of ultralong caption rendering
JP5717629B2 (en) * 2008-06-30 2015-05-13 トムソン ライセンシングThomson Licensing Method and apparatus for dynamic display for digital movies
CN101662597B (en) * 2008-08-28 2012-09-05 新奥特(北京)视频技术有限公司 Template-based statistical system for subtitle rendering efficiency
CN102118584B (en) * 2009-12-31 2015-02-18 新奥特(北京)视频技术有限公司 Method and device for generating caption moving pictures with curve extension dynamic effect
CN102134027B (en) * 2011-04-12 2013-01-09 范奉和 Device and method for detecting and alarming elevator faults
CN102148048A (en) * 2011-05-12 2011-08-10 北京瑞信在线系统技术有限公司 Lyric display method and device
CN102436838A (en) * 2011-08-30 2012-05-02 北京瑞信在线系统技术有限公司 Lyric displaying method and device executed by computer
CN107220339A (en) * 2017-05-26 2017-09-29 北京酷我科技有限公司 A kind of lyrics word for word display methods
CN108419113B (en) * 2018-05-24 2021-01-08 广州酷狗计算机科技有限公司 Subtitle display method and device
CN109413478B (en) * 2018-09-26 2020-04-24 北京达佳互联信息技术有限公司 Video editing method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Optical flow based dynamic curved video text detection; Palaiahnakote Shivakumara et al.; 2014 IEEE International Conference on Image Processing (ICIP); 2015-01-29; full text *
Aesthetic characteristics and audience analysis of video bullet-screen comments in the era of media convergence; Li Peng; 《大众文艺》 (Popular Literature and Art); 2018-02-28 (No. 04); full text *

Also Published As

Publication number Publication date
CN109788335A (en) 2019-05-21

Similar Documents

Publication Publication Date Title
CN109788335B (en) Video subtitle generating method and device
KR102523843B1 (en) Font rendering method, device and computer readable storage medium
US20090087035A1 (en) Cartoon Face Generation
WO2017016171A1 (en) Window display processing method, apparatus, device and storage medium for terminal device
KR20160013984A (en) Touch optimized design for video editing
CN113691854A (en) Video creation method and device, electronic equipment and computer program product
CN109714627A (en) A kind of rendering method of comment information, device and equipment
CN106126140B (en) A kind of method, apparatus and electronic equipment of rendering type
CN113806570A (en) Image generation method and generation device, electronic device and storage medium
CN106648623B (en) Display method and device for characters in android system
CN114997105A (en) Design template, material generation method, computing device and storage medium
CN112528596A (en) Rendering method and device for special effect of characters, electronic equipment and storage medium
CN114925656B (en) Rich text display method, device, equipment and storage medium
CN106709968A (en) Data visualization method and system for play story information
CN113010075B (en) Multi-signal source window interaction method and system, readable storage medium and electronic device
CN115688216A (en) User-defined design method and device of drawing
CN102118584B (en) Method and device for generating caption moving pictures with curve extension dynamic effect
KR20060030179A (en) Electronic cartoon and manufacturing methode thereof
KR101935926B1 (en) Server and method for webtoon editing
JP3991061B1 (en) Image processing system
CN111078785A (en) Method and device for visually displaying data, electronic equipment and storage medium
Concolato et al. Design of an efficient scalable vector graphics player for constrained devices
CN115883918A (en) Method, apparatus, device and storage medium for processing video stream
Zhu Application of Element Symbol of Beijing Opera Facial Painting (Lianpu) in Smart Phone Theme Design
KR20160115214A (en) Display apparatus and display method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20220727

Address after: No.16 and 17, unit 1, North District, Kailin center, No.51 Jinshui East Road, Zhengzhou area (Zhengdong), Henan pilot Free Trade Zone, Zhengzhou City, Henan Province, 450000

Patentee after: Zhengzhou Apas Technology Co.,Ltd.

Address before: E301-27, building 1, No.1, hagongda Road, Tangjiawan Town, Zhuhai City, Guangdong Province

Patentee before: ZHUHAI TIANYAN TECHNOLOGY Co.,Ltd.