CN111709342A - Subtitle segmentation method, device, equipment and storage medium

Subtitle segmentation method, device, equipment and storage medium

Info

Publication number
CN111709342A
Authority
CN
China
Prior art keywords: target, subtitle, segmentation, view container, character
Prior art date
Legal status: Granted
Application number
CN202010519055.9A
Other languages
Chinese (zh)
Other versions
CN111709342B (en)
Inventor
郑嘉成
欧桐桐
谢飞
李占占
陈金平
Current Assignee
Beijing ByteDance Network Technology Co Ltd
Original Assignee
Beijing ByteDance Network Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing ByteDance Network Technology Co Ltd
Priority to CN202010519055.9A
Publication of CN111709342A
Application granted
Publication of CN111709342B
Current legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds

Abstract

Embodiments of the present disclosure disclose a subtitle segmentation method, apparatus, device, and storage medium. The method includes: receiving a subtitle segmentation instruction input by a user; determining, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, where the view containers correspond one-to-one to the characters contained in the target subtitle; and segmenting the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container. The subtitle segmentation method provided by the embodiments of the present disclosure can segment subtitles simply and quickly, effectively reduces extra manual operations by the user, and greatly improves the user experience.

Description

Subtitle segmentation method, device, equipment and storage medium
Technical Field
The disclosed embodiments relate to the field of computer technologies, and in particular, to a method, an apparatus, a device, and a storage medium for subtitle segmentation.
Background
Most existing video editing tools can automatically recognize subtitles from speech, but the accuracy of their word and sentence segmentation still needs improvement, especially when the speaker in the video talks too fast and the interval between two sentences is short. For example, when the speaker talks too fast, it is easy to recognize "the weather is really good today (subtitle one)" and "I want to go out to play (subtitle two)" as the single subtitle "the weather is really good today I want to go out to play". Frame a originally corresponds to "the weather is really good today" and the next frame b corresponds to "I want to go out to play", yet without splitting, both frame a and frame b display "the weather is really good today I want to go out to play".
Disclosure of Invention
The disclosed embodiments provide a subtitle segmentation method, device, equipment and storage medium, which can simply and quickly segment subtitles.
In a first aspect, an embodiment of the present disclosure provides a subtitle segmentation method, including:
receiving a subtitle segmentation instruction input by a user;
determining a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction from among the view containers; the view containers correspond one-to-one to the characters contained in the target subtitle;
and segmenting the target subtitle according to the position relation between the subtitle segmentation pointer and the target view container.
In a second aspect, an embodiment of the present disclosure further provides a subtitle segmentation apparatus, including:
the segmentation instruction receiving module is used for receiving a subtitle segmentation instruction input by a user;
the target container determining module is used for determining, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction; the view containers correspond one-to-one to the characters contained in the target subtitle;
and the target subtitle segmentation module is used for segmenting the target subtitle according to the position relation between the subtitle segmentation pointer and the target view container.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, where the electronic device includes:
one or more processing devices;
storage means for storing one or more programs;
when the one or more programs are executed by the one or more processing devices, the one or more processing devices are caused to implement the subtitle segmentation method according to the embodiment of the present disclosure.
In a fourth aspect, the disclosed embodiments also provide a computer readable medium, on which a computer program is stored, which when executed by a processing device, implements a subtitle segmentation method according to an embodiment of the present disclosure.
Embodiments of the present disclosure receive a subtitle segmentation instruction input by a user; determine, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, where the view containers correspond one-to-one to the characters contained in the target subtitle; and segment the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container. The subtitle segmentation method provided by the embodiments of the present disclosure can segment subtitles simply and quickly, effectively reduces extra manual operations by the user, and greatly improves the user experience.
Drawings
Fig. 1 is a flowchart of a subtitle segmentation method in an embodiment of the present disclosure;
Fig. 2 is a schematic diagram of a target subtitle stored in view containers in an embodiment of the present disclosure;
fig. 3 is a schematic diagram of a subtitle segmentation pointer in a view container where a target subtitle is located in an embodiment of the present disclosure;
fig. 4 is a flowchart of a subtitle segmentation method in another embodiment of the present disclosure;
fig. 5 is a flowchart of a subtitle segmentation method in another embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of a subtitle segmentation apparatus in another embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an electronic device in another embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing different devices, modules or units, and are not used for limiting the order or interdependence relationship of the functions performed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
In the related art, there are two subtitle segmentation schemes. In the first scheme, the user directly modifies and supplements the recognized subtitles by hand: for example, the subtitle "the weather is really good today I want to go out to play" automatically recognized by the video clipping tool is manually shortened to "the weather is really good today", and a new subtitle "I want to go out to play" is then added by hand. In the second scheme, the original subtitle automatically recognized by the video clipping tool is simply copied once, and the user must then manually delete the redundant text from each copy: for example, the original subtitle "the weather is really good today I want to go out to play" is copied once to form two identical subtitles, the user manually deletes the second half of subtitle one so that it becomes "the weather is really good today (subtitle one)", and then manually deletes the first half of subtitle two so that it becomes "I want to go out to play (subtitle two)", thereby forming two subtitles. However, the first scheme offers no convenient operation and relies entirely on manual processing by the user, and although the second scheme provides a "split" function, it still requires the user to manually delete the redundant text. The above schemes therefore need improvement so that subtitle segmentation can be performed simply and quickly.
Fig. 1 is a flowchart of a subtitle segmentation method according to an embodiment of the present disclosure, where the method may be applied to a case of segmenting a recognized video subtitle, and the method may be executed by a subtitle segmentation apparatus, which may be composed of hardware and/or software and may be generally integrated in a device with a subtitle segmentation function, where the device may be an electronic device such as a server, a mobile terminal, or a server cluster. As shown in fig. 1, the method specifically includes the following steps:
step 110, receiving a subtitle segmentation instruction input by a user.
Specifically, when a user wants to segment a certain subtitle, the user typically inputs a subtitle segmentation instruction in the user interface and slides a subtitle segmentation pointer to the position in the subtitle where it should be divided; the user can slide the subtitle segmentation pointer freely within the subtitle to be segmented according to the segmentation requirement. The subtitle segmentation instruction may be triggered by the user clicking a "split" button on the user interface, or may be a voice segmentation instruction input by the user.
Step 120, determining, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction; the view containers correspond one-to-one to the characters contained in the target subtitle.
In the embodiment of the present disclosure, the target subtitle is the subtitle the user wants to segment. The target subtitle may contain a plurality of characters, and each character in the target subtitle is stored in a corresponding view container. A view container can be understood as a container unit that can be displayed on the user interface and holds a character. The number of characters contained in the target subtitle is the same as the number of view containers; in other words, each character in the target subtitle corresponds to one fixed view container, so the view containers correspond one-to-one to the characters contained in the target subtitle. The sizes of the view containers may be the same or different; the embodiments of the present disclosure do not limit the size of the view containers. Illustratively, for a target subtitle of "cannot see him, still can accept it", fig. 2 is a schematic diagram of the target subtitle stored in the view containers.
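The one-to-one mapping lends itself to a simple data model. The sketch below is illustrative only (the patent gives no code): each character of the target subtitle is wrapped in its own fixed-width view container laid out horizontally, so container i always holds character i. The `ViewContainer` type and the width of 40 units are assumptions made for the example.

```kotlin
// Illustrative model, not the patent's implementation: one view container per character.
data class ViewContainer(val char: Char, val left: Float, val right: Float)

// Lay the characters out horizontally in fixed-width containers (the width is an assumed value).
fun layoutContainers(subtitle: String, containerWidth: Float = 40f): List<ViewContainer> =
    subtitle.mapIndexed { i, c ->
        ViewContainer(char = c, left = i * containerWidth, right = (i + 1) * containerWidth)
    }

fun main() {
    // One container per character; container i holds character i.
    layoutContainers("subtitles").forEach { println("'${it.char}' occupies [${it.left}, ${it.right})") }
}
```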
In the embodiment of the present disclosure, when a subtitle segmentation instruction input by the user is received, the view container covered by the subtitle segmentation pointer corresponding to the instruction is determined from among the view containers, and that view container is taken as the target view container. For example, fig. 3 is a schematic diagram of the subtitle segmentation pointer over the view containers holding the target subtitle; as shown in fig. 3, the subtitle segmentation pointer covers the view container holding the character "still", so the view container holding "still" is taken as the target view container.
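Determining the target view container then amounts to a hit test: the container whose horizontal span contains the pointer's x coordinate is the one the pointer covers. A minimal sketch, reusing the assumed `ViewContainer` model from the previous example:

```kotlin
// Hit test: return the container whose span [left, right) contains the pointer's
// x coordinate, or null if the pointer is outside every container.
fun findTargetContainer(containers: List<ViewContainer>, pointerX: Float): ViewContainer? =
    containers.firstOrNull { pointerX >= it.left && pointerX < it.right }
```

For vertically arranged containers the same test would simply run on the y coordinate instead.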
It should be noted that the embodiments of the present disclosure do not limit the arrangement of the view containers: they may be arranged vertically (for example, for vertical subtitles) or horizontally (for example, for horizontal subtitles), and may of course be arranged at any angle to adapt to subtitles presented at different angles.
And step 130, segmenting the target subtitle according to the position relation between the subtitle segmentation pointer and the target view container.
In the embodiment of the present disclosure, the positional relationship between the subtitle segmentation pointer and the target view container is determined, and the target subtitle is segmented according to that positional relationship. It can be understood that different positional relationships between the subtitle segmentation pointer and the target view container lead to different ways of dividing the target subtitle. For example, if the view containers are arranged horizontally, then when the subtitle segmentation pointer is located on the left half of the target view container, all characters before the target character (excluding the target character) are separated from the target character and all characters after it; when the subtitle segmentation pointer is located on the right half of the target view container, the target character and all characters before it are separated from all characters after the target character.
Optionally, segmenting the target subtitle according to the position relationship between the subtitle segmentation pointer and the target view container, including: determining the segmentation position of the target subtitle according to the position relationship between the subtitle segmentation pointer and the target view container; segmenting the target subtitle based on the determined segmentation position.
According to the technical solution of the embodiments of the present disclosure, a subtitle segmentation instruction input by a user is received; a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction is determined from among the view containers, where the view containers correspond one-to-one to the characters contained in the target subtitle; and the target subtitle is segmented according to the positional relationship between the subtitle segmentation pointer and the target view container. In the subtitle segmentation method provided by the embodiments of the present disclosure, since each character of the target subtitle to be segmented is placed in its own view container, the positional relationship between the subtitle segmentation pointer and the view container it covers can be determined quickly, so the subtitle can be segmented simply and quickly according to that relationship, effectively reducing extra manual operations by the user and greatly improving the user experience.
In some embodiments, segmenting the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container includes: determining the segmentation position of the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container; and segmenting the target subtitle based on the determined segmentation position. Because different positional relationships between the subtitle segmentation pointer and the target view container yield different segmentation positions, and correspondingly different ways of dividing the target subtitle, the segmentation position of the target subtitle is first determined from the positional relationship, and the target subtitle is then segmented at that position.
Optionally, determining the segmentation position of the target subtitle according to the position relationship between the subtitle segmentation pointer and the target view container includes: respectively determining a first distance between the subtitle segmentation pointer and a first edge of the target view container and a second distance between the subtitle segmentation pointer and a second edge of the target view container; wherein the first edge is an adjacent edge of a previous view container of the target view container and the target view container, and the second edge is an adjacent edge of a subsequent view container of the target view container and the target view container; and determining the segmentation position of the target subtitle according to the size relation between the first distance and the second distance. The method has the advantages that the segmentation position of the target subtitle can be accurately and quickly determined, and the target subtitle can be accurately and quickly segmented.
For example, since the width of the target view container is fixed, the division position of the target subtitle may be further determined by calculating the distance between the subtitle division pointer and each edge of the target view container. Specifically, the adjacent edge of the previous view container of the target view container and the target view container is used as a first edge, the adjacent edge of the next view container of the target view container and the target view container is used as a second edge, the distances between the subtitle segmentation pointer and the first edge and between the subtitle segmentation pointer and the second edge are respectively calculated, the distance between the subtitle segmentation pointer and the first edge is used as a first distance, and the distance between the subtitle segmentation pointer and the second edge is used as a second distance. Then, the first distance and the second distance are compared, and the segmentation position of the target subtitle is determined according to the size relation of the first distance and the second distance.
Optionally, determining the segmentation position of the target subtitle according to the relationship between the first distance and the second distance includes: when the first distance is smaller than the second distance, determining the position between the target character in the target view container and the character preceding it as the segmentation position of the target subtitle; and when the first distance is greater than the second distance, determining the position between the target character in the target view container and the character following it as the segmentation position of the target subtitle.
Illustratively, when the first distance is smaller than the second distance, the position between the target character in the target view container and the character preceding it is determined as the segmentation position of the target subtitle; accordingly, when the target subtitle is segmented at that position, the first subtitle is formed from all characters before the target character, and the second subtitle from the target character and all characters after it. When the first distance is greater than the second distance, the position between the target character in the target view container and the character following it is determined as the segmentation position; accordingly, the first subtitle is formed from the target character and all characters before it, and the second subtitle from all characters after it.
The subtitle division described above will be explained by taking horizontally arranged view containers as an example. When the first distance is smaller than the second distance, the subtitle segmentation pointer is located on the left half of the target view container, and the position between the target character in the target view container and the character adjacent to its left is determined as the segmentation position of the target subtitle; when the first distance is greater than the second distance, the subtitle segmentation pointer is located on the right half of the target view container, and the position between the target character and the character adjacent to its right is determined as the segmentation position. Illustratively, for the target subtitle "cannot see him, still can accept it", if the subtitle segmentation pointer is located on the left half of the view container holding the character "still", the character "still" is the target character and the subtitle is cut before it, forming the two subtitles "cannot see him" and "still can accept it". As another example, if the subtitle segmentation pointer is located on the right half of the view container holding "still", "still" is again the target character and the subtitle is cut after it, forming the two subtitles "cannot see him still" and "can accept it".
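The edge-distance rule reduces to a few lines of arithmetic. The following is a hedged sketch under the assumed geometry of the earlier `ViewContainer` examples, not the patent's code: d1 is the pointer's distance to the first edge (shared with the previous container), d2 its distance to the second edge (shared with the next container); d1 < d2 cuts before the target character, otherwise after it.

```kotlin
// Return the character index at which the target subtitle is cut,
// or null if the pointer is not over any container.
fun splitIndex(containers: List<ViewContainer>, pointerX: Float): Int? {
    val idx = containers.indexOfFirst { pointerX >= it.left && pointerX < it.right }
    if (idx < 0) return null
    val target = containers[idx]
    val d1 = pointerX - target.left   // first distance: to the edge shared with the previous container
    val d2 = target.right - pointerX  // second distance: to the edge shared with the next container
    return if (d1 < d2) idx else idx + 1  // cut before the target character, or after it
}
```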
In some embodiments, segmenting the target subtitle based on the determined segmentation position includes: determining a first subtitle and a second subtitle from the target subtitle based on the determined segmentation position; and storing all characters of the first subtitle and all characters of the second subtitle in two different array structures. Because the characters of the two subtitles obtained from the user's division of the target subtitle are stored in two separate array structures, the subtitle formed by the characters in either array structure can again be taken as a new target subtitle and segmented further, according to the positional relationship between the subtitle segmentation pointer corresponding to a new subtitle segmentation instruction input by the user and the target view container it covers. In this way, the groups of characters split from the initial target subtitle are stored in multiple different array structures, each holding one of the sub-subtitles obtained by splitting, so the target subtitle can be split into multiple smaller subtitles.
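A minimal sketch of this step, assuming plain Kotlin lists as the "array structures" (the patent does not name a concrete type): the cut index from the previous sketch splits the character sequence into two independent lists, either of which can later serve as a new target subtitle.

```kotlin
// Split the character sequence at cutIndex into two independent array structures.
fun splitSubtitle(chars: List<Char>, cutIndex: Int): Pair<MutableList<Char>, MutableList<Char>> {
    require(cutIndex in 0..chars.size) { "cut index out of range" }
    val first = chars.subList(0, cutIndex).toMutableList()            // characters of the first subtitle
    val second = chars.subList(cutIndex, chars.size).toMutableList()  // characters of the second subtitle
    return first to second
}
```

Because each half lives in its own structure, laying either half out in containers again and repeating the hit test yields a further split, which is the "one subtitle, multiple splits" behavior described later.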
In some embodiments, after segmenting the target subtitle based on the determined segmentation position, the method further includes: increasing the distance between the first subtitle and the second subtitle obtained by segmentation. This improves the visual feedback after the target subtitle is segmented, making the segmentation result clear to the user. Illustratively, when the target subtitle "cannot see him, still can accept it" is divided into "cannot see him still" and "can accept it" based on the determined segmentation position, the distance between the characters "still" and "can" is increased. Optionally, increasing the distance between the first subtitle and the second subtitle obtained by segmentation includes: increasing the distance between the view container holding the last character of the first subtitle and the view container holding the first character of the second subtitle. For example, the margin between the view container holding the character "still" and the view container holding the character "can" is increased, so that the position at which the target subtitle was divided can be seen clearly.
Optionally, increasing the distance between the first subtitle and the second subtitle obtained by segmentation includes: increasing the distance between the first subtitle and the second subtitle when a view list refresh instruction input by the user is received. A view list refresh instruction is received when the user clicks a "refresh" button, or when the user inputs a refresh instruction by voice. When the view list refresh instruction is received, the distance between the first subtitle and the second subtitle obtained by segmentation is increased.
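One illustrative way to realize the widened gap, again under the assumed geometric model rather than any API from the patent: on refresh, every container at or after the cut index is shifted right by an extra gap, opening a visible margin between the two subtitles.

```kotlin
// Shift the second subtitle's containers right by `gap` units to open a visible
// margin at the cut (the gap value is an assumed default).
fun applySplitGap(containers: List<ViewContainer>, cutIndex: Int, gap: Float = 16f): List<ViewContainer> =
    containers.mapIndexed { i, c ->
        if (i >= cutIndex) c.copy(left = c.left + gap, right = c.right + gap) else c
    }
```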
In some embodiments, before receiving a subtitle segmentation instruction input by a user, the method further includes: acquiring a target subtitle identified from a target video; and placing each character in the target subtitle into a corresponding view container and displaying the characters in a user interface. The target video may be a video recorded on site, a video selected from the terminal's album, or a video loaded from the network side; the embodiments of the present disclosure do not limit the source of the target video. Optionally, the target video is acquired, the speech in the target video is recognized, and the recognized speech information may be attached as subtitles to the corresponding image frames of the target video. Of course, a plurality of subtitles may be identified from the target video, and one of them may be selected as the target subtitle: for example, the longest subtitle whose length exceeds a preset length threshold may be used as the target subtitle, or one subtitle may be selected from the plurality of subtitles according to a selection instruction of the user. It should be noted that the embodiments of the present disclosure do not limit the manner of determining the target subtitle from the subtitles recognized in the target video. Each character in the target subtitle is then placed in its corresponding view container, with the view containers corresponding one-to-one to the characters in the target subtitle.
In some embodiments, before placing each character in the target subtitle into its corresponding view container, the method further includes: storing all characters of the target subtitle in an array structure. Placing each character of the target subtitle into its corresponding view container then includes: reading the characters of the target subtitle from the array structure in sequence and placing each character into its corresponding view container.
Illustratively, after a target subtitle identified from a target video is acquired, each character in the target subtitle is stored in an array structure; when the target subtitle needs to be displayed on the user interface, the characters are read from the array structure in sequence and each is placed in its corresponding view container, one character per container. Because every character of the target subtitle is stored in the same array structure in advance, the target subtitle can be segmented multiple times according to subtitle segmentation instructions input by the user multiple times, achieving the effect of "one subtitle, multiple splits".
In some embodiments, before acquiring the target subtitle identified from the target video, the method further includes: acquiring a video to be clipped; determining the playing duration of the video to be clipped; and when the playing duration is greater than a preset playing duration, clipping the video to be clipped to generate the target video, where the playing duration of the target video is less than or equal to the preset playing duration. Specifically, when a certain video needs to be edited, the video to be clipped is acquired, for example by selecting it from the local album or loading it from the network side. The playing duration of the video to be clipped is determined; when it is greater than the preset playing duration, the video is clipped so that the playing duration of the clipped video is less than or equal to the preset playing duration, and the clipped video is taken as the target video. When clipping the video to generate the target video, the start playing time, end playing time, and playing content of the target video within the video to be clipped may be determined according to the user's selection; of course, the video may also be clipped automatically, for example by taking the start playing time of the video to be clipped as the start playing time of the target video and clipping out a segment whose playing duration equals the preset playing duration. When the playing duration of the video to be clipped is less than the preset playing duration, no clipping is needed: the video to be clipped is used directly as the target video and may be published.
Optionally, clipping the video to be clipped to generate the target video when the playing duration is greater than the preset playing duration includes: prompting the user whether to clip the video to be clipped when the playing duration is greater than the preset playing duration; and, when a clipping instruction input by the user is received, clipping the video to be clipped according to the clipping instruction to generate the target video. The advantage of this arrangement is that the user's actual clipping needs can be better met. Specifically, when the playing duration of the video to be clipped is greater than the preset playing duration, the user is prompted, by voice or text, whether to clip the video. When a clipping instruction input by the user is received, the video is clipped according to that instruction; the clipping instruction may include the start time of the clip, the playing duration of the clipped video, and the end time of the clip. The clipped video is then taken as the target video and may of course be published. Optionally, when no clipping instruction is received from the user within a preset time period, the video to be clipped may be used directly as the target video and published.
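A hedged sketch of this clipping flow under stated assumptions: `Video`, `promptUserToClip`, and `clipVideo` are hypothetical stand-ins (the patent names no such APIs), and the 60-second preset playing duration is an assumed value.

```kotlin
const val PRESET_DURATION_SEC = 60.0  // assumed preset playing duration

data class Video(val path: String, val durationSec: Double)

// Hypothetical stand-ins for the user prompt and the actual clipping call.
fun promptUserToClip(video: Video): Boolean = true
fun clipVideo(video: Video, startSec: Double, endSec: Double): Video =
    Video(video.path, endSec - startSec)

// If the source exceeds the preset duration, prompt the user and clip (here,
// auto-clipped from the start); otherwise use the source directly as the target video.
fun prepareTargetVideo(source: Video): Video =
    if (source.durationSec > PRESET_DURATION_SEC && promptUserToClip(source)) {
        clipVideo(source, startSec = 0.0, endSec = PRESET_DURATION_SEC)
    } else {
        source
    }
```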
Fig. 4 is a flowchart of a subtitle segmentation method in another embodiment of the present disclosure, as shown in fig. 4, the method includes the following steps:
step 410, the target subtitles identified from the target video are obtained.
Step 420, storing all characters in the target caption in an array structure.
And 430, sequentially reading each character in the target caption from the array structure, respectively placing each character in the target caption in a corresponding view container with a fixed width, and displaying the characters in a user interface.
Wherein the number of characters contained in the target subtitle is the same as the number of the view containers.
Step 440, when receiving a subtitle segmentation instruction input by a user, determining a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction from each view container.
The view containers correspond one-to-one to the characters contained in the target subtitle.
And step 450, determining the segmentation position of the target subtitle according to the position relationship between the subtitle segmentation pointer and the target view container.
Step 460, segmenting the target caption based on the determined segmentation position.
Step 470, when receiving a view list refresh command input by the user, increasing the distance between the divided first subtitle and the second subtitle.
The subtitle segmentation method provided by the embodiments of the present disclosure can segment subtitles simply and quickly and effectively reduces extra manual operations by the user; it also improves the visual feedback after the target subtitle is segmented, making the segmentation result clear to the user, and it supports subtitle segmentation instructions input by the user multiple times, segmenting the target subtitle repeatedly, that is, achieving the effect of "one subtitle, multiple splits".
Fig. 5 is a flowchart of a subtitle segmentation method in another embodiment of the present disclosure, as shown in fig. 5, the method includes the following steps:
step 510, acquiring a video to be edited.
Step 520, determining the playing time length of the video to be clipped.
Step 530, determining whether the playing time length is greater than a preset playing time length, if so, executing step 540, otherwise, executing step 560.
And 540, prompting a user whether to clip the video to be clipped.
And step 550, when a clipping instruction input by the user is received, clipping the video to be clipped according to the clipping instruction, and generating the target video.
And the playing time length of the target video is less than or equal to the preset playing time length.
And step 560, taking the video to be clipped as the target video.
Step 570, obtaining the target subtitles identified from the target video.
Step 580, store all characters in the target subtitle in an array structure.
Step 590, reading each character in the target caption from the array structure in sequence, placing each character in the target caption in a corresponding view container with a fixed width, and displaying the characters in a user interface.
Wherein the number of characters contained in the target subtitle is the same as the number of view containers. Optionally, all the view containers have the same dimensions.
Step 600, when a subtitle segmentation instruction input by a user is received, determining a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction from each view container.
Step 610, respectively determining a first distance between the subtitle segmentation pointer and a first edge of the target view container and a second distance between the subtitle segmentation pointer and a second edge of the target view container.
Wherein the first edge is an adjacent edge of a previous view container of the target view container and the target view container, and the second edge is an adjacent edge of a subsequent view container of the target view container and the target view container.
Step 620, when the first distance is smaller than the second distance, determining a position between a target character in a target view container and a character before the target character as a segmentation position of the target subtitle.
Step 630, when the first distance is greater than the second distance, determining a position between a target character in a target view container and a character next to the target character as a segmentation position of the target subtitle.
Step 640, determining a first subtitle and a second subtitle from the target subtitles based on the determined segmentation position.
Step 650, storing all the characters in the first caption and all the characters in the second caption in two different array structures respectively, so as to segment the target caption.
It should be noted that step 620 and step 630 are alternatively performed.
The subtitle segmentation method provided by the embodiment of the disclosure can not only clip the video, but also simply and quickly segment the subtitles in the clipped video, thereby effectively reducing extra manual operations of a user and greatly improving the user experience.
Fig. 6 is a schematic structural diagram of a subtitle segmentation apparatus according to another embodiment of the present disclosure. As shown in fig. 6, the apparatus includes: a segmentation instruction receiving module 660, a target container determination module 670 and a target subtitle segmentation module 680.
A division instruction receiving module 660, configured to receive a subtitle division instruction input by a user;
a target container determining module 670, configured to determine, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, where the view containers correspond one-to-one to the characters contained in the target subtitle;
and the target subtitle segmentation module 680 is configured to segment the target subtitle according to the position relationship between the subtitle segmentation pointer and the target view container.
The embodiments of the present disclosure receive a subtitle segmentation instruction input by a user; determine, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, where the view containers correspond one-to-one to the characters contained in the target subtitle; and segment the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container. The subtitle segmentation scheme provided by the embodiments of the present disclosure can segment subtitles simply and quickly, effectively reduces extra manual operations by the user, and greatly improves the user experience.
Optionally, the target subtitle segmentation module includes:
a dividing position determining unit, configured to determine a dividing position of the target subtitle according to a positional relationship between the subtitle dividing pointer and the target view container;
and the target subtitle segmentation unit is used for segmenting the target subtitle based on the determined segmentation position.
Optionally, the apparatus further comprises:
and the distance increasing module is used for increasing the distance between the first subtitle and the second subtitle obtained by segmentation after the target subtitle is segmented based on the determined segmentation position.
Optionally, the distance increasing module is configured to:
and when a view list refreshing instruction input by a user is received, increasing the distance between the first subtitle and the second subtitle obtained by segmentation.
Optionally, the target subtitle segmentation unit is configured to:
determining a first subtitle and a second subtitle from the target subtitle based on the determined division position;
and respectively storing all characters in the first caption and all characters in the second caption in two different array structures.
Optionally, the segmentation position determining unit includes:
an edge distance determining subunit, configured to determine a first distance between the subtitle segmentation pointer and a first edge of the target view container and a second distance between the subtitle segmentation pointer and a second edge of the target view container, respectively; wherein the first edge is an adjacent edge of a previous view container of the target view container and the target view container, and the second edge is an adjacent edge of a subsequent view container of the target view container and the target view container;
and the division position determining subunit is configured to determine the division position of the target subtitle according to a size relationship between the first distance and the second distance.
Optionally, the segmentation position determining subunit is configured to:
when the first distance is smaller than the second distance, determining the position between a target character in a target view container and a character before the target character as the segmentation position of the target caption;
and when the first distance is greater than the second distance, determining the position between a target character in a target view container and a character next to the target character as the segmentation position of the target subtitle.
Optionally, the apparatus further comprises:
the target caption acquisition module is used for acquiring a target caption identified from a target video before receiving a caption segmentation instruction input by a user;
and the target caption display module is used for respectively placing each character in the target caption into the corresponding view container and displaying the characters in the user interface.
Optionally, the apparatus further comprises:
a character storage module, configured to store all characters in the target subtitle in an array structure before placing each character in the target subtitle in a corresponding view container with a fixed width;
the target caption display module is used for:
and reading each character in the target caption from the array structure in sequence, and respectively placing each character in the target caption in a corresponding view container with fixed width.
Optionally, the apparatus further comprises:
the device comprises a to-be-clipped video acquisition module, a to-be-clipped video acquisition module and a to-be-clipped video acquisition module, wherein the to-be-clipped video acquisition module is used for acquiring a to-be-clipped video before acquiring a target subtitle identified from a target video;
the playing time length determining module is used for determining the playing time length of the video to be clipped;
the video clipping module is used for clipping the video to be clipped to generate the target video when the playing duration is longer than a preset playing duration; and the playing time length of the target video is less than or equal to the preset playing time length.
Optionally, the video clip module is configured to:
when the playing duration is longer than a preset playing duration, prompting a user whether to clip the video to be clipped;
when a clipping instruction input by the user is received, clipping the video to be clipped according to the clipping instruction, and generating the target video.
Optionally, all the view containers have the same dimensions.
The device can execute the methods provided by all the embodiments of the disclosure, and has corresponding functional modules and beneficial effects for executing the methods. For technical details which are not described in detail in the embodiments of the present disclosure, reference may be made to the methods provided in all the aforementioned embodiments of the present disclosure.
Referring now to FIG. 7, a block diagram of an electronic device 300 suitable for use in implementing embodiments of the present disclosure is shown. The electronic device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a fixed terminal such as a digital TV, a desktop computer, and the like, or various forms of servers such as a stand-alone server or a server cluster. The electronic device shown in fig. 7 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 7, the electronic device 300 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 301, which may perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 302 or a program loaded from a storage device 308 into a random access memory (RAM) 303. In the RAM 303, various programs and data necessary for the operation of the electronic device 300 are also stored. The processing device 301, the ROM 302, and the RAM 303 are connected to each other via a bus 304. An input/output (I/O) interface 305 is also connected to the bus 304.
Generally, the following devices may be connected to the I/O interface 305: input devices 306 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 307 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage devices 308 including, for example, magnetic tape, hard disk, etc.; and a communication device 309. The communication means 309 may allow the electronic device 300 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 300 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for performing the methods illustrated in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication device 309, or installed from the storage device 308, or installed from the ROM 302. When executed by the processing device 301, the computer program performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure can be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the client and the server may communicate using any currently known or future developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication (e.g., a communication network) in any form or medium. Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: receive a subtitle segmentation instruction input by a user; determine, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, where the view containers correspond one-to-one to the characters contained in the target subtitle; and segment the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container.
Computer program code for carrying out operations of the present disclosure may be written in any combination of one or more programming languages, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
According to one or more embodiments of the present disclosure, there is provided a subtitle segmentation method, including:
receiving a subtitle segmentation instruction input by a user;
determining, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, where the view containers correspond one-to-one to the characters contained in the target subtitle;
and segmenting the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container.
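By way of a non-limiting illustration, the following Kotlin sketch models these three steps with plain data classes. All names here (CharView, findTargetContainer, segment) are hypothetical stand-ins for whatever view framework an actual implementation uses, and the exact side on which the subtitle is split is refined further below.

```kotlin
// Hypothetical model: one view container per character of the target subtitle,
// each described by the horizontal span it occupies on screen.
data class CharView(val char: Char, val left: Float, val right: Float)

// Determine the target view container: the one whose span covers the pointer.
fun findTargetContainer(views: List<CharView>, pointerX: Float): Int =
    views.indexOfFirst { pointerX >= it.left && pointerX < it.right }

// Segment the target subtitle at the given character index.
fun segment(subtitle: String, splitIndex: Int): Pair<String, String> =
    subtitle.substring(0, splitIndex) to subtitle.substring(splitIndex)

fun main() {
    val subtitle = "字幕分割示例"
    // Six equal-width containers of 40 px each, laid out left to right.
    val views = subtitle.mapIndexed { i, c -> CharView(c, i * 40f, (i + 1) * 40f) }
    val target = findTargetContainer(views, pointerX = 130f) // covered container: index 3
    println(segment(subtitle, target))                       // (字幕分, 割示例)
}
```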
Further, segmenting the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container includes:
determining the segmentation position of the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container;
and segmenting the target subtitle based on the determined segmentation position.
Further, after segmenting the target subtitle based on the determined segmentation position, the method further includes:
increasing the distance between the first subtitle and the second subtitle obtained by the segmentation.
Further, increasing the distance between the first subtitle and the second subtitle obtained by the segmentation includes:
when a view list refreshing instruction input by the user is received, increasing the distance between the first subtitle and the second subtitle obtained by the segmentation.
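A minimal sketch of this refresh-driven spacing, assuming a row model in which each displayed subtitle carries a top spacing that the view list re-reads when it refreshes; SubtitleRow and onViewListRefresh are hypothetical names:

```kotlin
// Hypothetical row model: the view list reads each row's topSpacing on refresh.
data class SubtitleRow(val text: String, var topSpacing: Float)

// On a view list refreshing instruction, widen the gap before the second
// subtitle so the two subtitles produced by the segmentation are visibly apart.
fun onViewListRefresh(rows: List<SubtitleRow>, secondSubtitleIndex: Int, gap: Float = 24f) {
    rows[secondSubtitleIndex].topSpacing = gap
}
```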
Further, segmenting the target subtitle based on the determined segmentation position includes:
determining a first subtitle and a second subtitle from the target subtitle based on the determined segmentation position;
and storing all characters of the first subtitle and all characters of the second subtitle in two different array structures, respectively.
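The split into two array structures can be as simple as the following sketch, with the two structures modeled as Kotlin lists and splitCharacters a hypothetical helper:

```kotlin
// Store the characters of the first and second subtitles in two separate
// structures, mirroring the single array that held the original subtitle.
fun splitCharacters(chars: List<Char>, splitIndex: Int): Pair<List<Char>, List<Char>> =
    chars.take(splitIndex) to chars.drop(splitIndex)
```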
Further, determining the segmentation position of the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container includes:
determining a first distance between the subtitle segmentation pointer and a first edge of the target view container, and a second distance between the subtitle segmentation pointer and a second edge of the target view container, where the first edge is the edge the target view container shares with the preceding view container, and the second edge is the edge the target view container shares with the following view container;
and determining the segmentation position of the target subtitle according to the magnitude relationship between the first distance and the second distance.
Further, determining the segmentation position of the target subtitle according to the magnitude relationship between the first distance and the second distance includes:
when the first distance is smaller than the second distance, determining the position between the target character in the target view container and the character preceding it as the segmentation position of the target subtitle;
and when the first distance is greater than the second distance, determining the position between the target character in the target view container and the character following it as the segmentation position of the target subtitle.
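A minimal sketch of this rule, assuming the pointer's x-coordinate and the target container's left and right edges are known; the case where the two distances are exactly equal is not specified above, so the sketch folds it into the second branch:

```kotlin
// Return the index of the first character of the second subtitle.
// targetIndex is the index of the character held by the target view container.
fun splitIndexFor(targetIndex: Int, left: Float, right: Float, pointerX: Float): Int {
    val firstDistance = pointerX - left    // distance to the edge shared with the previous container
    val secondDistance = right - pointerX  // distance to the edge shared with the next container
    // Closer to the first edge: split before the target character;
    // otherwise split after it.
    return if (firstDistance < secondDistance) targetIndex else targetIndex + 1
}
```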
Further, before receiving the subtitle segmentation instruction input by the user, the method further includes:
acquiring a target subtitle identified from a target video;
and placing each character of the target subtitle into a corresponding view container and displaying the view containers in a user interface.
Further, before placing each character of the target subtitle into the corresponding view container, the method further includes:
storing all characters of the target subtitle in an array structure;
and placing each character of the target subtitle into the corresponding view container includes:
reading each character of the target subtitle from the array structure in sequence and placing it into the corresponding view container.
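As a sketch of this array-backed placement, reusing the hypothetical CharView model from the earlier example:

```kotlin
data class CharView(val char: Char, val left: Float, val right: Float) // as above

// Keep the recognized subtitle in a single array, then read it out in order,
// placing one character into each equally sized view container.
fun buildContainers(subtitle: String, charWidth: Float = 40f): List<CharView> =
    subtitle.toCharArray() // the array structure holding all characters
        .mapIndexed { i, c -> CharView(c, i * charWidth, (i + 1) * charWidth) }
```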
Further, before acquiring the target subtitle identified from the target video, the method further includes:
acquiring a video to be clipped;
determining the playing duration of the video to be clipped;
and when the playing duration is longer than a preset playing duration, clipping the video to be clipped to generate the target video, where the playing duration of the target video is less than or equal to the preset playing duration.
Further, when the playing duration is longer than the preset playing duration, clipping the video to be clipped to generate the target video includes:
when the playing duration is longer than the preset playing duration, prompting the user whether to clip the video to be clipped;
and when a clipping instruction input by the user is received, clipping the video to be clipped according to the clipping instruction to generate the target video.
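A sketch of this gating logic, with the user prompt and the actual clipping abstracted into callbacks, since both are UI- and codec-specific; maybeClip is a hypothetical name:

```kotlin
// Clip only when the video exceeds the preset duration and the user confirms.
fun maybeClip(
    durationMs: Long,
    presetMaxMs: Long,
    promptUser: () -> Boolean,   // returns true if the user confirms clipping
    clip: (maxMs: Long) -> Unit, // produces a target video no longer than maxMs
) {
    if (durationMs > presetMaxMs && promptUser()) {
        clip(presetMaxMs)
    }
}
```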
Further, the dimensions of the individual view containers are the same.
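One practical consequence of equally sized containers, an inference rather than something stated above, is that the target container can be located by a single division instead of a search:

```kotlin
// With equal-width containers, the covered container index is one division.
fun containerIndex(pointerX: Float, charWidth: Float, count: Int): Int =
    (pointerX / charWidth).toInt().coerceIn(0, count - 1)
```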
It is to be noted that the foregoing describes only preferred embodiments of the present disclosure and the technical principles employed. Those skilled in the art will appreciate that the present disclosure is not limited to the particular embodiments described herein, and that various obvious changes, adaptations, and substitutions are possible without departing from the scope of the present disclosure. Therefore, although the present disclosure has been described in some detail with reference to the above embodiments, it is not limited to them and may encompass other equivalent embodiments without departing from its spirit; its scope is determined by the appended claims.

Claims (15)

1. A subtitle segmentation method, comprising:
receiving a subtitle segmentation instruction input by a user;
determining, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, wherein the view containers correspond one-to-one to the characters contained in the target subtitle;
and segmenting the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container.
2. The method of claim 1, wherein segmenting the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container comprises:
determining the segmentation position of the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container;
segmenting the target subtitle based on the determined segmentation position.
3. The method of claim 2, wherein after segmenting the target subtitle based on the determined segmentation position, the method further comprises:
increasing the distance between the first subtitle and the second subtitle obtained by the segmentation.
4. The method of claim 3, wherein increasing the distance between the segmented first subtitle and the segmented second subtitle comprises:
and when a view list refreshing instruction input by a user is received, increasing the distance between the first subtitle and the second subtitle obtained by segmentation.
5. The method of claim 2, wherein segmenting the target subtitle based on the determined segmentation position comprises:
determining a first subtitle and a second subtitle from the target subtitle based on the determined segmentation position;
and storing all characters of the first subtitle and all characters of the second subtitle in two different array structures, respectively.
6. The method of claim 2, wherein determining the segmentation position of the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container comprises:
determining a first distance between the subtitle segmentation pointer and a first edge of the target view container, and a second distance between the subtitle segmentation pointer and a second edge of the target view container, wherein the first edge is the edge the target view container shares with the preceding view container, and the second edge is the edge the target view container shares with the following view container;
and determining the segmentation position of the target subtitle according to the magnitude relationship between the first distance and the second distance.
7. The method of claim 6, wherein determining the segmentation position of the target subtitle according to the magnitude relationship between the first distance and the second distance comprises:
when the first distance is smaller than the second distance, determining the position between the target character in the target view container and the character preceding it as the segmentation position of the target subtitle;
and when the first distance is greater than the second distance, determining the position between the target character in the target view container and the character following it as the segmentation position of the target subtitle.
8. The method of claim 1, wherein before receiving a subtitle segmentation instruction input by a user, the method further comprises:
acquiring a target subtitle identified from a target video;
and placing each character of the target subtitle into a corresponding view container and displaying the view containers in a user interface.
9. The method of claim 8, further comprising, prior to placing each character in the target subtitle in a corresponding view container:
storing all characters of the target subtitle in an array structure;
wherein placing each character of the target subtitle into the corresponding view container comprises:
reading each character of the target subtitle from the array structure in sequence and placing it into the corresponding view container.
10. The method of claim 8, further comprising, prior to acquiring the target subtitle identified from the target video:
acquiring a video to be clipped;
determining the playing duration of the video to be clipped;
and when the playing duration is longer than a preset playing duration, clipping the video to be clipped to generate the target video, wherein the playing duration of the target video is less than or equal to the preset playing duration.
11. The method of claim 10, wherein clipping the video to be clipped to generate the target video when the playing duration is longer than a preset playing duration comprises:
when the playing duration is longer than the preset playing duration, prompting the user whether to clip the video to be clipped;
and when a clipping instruction input by the user is received, clipping the video to be clipped according to the clipping instruction to generate the target video.
12. The method of any of claims 1-11, wherein the dimensions of the respective view containers are the same.
13. A subtitle segmentation apparatus, comprising:
a segmentation instruction receiving module, configured to receive a subtitle segmentation instruction input by a user;
a target container determining module, configured to determine, from among the view containers, a target view container covered by a subtitle segmentation pointer corresponding to the subtitle segmentation instruction, wherein the view containers correspond one-to-one to the characters contained in the target subtitle;
and a target subtitle segmentation module, configured to segment the target subtitle according to the positional relationship between the subtitle segmentation pointer and the target view container.
14. An electronic device, comprising:
one or more processing devices; and
a storage device configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processing devices, cause the one or more processing devices to implement the subtitle segmentation method of any one of claims 1-12.
15. A computer-readable medium having stored thereon a computer program which, when executed by a processing device, implements the subtitle segmentation method of any one of claims 1-12.
CN202010519055.9A 2020-06-09 2020-06-09 Subtitle segmentation method, device, equipment and storage medium Active CN111709342B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010519055.9A CN111709342B (en) 2020-06-09 2020-06-09 Subtitle segmentation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN111709342A (en) 2020-09-25
CN111709342B (en) 2023-05-16

Family

ID=72539208

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010519055.9A Active CN111709342B (en) 2020-06-09 2020-06-09 Subtitle segmentation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111709342B (en)

Patent Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6052508A (en) * 1997-04-04 2000-04-18 Avid Technology, Inc. User interface for managing track assignment for portable digital moving picture recording and editing system
JP2006114955A (en) * 2004-10-12 2006-04-27 Seiko Epson Corp Caption extraction apparatus, caption extraction method, and program and recording medium thereof
US20070253678A1 (en) * 2006-05-01 2007-11-01 Sarukkai Ramesh R Systems and methods for indexing and searching digital video content
US20150003797A1 (en) * 2013-06-27 2015-01-01 Johannes P. Schmidt Alignment of closed captions
US9558784B1 (en) * 2015-03-24 2017-01-31 Amazon Technologies, Inc. Intelligent video navigation techniques
CN106993227A (en) * 2016-01-20 2017-07-28 腾讯科技(北京)有限公司 It is a kind of enter row information displaying method and apparatus
WO2017191397A1 (en) * 2016-05-03 2017-11-09 Orange Method and device for synchronising subtitles
CN107480670A (en) * 2016-06-08 2017-12-15 北京新岸线网络技术有限公司 A kind of method and apparatus of caption extraction
US20170358064A1 (en) * 2016-06-12 2017-12-14 Apple Inc. Rendering Information into Images
WO2019085980A1 (en) * 2017-11-03 2019-05-09 腾讯科技(深圳)有限公司 Method and device for video caption automatic adjustment, terminal, and readable medium
WO2019218770A1 (en) * 2018-05-18 2019-11-21 高新兴科技集团股份有限公司 Video playing method for synchronously displaying ar information
CN108924619A (en) * 2018-06-29 2018-11-30 北京优酷科技有限公司 The display methods and device of subtitle
US20200007902A1 (en) * 2018-06-29 2020-01-02 Alibaba Group Holding Limited Video subtitle display method and apparatus
WO2020006309A1 (en) * 2018-06-29 2020-01-02 Alibaba Group Holding Limited Video subtitle display method and apparatus

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113422996A (en) * 2021-05-10 2021-09-21 北京达佳互联信息技术有限公司 Subtitle information editing method, device and storage medium
CN113422996B (en) * 2021-05-10 2023-01-20 北京达佳互联信息技术有限公司 Subtitle information editing method, device and storage medium

Also Published As

Publication number Publication date
CN111709342B (en) 2023-05-16

Similar Documents

Publication Publication Date Title
CN111970577B (en) Subtitle editing method and device and electronic equipment
CN111445902B (en) Data collection method, device, storage medium and electronic equipment
CN110267113B (en) Video file processing method, system, medium, and electronic device
CN112015926B (en) Search result display method and device, readable medium and electronic equipment
CN113259740A (en) Multimedia processing method, device, equipment and medium
CN113010698B (en) Multimedia interaction method, information interaction method, device, equipment and medium
CN111753558B (en) Video translation method and device, storage medium and electronic equipment
CN111800671A (en) Method and apparatus for aligning paragraphs and video
CN111263186A (en) Video generation, playing, searching and processing method, device and storage medium
CN113507637A (en) Media file processing method, device, equipment, readable storage medium and product
CN113889113A (en) Sentence dividing method and device, storage medium and electronic equipment
CN112380365A (en) Multimedia subtitle interaction method, device, equipment and medium
CN113886612A (en) Multimedia browsing method, device, equipment and medium
CN114329223A (en) Media content searching method, device, equipment and medium
CN110379406B (en) Voice comment conversion method, system, medium and electronic device
CN110855626A (en) Electronic whiteboard packet loss processing method, system, medium and electronic equipment
CN111709342B (en) Subtitle segmentation method, device, equipment and storage medium
CN113011169A (en) Conference summary processing method, device, equipment and medium
CN109816670B (en) Method and apparatus for generating image segmentation model
CN115269920A (en) Interaction method, interaction device, electronic equipment and storage medium
US20140297285A1 (en) Automatic page content reading-aloud method and device thereof
EP4207775A1 (en) Method and apparatus for determining object addition mode, electronic device, and medium
CN113791858A (en) Display method, device, equipment and storage medium
CN114430491A (en) Live broadcast-based data processing method and device
CN112699687A (en) Content cataloging method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant