CN113905267A - Subtitle editing method and device, electronic equipment and storage medium


Info

Publication number
CN113905267A
Authority
CN
China
Prior art keywords
subtitle
component
preset icon
subtitle component
text
Prior art date
Legal status
Granted
Application number
CN202110998486.2A
Other languages
Chinese (zh)
Other versions
CN113905267B (en)
Inventor
付硕
赵伊
韩乔
林斐凡
范艺含
郝刚
张一舟
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110998486.2A
Publication of CN113905267A
Application granted
Publication of CN113905267B
Legal status: Active

Classifications

    • H04N21/4312: Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • H04N21/4884: Data services, e.g. news ticker, for displaying subtitles
    • H04N21/8166: Monomedia components thereof involving executable data, e.g. software


Abstract

The present disclosure relates to a subtitle editing method, apparatus, electronic device, and storage medium. The method includes: in response to a selection operation on a preset icon in a subtitle editing interface, taking the subtitle component corresponding to the selected preset icon as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon; as the action point corresponding to the selection operation moves, acquiring positioning information of the action point in the subtitle editing interface in real time; and, in response to a release operation ending the selection operation, determining a second subtitle component and a target insertion position from the most recently acquired positioning information, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component, and displaying the resulting new subtitle component. With this method, the subtitle components to be merged and the exact merge position can be located quickly by selecting a preset icon and moving its action point, improving the efficiency of subtitle merging.

Description

Subtitle editing method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a method and an apparatus for editing subtitles, an electronic device, and a storage medium.
Background
At present, as demand for video production grows, video creation is spreading from a small number of professionals to ordinary users, so the usability of multimedia editors is receiving more and more attention.
Because the subtitles of a video are stored in an array data format and displayed directly as a list, merging subtitles usually requires the user to check the subtitles to be merged and then merge the selected subtitle entries one by one through click operations. This traditional approach requires introducing additional checkbox components, which increases engineering complexity and makes the processing logic redundant.
The related art therefore suffers from low efficiency in subtitle merging.
Disclosure of Invention
The present disclosure provides a subtitle editing method, apparatus, electronic device, and storage medium, so as to at least solve the problem of inefficient subtitle merging in the related art. The technical solution of the disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a subtitle editing method, including:
in response to a selection operation on a preset icon in a subtitle editing interface, taking the subtitle component corresponding to the selected preset icon as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon;
as the action point corresponding to the selection operation moves, acquiring positioning information of the action point in the subtitle editing interface in real time;
and, in response to a release operation corresponding to the selection operation, determining a second subtitle component and a target insertion position based on the most recently acquired positioning information, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component, and displaying the resulting new subtitle component.
In one possible implementation, after the step of responding to the selection operation on the preset icon in the subtitle editing interface, the method further includes:
displaying the selected preset icon in a triggered state;
and, while the action point corresponding to the selection operation moves, displaying a movement-trajectory indication of the action point starting from the preset icon in the triggered state.
In one possible implementation, displaying the selected preset icon in a triggered state includes:
displaying the selected preset icon in a first preset color, the first preset color being different from the display color of unselected preset icons.
In one possible implementation, displaying the movement-trajectory indication of the action point, starting from the preset icon in the triggered state while the action point corresponding to the selection operation moves, includes:
displaying a straight line segment of a second preset color as the movement-trajectory indication while the action point corresponding to the selection operation moves;
where the straight line segment starts at the preset icon in the triggered state and ends at the real-time position of the action point in the subtitle editing interface.
In one possible implementation, after the step of taking, in response to the selection operation on the preset icon in the subtitle editing interface, the subtitle component corresponding to the selected preset icon as the first subtitle component, the method further includes:
caching the text information of the first subtitle component, so that when the release operation is responded to, the text information of the first subtitle component is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed as part of the new subtitle component.
In one possible implementation, after the step of acquiring, as the action point corresponding to the selection operation moves, the positioning information of the action point in the subtitle editing interface in real time, the method further includes:
caching the positioning information, so that when the release operation is responded to, the last cached positioning information is retrieved from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on it.
In one possible implementation, each subtitle component has a corresponding time interval, and after the step of displaying the new subtitle component, the method further includes:
displaying a new time interval at the corresponding position of the new subtitle component, the new time interval being determined from the time interval of the first subtitle component and the time interval of the second subtitle component.
In one possible implementation, the first subtitle component is adjacent to the second subtitle component, and before the step of displaying the new time interval at the corresponding position of the new subtitle component, the method further includes:
determining the earliest time point and the latest time point across the time interval of the first subtitle component and the time interval of the second subtitle component;
and taking the earliest time point as the start and the latest time point as the end to obtain the new time interval.
In one possible implementation, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component and displaying the resulting new subtitle component includes:
splitting the text information of the second subtitle component at the target insertion position into a first split text and a second split text, the first split text preceding the second split text in the second subtitle component's text;
and concatenating, in order, the first split text, the text information of the first subtitle component, and the second split text to obtain new text information, and displaying the new subtitle component containing the new text information.
In one possible implementation, displaying the resulting new subtitle component includes:
deleting, in the subtitle editing interface, the first subtitle component, the second subtitle component, and their corresponding preset icons, and displaying the rendered new subtitle component together with its corresponding preset icon.
According to a second aspect of the embodiments of the present disclosure, there is provided a subtitle editing apparatus including:
a subtitle component selection unit configured to take, in response to a selection operation on a preset icon in a subtitle editing interface, the subtitle component corresponding to the selected preset icon as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon;
a positioning information real-time acquisition unit configured to acquire, as the action point corresponding to the selection operation moves, positioning information of the action point in the subtitle editing interface in real time;
and a subtitle component merging unit configured to determine, in response to a release operation corresponding to the selection operation, a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert the text information of the first subtitle component at the target insertion position in the second subtitle component, and display the resulting new subtitle component.
In one possible implementation, the apparatus further includes:
a triggered-state display unit configured to display the selected preset icon in a triggered state;
and a movement-trajectory indication display unit configured to display, while the action point corresponding to the selection operation moves, a movement-trajectory indication of the action point starting from the preset icon in the triggered state.
In one possible implementation, the triggered-state display unit is specifically configured to display the selected preset icon in a first preset color, the first preset color being different from the display color of unselected preset icons.
In one possible implementation, the movement-trajectory indication display unit is specifically configured to display, while the action point corresponding to the selection operation moves, a straight line segment of a second preset color as the movement-trajectory indication, the line segment starting at the preset icon in the triggered state and ending at the real-time position of the action point in the subtitle editing interface.
In one possible implementation, the apparatus further includes:
a text information caching unit configured to cache the text information of the first subtitle component, so that in response to the release operation the text information of the first subtitle component is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed as part of the new subtitle component.
In one possible implementation, the apparatus further includes:
a positioning information caching unit configured to cache the positioning information, so that in response to the release operation the last cached positioning information is retrieved from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on it.
In one possible implementation, each subtitle component has a corresponding time interval, and the apparatus further includes:
a new time interval display unit configured to display a new time interval at the corresponding position of the new subtitle component, the new time interval being determined from the time interval of the first subtitle component and the time interval of the second subtitle component.
In one possible implementation, the first subtitle component is adjacent to the second subtitle component, and the apparatus further includes:
an earliest/latest time point determining unit configured to determine the earliest time point and the latest time point across the time interval of the first subtitle component and the time interval of the second subtitle component;
and a new time interval obtaining unit configured to take the earliest time point as the start and the latest time point as the end to obtain the new time interval.
In one possible implementation, the subtitle component merging unit is specifically configured to split the text information of the second subtitle component at the target insertion position into a first split text and a second split text, the first split text preceding the second split text in the second subtitle component's text, and to concatenate, in order, the first split text, the text information of the first subtitle component, and the second split text into new text information and display the new subtitle component containing it.
In one possible implementation, the subtitle component merging unit is further configured to delete, in the subtitle editing interface, the first subtitle component, the second subtitle component, and their corresponding preset icons, and to display the rendered new subtitle component together with its corresponding preset icon.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic device, including a memory and a processor, where the memory stores a computer program, and the processor implements the subtitle editing method according to the first aspect or any one of the possible implementations of the first aspect when executing the computer program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements a subtitle editing method as set forth in the first aspect or any one of the possible implementations of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, the program product comprising a computer program, the computer program being stored in a readable storage medium, from which the computer program is read and executed by at least one processor of an apparatus, such that the apparatus performs the subtitle editing method as set forth in any one of the first aspects.
The technical solution provided by the embodiments of the present disclosure has at least the following beneficial effects:
In the above solution, in response to a selection operation on a preset icon in the subtitle editing interface, the subtitle component corresponding to the selected preset icon is taken as the first subtitle component, with each subtitle component in the interface corresponding to its own preset icon; as the action point corresponding to the selection operation moves, positioning information of the action point in the subtitle editing interface is acquired in real time; and, in response to a release operation corresponding to the selection operation, a second subtitle component and a target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed. In this way, the subtitle components to be merged and the exact merge position can be located quickly through the selection and movement of a preset icon's action point, improving the efficiency of subtitle merging.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 is a flowchart illustrating a subtitle editing method according to an exemplary embodiment.
Fig. 2 is a flow chart illustrating a step of indicating a movement trajectory according to an exemplary embodiment.
Fig. 3a is a diagram illustrating an example subtitle editing interface (before merging) according to an example embodiment.
Fig. 3b is a schematic diagram illustrating an example subtitle editing interface (in merge) according to an example embodiment.
Fig. 3c is a diagram illustrating an example subtitle editing interface (after merging) according to an example embodiment.
Fig. 4 is a flowchart illustrating another subtitle editing method according to an example embodiment.
Fig. 5 is a block diagram illustrating a subtitle editing apparatus according to an exemplary embodiment.
Fig. 6 is an internal block diagram of an electronic device shown in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure.
Fig. 1 is a flowchart illustrating a subtitle editing method according to an exemplary embodiment. The method may be used in a computer device such as a terminal, for example within a video editing interface displayed by the terminal. As shown in fig. 1, it includes the following steps.
In step S110, in response to a selection operation on a preset icon in the subtitle editing interface, the subtitle component corresponding to the selected preset icon is taken as the first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon.
For example, when a user edits a video through the video editing interface, the subtitle components displayed in the subtitle editing interface are the subtitle components to be added to the video.
In practical applications, at least two subtitle components may be displayed in the subtitle editing interface, each corresponding to its own preset icon; for example, a preset icon may be displayed in the end region of each subtitle component. In response to a selection operation on the preset icon of a particular subtitle component, that component is taken as the first subtitle component, i.e., the component to be inserted into another subtitle component during merging.
Specifically, in response to a preset action performed by the operating medium on a preset icon in the subtitle editing interface, a preset I/O interface may be called, and this I/O interface may be used to take the subtitle component corresponding to the selected preset icon as the first subtitle component.
For example, when it is detected that the user long-presses (i.e., performs the preset action on) the preset icon in the end region of a subtitle component with a mouse (i.e., the operating medium), a callback may be triggered: in response to the selection operation, the subtitle component corresponding to the selected preset icon is taken as the first subtitle component, after which operating-medium movement events implement a drag function for the first subtitle component.
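To make the interaction concrete, the following is a minimal browser-side sketch of step S110 (TypeScript; the element class names ".preset-icon" and ".subtitle-component", the DragState shape, and the mousedown binding are all assumptions for illustration — the patent specifies only the behavior, not an implementation):

```typescript
// Hypothetical editor state: which component was picked up via its preset icon.
interface DragState {
  firstComponent: HTMLElement | null; // the "first subtitle component"
}

const dragState: DragState = { firstComponent: null };

// Assume each subtitle component renders a preset icon in its end region.
document.querySelectorAll<HTMLElement>(".preset-icon").forEach((icon) => {
  icon.addEventListener("mousedown", (event) => {
    // The component owning the pressed icon becomes the first subtitle component.
    dragState.firstComponent = icon.closest<HTMLElement>(".subtitle-component");
    icon.classList.add("triggered"); // e.g. styled with the "first preset color"
    event.preventDefault(); // keep the press from starting a text selection
  });
});
```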
In step S120, the action point corresponding to the selection operation moves, and positioning information of the action point in the subtitle editing interface is acquired in real time.
The action point may be the on-screen representation of the operating medium, such as the mouse cursor displayed in the subtitle editing interface.
As an example, the positioning information may be the text position corresponding to where the action point sits in the text region of another subtitle component to be merged, such as the character in the displayed text over which the mouse cursor currently hovers.
In a specific implementation, the action point corresponding to the selection operation is moved, for example by the user moving the cursor with the mouse, which drags the first subtitle component; the positioning information of the action point in the subtitle editing interface can be acquired in real time, for example by calling a preset I/O interface while the action point moves and using that I/O interface to acquire the positioning information.
In an example, the terminal may identify which character of the other subtitle component's text the mouse cursor is currently positioned at, based on an application program interface provided by the subtitle editing platform (e.g., a text field's selection-start property).
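Continuing the earlier sketch, step S120 might obtain and cache the positioning information on each mouse move as follows (assumptions as before; document.caretRangeFromPoint is a widely implemented but non-universal browser API — some browsers expose the equivalent caretPositionFromPoint instead):

```typescript
// Positioning info: the component under the cursor plus the character offset
// within its text (this becomes the target insertion position on release).
interface PositioningInfo {
  component: HTMLElement;
  charOffset: number;
}

let lastPositioning: PositioningInfo | null = null; // the positioning cache

document.addEventListener("mousemove", (event) => {
  if (!dragState.firstComponent) return; // no icon was pressed; nothing to drag
  // Map screen coordinates to a text node and character offset under the cursor.
  const range = document.caretRangeFromPoint(event.clientX, event.clientY);
  const component = range?.startContainer.parentElement?.closest<HTMLElement>(
    ".subtitle-component"
  );
  if (range && component && component !== dragState.firstComponent) {
    // Cache the latest positioning; only the last entry matters on release.
    lastPositioning = { component, charOffset: range.startOffset };
  }
});
```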
In step S130, in response to the release operation corresponding to the selection operation, the second subtitle component and the target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the new subtitle component is displayed.
The second subtitle component is the other subtitle component to be merged; the text information in a subtitle component may be characters recognized from the video being edited or characters added by the user.
As an example, the target insertion position may fall at a particular character of the text displayed in the second subtitle component's text region.
In practical application, in response to the release operation corresponding to the selection operation, the second subtitle component and the target insertion position are determined based on the most recently acquired positioning information; the text information of the first subtitle component can then be inserted at the target insertion position in the second subtitle component to obtain a new subtitle component, and the merged component can be displayed.
Specifically, a preset I/O interface may be invoked in response to a preset release action corresponding to the selection operation, and this I/O interface may be used to determine the second subtitle component and the target insertion position from the most recently acquired positioning information, insert the first subtitle component's text at that position, and display the resulting new subtitle component.
For example, when it is detected that the user releases the mouse button, the most recently acquired positioning information is determined through an application program interface provided by the subtitle editing platform; the second subtitle component to be merged and the target insertion position are determined from it, and the text information of the first and second subtitle components is merged at the target insertion position to obtain the new merged subtitle component.
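Completing the same sketch, the release handling of step S130 might look as follows (an illustration under the same assumptions, not the patent's actual code; a full implementation would also recompute the merged time interval and re-render the component list):

```typescript
document.addEventListener("mouseup", () => {
  if (!dragState.firstComponent || !lastPositioning) return;
  const { component: second, charOffset } = lastPositioning;

  // Split the second component's text at the cached target insertion position
  // and splice the dragged component's text in between.
  const draggedText = dragState.firstComponent.textContent ?? "";
  const text = second.textContent ?? "";
  second.textContent =
    text.slice(0, charOffset) + draggedText + text.slice(charOffset);

  // Delete the first component (and, with it, its preset icon).
  dragState.firstComponent.remove();
  dragState.firstComponent = null;
  lastPositioning = null;
});
```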
Compared with the traditional check-and-click approach to merging subtitles, the technical solution of this embodiment uses a drag-and-connect interaction built on pressing, moving, and releasing the mouse, so that selecting the subtitle components, determining the merge position, and merging the components are completed in a single user action; the user neither ticks checkboxes nor clicks a button, the operation flow is streamlined, and precise, flexible insertion-position detection is supported.
With this subtitle editing method, in response to a selection operation on a preset icon in the subtitle editing interface, the subtitle component corresponding to the selected preset icon is taken as the first subtitle component, each subtitle component having its own preset icon; as the action point corresponding to the selection operation moves, its positioning information in the subtitle editing interface is acquired in real time; and, in response to a release operation corresponding to the selection operation, a second subtitle component and a target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed. The subtitle components to be merged and the exact merge position can thus be located quickly through the selection and movement of a preset icon's action point, improving the efficiency of subtitle merging.
In an exemplary embodiment, as shown in fig. 2, after the step of responding to the selection operation on the preset icon in the subtitle editing interface, the method may further include the following steps:
in step S210, displaying the selected preset icon as a triggered state;
in a specific implementation, after responding to a selection operation of a preset icon acting in a subtitle editing interface, the terminal may display the selected preset icon in a triggered state, for example, display the preset icon in a designated color to represent that the preset icon is in the triggered state.
In an example, a certain subtitle component area in the subtitle editing interface can be highlighted in response to a click operation acting on the subtitle component area, and then a selection operation can be carried out on a preset icon of the terminal area of the subtitle component to display the preset icon in a triggered state.
In step S220, in the process of moving the action point corresponding to the selection operation, the movement track indication of the action point is displayed with the preset icon in the triggered state as a starting point.
After the preset icon is displayed in the triggered state, in the process of moving the action point corresponding to the selected operation, the movement track indication of the action point may be displayed by using the preset icon in the triggered state as a starting point, for example, by using the preset icon in the triggered state as a starting point, the movement track indication corresponding to the action point is constructed in a linear manner.
According to the technical scheme, the selected preset icon is displayed as the triggered state, and then in the action point process corresponding to the mobile selection operation, the preset icon in the triggered state is used as the starting point to display the moving track indication of the action point, the moving track indication in the action point moving process can be displayed as the triggered state based on the preset icon, and the visualization effect in the subtitle merging process is improved.
In an exemplary embodiment, displaying the selected preset icon in a triggered state includes: displaying the selected preset icon in a first preset color, the first preset color differing from the display color of unselected preset icons.
As an example, the first preset color may be red, to emphasize that the preset icon is in the triggered state.
In practical application, a first preset color corresponding to the triggered state may be configured in advance; the selected preset icon is then displayed in that color, which differs from the display color of unselected preset icons, to indicate that the icon has been triggered.
With this technical solution, the selected preset icon is displayed in a first preset color that differs from the color of unselected icons, so the triggered icon stands out and the activated subtitle merging function is easy to recognize.
In an exemplary embodiment, displaying the movement-trajectory indication of the action point, starting from the preset icon in the triggered state while the action point corresponding to the selection operation moves, includes: displaying a straight line segment of a second preset color as the movement-trajectory indication while the action point moves, the line segment starting at the triggered preset icon and ending at the real-time position of the action point in the subtitle editing interface.
As an example, the second preset color may be red, to emphasize the movement-trajectory indication of the action point.
In practical application, while the action point corresponding to the selection operation moves, a straight line segment of the second preset color is displayed as the movement-trajectory indication; the segment starts at the preset icon in the triggered state and ends at the action point's real-time position in the subtitle editing interface, for example a red line from the preset icon to the current position of the action point.
Specifically, in the subtitle editing interface, when the mouse moves, a connecting line between the preset icon at the end of the selected subtitle component (i.e., the first subtitle component) and the current mouse cursor position can convey a dragging effect.
With this technical solution, displaying a straight line of the second preset color from the triggered icon to the action point's real-time position gives an intuitive, quickly readable indication of where the merge will land, improving the efficiency of subtitle merging.
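One way to draw such a line is with an SVG overlay, sketched below (the overlay markup and the red stroke are assumptions; any drawing mechanism that connects the icon to the cursor would do). showDragLine would be called from the mousemove handler shown earlier.

```typescript
// Assumes a full-viewport overlay exists in the page markup:
// <svg id="drag-overlay"><line id="drag-line" /></svg>
const line = document.querySelector<SVGLineElement>("#drag-line")!;

function showDragLine(icon: HTMLElement, cursorX: number, cursorY: number): void {
  const box = icon.getBoundingClientRect();
  line.setAttribute("x1", String(box.x + box.width / 2)); // start: icon center
  line.setAttribute("y1", String(box.y + box.height / 2));
  line.setAttribute("x2", String(cursorX)); // end: the action point's position
  line.setAttribute("y2", String(cursorY));
  line.setAttribute("stroke", "red"); // the "second preset color"
}
```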
In an exemplary embodiment, after the step of taking the subtitle component corresponding to the selected preset icon as the first subtitle component in response to the selection operation on the preset icon in the subtitle editing interface, the method further includes: caching the text information of the first subtitle component, so that when the release operation is responded to, the text information is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed as part of the new subtitle component.
In a specific implementation, after the first subtitle component is determined, its text information may be cached; the text information may include the subtitle text content, font size, text style, and similar attributes. When the release operation occurs, the text information is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and the new subtitle component is displayed.
With this technical solution, caching the first subtitle component's text information means that on release it can be fetched from the cache and inserted at the target position in the second subtitle component; the text of the components to be merged is thus handled through the cache, improving the efficiency of subtitle merging.
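A sketch of what caching the first component's text information might look like (the SubtitleTextInfo shape is hypothetical; the patent only names content, font size, and style as examples of cached attributes):

```typescript
// Snapshot of the dragged component's text information, taken at selection time.
interface SubtitleTextInfo {
  content: string;
  fontSize: string;  // e.g. "14px"
  fontStyle: string; // e.g. "italic"
}

let cachedTextInfo: SubtitleTextInfo | null = null;

function cacheTextInfo(component: HTMLElement): void {
  const style = getComputedStyle(component);
  cachedTextInfo = {
    content: component.textContent ?? "",
    fontSize: style.fontSize,
    fontStyle: style.fontStyle,
  };
}
```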
In an exemplary embodiment, after the step of acquiring, as the action point corresponding to the selection operation moves, the positioning information of the action point in the subtitle editing interface in real time, the method further includes: caching the positioning information, so that when the release operation is responded to, the last cached positioning information is retrieved as the latest positioning information, and the second subtitle component and the target insertion position are determined from it.
In practical application, each piece of positioning information acquired in real time can be cached; when the release operation occurs, the last cached positioning information is retrieved as the latest positioning information, from which the second subtitle component and the target insertion position are determined.
Specifically, while the mouse moves, the subtitle component to be merged into (i.e., the second subtitle component) and the merge position (i.e., the target insertion position) under the current cursor can be determined dynamically through an application program interface provided by the subtitle editing platform and placed in the cache; when the mouse is released, the components are merged according to the last cached positioning information through that interface.
In an example, from the latest positioning information, the character index under the cursor at the time of the last cache update can be obtained and used as the designated position at which the text will subsequently be merged and inserted.
With this technical solution, caching the positioning information means that on release the last cached entry serves as the latest positioning information for determining the second subtitle component and the target insertion position; the other component to be merged and the exact text insertion point are resolved from the cache, improving the efficiency of subtitle merging.
In an exemplary embodiment, each subtitle component may have a corresponding time interval, and after the step of displaying the resulting new subtitle component, the method further includes: displaying a new time interval at the corresponding position of the new subtitle component, the new time interval being determined from the time intervals of the first and second subtitle components.
The time interval of each subtitle component matches the component's time interval in the video being edited; for example, it may be the interval during which the subtitle is shown in the video.
In a specific implementation, each subtitle component may have a corresponding time interval; after the first subtitle component is determined, its time interval may be cached, so that during merging a new time interval can be computed from the intervals of the first and second subtitle components and displayed at the corresponding position of the new subtitle component.
With this technical solution, a new time interval, determined from the intervals of the first and second subtitle components, is displayed at the corresponding position of the new subtitle component, ensuring an appropriate time span for the merged subtitle.
In an exemplary embodiment, the first subtitle component may be adjacent to the second subtitle component, and before the step of displaying the new time interval at the corresponding position of the new subtitle component, the method further includes: determining the earliest time point and the latest time point across the intervals of the first and second subtitle components; and taking the earliest time point as the start and the latest time point as the end to obtain the new time interval.
In practical applications, the components to be merged may be two adjacent subtitle components from the same video, i.e., the first subtitle component is adjacent to the second; the time span of the new subtitle component (the new time interval) may then be computed as follows:
the earlier start time of the two subtitles to be merged (the earliest time point) becomes the start time of the new subtitle component, and the later end time (the latest time point) becomes its end time, so that the new component's time span covers the spans of both subtitles to be merged.
With this technical solution, determining the earliest and latest time points across the two components' intervals and using them as the start and end of the new interval yields a suitable merged interval whose span contains the spans of both subtitles to be merged.
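The interval computation itself reduces to a min/max; a sketch (the millisecond unit and the TimeInterval shape are assumptions):

```typescript
interface TimeInterval {
  start: number; // e.g. milliseconds from the start of the video
  end: number;
}

// The merged interval spans both sources: earliest start to latest end.
function mergeIntervals(a: TimeInterval, b: TimeInterval): TimeInterval {
  return {
    start: Math.min(a.start, b.start), // the earliest time point
    end: Math.max(a.end, b.end), // the latest time point
  };
}

// e.g. mergeIntervals({ start: 1000, end: 3000 }, { start: 2500, end: 5000 })
//   => { start: 1000, end: 5000 }
```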
In an exemplary embodiment, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component and displaying the resulting new subtitle component includes: splitting the second subtitle component's text at the target insertion position into a first split text and a second split text, the first split text preceding the second in the component's text; then concatenating, in order, the first split text, the first component's text, and the second split text into new text information, and displaying the new subtitle component containing it.
The subtitle text content in the text information can be represented as a character string, in which case the first split text and the second split text are two strings.
During merging, the second subtitle component's text can be split at the target insertion position into the first and second split texts, with the first preceding the second in the original text; the first split text, the first component's text, and the second split text are then concatenated in order into new text information, and the new subtitle component containing it is displayed.
In an example, the strings of the two components to be merged can be extracted, and, based on the target insertion position, the dragged component's text (i.e., the first subtitle component's text) can be inserted at the designated position using string splitting and concatenation interfaces, yielding a new subtitle component containing the new text information.
For example, using a string splitting interface, the second subtitle component's text may be split into two strings at the acquired target insertion position, giving the first and second split texts; using a string concatenation interface, the first split text is then joined before the first component's text and the second split text after it, producing a completely new string (i.e., the new text information).
With this technical solution, splitting the second component's text at the target insertion position and concatenating the first split text, the first component's text, and the second split text in order yields the new text information displayed in the new subtitle component.
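The split-and-concatenate step reduces to string slicing; a sketch (the function name and signature are illustrative):

```typescript
// Split the second component's text at the target insertion position, then
// concatenate: first split text + dragged text + second split text.
function mergeText(secondText: string, draggedText: string, insertAt: number): string {
  const firstSplit = secondText.slice(0, insertAt); // before the insertion point
  const secondSplit = secondText.slice(insertAt); // after the insertion point
  return firstSplit + draggedText + secondSplit;
}

// Matches the fig. 3 example (subtitle text a + subtitle text 1 + subtitle text b):
// mergeText("ab", "1", 1) === "a1b"
```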
In an exemplary embodiment, displaying the resulting new subtitle component includes: deleting, in the subtitle editing interface, the first subtitle component, the second subtitle component, and their preset icons, and displaying the rendered new subtitle component together with its preset icon.
In practical application, a new subtitle component can be created from the new time interval and new text information obtained above; the first and second subtitle components and their preset icons are then deleted from the subtitle editing interface, and the rendered new subtitle component and its preset icon are displayed.
With this technical solution, deleting the two source components and their icons in the subtitle editing interface and displaying the rendered new component and its icon keeps the interface consistent and improves the visual feedback of the merging process.
In order to enable those skilled in the art to better understand the above steps, the embodiment of the present disclosure is illustrated below by an example, but it should be understood that the embodiment of the present disclosure is not limited thereto.
As shown in fig. 3a, the subtitle editing interface contains two adjacent subtitle components to be merged, labeled 1 (the first subtitle component) and 2 (the second subtitle component) in fig. 3a; each corresponds to a preset icon, and the first subtitle component sits below the adjacent second subtitle component. Pressing the preset icon in the end region of the first subtitle component (e.g., the icon indicated by the cursor in fig. 3a) designates that component as the first subtitle component, the one to be inserted into the other component during merging.
When the mouse moves, a line connecting the preset icon at the end of the selected subtitle component (1 in fig. 3b) with the current cursor position conveys the dragging effect. Through an application program interface provided by the subtitle editing platform, the positioning of the cursor within the text displayed in the second subtitle component's text region (2 in fig. 3b) is acquired in real time and cached; when the mouse is released, the designated position at which the text will be merged and inserted (i.e., the target insertion position; in fig. 3b the cursor sits between subtitle text content a and subtitle text content b) is determined from the last cached positioning information.
Based on the target insertion position, the second subtitle component's text is split at that boundary into two strings, denoted segment a and segment b. As shown in fig. 3c, segment a (e.g., subtitle text content a) is concatenated before the first subtitle component's text (e.g., subtitle text content 1) and segment b (e.g., subtitle text content b) after it, producing a new string (subtitle text content a + subtitle text content 1 + subtitle text content b in fig. 3c); the first subtitle component is thus inserted at the designated text position of the second, yielding the new subtitle component (1 in fig. 3c).
Fig. 4 is a flowchart illustrating another subtitle editing method according to an exemplary embodiment, as shown in fig. 4, for use in a computer device such as a terminal, including the following steps.
In step S410, in response to a selection operation on a preset icon in the subtitle editing interface, the selected preset icon is displayed in a triggered state.
In step S420, while the action point corresponding to the selection operation moves, a movement-trajectory indication of the action point is displayed, starting from the preset icon in the triggered state.
In step S430, the subtitle component corresponding to the selected preset icon is taken as the first subtitle component; each subtitle component in the subtitle editing interface corresponds to its own preset icon.
In step S440, the text information of the first subtitle component is cached, so that when the release operation is responded to, the text information is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed as part of the new subtitle component.
In step S450, the action point corresponding to the selection operation moves, and its positioning information in the subtitle editing interface is acquired in real time.
In step S460, the positioning information is cached, so that in response to the release operation the last cached positioning information is retrieved as the latest positioning information, from which the second subtitle component and the target insertion position are determined.
In step S470, in response to the release operation corresponding to the selection operation, the second subtitle component and the target insertion position are determined based on the most recently acquired positioning information, and the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component.
In step S480, in the subtitle editing interface, the first subtitle component, the second subtitle component, and their preset icons are deleted, and the rendered new subtitle component and its preset icon are displayed.
In step S490, each subtitle component having a corresponding time interval, a new time interval, determined from the time intervals of the first and second subtitle components, is displayed at the corresponding position of the new subtitle component.
It should be noted that for the specific limitations of the above steps, reference may be made to the specific limitations of the subtitle editing method above, which are not repeated here.
It should be understood that although the steps in the flowcharts of figs. 1, 2, and 4 are shown sequentially as indicated by the arrows, they are not necessarily performed in that order; unless explicitly stated otherwise, there is no strict ordering restriction, and the steps may be performed in other orders. Moreover, at least some of the steps in figs. 1, 2, and 4 may comprise multiple sub-steps or stages, which need not be completed at the same moment but may be performed at different times, and whose execution order need not be sequential: they may be performed in turn, or in alternation with other steps or with sub-steps or stages of other steps.
Fig. 5 is a block diagram illustrating a subtitle editing apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes:
a subtitle component selection unit 501 configured to take, in response to a selection operation on a preset icon in a subtitle editing interface, the subtitle component corresponding to the selected preset icon as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon;
a positioning information real-time acquisition unit 502 configured to acquire, as the action point corresponding to the selection operation moves, positioning information of the action point in the subtitle editing interface in real time;
a subtitle component merging unit 503 configured to determine, in response to a release operation corresponding to the selection operation, a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert the text information of the first subtitle component at the target insertion position in the second subtitle component, and display the resulting new subtitle component.
In one possible implementation manner, the subtitle editing apparatus further includes:
a triggered state display unit configured to display the selected preset icon in a triggered state;

a movement track indication display unit configured to display, while the action point corresponding to the selection operation is moved, a movement track indication of the action point with the preset icon in the triggered state as its starting point.
In a possible implementation manner, the triggered state display unit is specifically configured to display the selected preset icon in a first preset color; the first preset color is different from the display color of unselected preset icons.
In a possible implementation manner, the movement track indication display unit is specifically configured to display a straight line segment in a second preset color as the movement track indication while the action point corresponding to the selection operation is moved; the straight line segment takes the preset icon in the triggered state as its starting point and the real-time position of the action point in the subtitle editing interface as its end point.
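A minimal sketch of how such a movement track indication might be drawn, assuming an HTML canvas as the rendering surface; the function name, coordinate parameters, and default color are illustrative only, with the color value standing in for the second preset color.

// Hypothetical rendering of the movement track indication: a straight line
// segment from the triggered preset icon to the real-time action point.
function drawTrackIndication(
  ctx: CanvasRenderingContext2D,
  iconCenter: { x: number; y: number },  // starting point: triggered preset icon
  actionPoint: { x: number; y: number }, // end point: real-time pointer position
  color = "#1e90ff",                     // placeholder for the second preset color
): void {
  ctx.save();
  ctx.strokeStyle = color;
  ctx.lineWidth = 2;
  ctx.beginPath();
  ctx.moveTo(iconCenter.x, iconCenter.y);
  ctx.lineTo(actionPoint.x, actionPoint.y);
  ctx.stroke();
  ctx.restore();
}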
In one possible implementation manner, the subtitle editing apparatus further includes:
a text information caching unit configured to cache the text information in the first subtitle component, so that in response to the release operation, the text information is retrieved from the cache, inserted into the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed.
In one possible implementation manner, the subtitle editing apparatus further includes:
a positioning information caching unit configured to cache the positioning information, so that in response to the release operation, the last cached positioning information is retrieved from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on it.
In one possible implementation, each of the subtitle components has a corresponding time interval, and the subtitle editing apparatus further includes:
a new time interval display unit configured to display a new time interval at the corresponding position of the new subtitle component, where the new time interval is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component.
In one possible implementation, the first subtitle component is adjacent to the second subtitle component, and the subtitle editing apparatus further includes:
an earliest time point and latest time point determining unit configured to determine the earliest time point and the latest time point from the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component;

a new time interval obtaining unit configured to obtain the new time interval by taking the earliest time point as its starting time point and the latest time point as its ending time point.
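Continuing the hypothetical sketch given after the method steps above, a short usage example shows the merged interval taking the earliest time point as its start and the latest time point as its end; the concrete values are invented for illustration.

// Two adjacent components are merged; the resulting interval spans from the
// earliest start (5.0) to the latest end (12.0).
const a: SubtitleComponent = { id: "a", text: "Hello, ", start: 5.0, end: 8.2 };
const b: SubtitleComponent = { id: "b", text: "world", start: 8.2, end: 12.0 };

const state = beginDrag(a);
onMove(state, 240, 96); // pointer dragged over component "b"

// A stub locator that always resolves to "b" with insertion offset 0, i.e. the
// dragged text is inserted at the very start of "b".
const merged = onRelease(state, [a, b], () => ({ second: b, offset: 0 }));

console.log(merged);
// -> one component with text "Hello, world", start 5.0, end 12.0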
In a possible implementation manner, the subtitle component merging unit 503 is specifically configured to split the text information in the second subtitle component at the target insertion position to obtain a first split text and a second split text, where the first split text precedes the second split text in the text information of the second subtitle component; and to splice the first split text, the text information in the first subtitle component, and the second split text in sequence to obtain new text information, and display the new subtitle component containing the new text information.
In a possible implementation manner, the subtitle component merging unit 503 is further configured to delete, in the subtitle editing interface, the first subtitle component, the second subtitle component, and the preset icons corresponding to them, and to display the rendered new subtitle component together with its corresponding preset icon.
With regard to the apparatus in the above-described embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated here.
Fig. 6 is a block diagram illustrating an electronic device 600 for subtitle editing according to an example embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a gaming console, a tablet device, a medical device, a fitness device, a personal digital assistant, and so forth.
Referring to fig. 6, the electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 can include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 604 may be implemented by any type or combination of volatile or non-volatile storage devices, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory.
The power component 606 provides power to the various components of the electronic device 600, and may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen providing an output interface between the electronic device 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, it may be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, slides, and gestures on the touch panel. The touch sensors may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with it. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 600 is in an operation mode, such as a shooting mode or a video mode. Each of the front camera and the rear camera may be a fixed optical lens system or have focusing and optical zoom capability.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signal may further be stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 also includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor component 614 includes one or more sensors for providing status assessments of various aspects of the electronic device 600. For example, the sensor component 614 may detect an open/closed state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600. The sensor component 614 may also detect a change in the position of the electronic device 600 or of a component of the electronic device 600, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in the temperature of the electronic device 600. The sensor component 614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor component 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communications between the electronic device 600 and other devices in a wired or wireless manner. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, a carrier network (such as 2G, 3G, 4G, or 5G), or a combination thereof. In an exemplary embodiment, the communication component 616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a computer-readable storage medium comprising instructions, such as the memory 604 comprising instructions, executable by the processor 620 of the electronic device 600 to perform the above-described method is also provided. For example, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided that includes instructions executable by the processor 620 of the electronic device 600 to perform the above-described method.
It should be noted that the descriptions of the above-mentioned apparatus, the electronic device, the computer-readable storage medium, the computer program product, and the like according to the method embodiments may also include other embodiments, and specific implementations may refer to the descriptions of the related method embodiments, which are not described in detail herein.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. A method for subtitle editing, the method comprising:
in response to a selection operation acting on a preset icon in a subtitle editing interface, taking the subtitle component corresponding to the selected preset icon as a first subtitle component; wherein each subtitle component in the subtitle editing interface corresponds to a respective preset icon;

moving an action point corresponding to the selection operation, and acquiring positioning information of the action point in the subtitle editing interface in real time;

and in response to a release operation corresponding to the selection operation, determining a second subtitle component and a target insertion position based on the latest acquired positioning information, inserting the text information in the first subtitle component into the target insertion position in the second subtitle component, and displaying the new subtitle component resulting from the inserting.
2. The method of claim 1, further comprising, after responding to the selection operation acting on the preset icon in the subtitle editing interface:

displaying the selected preset icon in a triggered state;

and, while moving the action point corresponding to the selection operation, displaying a movement track indication of the action point with the preset icon in the triggered state as its starting point.
3. The method of claim 2, wherein displaying the selected preset icon in a triggered state comprises:

displaying the selected preset icon in a first preset color; wherein the first preset color is different from the display color of unselected preset icons.
4. The method according to claim 2, wherein displaying the movement track indication of the action point with the preset icon in the triggered state as a starting point, while moving the action point corresponding to the selection operation, comprises:

displaying a straight line segment in a second preset color as the movement track indication while moving the action point corresponding to the selection operation;

wherein the straight line segment takes the preset icon in the triggered state as its starting point and the real-time position of the action point in the subtitle editing interface as its end point.
5. The method according to claim 1, further comprising, after moving the action point corresponding to the selection operation and acquiring the positioning information of the action point in the subtitle editing interface in real time:

caching the positioning information, so as to obtain, in response to the release operation, the last cached positioning information from the cache as the latest positioning information, and determining the second subtitle component and the target insertion position based on the latest positioning information.
6. The method of claim 1, wherein inserting the text information in the first subtitle component into the target insertion position in the second subtitle component and displaying the new subtitle component obtained thereby comprises:

splitting the text information in the second subtitle component at the target insertion position to obtain a first split text and a second split text; wherein the first split text precedes the second split text in the text information of the second subtitle component;

and splicing the first split text, the text information in the first subtitle component, and the second split text in sequence to obtain new text information, and displaying the new subtitle component containing the new text information.
7. A subtitle editing apparatus, comprising:
a subtitle component selecting unit configured to, in response to a selection operation acting on a preset icon in a subtitle editing interface, take the subtitle component corresponding to the selected preset icon as a first subtitle component; wherein each subtitle component in the subtitle editing interface corresponds to a respective preset icon;

a positioning information real-time acquisition unit configured to acquire, in real time, positioning information of the action point in the subtitle editing interface while the action point corresponding to the selection operation is moved;

and a subtitle component merging unit configured to, in response to a release operation corresponding to the selection operation, determine a second subtitle component and a target insertion position based on the latest acquired positioning information, insert the text information in the first subtitle component into the target insertion position in the second subtitle component, and display the new subtitle component resulting from the inserting.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the subtitle editing method of any one of claims 1 to 6.
9. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the subtitle editing method of any one of claims 1 to 6.
10. A computer program product comprising instructions which, when executed by a processor of an electronic device, enable the electronic device to perform a subtitle editing method according to any one of claims 1 to 6.
CN202110998486.2A 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium Active CN113905267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110998486.2A CN113905267B (en) 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113905267A true CN113905267A (en) 2022-01-07
CN113905267B CN113905267B (en) 2023-06-20

Family

ID=79187960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110998486.2A Active CN113905267B (en) 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113905267B (en)

Patent Citations (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2421369A1 (en) * 2002-03-08 2003-09-08 Caption Colorado Llc Method and apparatus for control of closed captioning
US20170366760A1 (en) * 2002-03-08 2017-12-21 Vitac Corporation Method and apparatus for control of closed captioning
JP2009260823A (en) * 2008-04-18 2009-11-05 Toshiba Corp Caption checking apparatus and caption checking method
US20110314485A1 (en) * 2009-12-18 2011-12-22 Abed Samir Systems and Methods for Automated Extraction of Closed Captions in Real Time or Near Real-Time and Tagging of Streaming Data for Advertisements
CN102547145A (en) * 2010-12-16 2012-07-04 新奥特(北京)视频技术有限公司 Method for automatically testing subtitle function and system
US20150269033A1 (en) * 2011-12-12 2015-09-24 Microsoft Technology Licensing, Llc Techniques to manage collaborative documents
US20130151532A1 (en) * 2011-12-12 2013-06-13 William Christian Hoyer Methods, Apparatuses, and Computer Program Products for Preparing Narratives Relating to Investigative Matters
US20140365953A1 (en) * 2013-06-09 2014-12-11 Apple Inc. Device, method, and graphical user interface for displaying application status information
WO2015088196A1 (en) * 2013-12-09 2015-06-18 넥스트리밍(주) Subtitle editing apparatus and subtitle editing method
KR101419871B1 (en) * 2013-12-09 2014-07-16 넥스트리밍(주) Apparatus and method for editing subtitles
CN106851401A (en) * 2017-03-20 2017-06-13 惠州Tcl移动通信有限公司 A kind of method and system of automatic addition captions
CN107316642A (en) * 2017-06-30 2017-11-03 联想(北京)有限公司 Video file method for recording, audio file method for recording and mobile terminal
CN207251794U (en) * 2017-09-26 2018-04-17 安徽省精英机械制造有限公司 A kind of multifunctional assembled film titler
KR101961750B1 (en) * 2017-10-11 2019-03-25 (주)아이디어콘서트 System for editing caption data of single screen
CN107967093A (en) * 2017-12-21 2018-04-27 维沃移动通信有限公司 A kind of multistage text clone method and mobile terminal
WO2019236727A1 (en) * 2018-06-06 2019-12-12 Home Box Office, Inc. Editing timed-text elements
CN111970577A (en) * 2020-08-25 2020-11-20 北京字节跳动网络技术有限公司 Subtitle editing method and device and electronic equipment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Pan Hongbo: "Application of the non-linear editing software Edius 5.0 in video editing", 仪器仪表用户 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119261A (en) * 2023-08-09 2023-11-24 广东保伦电子股份有限公司 Subtitle display method and system based on subtitle merging
CN117119261B (en) * 2023-08-09 2024-06-07 广东保伦电子股份有限公司 Subtitle display method and system based on subtitle merging

Also Published As

Publication number Publication date
CN113905267B (en) 2023-06-20

Similar Documents

Publication Publication Date Title
US10728196B2 (en) Method and storage medium for voice communication
EP3817395A1 (en) Video recording method and apparatus, device, and readable storage medium
CN105845124B (en) Audio processing method and device
CN109413478B (en) Video editing method and device, electronic equipment and storage medium
CN112738618B (en) Video recording method and device and electronic equipment
CN113905192B (en) Subtitle editing method and device, electronic equipment and storage medium
CN113411516B (en) Video processing method, device, electronic equipment and storage medium
CN110968364B (en) Method and device for adding shortcut plugins and intelligent device
CN110704647A (en) Content processing method and device
CN111432288A (en) Video playing method and device, electronic equipment and storage medium
CN113553472B (en) Information display method and device, electronic equipment and storage medium
WO2022205828A1 (en) Video editing method and apparatus
CN113905267B (en) Subtitle editing method and device, electronic equipment and storage medium
CN112333518B (en) Function configuration method and device for video and electronic equipment
CN113988021A (en) Content interaction method and device, electronic equipment and storage medium
CN113613082A (en) Video playing method and device, electronic equipment and storage medium
CN112764636A (en) Video processing method, video processing device, electronic equipment and computer-readable storage medium
CN112035691A (en) Method, device, equipment and medium for displaying cell labeling data of slice image
US10637905B2 (en) Method for processing data and electronic apparatus
CN110809184A (en) Video processing method, device and storage medium
CN114397990A (en) Image distribution method and device, electronic equipment and computer readable storage medium
CN113965792A (en) Video display method and device, electronic equipment and readable storage medium
CN111782110A (en) Screen capturing method and device, electronic equipment and storage medium
CN111049732A (en) Push message display method and device, electronic equipment and medium
CN115202798B (en) Operation reproduction method, apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant