CN113905267B - Subtitle editing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN113905267B
CN113905267B (application CN202110998486.2A)
Authority
CN
China
Prior art keywords
component, subtitle, caption, new, text information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110998486.2A
Other languages
Chinese (zh)
Other versions
CN113905267A (en)
Inventor
付硕
赵伊
韩乔
林斐凡
范艺含
郝刚
张一舟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110998486.2A
Publication of CN113905267A
Application granted
Publication of CN113905267B
Legal status: Active
Anticipated expiration: legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81Monomedia components thereof
    • H04N21/8166Monomedia components thereof involving executable data, e.g. software

Abstract

The disclosure relates to a subtitle editing method and apparatus, an electronic device, and a storage medium. The method includes: in response to a selection operation on a preset icon in a subtitle editing interface, taking the subtitle component corresponding to the selected preset icon as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon; moving the action point corresponding to the selection operation and acquiring positioning information of the action point in the subtitle editing interface in real time; and, in response to a release operation corresponding to the selection operation, determining a second subtitle component and a target insertion position based on the most recently acquired positioning information, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component, and displaying the resulting new subtitle component. With this method and apparatus, the subtitle components to be merged and the exact merge position can be located quickly through the selection of a preset icon and the movement of the action point, improving the efficiency of subtitle merging.

Description

Subtitle editing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a subtitle editing method, a subtitle editing device, an electronic device and a storage medium.
Background
Currently, as the demand for video production grows, video creation is gradually spreading from a small group of professionals to ordinary users, so the usability of multimedia editors is becoming increasingly important.
Because the subtitles in a video are stored in an array data format and displayed directly as a list, merging subtitles usually requires the user to first tick the subtitles to be merged and then combine the selected subtitle data one by one through click operations. This traditional approach requires introducing an additional check-box component, which increases engineering complexity and forces the computer to process redundant logic.
Accordingly, the related art suffers from low efficiency in subtitle merging.
Disclosure of Invention
The disclosure provides a subtitle editing method, a subtitle editing device, electronic equipment and a storage medium, which at least solve the problem of low subtitle merging processing efficiency in the related art. The technical scheme of the present disclosure is as follows:
according to a first aspect of the embodiments of the present disclosure, there is provided a subtitle editing method including:
in response to a selection operation on a preset icon in a subtitle editing interface, taking the subtitle component corresponding to the selected preset icon as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon;
moving the action point corresponding to the selection operation, and acquiring positioning information of the action point in the subtitle editing interface in real time;
and, in response to a release operation corresponding to the selection operation, determining a second subtitle component and a target insertion position based on the most recently acquired positioning information, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component, and displaying the resulting new subtitle component.
In one possible implementation, after the step of responding to the selection operation on the preset icon in the subtitle editing interface, the method further includes:
displaying the selected preset icon in a triggered state;
and, while the action point corresponding to the selection operation is moved, displaying the movement track indication of the action point with the preset icon in the triggered state as the starting point.
In one possible implementation, displaying the selected preset icon in a triggered state includes:
displaying the selected preset icon in a first preset color, where the first preset color differs from the display color of unselected preset icons.
In one possible implementation, displaying the movement track indication of the action point with the preset icon in the triggered state as the starting point, while the action point corresponding to the selection operation is moved, includes:
displaying a straight line segment in a second preset color as the movement track indication while the action point corresponding to the selection operation is moved,
where the straight line segment takes the preset icon in the triggered state as its starting point and the real-time position of the action point in the subtitle editing interface as its end point.
In one possible implementation, after the step of taking the subtitle component corresponding to the selected preset icon as the first subtitle component, the method further includes:
caching the text information of the first subtitle component, so that, in response to the release operation, the text information of the first subtitle component is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed to obtain a new subtitle component.
In one possible implementation, after the step of moving the action point corresponding to the selection operation and acquiring positioning information of the action point in the subtitle editing interface in real time, the method further includes:
caching the positioning information, so that, in response to the release operation, the most recently cached positioning information is retrieved from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on it.
In one possible implementation, each subtitle component has a corresponding time interval, and after the step of displaying the new subtitle component, the method further includes:
displaying a new time interval at the corresponding position of the new subtitle component, where the new time interval is determined from the time interval of the first subtitle component and the time interval of the second subtitle component.
In one possible implementation, the first subtitle component is adjacent to the second subtitle component, and before the step of displaying the new time interval at the corresponding position of the new subtitle component, the method further includes:
determining the earliest time point and the latest time point from the time interval of the first subtitle component and the time interval of the second subtitle component;
and taking the earliest time point as the start time point and the latest time point as the end time point to obtain the new time interval.
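The interval merge described above can be sketched in a few lines. This is a minimal JavaScript illustration; the `{ start, end }` millisecond shape and the function name are assumptions for the example, not taken from the disclosure.

```javascript
// Merge the time intervals of two adjacent subtitle components: the new
// interval runs from the earliest start point to the latest end point.
// Intervals are { start, end } in milliseconds (illustrative shape).
function mergeIntervals(first, second) {
  const start = Math.min(first.start, second.start); // earliest time point
  const end = Math.max(first.end, second.end);       // latest time point
  return { start, end };
}
```

For example, merging a component spanning 2000–4500 ms with an adjacent one spanning 0–2000 ms yields a new interval of 0–4500 ms.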
In one possible implementation, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component and displaying the resulting new subtitle component includes:
splitting the text information of the second subtitle component at the target insertion position to obtain a first split text and a second split text, where the first split text precedes the second split text in the text information of the second subtitle component;
and splicing the first split text, the text information of the first subtitle component, and the second split text in sequence to obtain new text information, and displaying a new subtitle component containing the new text information.
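The split-and-splice step above reduces to simple string slicing. A minimal JavaScript sketch, with names chosen for the example rather than taken from the disclosure:

```javascript
// Split the second component's text at the insertion offset, then splice the
// first (dragged) component's text between the two halves.
function spliceSubtitleText(targetText, insertOffset, draggedText) {
  const firstSplit = targetText.slice(0, insertOffset); // text before the caret
  const secondSplit = targetText.slice(insertOffset);   // text after the caret
  return firstSplit + draggedText + secondSplit;        // new text information
}
```

For instance, splicing ", cruel" into "Hello world" at offset 5 produces "Hello, cruel world".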
In one possible implementation, displaying the resulting new subtitle component includes:
deleting the first subtitle component, the second subtitle component, and their corresponding preset icons from the subtitle editing interface, and displaying the rendered new subtitle component together with its corresponding preset icon.
According to a second aspect of the embodiments of the present disclosure, there is provided a subtitle editing apparatus including:
a subtitle component selecting unit configured to, in response to a selection operation on a preset icon in the subtitle editing interface, take the subtitle component corresponding to the selected preset icon as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon;
a positioning information real-time acquisition unit configured to move the action point corresponding to the selection operation and acquire positioning information of the action point in the subtitle editing interface in real time;
and a subtitle component merging unit configured to, in response to a release operation corresponding to the selection operation, determine a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert the text information of the first subtitle component at the target insertion position in the second subtitle component, and display the resulting new subtitle component.
In one possible implementation, the apparatus further includes:
a triggered state display unit configured to display the selected preset icon in a triggered state;
and a movement track indication display unit configured to display the movement track indication of the action point with the preset icon in the triggered state as the starting point while the action point corresponding to the selection operation is moved.
In one possible implementation, the triggered state display unit is configured to display the selected preset icon in a first preset color, where the first preset color differs from the display color of unselected preset icons.
In one possible implementation, the movement track indication display unit is configured to display a straight line segment in a second preset color as the movement track indication while the action point corresponding to the selection operation is moved, where the straight line segment takes the preset icon in the triggered state as its starting point and the real-time position of the action point in the subtitle editing interface as its end point.
In one possible implementation, the apparatus further includes:
a text information caching unit configured to cache the text information of the first subtitle component, so that, in response to the release operation, the text information of the first subtitle component is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed to obtain a new subtitle component.
In one possible implementation, the apparatus further includes:
a positioning information caching unit configured to cache the positioning information, so that, in response to the release operation, the most recently cached positioning information is retrieved from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on it.
In one possible implementation, each subtitle component has a corresponding time interval, and the apparatus further includes:
a new time interval display unit configured to display a new time interval at the corresponding position of the new subtitle component, where the new time interval is determined from the time interval of the first subtitle component and the time interval of the second subtitle component.
In one possible implementation, the first subtitle component is adjacent to the second subtitle component, and the apparatus further includes:
an earliest and latest time point determining unit configured to determine the earliest time point and the latest time point from the time interval of the first subtitle component and the time interval of the second subtitle component;
and a new time interval obtaining unit configured to obtain the new time interval by taking the earliest time point as the start time point and the latest time point as the end time point.
In one possible implementation, the subtitle component merging unit is configured to split the text information of the second subtitle component at the target insertion position to obtain a first split text and a second split text, where the first split text precedes the second split text in the text information of the second subtitle component; and to splice the first split text, the text information of the first subtitle component, and the second split text in sequence to obtain new text information and display a new subtitle component containing the new text information.
In one possible implementation, the subtitle component merging unit is further configured to delete the first subtitle component, the second subtitle component, and their corresponding preset icons from the subtitle editing interface, and to display the rendered new subtitle component together with its corresponding preset icon.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a memory storing a computer program and a processor implementing the subtitle editing method according to the first aspect or any possible implementation of the first aspect when the processor executes the computer program.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the subtitle editing method according to the first aspect or any possible implementation of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, causing the device to perform the subtitle editing method as described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
According to the above scheme, in response to a selection operation on a preset icon in the subtitle editing interface, the subtitle component corresponding to the selected preset icon is taken as a first subtitle component, where each subtitle component in the interface corresponds to its own preset icon; the action point corresponding to the selection operation is then moved and its positioning information in the interface is acquired in real time; and, in response to a release operation corresponding to the selection operation, a second subtitle component and a target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed. In this way, the subtitle components to be merged and the exact merge position can be located quickly through the selection of a preset icon and the movement of the action point, improving the efficiency of subtitle merging.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flowchart illustrating a subtitle editing method according to an exemplary embodiment.
Fig. 2 is a flowchart showing movement trajectory indication steps, according to an exemplary embodiment.
Fig. 3a is a schematic diagram showing an example of a subtitle editing interface (before merging) according to an exemplary embodiment.
Fig. 3b is a schematic diagram illustrating an example subtitle editing interface (in merging) according to an exemplary embodiment.
Fig. 3c is a schematic diagram showing an example of a subtitle editing interface (after merging) according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating another subtitle editing method according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a subtitle editing apparatus according to an exemplary embodiment.
Fig. 6 is an internal structural diagram of an electronic device, which is shown according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure.
Fig. 1 is a flowchart illustrating a subtitle editing method according to an exemplary embodiment. The method may be used in a computer device such as a terminal, for example in a video editing interface displayed by the terminal. As shown in fig. 1, the method includes the following steps.
In step S110, in response to a selection operation on a preset icon in the subtitle editing interface, the subtitle component corresponding to the selected preset icon is taken as a first subtitle component, where each subtitle component in the subtitle editing interface corresponds to its own preset icon.
The subtitle components displayed in the subtitle editing interface are associated with the video content displayed in the video editing interface; for example, when a user edits a video through the video editing interface, the subtitle components displayed in the subtitle editing interface are the subtitle components to be added to the video.
In practice, at least two subtitle components may be displayed in the subtitle editing interface, each corresponding to its own preset icon; for example, a preset icon may be displayed in the end area of each subtitle component. In response to a selection operation on the preset icon corresponding to a certain subtitle component, that component is taken as the first subtitle component, i.e., the component whose text will be inserted into another subtitle component when the components are merged.
Specifically, in response to a preset action performed by an operating medium on a preset icon in the subtitle editing interface, a preset I/O interface may be invoked, and the I/O interface may take the subtitle component corresponding to the selected preset icon as the first subtitle component.
For example, when it is detected that the user long-presses (i.e., the preset action) the preset icon in the end area of a certain subtitle component with a mouse (i.e., the operating medium), a callback is triggered: in response to the selection operation, the subtitle component corresponding to the selected preset icon is taken as the first subtitle component, and then, in response to movement events of the operating medium, a drag function for the first subtitle component can be started.
In step S120, the action point corresponding to the selection operation is moved, and positioning information of the action point in the subtitle editing interface is acquired in real time.
the action point may be a virtual display corresponding to the operation medium, such as a cursor display of a mouse in a subtitle editing interface.
As an example, the positioning information may be a text information positioning position corresponding to a position where the action point is located in a text region of another subtitle component to be combined, for example, a current cursor position of a mouse is at a corresponding character in text information displayed in the text region.
In a specific implementation, an action point corresponding to the selected operation may be moved, for example, a user uses a mouse to move a corresponding cursor, so that the first subtitle component realizes a dragging function, and positioning information of the action point in a subtitle editing interface may be obtained in real time, for example, a preset I/O interface may be invoked in a process of moving the action point corresponding to the selected operation, and the I/O interface may be used to obtain positioning information of the action point in the subtitle editing interface in real time.
In an example, the terminal may identify that the current cursor of the mouse is positioned at a corresponding character in text information of another subtitle component to be combined based on an application program interface (e.g., field. Selection start) provided by the subtitle editing application platform.
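The positioning step above can be sketched as reading the caret offset from the focused text field on release. This is a hedged JavaScript illustration: the object shape mimics the minimal surface of a DOM text field (`dataset`, `selectionStart`), and all names are assumptions for the example, not the disclosure's actual implementation.

```javascript
// Resolve the latest cached positioning information into a target subtitle
// component and a character offset. `focusedField` stands in for the DOM text
// field under the cursor; in a browser it would expose the same properties.
function resolveInsertionTarget(focusedField) {
  return {
    componentId: focusedField.dataset.componentId, // which subtitle component
    offset: focusedField.selectionStart,           // caret index in its text
  };
}
```

In a real browser, `selectionStart` is the standard property of `<input>`/`<textarea>` elements that reports the caret's character index.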
In step S130, in response to a release operation corresponding to the selection operation, the second subtitle component and the target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed.
The second subtitle component may be the other subtitle component to be merged; the text information in a subtitle component may be text recognized from the video to be edited, or text added by the user.
As an example, the target insertion position may be located at a character in the text information displayed in the text region of the second subtitle component.
In practice, in response to the release operation corresponding to the selection operation, the second subtitle component and the target insertion position can be determined based on the most recently acquired positioning information, so that the text information of the first subtitle component can be inserted at the target insertion position in the second subtitle component to obtain a new subtitle component, and the merged new subtitle component can be displayed.
Specifically, a preset I/O interface may be invoked in response to a preset release action corresponding to the selection operation; the I/O interface may determine the second subtitle component and the target insertion position based on the most recently acquired positioning information, insert the text information of the first subtitle component at the target insertion position in the second subtitle component, and display the resulting new subtitle component.
For example, when it is detected that the user releases the mouse button, the press operation ends; the most recently acquired positioning information is determined based on the application program interface provided by the subtitle editing platform, and the second subtitle component to be merged and the target insertion position can then be determined from it, so that the text information of the first subtitle component and the text information of the second subtitle component are merged at the target insertion position to obtain a new, merged subtitle component.
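The release handling described above can be put together in one function: delete the two source components and show one merged component in their place. A minimal JavaScript sketch; the component shape `{ id, text, start, end }` and every name are illustrative assumptions, not the disclosure's implementation.

```javascript
// Merge performed on mouse release: the dragged (first) component is removed
// and its text is spliced into the target (second) component at the given
// character offset; the merged component inherits the combined time interval.
function mergeComponents(components, firstId, secondId, offset) {
  const first = components.find((c) => c.id === firstId);   // dragged component
  const second = components.find((c) => c.id === secondId); // drop target
  const merged = {
    id: `${firstId}+${secondId}`,
    // first split text + dragged text + second split text
    text: second.text.slice(0, offset) + first.text + second.text.slice(offset),
    start: Math.min(first.start, second.start), // earliest time point
    end: Math.max(first.end, second.end),       // latest time point
  };
  // Drop the dragged component and replace the target with the merged one.
  return components
    .filter((c) => c.id !== firstId)
    .map((c) => (c.id === secondId ? merged : c));
}
```

Returning a fresh array rather than mutating in place mirrors how a UI layer would typically re-render the subtitle list after the merge.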
Compared with the traditional approach of merging subtitles through check-box ticking and button clicks, the technical scheme of this embodiment uses a drag-and-connect interaction: based on the press, move, and release operations of the mouse, the user completes subtitle component selection, merge position determination, and subtitle component merging in a single action, without first ticking boxes and then clicking a button. This streamlines the operation flow and supports accurate, flexible detection of the insertion position.
According to the above subtitle editing method, in response to a selection operation on a preset icon in the subtitle editing interface, the subtitle component corresponding to the selected preset icon is taken as a first subtitle component, where each subtitle component in the interface corresponds to its own preset icon; the action point corresponding to the selection operation is then moved and its positioning information in the interface is acquired in real time; and, in response to a release operation corresponding to the selection operation, a second subtitle component and a target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed. In this way, the subtitle components to be merged and the exact merge position can be located quickly through the selection of a preset icon and the movement of the action point, improving the efficiency of subtitle merging.
In an exemplary embodiment, as shown in fig. 2, after the step of responding to the selection operation on the preset icon in the subtitle editing interface, the method further includes the following steps.
In step S210, the selected preset icon is displayed in a triggered state.
In a specific implementation, after responding to the selection operation on the preset icon in the subtitle editing interface, the terminal may display the selected preset icon in a triggered state, for example by displaying the icon in a specified color to indicate that it has been triggered.
In an example, a subtitle component may be highlighted in response to a click operation on its area in the subtitle editing interface; a selection operation may then be performed on the preset icon in the end area of that subtitle component, and the icon is displayed in the triggered state.
In step S220, in the process of moving the action point corresponding to the selected operation, the movement track indication of the action point is displayed with the preset icon in the triggered state as the starting point.
After the preset icon is displayed in the triggered state, while the action point corresponding to the selection operation is moving, the movement track indication of the action point may be displayed with the triggered preset icon as the starting point, for example, by constructing the movement track indication as a straight line starting from the triggered preset icon.
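The track-indication logic described above can be sketched as a small helper that recomputes the line endpoints on every move of the action point. This is a minimal TypeScript sketch; the `Point` and `TrackIndication` shapes and the function name are illustrative assumptions, not APIs from the disclosure:

```typescript
interface Point { x: number; y: number; }

interface TrackIndication {
  start: Point; // anchored at the preset icon in the triggered state
  end: Point;   // follows the real-time position of the action point
}

// Recompute the movement track indication each time the action point moves;
// the icon anchor stays fixed while the end point tracks the cursor.
function updateTrackIndication(iconAnchor: Point, actionPoint: Point): TrackIndication {
  return { start: iconAnchor, end: actionPoint };
}
```

In a real interface this would be invoked from a mouse-move handler, redrawing the line segment on each event.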
According to this technical scheme, the selected preset icon is displayed in a triggered state, and while the action point corresponding to the selection operation is moving, the movement track indication of the action point is displayed with the triggered preset icon as the starting point. The movement track of the action point can thus be visualized from the triggered preset icon, improving the visual effect of the subtitle merging process.
In an exemplary embodiment, displaying the selected preset icon as a triggered state includes: displaying the selected preset icon as a first preset color; the first preset color is different from the display color of the unselected preset icon.
As an example, the first preset color may be red to enhance the display of the preset icon in the triggered state.
In practical application, a first preset color corresponding to the triggered state may be preset, and then the selected preset icon may be displayed as the first preset color, where the first preset color may be different from a display color of the unselected preset icon, so as to represent that the preset icon is in the triggered state.
According to this technical scheme, the selected preset icon is displayed in the first preset color, which differs from the display color of unselected preset icons, so that the preset icon is highlighted as being in the triggered state, conveniently indicating that the subtitle merging function has been triggered.
In an exemplary embodiment, in a process of moving an action point corresponding to a selected operation, displaying a movement track indication of the action point with a preset icon in a triggered state as a starting point, including: in the process of moving the action point corresponding to the selected operation, displaying a straight line segment with a second preset color as a movement track indication; the straight line segment takes a preset icon in a triggered state as a starting point and takes the real-time position of an action point in a subtitle editing interface as an end point.
As an example, the second preset color may be red to enhance the indication of the movement track of the action point.
In practical application, while the action point corresponding to the selection operation is moving, a straight line segment in the second preset color may be displayed as the movement track indication. The straight line segment takes the triggered preset icon as its starting point and the real-time position of the action point in the subtitle editing interface as its end point, for example, a red straight line segment from the preset icon to the current real-time position of the action point.
Specifically, in the subtitle editing interface, as the mouse moves, a line may be drawn between the preset icon at the end of the selected subtitle component (i.e., the first subtitle component) and the current mouse cursor position to show a drag effect.
According to this technical scheme, a straight line segment in the second preset color is displayed as the movement track indication while the action point corresponding to the selection operation is moving, with the triggered preset icon as the starting point of the line segment and the real-time position of the action point in the subtitle editing interface as the end point. Quick positioning can thus be performed based on the movement track indication, achieving an intuitive visual display and improving subtitle merging efficiency.
In an exemplary embodiment, after the step of taking, as the first subtitle component, the subtitle component corresponding to the selected preset icon in response to the selection operation of the preset icon acting in the subtitle editing interface, the method further includes: caching the text information in the first subtitle component, so that when the release operation is responded to, the text information in the first subtitle component is obtained from the cache and inserted into the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed.
In a specific implementation, after the first subtitle component is determined, the text information in the first subtitle component may be cached, where the text information may include the subtitle text content, subtitle text font, subtitle text word size, subtitle text style, and the like. When the release operation is responded to, the text information in the first subtitle component may then be obtained from the cache and inserted into the target insertion position in the second subtitle component, and the resulting new subtitle component may be displayed.
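As a rough illustration of this caching step, the text information could be kept in a map keyed by component identifier. This is a hedged sketch; the names and the `TextInfo` fields are assumptions modeled on the attributes listed above (content, font, word size, style):

```typescript
// Assumed shape of the cached text information of a subtitle component.
interface TextInfo {
  content: string;
  font?: string;
  wordSize?: number;
  style?: string;
}

const textInfoCache = new Map<string, TextInfo>();

// Called when the preset icon of the first subtitle component is selected.
function cacheTextInfo(componentId: string, info: TextInfo): void {
  textInfoCache.set(componentId, info);
}

// Called when the release operation is responded to; returns undefined
// if nothing was cached for that component.
function readCachedTextInfo(componentId: string): TextInfo | undefined {
  return textInfoCache.get(componentId);
}
```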
According to this technical scheme, the text information in the first subtitle component is cached so that, when the release operation is responded to, it can be obtained from the cache, inserted into the target insertion position in the second subtitle component, and the resulting new subtitle component displayed. The text information of the subtitle component to be merged can thus be retrieved from the cache, improving the efficiency of subtitle merging.
In an exemplary embodiment, after the step of moving the action point corresponding to the selected operation to obtain the positioning information of the action point in the subtitle editing interface in real time, the method further includes: and caching the positioning information to acquire the positioning information of the last time of caching from the cache as the latest positioning information when responding to the release operation, and determining the second caption component and the target insertion position based on the latest positioning information.
In practical application, after the positioning information is obtained in real time, it can be cached; the last-cached positioning information can then be obtained from the cache as the latest positioning information when the release operation is responded to, so that the second subtitle component and the target insertion position are determined based on the latest positioning information.
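A minimal sketch of this last-write-wins positioning cache, under the assumption that the positioning information carries the candidate second component and a character index; all names are illustrative, not from the disclosure:

```typescript
// Assumed positioning information captured on each move of the action point.
interface PositioningInfo {
  componentId: string; // candidate second subtitle component under the cursor
  charIndex: number;   // character sequence number at the cursor position
}

let lastPositioning: PositioningInfo | null = null;

// Invoked on every move: overwrite the cache, since only the latest entry matters.
function cachePositioning(info: PositioningInfo): void {
  lastPositioning = info;
}

// Invoked on release: the last-cached positioning decides both the second
// subtitle component and the target insertion position.
function resolveInsertTarget(): PositioningInfo {
  if (lastPositioning === null) throw new Error("no positioning information cached");
  return lastPositioning;
}
```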
Specifically, as the mouse moves, the subtitle component to be merged into (i.e., the second subtitle component) and the subtitle merge position (i.e., the target insertion position) corresponding to the current mouse cursor position can be determined dynamically through an application program interface provided by the subtitle editing application platform and placed into a cache. When the mouse is released, the subtitle components can then be merged according to the last-cached positioning information through the application program interface.
In an example, the character sequence number corresponding to the last-cached cursor position can be obtained from the latest positioning information and determined as the specified position at which the text to be merged is subsequently inserted.
According to this technical scheme, the positioning information is cached so that, when the release operation is responded to, the last-cached positioning information is obtained from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on it. The other subtitle component to be merged and the specified text insertion position can thus be determined from the cache, improving the efficiency of subtitle merging.
In an exemplary embodiment, each subtitle component may have a corresponding time interval, and after the step of displaying the resulting new subtitle component, the method further includes: displaying a new time interval at a corresponding position of the new subtitle component, where the new time interval is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component.
The time interval corresponding to each caption component is consistent with the time interval corresponding to the caption component in the video to be edited, for example, the time interval corresponding to the caption component may be the time interval for displaying the caption component in the video to be edited.
In a specific implementation, since each caption component may have a corresponding time interval, after determining the first caption component, the time interval corresponding to the first caption component may be cached, so that in the caption merging process, a new time interval may be determined according to the time interval corresponding to the first caption component and the time interval corresponding to the second caption component, and the new time interval may be used as the time interval corresponding to the new caption component, and may be displayed at the corresponding position of the new caption component.
According to the technical scheme, the new time interval is displayed at the corresponding position of the new subtitle component, and is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component, so that the proper subtitle time span after merging can be ensured.
In an exemplary embodiment, the first caption component may be adjacent to the second caption component, and before the step of presenting the new time interval at the corresponding position of the new caption component, further comprising: determining the earliest time point and the latest time point from the time interval corresponding to the first caption component and the time interval corresponding to the second caption component; and taking the earliest time point as a starting time point and the latest time point as an ending time point to obtain a new time interval.
In practical applications, the subtitle components to be combined may be two adjacent subtitle components from the same video, i.e. the first subtitle component may be adjacent to the second subtitle component, and the time span of the new subtitle component (i.e. the new time interval) may be calculated as follows:
the earliest starting time (i.e., earliest time point) of the two subtitles to be merged is taken as the starting time (i.e., starting time point) of the new subtitle component, and the latest ending time (i.e., latest time point) of the two subtitles to be merged is taken as the ending time (i.e., ending time point) of the new subtitle component, so that the time span of the new subtitle component contains the time spans of the two subtitles to be merged.
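The interval rule above (earliest start, latest end) can be expressed directly. This is a hedged TypeScript sketch with assumed millisecond timestamps; the names are illustrative:

```typescript
// A subtitle component's display interval in the video, e.g. in milliseconds.
interface TimeInterval { start: number; end: number; }

// New time interval for the merged subtitle component: the earliest time
// point becomes the start, the latest time point becomes the end, so the
// result covers both source intervals.
function mergeIntervals(a: TimeInterval, b: TimeInterval): TimeInterval {
  return {
    start: Math.min(a.start, b.start), // earliest time point
    end: Math.max(a.end, b.end),       // latest time point
  };
}
```

For example, merging intervals [1000, 2000] and [1800, 3500] yields [1000, 3500], which contains both.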
According to the technical scheme of this embodiment, the earliest time point and the latest time point are determined from the time intervals corresponding to the first and second subtitle components; the earliest time point is taken as the starting time point and the latest time point as the ending time point to obtain the new time interval. A proper new time interval can thus be determined, ensuring that the time span of the merged subtitle covers the time spans of the two subtitles to be merged.
In an exemplary embodiment, inserting the text information in the first subtitle component into the target insertion position in the second subtitle component and displaying the resulting new subtitle component includes: splitting the text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text, the first split text preceding the second split text in the text information of the second subtitle component; and splicing the first split text, the text information in the first subtitle component, and the second split text in sequence to obtain new text information, and displaying a new subtitle component containing the new text information.
Here, the text information may be the subtitle text content carried by the subtitle component.
In the process of merging the caption components, the text information in the second caption component can be split according to the target insertion position to obtain a first split text and a second split text, the first split text is positioned before the second split text in the text information of the second caption component, and then the first split text, the text information in the first caption component and the second split text can be spliced in sequence to obtain new text information, and the new caption component containing the new text information can be displayed.
In an example, character strings of two subtitle components to be combined may be extracted, and text information of the dragged subtitle component (i.e., text information in the first subtitle component) may be inserted at a specified position based on the target insertion position by using a character string splitting interface and a splicing interface, so as to obtain a new subtitle component containing new text information.
For example, by using a character string splitting interface, text information in the second subtitle component can be split into two character strings with the obtained target insertion position as a boundary, so as to obtain a first split text and a second split text, and further by using a character string splicing interface, the first split text can be spliced to a starting position of the text information in the first subtitle component, and the second split text can be spliced to an ending position of the text information in the first subtitle component, so as to obtain a brand new character string text (i.e., new text information).
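Assuming the splitting and splicing interfaces behave like ordinary string operations, the merge described above reduces to cutting the second component's text at the target insertion position and splicing the first component's text between the two halves. A minimal sketch (names are illustrative):

```typescript
// Split the second component's text at the target insertion position and
// splice the first component's text between the halves:
// result = first split text + first component's text + second split text.
function mergeText(secondText: string, firstText: string, insertAt: number): string {
  const firstSplit = secondText.slice(0, insertAt); // text before the cursor
  const secondSplit = secondText.slice(insertAt);   // text after the cursor
  return firstSplit + firstText + secondSplit;
}
```

With the fig. 3 example, inserting subtitle text content 1 between contents a and b gives "a" + "1" + "b".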
According to this technical scheme, the text information in the second subtitle component is split according to the target insertion position to obtain a first split text and a second split text; the first split text, the text information in the first subtitle component, and the second split text are then spliced in sequence to obtain new text information, and a new subtitle component containing the new text information is displayed. Compared with a traditional method, which only splices subtitle data head-to-tail at the level of whole subtitles, this method can merge the text information of subtitle components at a specified insertion position and can support scenarios requiring flexible insertion at a specified position.
In an exemplary embodiment, presenting the new subtitle component resulting therefrom includes: and deleting the first subtitle component and the second subtitle component and preset icons corresponding to the first subtitle component and the second subtitle component in the subtitle editing interface, and displaying the new subtitle component after rendering and the preset icon corresponding to the new subtitle component.
In practical application, a new subtitle component can be created according to the obtained new time interval and new text information; the first subtitle component, the second subtitle component, and their corresponding preset icons are then deleted from the subtitle editing interface, and the rendered new subtitle component and its corresponding preset icon are displayed.
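One possible (assumed) data-model update for this step: remove the two source components from the component list and insert the rendered new component at the position of the earlier of the two. The `SubtitleComponent` shape and function name are illustrative, not from the disclosure:

```typescript
// Assumed minimal model of a subtitle component in the editing interface.
interface SubtitleComponent {
  id: string;
  text: string;
  interval: { start: number; end: number };
}

// Delete the first and second components and place the merged component
// where the earlier of the two previously sat.
function replaceMerged(
  components: SubtitleComponent[],
  firstId: string,
  secondId: string,
  merged: SubtitleComponent,
): SubtitleComponent[] {
  const index = components.findIndex(c => c.id === firstId || c.id === secondId);
  const remaining = components.filter(c => c.id !== firstId && c.id !== secondId);
  remaining.splice(index, 0, merged); // new component takes the pair's place
  return remaining;
}
```

Rendering the new component and its preset icon would then follow from this updated list.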
According to this technical scheme, the first subtitle component, the second subtitle component, and their corresponding preset icons are deleted from the subtitle editing interface, and the rendered new subtitle component and its corresponding preset icon are displayed, improving the visual effect of the subtitle merging process.
In order that those skilled in the art will better understand the above steps, an embodiment of the present disclosure will be exemplarily described below by way of an example, but it should be understood that the embodiment of the present disclosure is not limited thereto.
As shown in fig. 3a, in the subtitle editing interface, the two subtitle components to be merged are 1 (i.e., the first subtitle component) and 2 (i.e., the second subtitle component) in fig. 3a. The two subtitle components to be merged are adjacent, each corresponds to a preset icon, and the first subtitle component is located below the adjacent second subtitle component. By long-pressing the preset icon in the end region of the first subtitle component (such as the icon pointed to by the cursor in fig. 3a), the first subtitle component corresponding to the selected preset icon can be determined; this first subtitle component is then inserted into the other subtitle component when the subtitle components are merged.
As the mouse moves, a line can be drawn between the preset icon at the end of the selected subtitle component (1 in fig. 3b) and the current mouse cursor position to show a drag effect. Positioning information of the current mouse cursor within the text information displayed in the text area of the second subtitle component (2 in fig. 3b) can be acquired in real time through an application program interface provided by the subtitle editing application platform and cached. When the mouse is released, the specified position at which the text to be merged is subsequently inserted (i.e., the target insertion position; for example, in fig. 3b the cursor position lies between subtitle text content a and subtitle text content b) can be determined through the interface according to the last-cached positioning information.
According to the target insertion position, the text information of the second subtitle component can be split into two character strings through the character-string splitting interface, with the target insertion position as the boundary; these may be denoted segment A and segment B. Through the character-string splicing interface, segment A (such as subtitle text content a) can then be spliced to the starting position of the text information of the first subtitle component (such as subtitle text content 1), and segment B (such as subtitle text content b) spliced to its ending position, yielding a brand-new character-string text (such as subtitle text content a + subtitle text content 1 + subtitle text content b in fig. 3c). The first subtitle component is thereby inserted at the specified text position of the second subtitle component, and a new subtitle component (such as 1 in fig. 3c) is obtained.
Fig. 4 is a flowchart illustrating another subtitle editing method according to an exemplary embodiment, which is used in a computer device such as a terminal, as shown in fig. 4, and includes the following steps.
In step S410, in response to a selection operation of a preset icon acting in the subtitle editing interface, the selected preset icon is displayed as a triggered state.

In step S420, in the process of moving the action point corresponding to the selected operation, the movement track indication of the action point is displayed with the preset icon in the triggered state as a starting point.

In step S430, the subtitle component corresponding to the selected preset icon is used as the first subtitle component; each subtitle component in the subtitle editing interface corresponds to a preset icon respectively.

In step S440, the text information in the first subtitle component is cached, so that when the release operation is responded to, the text information in the first subtitle component is obtained from the cache, inserted into the target insertion position in the second subtitle component, and the new subtitle component is displayed.

In step S450, the action point corresponding to the selected operation is moved, and positioning information of the action point in the subtitle editing interface is obtained in real time.

In step S460, the positioning information is cached, so that when the release operation is responded to, the last-cached positioning information is obtained from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on the latest positioning information.

In step S470, in response to a release operation corresponding to the selected operation, a second subtitle component and a target insertion position are determined based on the latest acquired positioning information, and the text information in the first subtitle component is inserted into the target insertion position in the second subtitle component.
In step S480, in the subtitle editing interface, the first subtitle component, the second subtitle component, and the preset icons corresponding to the first and second subtitle components are deleted, and the rendered new subtitle component and the preset icon corresponding to the new subtitle component are displayed.

In step S490, each subtitle component has a corresponding time interval, and a new time interval is shown at a corresponding position of the new subtitle component; the new time interval is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component.

It should be noted that, for specific limitations of the above steps, reference may be made to the specific limitations of the subtitle editing method described above, which are not repeated here.
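Steps S410 through S490 can be summarized as a small state machine over select / move / release events. This TypeScript sketch is a simplified illustration under assumed event shapes, not the disclosed implementation; rendering, icon states, and time-interval handling are omitted:

```typescript
// Assumed editor events driving the merge flow.
type EditorEvent =
  | { kind: "select"; componentId: string; text: string } // S410-S440
  | { kind: "move"; targetId: string; charIndex: number } // S450-S460
  | { kind: "release" };                                  // S470

interface MergeState {
  firstId?: string;
  firstText?: string;
  lastTarget?: { targetId: string; charIndex: number };
  mergedText?: string;
}

function reduce(state: MergeState, event: EditorEvent, texts: Map<string, string>): MergeState {
  switch (event.kind) {
    case "select": // pick the first component and cache its text
      return { firstId: event.componentId, firstText: event.text };
    case "move": // cache only the latest positioning information
      return { ...state, lastTarget: { targetId: event.targetId, charIndex: event.charIndex } };
    case "release": { // split the second text and splice the first text in
      if (!state.firstText || !state.lastTarget) return state;
      const second = texts.get(state.lastTarget.targetId) ?? "";
      const i = state.lastTarget.charIndex;
      return { ...state, mergedText: second.slice(0, i) + state.firstText + second.slice(i) };
    }
  }
}
```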
It should be understood that, although the steps in the flowcharts of fig. 1, 2, and 4 are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 1, 2, and 4 may include a plurality of steps or stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily sequential, but may be performed in turn or alternately with at least some of the other steps or stages.
Fig. 5 is a block diagram illustrating a subtitle editing apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes:
a subtitle component selecting unit 501 configured to perform a selecting operation in response to a preset icon acting in a subtitle editing interface, taking a subtitle component corresponding to the selected preset icon as a first subtitle component; each subtitle component in the subtitle editing interface corresponds to a preset icon respectively;
A positioning information real-time obtaining unit 502 configured to perform moving the action point corresponding to the selected operation, and obtain positioning information of the action point in the subtitle editing interface in real time;
and a subtitle component merging unit 503 configured to, in response to a release operation corresponding to the selection operation, determine a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert the text information in the first subtitle component into the target insertion position in the second subtitle component, and display the resulting new subtitle component.
In one possible implementation manner, the subtitle editing apparatus further includes:
a triggered state display unit specifically configured to perform displaying the selected preset icon as a triggered state;
the movement track indication display unit is specifically configured to display the movement track indication of the action point by taking the preset icon in the triggered state as a starting point in the process of moving the action point corresponding to the selected operation.
In one possible implementation, the triggered state display unit is specifically configured to perform displaying the selected preset icon as a first preset color; the first preset color is different from a display color of the unselected preset icon.
In one possible implementation manner, the movement track indication display unit is specifically configured to perform displaying a straight line segment of a second preset color as the movement track indication in a process of moving the action point corresponding to the selected operation; and the straight line segment takes the preset icon in the triggered state as a starting point and takes the real-time position of the action point in the subtitle editing interface as an end point.
In one possible implementation manner, the subtitle editing apparatus further includes:
the text information caching unit is specifically configured to perform caching of the text information in the first caption component, so as to obtain the text information in the first caption component from the cache when responding to the release operation, insert the text information in the first caption component into the target insertion position in the second caption component, and display the text information to obtain a new caption component.
In one possible implementation manner, the subtitle editing apparatus further includes:
and the positioning information caching unit is specifically configured to perform caching of the positioning information, so that when the release operation is responded, the positioning information cached last time is obtained from the cache and used as the latest positioning information, and the second caption component and the target insertion position are determined based on the latest positioning information.
In one possible implementation manner, each caption component has a corresponding time interval, and the caption editing device further includes:
a new time interval display unit, specifically configured to perform displaying a new time interval at a corresponding position of the new subtitle component; and the new time interval is determined according to the time interval corresponding to the first caption component and the time interval corresponding to the second caption component.
In one possible implementation manner, the first caption component is adjacent to the second caption component, and the caption editing device further includes:
an earliest time point and latest time point determining unit, specifically configured to perform determining an earliest time point and a latest time point from a time interval corresponding to the first caption component and a time interval corresponding to the second caption component;
the new time interval obtaining unit is specifically configured to obtain the new time interval by taking the earliest time point as a starting time point and the latest time point as an ending time point.
In one possible implementation manner, the subtitle component merging unit 503 is specifically configured to perform splitting of the text information in the second subtitle component according to the target insertion position, so as to obtain a first split text and a second split text; the first split text is before the second split text in the text information of the second subtitle component; and splicing the first split text, the text information in the first caption component and the second split text in sequence to obtain new text information, and displaying a new caption component containing the new text information.
In one possible implementation manner, the subtitle component merging unit 503 is specifically further configured to execute deleting the first subtitle component, the second subtitle component, preset icons corresponding to the first subtitle component and the second subtitle component in the subtitle editing interface, and displaying the new subtitle component after rendering and the preset icon corresponding to the new subtitle component.
The specific manner in which the various modules perform operations in the apparatus of the above embodiments has been described in detail in the embodiments of the method and will not be described in detail here.
Fig. 6 is a block diagram illustrating an electronic device 600 for subtitle editing according to an example embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, an electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, video, and so forth. The memory 604 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory.
The power supply component 606 provides power to the various components of the electronic device 600. The power supply components 606 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen that provides an output interface between the electronic device 600 and the user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the electronic device 600 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a speech recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 614 includes one or more sensors for providing status assessments of various aspects of the electronic device 600. For example, the sensor assembly 614 may detect an on/off state of the electronic device 600 and the relative positioning of components, such as the display and keypad of the electronic device 600. The sensor assembly 614 may also detect a change in position of the electronic device 600 or of a component of the electronic device 600, the presence or absence of user contact with the electronic device 600, the orientation or acceleration/deceleration of the electronic device 600, and a change in temperature of the electronic device 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate communication between the electronic device 600 and other devices, either wired or wireless. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, ultra Wideband (UWB) technology, bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as the memory 604, including instructions executable by the processor 620 of the electronic device 600 to perform the above-described method. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer program product is also provided, comprising instructions executable by the processor 620 of the electronic device 600 to perform the above-described method.
It should be noted that the descriptions of the foregoing apparatus, electronic device, computer-readable storage medium, computer program product, and the like may include other implementations corresponding to the method embodiments; for specific implementations, reference may be made to the descriptions of the related method embodiments, which are not repeated here.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (20)

1. A subtitle editing method, the method comprising:
in response to a selected operation acting on a preset icon in a subtitle editing interface, taking the subtitle component corresponding to the selected preset icon as a first subtitle component; each subtitle component in the subtitle editing interface corresponds to a respective preset icon;
moving an action point corresponding to the selected operation, and acquiring positioning information of the action point in the subtitle editing interface in real time;
in response to a release operation corresponding to the selected operation, determining a second subtitle component and a target insertion position based on the most recently acquired positioning information, inserting text information in the first subtitle component into the target insertion position in the second subtitle component, and displaying the text information to obtain a new subtitle component; the target insertion position is located at a character of the text information in the second subtitle component;
wherein after the step of responding to the selected operation acting on the preset icon in the subtitle editing interface, the method further comprises:
displaying the selected preset icon as a triggered state;
and, in the process of moving the action point corresponding to the selected operation, displaying a movement track indication of the action point with the preset icon in the triggered state as a starting point.
2. The method of claim 1, wherein displaying the selected preset icon as a triggered state comprises:
displaying the selected preset icon as a first preset color; the first preset color is different from a display color of the unselected preset icon.
3. The method according to claim 1, wherein the displaying the movement track indication of the action point with the preset icon in the triggered state as a starting point in the process of moving the action point corresponding to the selected operation includes:
in the process of moving the action point corresponding to the selected operation, displaying a straight line segment with a second preset color as the movement track indication;
and the straight line segment takes the preset icon in the triggered state as a starting point and takes the real-time position of the action point in the subtitle editing interface as an end point.
4. The method according to claim 1, further comprising, after the step of taking the subtitle component corresponding to the selected preset icon as the first subtitle component in response to the selected operation acting on the preset icon in the subtitle editing interface:
caching the text information in the first subtitle component, so as to acquire the text information in the first subtitle component from the cache when responding to the release operation, insert the text information in the first subtitle component into the target insertion position in the second subtitle component, and display the text information to obtain a new subtitle component.
5. The method of claim 1, further comprising, after the step of moving the action point corresponding to the selected operation and acquiring, in real time, positioning information of the action point in the subtitle editing interface:
caching the positioning information, so that when the release operation is responded to, the most recently cached positioning information is acquired from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on the latest positioning information.
6. The method of claim 1, wherein each of the subtitle components has a corresponding time interval, and further comprising, after the step of displaying the text information to obtain a new subtitle component:
displaying a new time interval at a corresponding position of the new subtitle component; the new time interval is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component.
7. The method of claim 6, wherein the first subtitle component is adjacent to the second subtitle component, and wherein, prior to the step of displaying a new time interval at a corresponding position of the new subtitle component, the method further comprises:
determining the earliest time point and the latest time point from the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component;
and taking the earliest time point as a starting time point and the latest time point as an ending time point to obtain the new time interval.
8. The method of claim 1, wherein inserting text information in the first subtitle component into the target insertion position in the second subtitle component and thereby displaying a new subtitle component comprises:
splitting the text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text; the first split text precedes the second split text in the text information of the second subtitle component;
and splicing the first split text, the text information in the first subtitle component, and the second split text in sequence to obtain new text information, and displaying a new subtitle component containing the new text information.
9. The method according to any one of claims 1 to 8, wherein the displaying to obtain a new subtitle component comprises:
deleting, in the subtitle editing interface, the first subtitle component, the second subtitle component, and the preset icons corresponding to the first subtitle component and the second subtitle component, and displaying the rendered new subtitle component and the preset icon corresponding to the new subtitle component.
10. A subtitle editing apparatus, comprising:
a subtitle component selection unit configured to, in response to a selected operation acting on a preset icon in a subtitle editing interface, take the subtitle component corresponding to the selected preset icon as a first subtitle component; each subtitle component in the subtitle editing interface corresponds to a respective preset icon;
a positioning information real-time acquisition unit configured to move an action point corresponding to the selected operation and acquire positioning information of the action point in the subtitle editing interface in real time;
a subtitle component merging unit configured to, in response to a release operation corresponding to the selected operation, determine a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert text information in the first subtitle component into the target insertion position in the second subtitle component, and display the text information to obtain a new subtitle component; the target insertion position is located at a character of the text information in the second subtitle component;
wherein the apparatus further comprises:
a triggered state display unit specifically configured to display the selected preset icon as a triggered state;
and a movement track indication display unit specifically configured to display, in the process of moving the action point corresponding to the selected operation, a movement track indication of the action point with the preset icon in the triggered state as a starting point.
11. The apparatus according to claim 10, wherein the triggered state display unit is specifically configured to display the selected preset icon as a first preset color; the first preset color is different from the display color of an unselected preset icon.
12. The apparatus according to claim 10, wherein the movement track indication display unit is specifically configured to display, in the process of moving the action point corresponding to the selected operation, a straight line segment of a second preset color as the movement track indication; the straight line segment takes the preset icon in the triggered state as a starting point and takes the real-time position of the action point in the subtitle editing interface as an end point.
13. The apparatus of claim 10, wherein the apparatus further comprises:
a text information caching unit specifically configured to cache the text information in the first subtitle component, so as to acquire the text information in the first subtitle component from the cache when responding to the release operation, insert the text information in the first subtitle component into the target insertion position in the second subtitle component, and display the text information to obtain a new subtitle component.
14. The apparatus of claim 10, wherein the apparatus further comprises:
and a positioning information caching unit specifically configured to cache the positioning information, so that when the release operation is responded to, the most recently cached positioning information is acquired from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on the latest positioning information.
15. The apparatus of claim 10, wherein each of the caption components has a corresponding time interval, the apparatus further comprising:
a new time interval display unit specifically configured to display a new time interval at a corresponding position of the new subtitle component; the new time interval is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component.
16. The apparatus of claim 15, wherein the first subtitle component is adjacent to the second subtitle component, the apparatus further comprising:
an earliest and latest time point determining unit specifically configured to determine the earliest time point and the latest time point from the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component;
and a new time interval obtaining unit specifically configured to obtain the new time interval by taking the earliest time point as a starting time point and the latest time point as an ending time point.
17. The apparatus according to claim 10, wherein the subtitle component merging unit is specifically configured to split the text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text, the first split text preceding the second split text in the text information of the second subtitle component; and to splice the first split text, the text information in the first subtitle component, and the second split text in sequence to obtain new text information, and display a new subtitle component containing the new text information.
18. The apparatus according to any one of claims 10 to 17, wherein the subtitle component merging unit is further configured to delete, in the subtitle editing interface, the first subtitle component, the second subtitle component, and the preset icons corresponding to the first subtitle component and the second subtitle component, and to display the rendered new subtitle component and the preset icon corresponding to the new subtitle component.
19. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the subtitle editing method of any one of claims 1 to 9.
20. A computer readable storage medium, characterized in that instructions in the computer readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the subtitle editing method according to any one of claims 1 to 9.
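Claims 1, 7, and 8 together describe the core of the merge: the second component's text is split at the target insertion position, the dragged component's text is spliced between the two halves, and the merged component receives a time interval running from the earliest start to the latest end of the two originals. As a rough illustration only — the patent claims no particular implementation, and the names `SubtitleComponent` and `merge_components` are invented for this sketch — the logic might look like:

```python
# Hypothetical sketch of the merge in claims 1, 7, and 8.
# SubtitleComponent and merge_components are illustrative names,
# not identifiers from the patent.
from dataclasses import dataclass


@dataclass
class SubtitleComponent:
    text: str
    start: float  # time interval start, in seconds
    end: float    # time interval end, in seconds


def merge_components(first: SubtitleComponent,
                     second: SubtitleComponent,
                     insert_at: int) -> SubtitleComponent:
    # Split the second component's text at the target insertion position
    # into a first split text and a second split text (claim 8).
    head, tail = second.text[:insert_at], second.text[insert_at:]
    # Splice first split text + dragged text + second split text in sequence.
    new_text = head + first.text + tail
    # New interval: earliest start to latest end of both components (claim 7).
    return SubtitleComponent(
        text=new_text,
        start=min(first.start, second.start),
        end=max(first.end, second.end),
    )


merged = merge_components(
    SubtitleComponent("world", 2.0, 3.0),   # first (dragged) component
    SubtitleComponent("hello !", 0.0, 1.5),  # second (target) component
    insert_at=6,
)
print(merged.text)                # hello world!
print(merged.start, merged.end)   # 0.0 3.0
```

Here `insert_at` plays the role of the "character of the text information in the second subtitle component" at which the target insertion position is located; the earliest/latest rule reproduces claim 7's construction of the new time interval.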
CN202110998486.2A 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium Active CN113905267B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110998486.2A CN113905267B (en) 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110998486.2A CN113905267B (en) 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113905267A CN113905267A (en) 2022-01-07
CN113905267B true CN113905267B (en) 2023-06-20

Family

ID=79187960

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110998486.2A Active CN113905267B (en) 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113905267B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119261A (en) * 2023-08-09 2023-11-24 广东保伦电子股份有限公司 Subtitle display method and system based on subtitle merging

Citations (2)

Publication number Priority date Publication date Assignee Title
CN107316642A (en) * 2017-06-30 2017-11-03 联想(北京)有限公司 Video file method for recording, audio file method for recording and mobile terminal
KR101961750B1 (en) * 2017-10-11 2019-03-25 (주)아이디어콘서트 System for editing caption data of single screen

Family Cites Families (13)

Publication number Priority date Publication date Assignee Title
US8522267B2 (en) * 2002-03-08 2013-08-27 Caption Colorado Llc Method and apparatus for control of closed captioning
JP2009260823A (en) * 2008-04-18 2009-11-05 Toshiba Corp Caption checking apparatus and caption checking method
US8424052B2 (en) * 2009-12-18 2013-04-16 Samir ABED Systems and methods for automated extraction of closed captions in real time or near real-time and tagging of streaming data for advertisements
CN102547145B (en) * 2010-12-16 2015-09-23 新奥特(北京)视频技术有限公司 A kind of automatic test approach of caption function and system
US9081848B2 (en) * 2011-12-12 2015-07-14 William Christian Hoyer Methods, apparatuses, and computer program products for preparing narratives relating to investigative matters
US9053079B2 (en) * 2011-12-12 2015-06-09 Microsoft Technology Licensing, Llc Techniques to manage collaborative documents
US9477393B2 (en) * 2013-06-09 2016-10-25 Apple Inc. Device, method, and graphical user interface for displaying application status information
KR101419871B1 (en) * 2013-12-09 2014-07-16 넥스트리밍(주) Apparatus and method for editing subtitles
CN106851401A (en) * 2017-03-20 2017-06-13 惠州Tcl移动通信有限公司 A kind of method and system of automatic addition captions
CN207251794U (en) * 2017-09-26 2018-04-17 安徽省精英机械制造有限公司 A kind of multifunctional assembled film titler
CN107967093B (en) * 2017-12-21 2020-01-31 维沃移动通信有限公司 multi-segment text copying method and mobile terminal
US10728623B2 (en) * 2018-06-06 2020-07-28 Home Box Office, Inc. Editing timed-text elements
CN111970577B (en) * 2020-08-25 2023-07-25 北京字节跳动网络技术有限公司 Subtitle editing method and device and electronic equipment

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN107316642A (en) * 2017-06-30 2017-11-03 联想(北京)有限公司 Video file method for recording, audio file method for recording and mobile terminal
KR101961750B1 (en) * 2017-10-11 2019-03-25 (주)아이디어콘서트 System for editing caption data of single screen

Non-Patent Citations (1)

Title
Application of the nonlinear editing software Edius 5.0 in video editing; Pan Hongbo; Instrumentation Users (仪器仪表用户); full text *

Also Published As

Publication number Publication date
CN113905267A (en) 2022-01-07

Similar Documents

Publication Publication Date Title
US10728196B2 (en) Method and storage medium for voice communication
CN105845124B (en) Audio processing method and device
CN112752047A (en) Video recording method, device, equipment and readable storage medium
CN114003326B (en) Message processing method, device, equipment and storage medium
CN111381739B (en) Application icon display method and device, electronic equipment and storage medium
CN113905192B (en) Subtitle editing method and device, electronic equipment and storage medium
CN110968364B (en) Method and device for adding shortcut plugins and intelligent device
CN113065021B (en) Video preview method, apparatus, electronic device, storage medium and program product
CN113905267B (en) Subtitle editing method and device, electronic equipment and storage medium
CN111736746A (en) Multimedia resource processing method and device, electronic equipment and storage medium
CN111862272B (en) Animation state machine creation method, animation control method, device, equipment and medium
CN113157181B (en) Operation guiding method and device
CN111427449A (en) Interface display method, device and storage medium
CN114153346A (en) Picture processing method and device, storage medium and electronic equipment
CN112199552B (en) Video image display method and device, electronic equipment and storage medium
CN113986083A (en) File processing method and electronic equipment
CN113010157A (en) Code generation method and device
CN113613082A (en) Video playing method and device, electronic equipment and storage medium
CN112035691A (en) Method, device, equipment and medium for displaying cell labeling data of slice image
CN111240927B (en) Method, device and storage medium for detecting time consumption of method in program
CN114397990A (en) Image distribution method and device, electronic equipment and computer readable storage medium
CN112581102A (en) Task management method and device, electronic equipment and storage medium
CN111782110A (en) Screen capturing method and device, electronic equipment and storage medium
CN110825891B (en) Method and device for identifying multimedia information and storage medium
CN112817662A (en) Method and device for starting application program functional interface and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant