CN113905192B - Subtitle editing method and device, electronic equipment and storage medium


Info

Publication number
CN113905192B
Authority
CN
China
Prior art keywords
component, subtitle, caption, new, caption component
Prior art date
Legal status
Active
Application number
CN202110996900.6A
Other languages
Chinese (zh)
Other versions
CN113905192A (en)
Inventor
付硕
赵伊
韩乔
林斐凡
范艺含
郝刚
张一舟
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110996900.6A
Publication of CN113905192A
Application granted
Publication of CN113905192B
Legal status: Active
Anticipated expiration



Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/278: Subtitling

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Circuits (AREA)
  • Television Signal Processing For Recording (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The disclosure relates to a subtitle editing method and apparatus, an electronic device, and a storage medium. The method includes: determining a selected first subtitle component in response to a selection operation acting on a subtitle component area in a subtitle editing interface; moving the action point corresponding to the selection operation and acquiring positioning information of the action point in the subtitle editing interface in real time; and, in response to a release operation corresponding to the selection operation, determining a second subtitle component and a target insertion position based on the most recently acquired positioning information, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component, and displaying the resulting new subtitle component. With the method and apparatus, the subtitle components to be merged can be moved by the selection operation and combined into a new subtitle component for display by the corresponding release operation, so no additional operation components need to be introduced. This reduces engineering complexity, improves the efficiency of subtitle merging, and supports flexible insertion at a designated position.

Description

Subtitle editing method and device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of computer technology, and in particular, to a subtitle editing method, a subtitle editing device, an electronic device and a storage medium.
Background
Currently, as the demand for video production grows, video production is gradually spreading from a small group of professionals to ordinary users, so the usability of multimedia editors is becoming increasingly important.
Because the subtitles in a video are stored in an array data format and displayed directly as a list, merging subtitles conventionally requires the user to first tick the subtitles to be merged and then combine the selected subtitle data one by one through click operations. This traditional approach requires introducing an additional checkbox component, which increases engineering complexity and leaves the computer processing redundant logic.
Accordingly, the related art suffers from inefficient subtitle merging.
Disclosure of Invention
The disclosure provides a subtitle editing method and apparatus, an electronic device, and a storage medium, which at least solve the problem of low subtitle merging efficiency in the related art. The technical solution of the present disclosure is as follows:
according to a first aspect of an embodiment of the present disclosure, there is provided a subtitle editing method including:
determining a selected first subtitle component in response to a selection operation acting on a subtitle component area in a subtitle editing interface;
moving the action point corresponding to the selection operation, and acquiring positioning information of the action point in the subtitle editing interface in real time;
and, in response to a release operation corresponding to the selection operation, determining a second subtitle component and a target insertion position based on the most recently acquired positioning information, inserting the text information of the first subtitle component at the target insertion position in the second subtitle component, and displaying the resulting new subtitle component.
In one possible implementation, after the step of determining the selected first subtitle component in response to the selection operation acting on the subtitle component area in the subtitle editing interface, the method further includes:
displaying the first subtitle component as a floating component;
and adjusting the display position of the floating component in real time while the action point corresponding to the selection operation is moved, so that the floating component moves along with the action point.
In one possible implementation, after the step of determining the selected first subtitle component in response to the selection operation acting on the subtitle component area in the subtitle editing interface, the method further includes:
caching the text information of the first subtitle component, so that in response to the release operation the text information is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed to obtain a new subtitle component.
In one possible implementation manner, after the step of moving the action point corresponding to the selected operation to obtain the positioning information of the action point in the subtitle editing interface in real time, the method further includes:
caching the positioning information, so that in response to the release operation the most recently cached positioning information is retrieved from the cache as the latest positioning information, based on which the second subtitle component and the target insertion position are determined.
In one possible implementation, each of the caption components has a corresponding time interval, and after the step of presenting the new caption component, further includes:
displaying a new time interval at a corresponding position of the new subtitle component; and the new time interval is determined according to the time interval corresponding to the first caption component and the time interval corresponding to the second caption component.
In one possible implementation manner, the first caption component is adjacent to the second caption component, and before the step of displaying the new time interval at the corresponding position of the new caption component, the method further includes:
determining the earliest time point and the latest time point from the time interval corresponding to the first caption component and the time interval corresponding to the second caption component;
And taking the earliest time point as a starting time point and the latest time point as an ending time point to obtain the new time interval.
In one possible implementation manner, the inserting text information in the first caption component into the target insertion position in the second caption component and displaying the new caption component obtained thereby includes:
splitting text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text; the first split text is before the second split text in the text information of the second subtitle component;
and splicing the first split text, the text information in the first caption component and the second split text in sequence to obtain new text information, and displaying a new caption component containing the new text information.
In one possible implementation, displaying the resulting new subtitle component includes:
and deleting the first subtitle component and the second subtitle component in the subtitle editing interface, and displaying the rendered new subtitle component.
According to a second aspect of the embodiments of the present disclosure, there is provided a subtitle editing apparatus including:
a subtitle component selection unit configured to determine a selected first subtitle component in response to a selection operation acting on a subtitle component area in a subtitle editing interface;
a positioning information real-time acquisition unit configured to move the action point corresponding to the selection operation and acquire positioning information of the action point in the subtitle editing interface in real time;
and a subtitle component merging unit configured to, in response to a release operation corresponding to the selection operation, determine a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert the text information of the first subtitle component at the target insertion position in the second subtitle component, and display the resulting new subtitle component.
In one possible implementation, the apparatus further includes:
a floating component display unit specifically configured to perform display of the first subtitle component as a floating component;
and a display position adjustment unit, specifically configured to adjust the display position of the floating component in real time during movement of the action point corresponding to the selection operation, so that the floating component moves along with the action point.
In one possible implementation, the apparatus further includes:
the text information caching unit is specifically configured to perform caching of the text information in the first caption component, so as to obtain the text information in the first caption component from the cache when responding to the release operation, insert the text information in the first caption component into the target insertion position in the second caption component, and display the text information to obtain a new caption component.
In one possible implementation, the apparatus further includes:
and the positioning information caching unit is specifically configured to perform caching of the positioning information, so that when the release operation is responded, the positioning information cached last time is obtained from the cache and used as the latest positioning information, and the second caption component and the target insertion position are determined based on the latest positioning information.
In one possible implementation, each of the subtitle components has a corresponding time interval, and the apparatus further includes:
a new time interval display unit, specifically configured to perform displaying a new time interval at a corresponding position of the new subtitle component; and the new time interval is determined according to the time interval corresponding to the first caption component and the time interval corresponding to the second caption component.
In one possible implementation, the first caption component is adjacent to the second caption component, and the apparatus further includes:
an earliest time point and latest time point determining unit, specifically configured to perform determining an earliest time point and a latest time point from a time interval corresponding to the first caption component and a time interval corresponding to the second caption component;
the new time interval obtaining unit is specifically configured to obtain the new time interval by taking the earliest time point as a starting time point and the latest time point as an ending time point.
In one possible implementation manner, the subtitle component merging unit is specifically configured to perform splitting of text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text; the first split text is before the second split text in the text information of the second subtitle component; and splicing the first split text, the text information in the first caption component and the second split text in sequence to obtain new text information, and displaying a new caption component containing the new text information.
In one possible implementation manner, the subtitle component merging unit is specifically further configured to delete the first subtitle component and the second subtitle component in the subtitle editing interface, and display the rendered new subtitle component.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device comprising a memory storing a computer program and a processor that, when executing the computer program, implements the subtitle editing method according to the first aspect or any possible implementation of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a storage medium having stored thereon a computer program which, when executed by a processor, implements the subtitle editing method according to the first aspect or any possible implementation of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising a computer program stored in a readable storage medium, from which at least one processor of a device reads and executes the computer program, causing the device to perform the subtitle editing method as described in any one of the embodiments of the first aspect.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
according to the scheme, the selected first caption component is determined by responding to the selected operation of the caption component area in the caption editing interface, then the action point corresponding to the selected operation is moved, positioning information of the action point in the caption editing interface is acquired in real time, further, the second caption component and the target insertion position are determined based on the latest acquired positioning information by responding to the release operation corresponding to the selected operation, text information in the first caption component is inserted into the target insertion position in the second caption component, and the new caption component is displayed. Therefore, the subtitle components to be combined can be moved based on the selected operation, the target insertion position is determined through the release operation corresponding to the selected operation, the new subtitle components are combined according to the target insertion position for display, no additional operation components are required to be introduced, engineering complexity is reduced, subtitle combining processing efficiency is improved, and scenes flexibly inserted into the designated positions can be supported.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flowchart illustrating a subtitle editing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating a display position adjustment step according to an exemplary embodiment.
Fig. 3a is a schematic diagram showing an example of a subtitle editing interface (before merging) according to an exemplary embodiment.
Fig. 3b is a schematic diagram illustrating an example subtitle editing interface (in merging) according to an exemplary embodiment.
Fig. 3c is a schematic diagram showing an example of a subtitle editing interface (after merging) according to an exemplary embodiment.
Fig. 4 is a flowchart illustrating another subtitle editing method according to an exemplary embodiment.
Fig. 5 is a block diagram illustrating a subtitle editing apparatus according to an exemplary embodiment.
Fig. 6 is an internal structural diagram of an electronic device, which is shown according to an exemplary embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure.
Fig. 1 is a flowchart illustrating a subtitle editing method according to an exemplary embodiment. The method may be used in a computer device such as a terminal, for example in connection with a video editing interface displayed by the terminal. As shown in fig. 1, the method includes the following steps.
In step S110, in response to a selection operation acting on a subtitle component area in the subtitle editing interface, a selected first subtitle component is determined.
The subtitle components displayed in the subtitle editing interface are associated with the video content displayed in the video editing interface; for example, when a user edits a video through the video editing interface, the subtitle components displayed in the subtitle editing interface are the subtitle components to be added to that video.
In practical applications, at least two caption components may be displayed in the caption editing interface, and a selected first caption component may be determined by responding to a selection operation applied to a caption component area corresponding to a certain caption component, and may be inserted into another caption component when the caption components are combined.
Specifically, by responding to a preset action of the operation medium on the subtitle component area in the subtitle editing interface, a preset I/O interface may be invoked, which may be used to determine the first subtitle component selected by the preset action.
For example, when it is detected that the user long-presses (the preset action) the subtitle component area of a certain subtitle component with the mouse (the operation medium), a callback is triggered: in response to this selection operation, the pressed subtitle component is taken as the first subtitle component, and a subsequent operation-medium move event starts the dragging of the first subtitle component.
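By way of illustration, the following is a minimal sketch of how such a long-press selection callback could be wired up in a web-based editor; the SubtitleComponent shape, the 500 ms threshold, and all identifiers here are assumptions of this example, not part of the disclosed implementation.

```typescript
// Illustrative sketch only: a long-press on a subtitle component's area
// marks it as the selected "first subtitle component". The component shape
// and the 500 ms threshold are assumptions for this example.
interface SubtitleComponent {
  id: string;
  text: string;
  start: number; // display start time in the video, milliseconds
  end: number;   // display end time in the video, milliseconds
}

let firstComponent: SubtitleComponent | null = null;

function onComponentMouseDown(component: SubtitleComponent): void {
  const LONG_PRESS_MS = 500;
  const timer = window.setTimeout(() => {
    firstComponent = component; // selection callback: dragging may now begin
  }, LONG_PRESS_MS);
  // An early release cancels the pending selection.
  window.addEventListener('mouseup', () => window.clearTimeout(timer), { once: true });
}
```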
In step S120, the action point corresponding to the selection operation is moved, and positioning information of the action point in the subtitle editing interface is acquired in real time.
the action point may be a virtual display corresponding to the operation medium, such as a cursor display of a mouse in a subtitle editing interface.
As an example, the positioning information may be the position, within the text region of another subtitle component to be merged, that corresponds to the action point, for example the character in the displayed text information at which the current mouse cursor sits.
In a specific implementation, an action point corresponding to the selected operation may be moved, for example, a user uses a mouse to move a corresponding cursor, so that the first subtitle component realizes a dragging function, and positioning information of the action point in a subtitle editing interface may be obtained in real time, for example, a preset I/O interface may be invoked in a process of moving the action point corresponding to the selected operation, and the I/O interface may be used to obtain positioning information of the action point in the subtitle editing interface in real time.
In an example, the terminal may identify the character in the text information of the other subtitle component to be merged at which the current mouse cursor sits, based on an application program interface provided by the subtitle editing application platform (e.g., field.selectionStart).
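As a sketch of this lookup, assuming each subtitle component renders its text in a `<textarea>` element (selectionStart is a standard DOM property; the surrounding wiring is illustrative):

```typescript
// Sketch: read the character offset at the caret from the text field of the
// hovered subtitle component. selectionStart is a standard DOM property of
// <textarea> elements; the rest of the wiring is an assumption.
function getCursorCharIndex(field: HTMLTextAreaElement): number {
  // The caret offset doubles as the prospective target insertion index.
  return field.selectionStart;
}

// Example wiring (illustrative): sample the index on every move event.
function trackInsertionIndex(field: HTMLTextAreaElement,
                             onIndex: (index: number) => void): void {
  field.addEventListener('mousemove', () => onIndex(field.selectionStart));
}
```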
In step S130, in response to a release operation corresponding to the selected operation, the second caption component and the target insertion position are determined based on the latest acquired positioning information, text information in the first caption component is inserted into the target insertion position in the second caption component, and a new caption component is displayed thereby.
The second subtitle component may be another subtitle component to be combined; the text information in the subtitle component may be text identified from the video to be edited, or text added by the user.
As an example, the target insertion position may be located at a character in text information displayed in a text region of the second subtitle component.
In practical application, the second caption component and the target insertion position can be determined based on the latest acquired positioning information in response to the release operation corresponding to the selected operation, so that the text information in the first caption component can be inserted into the target insertion position in the second caption component to obtain a new caption component, and the new caption component after the merging process can be displayed.
Specifically, a preset I/O interface may be invoked by responding to a preset release action corresponding to a selected operation, where the I/O interface may be configured to determine a second subtitle component and a target insertion position based on the latest acquired positioning information, insert text information in the first subtitle component into the target insertion position in the second subtitle component, and display the new subtitle component thereby.
For example, when the user is detected releasing the mouse, the press operation ends and the most recently acquired positioning information is determined through an application program interface provided by the subtitle editing application platform. The second subtitle component to be merged and the target insertion position can then be determined from that positioning information, and the text information of the first subtitle component is merged with the text information of the second subtitle component at the target insertion position, yielding the new merged subtitle component.
Compared with the traditional approach of merging subtitles by ticking checkboxes and clicking a button, the technical scheme of this embodiment uses a drag interaction based on pressing, moving, and releasing the mouse, so that selecting the subtitle component, determining the merge position, and merging the components are all completed within a single user action. The user no longer needs to tick items first and then click a button, which streamlines the operation flow and supports accurate, flexible detection of the insertion position.
According to this subtitle editing method, the selected first subtitle component is determined in response to a selection operation acting on the subtitle component area in the subtitle editing interface; the action point corresponding to the selection operation is moved while its positioning information in the subtitle editing interface is acquired in real time; and, in response to the release operation corresponding to the selection operation, the second subtitle component and the target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed. The subtitle components to be merged can thus be moved by the selection operation, the target insertion position determined by the corresponding release operation, and the new subtitle component merged and displayed accordingly, without introducing additional operation components. This reduces engineering complexity, improves subtitle merging efficiency, and supports flexible insertion at a designated position.
In an exemplary embodiment, as shown in fig. 2, after the step of determining the selected first subtitle component in response to the selection operation acting on the subtitle component area in the subtitle editing interface, the method further includes the following steps.
in step S210, the first subtitle component is displayed as a floating component;
after determining the first subtitle component, the first subtitle component may be displayed as a floating component in a subtitle editing interface.
In step S220, during movement of the action point corresponding to the selection operation, the display position of the floating component is adjusted in real time so that the floating component moves along with the action point.
After the first subtitle component is displayed as a floating component, its display position can be adjusted in real time in the subtitle editing interface while the action point corresponding to the selection operation moves, so that the floating component follows the action point; for example, as the mouse moves within the subtitle editing interface, the display position of the selected subtitle component is adjusted to present a dragging effect.
According to this technical scheme, the first subtitle component is displayed as a floating component whose display position is adjusted in real time as the action point moves, so that the floating component follows the action point. Moving the action point thus produces a visible dragging effect of the first subtitle component, improving the visualization of the subtitle merging process.
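A minimal sketch of such a drag-follow effect in a browser context follows; the cloning approach, offsets, and styling are illustrative assumptions rather than the disclosed implementation.

```typescript
// Sketch of the drag-follow effect: the selected component's element is
// cloned into a fixed-position "floating" copy that tracks the cursor on
// every mousemove. Offsets and styling are illustrative assumptions.
function createFloatingCopy(source: HTMLElement): HTMLElement {
  const ghost = source.cloneNode(true) as HTMLElement;
  ghost.style.position = 'fixed';
  ghost.style.pointerEvents = 'none'; // let move events reach components underneath
  ghost.style.opacity = '0.8';
  document.body.appendChild(ghost);
  return ghost;
}

function followActionPoint(ghost: HTMLElement, e: MouseEvent): void {
  // Re-anchor the floating copy to the action point in real time.
  ghost.style.left = `${e.clientX + 8}px`;
  ghost.style.top = `${e.clientY + 8}px`;
}

// Typical wiring (illustrative):
// document.addEventListener('mousemove', e => followActionPoint(ghost, e));
```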
In an exemplary embodiment, after the step of determining the selected first subtitle component in response to the selection operation acting on the subtitle component area in the subtitle editing interface, the method further includes: caching the text information of the first subtitle component, retrieving it from the cache in response to the release operation, inserting it at the target insertion position in the second subtitle component, and displaying the resulting new subtitle component.
In a specific implementation, after the first subtitle component is determined, its text information may be cached. The text information may include the subtitle text content, font, font size, style, and similar attributes. When the release operation occurs, the text information of the first subtitle component is retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed to obtain the new subtitle component.
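A minimal sketch of this caching step, with an assumed shape for the cached text information (the description above names the kinds of fields, content, font, font size, and style, but not their layout):

```typescript
// Sketch of caching the dragged component's text information at drag start,
// so the merge on release does not depend on the live interface state.
// The exact field layout is an assumption for this example.
interface CachedTextInfo {
  content: string;
  font?: string;
  fontSize?: number;
  style?: string;
}

let textInfoCache: CachedTextInfo | null = null;

function cacheTextInfo(info: CachedTextInfo): void {
  textInfoCache = { ...info }; // copy, so later edits to the source don't leak in
}
```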
According to this technical scheme, caching the text information of the first subtitle component allows it to be retrieved from the cache upon the release operation and inserted at the target insertion position in the second subtitle component to display a new subtitle component. Because the text to be merged is fetched from the cache, subtitle merging efficiency is improved.
In an exemplary embodiment, after the step of moving the action point corresponding to the selected operation to obtain the positioning information of the action point in the subtitle editing interface in real time, the method further includes: and caching the positioning information to acquire the positioning information of the last time of caching from the cache as the latest positioning information when responding to the release operation, and determining the second caption component and the target insertion position based on the latest positioning information.
In practical application, after the positioning information is obtained in real time, it can be cached, and the most recently cached positioning information can be retrieved from the cache as the latest positioning information when the release operation occurs, so that the second subtitle component and the target insertion position are determined based on the latest positioning information.
Specifically, when the mouse is moved, the subtitle component to be combined (i.e. the second subtitle component) and the subtitle combining position (i.e. the target inserting position) corresponding to the current cursor position of the mouse can be dynamically judged through an application program interface provided by the subtitle editing application platform, and can be put into a cache, so that when the mouse is released, the combination among the subtitle components can be realized according to the positioning information of the last cache through the application program interface.
In an example, the character sequence number corresponding to the most recently cached cursor position can be obtained from the latest positioning information and taken as the designated position at which the text to be merged is subsequently inserted.
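The following sketch illustrates caching the latest positioning information on each move event; the PositioningInfo shape and all names are assumptions for this example.

```typescript
// Sketch of caching positioning information on every move event; on release,
// the last cached entry yields the second subtitle component and the target
// insertion index. All identifiers are illustrative.
interface PositioningInfo {
  targetComponentId: string; // prospective second subtitle component
  charIndex: number;         // character sequence number under the cursor
}

let lastPositioning: PositioningInfo | null = null;

function onCursorOverText(componentId: string, field: HTMLTextAreaElement): void {
  lastPositioning = { targetComponentId: componentId, charIndex: field.selectionStart };
}
```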
According to this technical scheme, caching the positioning information allows the most recently cached entry to be retrieved upon the release operation as the latest positioning information, from which the second subtitle component and the target insertion position are determined. The other subtitle component to be merged and the designated insertion position are thus resolved from the cache, improving subtitle merging efficiency.
In an exemplary embodiment, each caption component may have a corresponding time interval, and after the step of presenting the new caption component thereby, further comprises: displaying a new time interval at a corresponding position of the new subtitle component; the new time interval is determined according to the time interval corresponding to the first caption component and the time interval corresponding to the second caption component.
The time interval corresponding to each caption component is consistent with the time interval corresponding to the caption component in the video to be edited, for example, the time interval corresponding to the caption component may be the time interval for displaying the caption component in the video to be edited.
In a specific implementation, since each caption component may have a corresponding time interval, after determining the first caption component, the time interval corresponding to the first caption component may be cached, so that in the caption merging process, a new time interval may be determined according to the time interval corresponding to the first caption component and the time interval corresponding to the second caption component, and the new time interval may be used as the time interval corresponding to the new caption component, and may be displayed at the corresponding position of the new caption component.
According to the technical scheme, the new time interval is displayed at the corresponding position of the new subtitle component, and is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component, so that the proper subtitle time span after merging can be ensured.
In an exemplary embodiment, the first caption component may be adjacent to the second caption component, and before the step of presenting the new time interval at the corresponding position of the new caption component, further comprising: determining the earliest time point and the latest time point from the time interval corresponding to the first caption component and the time interval corresponding to the second caption component; and taking the earliest time point as a starting time point and the latest time point as an ending time point to obtain a new time interval.
In practical applications, the subtitle components to be combined may be two adjacent subtitle components from the same video, i.e. the first subtitle component may be adjacent to the second subtitle component, and the time span of the new subtitle component (i.e. the new time interval) may be calculated as follows:
the earliest starting time (i.e., earliest time point) in the two subtitles to be combined is taken as the starting time (i.e., starting time) of the new subtitle component, and the latest ending time (i.e., latest time point) in the two subtitles to be combined is taken as the ending time (i.e., ending time point) of the new subtitle component, so that the time span of the new subtitle component can contain the time spans of the two subtitles to be combined.
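Expressed as a small sketch (the interval representation, start/end timestamps in milliseconds, is an assumption of this example):

```typescript
// Sketch of the time-span calculation described above: the merged interval
// runs from the earliest start to the latest end of the two source intervals.
interface TimeInterval {
  start: number; // milliseconds
  end: number;   // milliseconds
}

function mergeTimeIntervals(a: TimeInterval, b: TimeInterval): TimeInterval {
  return {
    start: Math.min(a.start, b.start), // earliest time point
    end: Math.max(a.end, b.end),       // latest time point
  };
}

// e.g. mergeTimeIntervals({ start: 1000, end: 2600 }, { start: 2600, end: 4000 })
// yields { start: 1000, end: 4000 }, covering both source spans.
```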
According to the technical scheme of the embodiment, the earliest time point and the latest time point are determined from the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component, the earliest time point is further taken as the starting time point, the latest time point is taken as the ending time point, a new time interval is obtained, the proper new time interval can be determined based on the earliest time point and the latest time point, and the time span of the combined subtitles can be ensured to comprise the time spans of the two subtitles to be combined.
In an exemplary embodiment, inserting text information in a first caption component into a target insertion location in a second caption component and exposing a new caption component therefrom, comprising: splitting text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text; the first split text is before the second split text in the text information of the second subtitle component; and splicing the first split text, the text information in the first caption component and the second split text in sequence to obtain new text information, and displaying the new caption component containing the new text information.
Here, the first split text and the second split text refer respectively to the portions of the second subtitle component's text information located before and after the target insertion position.
In the process of merging the caption components, the text information in the second caption component can be split according to the target insertion position to obtain a first split text and a second split text, the first split text is positioned before the second split text in the text information of the second caption component, and then the first split text, the text information in the first caption component and the second split text can be spliced in sequence to obtain new text information, and the new caption component containing the new text information can be displayed.
In an example, character strings of two subtitle components to be combined may be extracted, and text information of the dragged subtitle component (i.e., text information in the first subtitle component) may be inserted at a specified position based on the target insertion position by using a character string splitting interface and a splicing interface, so as to obtain a new subtitle component containing new text information.
For example, by using a character string splitting interface, text information in the second subtitle component can be split into two character strings with the obtained target insertion position as a boundary, so as to obtain a first split text and a second split text, and further by using a character string splicing interface, the first split text can be spliced to a starting position of the text information in the first subtitle component, and the second split text can be spliced to an ending position of the text information in the first subtitle component, so as to obtain a brand new character string text (i.e., new text information).
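A compact sketch of this split-and-splice step using standard string operations (function and variable names are illustrative):

```typescript
// Sketch of the split-and-splice merge: the second component's text is cut
// at the target insertion index and the first component's text is spliced
// in between.
function spliceTextAt(targetText: string, draggedText: string, index: number): string {
  const firstSplit = targetText.slice(0, index); // text before the insertion position
  const secondSplit = targetText.slice(index);   // text after the insertion position
  return firstSplit + draggedText + secondSplit; // the new text information
}

// e.g. spliceTextAt('ab', '1', 1) returns 'a1b', as in the worked example below.
```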
According to this technical scheme, the text information of the second subtitle component is split at the target insertion position into a first split text and a second split text, which are then spliced in sequence with the text information of the first subtitle component to form new text information, and a new subtitle component containing that text is displayed. Unlike the traditional method, which only splices subtitle data head-to-tail at the level of whole subtitles, this approach merges text information at a specified insertion position within a subtitle component and therefore supports scenes that require flexible insertion at a designated position.
In an exemplary embodiment, presenting the new subtitle component resulting therefrom includes: and deleting the first caption component and the second caption component in the caption editing interface, and displaying the rendered new caption component.
In practical application, a new subtitle component can be created according to the obtained new time interval and the new text information, and then the original first subtitle component and the second subtitle component can be deleted in the subtitle editing interface, and the new subtitle component is inserted for rendering and then displayed.
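A sketch of this final list update follows; the list model and the insertion policy (the merged component takes the target component's slot) are assumptions of the example, not the disclosed implementation.

```typescript
// Illustrative sketch: remove the two source components and insert the merged
// component at the slot of the second (target) component. The component shape
// is re-declared here so the sketch stands alone.
interface SubtitleComponent {
  id: string;
  text: string;
  start: number; // display start time, milliseconds
  end: number;   // display end time, milliseconds
}

function replaceWithMerged(
  list: SubtitleComponent[],
  firstId: string,
  secondId: string,
  merged: SubtitleComponent,
): SubtitleComponent[] {
  const targetIndex = list.findIndex(c => c.id === secondId);
  if (targetIndex < 0) return list; // target not found: leave the list unchanged
  // Count removed components preceding the target so the slot stays stable.
  const removedBefore = list.slice(0, targetIndex)
    .filter(c => c.id === firstId).length;
  const next = list.filter(c => c.id !== firstId && c.id !== secondId);
  next.splice(targetIndex - removedBefore, 0, merged);
  return next; // the subtitle editing interface re-renders from this list
}
```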
According to the technical scheme, the first subtitle component and the second subtitle component are deleted in the subtitle editing interface, and the rendered new subtitle component is displayed, so that the visualization effect in the subtitle merging process is improved.
In order that those skilled in the art will better understand the above steps, an embodiment of the present disclosure will be exemplarily described below by way of an example, but it should be understood that the embodiment of the present disclosure is not limited thereto.
As shown in fig. 3a, in the subtitle editing interface, two subtitle components to be combined are 1 (i.e., a first subtitle component) in fig. 3a and 2 (i.e., a second subtitle component) in fig. 3a, the two subtitle components to be combined are adjacent, and the first subtitle component is located above the adjacent second subtitle component.
When the mouse starts dragging the first subtitle component, its text information can be cached. As shown in fig. 3b, when the first subtitle component (1 in fig. 3b) is dragged to the position of the second subtitle component (2 in fig. 3b), the positioning of the current mouse cursor within the text displayed in the second component's text area can be obtained in real time through an application program interface provided by the subtitle editing application platform and cached. When the mouse is released, the designated position for the text to be merged (i.e., the target insertion position, such as the cursor position between subtitle text content a and subtitle text content b in fig. 3b) is determined from the most recently cached positioning information.
Using the string splitting interface, the text information of the second subtitle component can be split at the target insertion position into two strings, denoted segment A and segment B. Through the string splicing interface, segment A (e.g., subtitle text content a) is then spliced to the start of the first subtitle component's text information (e.g., subtitle text content 1) and segment B (e.g., subtitle text content b) to its end, producing a brand-new string text (subtitle text content a + subtitle text content 1 + subtitle text content b in fig. 3c). The first subtitle component is thereby inserted at the designated text position of the second subtitle component, yielding the new subtitle component (1 in fig. 3c).
Fig. 4 is a flowchart illustrating another subtitle editing method according to an exemplary embodiment, which is used in a computer device such as a terminal, as shown in fig. 4, and includes the following steps.
In step S410, in response to a selection operation acting on a subtitle component area in the subtitle editing interface, a selected first subtitle component is determined.
In step S420, the text information of the first subtitle component is cached, so that in response to the release operation it can be retrieved from the cache, inserted at the target insertion position in the second subtitle component, and displayed to obtain the new subtitle component.
In step S430, the first subtitle component is displayed as a floating component.
In step S440, the display position of the floating component is adjusted in real time during movement of the action point corresponding to the selection operation, so that the floating component follows the action point.
In step S450, the action point corresponding to the selection operation is moved, and positioning information of the action point in the subtitle editing interface is acquired in real time.
In step S460, the positioning information is cached, so that in response to the release operation the most recently cached positioning information is retrieved as the latest positioning information, from which the second subtitle component and the target insertion position are determined.
In step S470, in response to the release operation corresponding to the selection operation, the second subtitle component and the target insertion position are determined based on the most recently acquired positioning information, the text information of the first subtitle component is inserted at the target insertion position in the second subtitle component, and the resulting new subtitle component is displayed.
In step S480, each subtitle component having a corresponding time interval, a new time interval is displayed at the corresponding position of the new subtitle component; the new time interval is determined from the time intervals corresponding to the first subtitle component and the second subtitle component.
It should be noted that the specific limitations of the above steps are as described for the subtitle editing method above and are not repeated here.
It should be understood that, although the steps in the flowcharts of fig. 1, 2, and 4 are shown in order as indicated by the arrows, these steps are not necessarily performed in order as indicated by the arrows. The steps are not strictly limited to the order of execution unless explicitly recited herein, and the steps may be executed in other orders. Moreover, at least some of the steps in fig. 1, 2, and 4 may include a plurality of steps or stages, which are not necessarily performed at the same time, but may be performed at different times, and the order of the steps or stages is not necessarily sequential, but may be performed in turn or alternately with at least some of the other steps or stages.
Fig. 5 is a block diagram illustrating a subtitle editing apparatus according to an exemplary embodiment. Referring to fig. 5, the apparatus includes:
a subtitle component selection unit 501 configured to determine a selected first subtitle component in response to a selection operation acting on a subtitle component area in a subtitle editing interface;
a positioning information real-time acquisition unit 502 configured to move the action point corresponding to the selection operation and acquire positioning information of the action point in the subtitle editing interface in real time;
and a subtitle component merging unit 503 configured to, in response to a release operation corresponding to the selection operation, determine a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert the text information of the first subtitle component at the target insertion position in the second subtitle component, and display the resulting new subtitle component.
In one possible implementation manner, the subtitle editing apparatus further includes:
a floating component display unit specifically configured to perform display of the first subtitle component as a floating component;
and a display position adjustment unit, specifically configured to adjust the display position of the floating component in real time during movement of the action point corresponding to the selection operation, so that the floating component moves along with the action point.
In one possible implementation manner, the subtitle editing apparatus further includes:
the text information caching unit is specifically configured to perform caching of the text information in the first caption component, so as to obtain the text information in the first caption component from the cache when responding to the release operation, insert the text information in the first caption component into the target insertion position in the second caption component, and display the text information to obtain a new caption component.
In one possible implementation manner, the subtitle editing apparatus further includes:
and the positioning information caching unit is specifically configured to perform caching of the positioning information, so that when the release operation is responded, the positioning information cached last time is obtained from the cache and used as the latest positioning information, and the second caption component and the target insertion position are determined based on the latest positioning information.
In one possible implementation manner, each caption component has a corresponding time interval, and the caption editing device further includes:
a new time interval display unit, specifically configured to perform displaying a new time interval at a corresponding position of the new subtitle component; and the new time interval is determined according to the time interval corresponding to the first caption component and the time interval corresponding to the second caption component.
In one possible implementation manner, the first caption component is adjacent to the second caption component, and the caption editing device further includes:
an earliest time point and latest time point determining unit, specifically configured to perform determining an earliest time point and a latest time point from a time interval corresponding to the first caption component and a time interval corresponding to the second caption component;
The new time interval obtaining unit is specifically configured to obtain the new time interval by taking the earliest time point as a starting time point and the latest time point as an ending time point.
In one possible implementation manner, the subtitle component merging unit 503 is specifically configured to perform splitting of the text information in the second subtitle component according to the target insertion position, so as to obtain a first split text and a second split text; the first split text is before the second split text in the text information of the second subtitle component; and splicing the first split text, the text information in the first caption component and the second split text in sequence to obtain new text information, and displaying a new caption component containing the new text information.
In one possible implementation manner, the subtitle component merging unit 503 is specifically further configured to delete the first subtitle component and the second subtitle component in the subtitle editing interface, and display the rendered new subtitle component.
The specific manner in which the various modules of the apparatus in the above embodiments perform their operations has been described in detail in the corresponding method embodiments and is not elaborated here.
Fig. 6 is a block diagram illustrating an electronic device 600 for subtitle editing according to an example embodiment. For example, the electronic device 600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 6, an electronic device 600 may include one or more of the following components: a processing component 602, a memory 604, a power component 606, a multimedia component 608, an audio component 610, an input/output (I/O) interface 612, a sensor component 614, and a communication component 616.
The processing component 602 generally controls overall operation of the electronic device 600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 602 may include one or more processors 620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 602 can include one or more modules that facilitate interaction between the processing component 602 and other components. For example, the processing component 602 may include a multimedia module to facilitate interaction between the multimedia component 608 and the processing component 602.
The memory 604 is configured to store various types of data to support operations at the electronic device 600. Examples of such data include instructions for any application or method operating on the electronic device 600, contact data, phonebook data, messages, pictures, video, and so forth. The memory 604 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, optical disk, or graphene memory.
The power supply component 606 provides power to the various components of the electronic device 600. The power supply components 606 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 600.
The multimedia component 608 includes a screen between the electronic device 600 and the user that provides an output interface. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensor may sense not only the boundary of a touch or slide action, but also the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 608 includes a front camera and/or a rear camera. When the electronic device 600 is in an operational mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each front and rear camera may be a fixed optical lens system or have focal length and optical zoom capabilities.
The audio component 610 is configured to output and/or input audio signals. For example, the audio component 610 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 600 is in an operational mode, such as a call mode, a recording mode, and a speech recognition mode. The received audio signals may be further stored in the memory 604 or transmitted via the communication component 616. In some embodiments, audio component 610 further includes a speaker for outputting audio signals.
The I/O interface 612 provides an interface between the processing component 602 and peripheral interface modules, which may be a keyboard, click wheel, buttons, etc. These buttons may include, but are not limited to: homepage button, volume button, start button, and lock button.
The sensor assembly 614 includes one or more sensors for providing status assessment of various aspects of the electronic device 600. For example, the sensor assembly 614 may detect an on/off state of the electronic device 600, a relative positioning of the components, such as a display and keypad of the electronic device 600, the sensor assembly 614 may also detect a change in position of the electronic device 600 or an electronic device 600 component, the presence or absence of a user's contact with the electronic device 600, an orientation or acceleration/deceleration of the device 600, and a change in temperature of the electronic device 600. The sensor assembly 614 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact. The sensor assembly 614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 614 may also include an acceleration sensor, a gyroscopic sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 616 is configured to facilitate wired or wireless communication between the electronic device 600 and other devices. The electronic device 600 may access a wireless network based on a communication standard, such as WiFi, an operator network (e.g., 2G, 3G, 4G, or 5G), or a combination thereof. In one exemplary embodiment, the communication component 616 receives broadcast signals or broadcast-related information from an external broadcast management system via a broadcast channel. In one exemplary embodiment, the communication component 616 further includes a Near Field Communication (NFC) module to facilitate short-range communication. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, Infrared Data Association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 600 may be implemented by one or more Application-Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field-Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic elements for executing the methods described above.
In an exemplary embodiment, a computer-readable storage medium is also provided, such as the memory 604, including instructions executable by the processor 620 of the electronic device 600 to perform the above-described method. For example, the computer-readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, or the like.
In an exemplary embodiment, a computer program product is also provided, comprising instructions executable by the processor 620 of the electronic device 600 to perform the above-described method.
It should be noted that the apparatus, the electronic device, the computer-readable storage medium, and the computer program product corresponding to the foregoing method embodiments may have other implementations; for the specific implementations, reference may be made to the descriptions of the related method embodiments, which are not repeated here.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure that follow the general principles of the disclosure and include such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (16)

1. A subtitle editing method, the method comprising:
determining a selected first subtitle component in response to a selection operation acting on a subtitle component area in a subtitle editing interface, wherein the subtitle component displayed in the subtitle editing interface is associated with video content displayed in a video editing interface;
moving an action point corresponding to the selection operation, and acquiring positioning information of the action point in the subtitle editing interface in real time; and
in response to a release operation corresponding to the selection operation, determining a second subtitle component and a target insertion position based on the most recently acquired positioning information, inserting text information in the first subtitle component into the target insertion position in the second subtitle component, and displaying the result to obtain a new subtitle component;
wherein each of the subtitle components has a corresponding time interval, and after the step of displaying the new subtitle component, the method further comprises:
displaying a new time interval at a corresponding position of the new subtitle component, wherein the new time interval is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component.
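To ground the claim language, the sketches interleaved with the claims below use TypeScript. This first one is a minimal model of a subtitle component and of the time-interval label displayed at the new component's position; every identifier here (TimeInterval, SubtitleComponent, intervalLabel) and the millisecond timestamps are hypothetical illustrations, not terminology from the patent.

```typescript
// Hypothetical model for claim 1: each subtitle component carries its text
// and the time interval during which it is shown over the video.
interface TimeInterval {
  start: number; // milliseconds into the video
  end: number;
}

interface SubtitleComponent {
  id: string;
  text: string;
  interval: TimeInterval;
}

// Format an interval as "mm:ss.t - mm:ss.t" for display at the new
// subtitle component's corresponding position.
function intervalLabel({ start, end }: TimeInterval): string {
  const fmt = (ms: number): string => {
    const totalSeconds = ms / 1000;
    const minutes = Math.floor(totalSeconds / 60);
    const seconds = (totalSeconds - minutes * 60).toFixed(1).padStart(4, "0");
    return `${String(minutes).padStart(2, "0")}:${seconds}`;
  };
  return `${fmt(start)} - ${fmt(end)}`;
}

// Example: intervalLabel({ start: 1000, end: 3500 }) -> "00:01.0 - 00:03.5"
```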
2. The method of claim 1, further comprising, after the step of determining the selected first subtitle component in response to the selection operation acting on the subtitle component area in the subtitle editing interface:
displaying the first subtitle component as a floating component; and
adjusting a display position of the floating component in real time while moving the action point corresponding to the selection operation, so that the floating component moves following the movement of the action point.
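Claim 2's floating component amounts to a drag preview that tracks the action point. Below is a browser-oriented sketch assuming a DOM-based editing interface; startFloatingDrag and the styling choices are illustrative assumptions, not part of the claim.

```typescript
// Hypothetical drag preview for claim 2: clone the selected component's
// element and move the clone with every pointer sample.
function startFloatingDrag(componentEl: HTMLElement): void {
  const floating = componentEl.cloneNode(true) as HTMLElement;
  floating.style.position = "fixed";
  floating.style.pointerEvents = "none"; // keep hit-testing on what's underneath
  floating.style.opacity = "0.8";
  document.body.appendChild(floating);

  const onMove = (e: PointerEvent): void => {
    // Adjust the display position in real time so the floating component
    // follows the movement of the action point (grab-point offset omitted).
    floating.style.left = `${e.clientX}px`;
    floating.style.top = `${e.clientY}px`;
  };
  const onUp = (): void => {
    floating.remove();
    window.removeEventListener("pointermove", onMove);
    window.removeEventListener("pointerup", onUp);
  };
  window.addEventListener("pointermove", onMove);
  window.addEventListener("pointerup", onUp);
}
```

Disabling pointer events on the clone matters: it lets hit-testing such as document.elementFromPoint still resolve the subtitle component underneath the action point, which the release step needs.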
3. The method of claim 1, further comprising, after the step of determining the selected first subtitle component in response to the selection operation acting on the subtitle component area in the subtitle editing interface:
caching the text information in the first subtitle component, so that when responding to the release operation, the text information in the first subtitle component is acquired from the cache, inserted into the target insertion position in the second subtitle component, and displayed to obtain the new subtitle component.
4. The method of claim 1, further comprising, after the step of moving the action point corresponding to the selection operation and acquiring, in real time, the positioning information of the action point in the subtitle editing interface:
caching the positioning information, so that when responding to the release operation, the most recently cached positioning information is acquired from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on the latest positioning information.
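Claims 3 and 4 both describe write-ahead caching: the dragged text is stored once at selection time, and each pointer sample overwrites the previous one, so the release handler only reads the cache. A compact sketch; DragCache and its method names are hypothetical.

```typescript
// Hypothetical cache for claims 3 and 4.
interface Point { x: number; y: number }

class DragCache {
  private text: string | null = null;
  private lastPoint: Point | null = null;

  // Claim 3: cache the first component's text when it is selected.
  onSelect(text: string): void {
    this.text = text;
  }

  // Claim 4: cache each positioning sample; the latest one wins.
  onMove(point: Point): void {
    this.lastPoint = point;
  }

  // On release, the most recently cached values are read back without
  // re-querying the editing interface.
  readOnRelease(): { text: string; point: Point } {
    if (this.text === null || this.lastPoint === null) {
      throw new Error("release before selection and movement");
    }
    return { text: this.text, point: this.lastPoint };
  }
}
```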
5. The method of claim 1, wherein the first subtitle component is adjacent to the second subtitle component, and before the step of displaying the new time interval at the corresponding position of the new subtitle component, the method further comprises:
determining an earliest time point and a latest time point from the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component; and
taking the earliest time point as a starting time point and the latest time point as an ending time point to obtain the new time interval.
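Claim 5's rule reduces to a min/max over the two source intervals. A minimal sketch, reusing the hypothetical TimeInterval shape from the sketch after claim 1 (repeated here so the block stands alone):

```typescript
interface TimeInterval { start: number; end: number } // ms into the video

// Claim 5: the new interval runs from the earliest time point of either
// source interval to the latest time point of either source interval.
function mergeIntervals(a: TimeInterval, b: TimeInterval): TimeInterval {
  return {
    start: Math.min(a.start, b.start), // earliest time point as starting point
    end: Math.max(a.end, b.end),       // latest time point as ending point
  };
}

// Example with adjacent components: first shown 2.0s-3.5s, second 1.0s-2.5s:
// mergeIntervals({ start: 2000, end: 3500 }, { start: 1000, end: 2500 })
//   -> { start: 1000, end: 3500 }
```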
6. The method of claim 1, wherein inserting the text information in the first subtitle component into the target insertion position in the second subtitle component and displaying the result to obtain the new subtitle component comprises:
splitting the text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text, wherein the first split text precedes the second split text in the text information of the second subtitle component; and
splicing the first split text, the text information in the first subtitle component, and the second split text in sequence to obtain new text information, and displaying the new subtitle component containing the new text information.
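Claim 6's split-and-splice is a pair of string slices around the target insertion position. A sketch; spliceText is a hypothetical name:

```typescript
// Claim 6: split the second component's text at the insertion position,
// then join first split + dragged text + second split in sequence.
function spliceText(secondText: string, draggedText: string, insertAt: number): string {
  const firstSplit = secondText.slice(0, insertAt); // text before the insertion point
  const secondSplit = secondText.slice(insertAt);   // text after the insertion point
  return firstSplit + draggedText + secondSplit;
}

// Example: spliceText("hello world", "brave ", 6) -> "hello brave world"
```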
7. The method according to any one of claims 1 to 6, wherein displaying the result to obtain the new subtitle component comprises:
deleting the first subtitle component and the second subtitle component in the subtitle editing interface, and displaying the rendered new subtitle component.
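Claim 7's final step is a list update: both source components disappear and the rendered merged component takes the second one's place. A sketch over a plain array, with replaceWithMerged hypothetical:

```typescript
interface SubtitleComponent { id: string; text: string } // shape as in earlier sketches

// Claim 7: delete the first and second components from the editing
// interface's list and display the merged component where the second was.
function replaceWithMerged(
  components: SubtitleComponent[],
  firstId: string,
  secondId: string,
  merged: SubtitleComponent
): SubtitleComponent[] {
  return components
    .filter(c => c.id !== firstId)
    .map(c => (c.id === secondId ? merged : c));
}
```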
8. A subtitle editing apparatus, comprising:
a subtitle component selection unit configured to determine a selected first subtitle component in response to a selection operation acting on a subtitle component area in a subtitle editing interface, wherein the subtitle component displayed in the subtitle editing interface is associated with video content displayed in a video editing interface;
a positioning information real-time acquisition unit configured to move an action point corresponding to the selection operation and acquire positioning information of the action point in the subtitle editing interface in real time; and
a subtitle component merging unit configured to, in response to a release operation corresponding to the selection operation, determine a second subtitle component and a target insertion position based on the most recently acquired positioning information, insert text information in the first subtitle component into the target insertion position in the second subtitle component, and display the result to obtain a new subtitle component;
wherein each of the subtitle components has a corresponding time interval, and the apparatus further comprises:
a new time interval display unit configured to display a new time interval at a corresponding position of the new subtitle component, wherein the new time interval is determined according to the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component.
9. The apparatus of claim 8, further comprising:
a floating component display unit configured to display the first subtitle component as a floating component; and
a display position adjustment unit configured to adjust a display position of the floating component in real time while the action point corresponding to the selection operation moves, so that the floating component moves following the movement of the action point.
10. The apparatus of claim 8, further comprising:
a text information caching unit configured to cache the text information in the first subtitle component, so that when responding to the release operation, the text information in the first subtitle component is acquired from the cache, inserted into the target insertion position in the second subtitle component, and displayed to obtain the new subtitle component.
11. The apparatus of claim 8, further comprising:
a positioning information caching unit configured to cache the positioning information, so that when responding to the release operation, the most recently cached positioning information is acquired from the cache as the latest positioning information, and the second subtitle component and the target insertion position are determined based on the latest positioning information.
12. The apparatus of claim 8, wherein the first subtitle component is adjacent to the second subtitle component, and the apparatus further comprises:
an earliest and latest time point determination unit configured to determine an earliest time point and a latest time point from the time interval corresponding to the first subtitle component and the time interval corresponding to the second subtitle component; and
a new time interval obtaining unit configured to take the earliest time point as a starting time point and the latest time point as an ending time point to obtain the new time interval.
13. The apparatus according to claim 8, wherein the subtitle component merging unit is configured to split the text information in the second subtitle component according to the target insertion position to obtain a first split text and a second split text, wherein the first split text precedes the second split text in the text information of the second subtitle component; and to splice the first split text, the text information in the first subtitle component, and the second split text in sequence to obtain new text information, and display the new subtitle component containing the new text information.
14. The apparatus according to any one of claims 8 to 13, wherein the subtitle component merging unit is further configured to delete the first subtitle component and the second subtitle component in the subtitle editing interface and display the rendered new subtitle component.
15. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the subtitle editing method of any one of claims 1 to 7.
16. A computer-readable storage medium, wherein instructions in the computer-readable storage medium, when executed by a processor of an electronic device, enable the electronic device to perform the subtitle editing method of any one of claims 1 to 7.
CN202110996900.6A 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium Active CN113905192B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110996900.6A CN113905192B (en) 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113905192A (en) 2022-01-07
CN113905192B (en) 2023-05-30

Family

ID=79187888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110996900.6A Active CN113905192B (en) 2021-08-27 2021-08-27 Subtitle editing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113905192B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114501159B (en) * 2022-01-24 2023-12-22 传神联合(北京)信息技术有限公司 Subtitle editing method and device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20110067467A (en) * 2009-12-14 2011-06-22 엘지전자 주식회사 Mobile terminal and operating method thereof
CN107656693A (en) * 2013-02-28 2018-02-02 联想(北京)有限公司 A kind of method and device that cursor position is determined in touch-screen
CN108322800A (en) * 2017-01-18 2018-07-24 阿里巴巴集团控股有限公司 Caption information processing method and processing device

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5442742A (en) * 1990-12-21 1995-08-15 Apple Computer, Inc. Method and apparatus for the manipulation of text on a computer display screen
WO2013086710A1 (en) * 2011-12-14 2013-06-20 Nokia Corporation Methods, apparatuses and computer program products for managing different visual variants of objects via user interfaces
US11856315B2 (en) * 2017-09-29 2023-12-26 Apple Inc. Media editing application with anchored timeline for captions and subtitles
CN107967093B (en) * 2017-12-21 2020-01-31 维沃移动通信有限公司 multi-segment text copying method and mobile terminal
CN112261453A (en) * 2020-10-22 2021-01-22 北京小米移动软件有限公司 Method, device and storage medium for transmitting subtitle splicing map

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant