CN112261453A - Method, device and storage medium for transmitting subtitle splicing map - Google Patents


Info

Publication number
CN112261453A
Authority
CN
China
Prior art keywords
subtitle
timestamp
splicing map
splicing
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011137535.5A
Other languages
Chinese (zh)
Inventor
杜旭涛
刘鸣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd
Priority to CN202011137535.5A
Publication of CN112261453A
Legal status: Pending

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00: Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40: Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/41: Structure of client; Structure of client peripherals
    • H04N 21/4104: Peripherals receiving signals from specially adapted client devices
    • H04N 21/4122: additional display device, e.g. video projector
    • H04N 21/4126: the peripheral being portable, e.g. PDAs or mobile phones
    • H04N 21/43: Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N 21/436: Interfacing a local distribution network, e.g. communicating with another STB or one or more peripheral devices inside the home
    • H04N 21/4363: Adapting the video or multiplex stream to a specific local network, e.g. an IEEE 1394 or Bluetooth network
    • H04N 21/43637: involving a wireless protocol, e.g. Bluetooth, RF or wireless LAN [IEEE 802.11]
    • H04N 21/47: End-user applications
    • H04N 21/488: Data services, e.g. news ticker
    • H04N 21/4884: for displaying subtitles
    • H04N 21/80: Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N 21/83: Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N 21/845: Structuring of content, e.g. decomposing content into time segments
    • H04N 21/8456: by decomposing the content in the time domain, e.g. in time segments
    • H04N 21/85: Assembly of content; Generation of multimedia applications
    • H04N 21/854: Content authoring
    • H04N 21/8547: involving timestamps for synchronizing content
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/278: Subtitling

Abstract

The present disclosure provides a method, an apparatus, and a storage medium for transmitting a subtitle splicing map. Applied to a mobile terminal, the method includes: in response to a playback device determining a start timestamp according to the time at which it receives a first signal, generating a subtitle splicing map from the played video stream starting at the start timestamp, and sending the subtitle splicing map to the mobile terminal, receiving the subtitle splicing map from the playback device. With the present disclosure, the mobile terminal receives the generated subtitle splicing map directly from the playback device, so that while watching a video a user can conveniently obtain, through the playback device, a subtitle splicing map of the video clip that interests them, improving the user experience.

Description

Method, device and storage medium for transmitting subtitle splicing map
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a method and an apparatus for transmitting a subtitle splicing map, and a storage medium.
Background
A mobile terminal can establish a wireless connection with a television, capture a picture from a video being played on the television, and send the capture result to a mobile phone.
However, taking and saving a screenshot on a television is cumbersome. To capture the current picture, the user must double-click a menu key, call up the task bar, and press the key corresponding to the screenshot function; to then view the picture on a mobile phone, the user must scan the screenshot's two-dimensional code with the phone, after which the picture can optionally be saved to the phone.
As user requirements grow, how to obtain a subtitle splicing map for a video played on a television is a technical problem to be solved.
Disclosure of Invention
To overcome the problems in the related art, a method, an apparatus, and a storage medium for transmitting a subtitle splicing map are provided.
According to a first aspect of embodiments herein, there is provided a method for transmitting a subtitle splicing map, applied to a mobile terminal, including:
in response to a playback device determining a start timestamp according to the time at which it receives a first signal, generating a subtitle splicing map from the played video stream starting at the start timestamp, and sending the subtitle splicing map to the mobile terminal, receiving the subtitle splicing map from the playback device.
In an embodiment, the method further comprises performing at least one of:
saving the subtitle splicing map;
forwarding the subtitle splicing map;
and publishing the subtitle splicing map.
In an embodiment, the method further comprises:
sending a first signal to the playback device to trigger the playback device to determine a start timestamp according to the time at which the first signal is received, and to generate a subtitle splicing map from the played video stream starting at the start timestamp.
In an embodiment, the method further comprises: sending a second signal to the playback device, the second signal triggering the playback device to determine an end timestamp according to the time at which the second signal is received;
wherein generating a subtitle splicing map from the played video stream starting at the start timestamp comprises: generating the subtitle splicing map from the played video stream running from the start timestamp to the end timestamp.
In an embodiment, before receiving the subtitle splicing map from the playback device, the method further includes: sending a third signal instructing the playback device to send the subtitle splicing map, so that the playback device sends the subtitle splicing map to the mobile terminal after receiving the third signal.
In one embodiment, the generating a subtitle splicing map from a playing video stream starting from the start timestamp includes:
recording the played video stream from the start timestamp to the end timestamp;
extracting a plurality of video segments from the recorded video stream, where all image frames within a video segment contain the same subtitle information; extracting one representative image frame from each video segment; and generating the subtitle splicing map from the extracted representative image frames.
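The record-then-extract pipeline above can be sketched as follows. This is a minimal illustration, not the disclosed implementation: frames are modeled as (timestamp, subtitle_text, image) tuples, the subtitle recognizer that yields subtitle_text is assumed to exist, and "stitching" is represented simply as the ordered list of representative images.

```python
from itertools import groupby

def segments_by_subtitle(frames):
    """Group consecutive frames carrying the same subtitle into video segments.

    frames: list of (timestamp, subtitle_text, image) tuples in play order.
    Returns one list of frames per segment.
    """
    return [list(group) for _, group in groupby(frames, key=lambda f: f[1])]

def make_splicing_map(frames, pick_representative):
    """Pick one representative frame per segment and stitch their images."""
    segments = segments_by_subtitle(frames)
    reps = [pick_representative(seg) for seg in segments]
    # A real implementation would vertically concatenate the caption strips;
    # here the "map" is just the ordered list of representative images.
    return [image for _, _, image in reps]

# Example with string stand-ins for images; pick the middle frame of a segment.
frames = [
    (0.0, "hello", "img0"), (0.5, "hello", "img1"),
    (1.0, "world", "img2"), (1.5, "world", "img3"), (2.0, "world", "img4"),
]
splice = make_splicing_map(frames, lambda seg: seg[len(seg) // 2])
```

Note that `itertools.groupby` groups only consecutive equal keys, which matches the requirement that a segment is a contiguous run of frames with the same subtitle.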
In one embodiment, the generating a subtitle splicing map from a playing video stream starting from the start timestamp includes:
extracting video segments in real time from the played video stream starting at the start timestamp, where all image frames within a video segment contain the same subtitle information, and extracting one representative image frame from each video segment;
and, when the end timestamp is determined, generating the subtitle splicing map from the extracted representative image frames.
In one embodiment, the generating a subtitle splicing map from a playing video stream starting from the start timestamp includes:
extracting video segments in real time from the played video stream starting at the start timestamp, where all image frames within a video segment contain the same subtitle information, extracting one representative image frame from each extracted segment, and regenerating the subtitle splicing map from the representative image frames extracted so far;
and, when the end timestamp is determined, taking the most recently generated subtitle splicing map as the final subtitle splicing map.
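The incremental variant above (regenerate the map after every new segment; the last map generated before the end timestamp is the final one) can be sketched like this. The function name and data shapes are illustrative assumptions, not taken from the disclosure.

```python
def incremental_splicing_maps(segment_reps):
    """Yield the splicing map as rebuilt after each newly extracted segment.

    segment_reps: iterable of representative images, one per video segment,
    in the order the segments are extracted from the live stream. When the
    end timestamp is determined, the last yielded map is taken as final.
    """
    current = []
    for rep in segment_reps:
        current = current + [rep]  # regenerate the map with the new frame
        yield list(current)

maps = list(incremental_splicing_maps(["a", "b", "c"]))
final_map = maps[-1]  # the most recently generated map is the final one
```

The trade-off against the previous variant is that a usable map exists at every moment, at the cost of redoing the stitch per segment.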
In an embodiment, the first signal is further used to trigger the playback device to determine an end timestamp;
the determining an end time stamp comprises one of:
determining that the end timestamp is a timestamp corresponding to a fixed duration after the start timestamp;
and taking the moment when the number of the video segments extracted in real time from the playing video stream taking the starting timestamp as the starting point reaches the set number as an ending timestamp.
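The two end-timestamp rules can be expressed as small pure functions (a sketch; the names and the convention of returning None while the segment-count threshold is not yet reached are assumptions):

```python
def end_by_duration(start_ts, fixed_duration):
    """Rule 1: the end timestamp is a fixed duration after the start timestamp."""
    return start_ts + fixed_duration

def end_by_segment_count(segment_end_times, max_segments):
    """Rule 2: end when the number of segments extracted in real time
    reaches the set number. Returns None while still below the threshold."""
    if len(segment_end_times) >= max_segments:
        return segment_end_times[max_segments - 1]
    return None
```

Under rule 2, the playback device would call `end_by_segment_count` each time a segment completes and stop extracting once it returns a timestamp.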
In one embodiment, the extracting a representative image frame from each video segment includes one of the following methods:
selecting, from the image frames of each video segment, the image frame with the highest sharpness as the representative image frame;
and determining the image frames of each video segment and the caption-area image block of each image frame, selecting the caption-area image block with the maximum contrast between its non-caption pixel set and its caption pixel set, and taking the image frame corresponding to the selected caption-area image block as the representative image frame.
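Both selection criteria can be illustrated with simple pixel-level scores. This is a sketch under stated assumptions: images are 2D lists of grayscale ints, the sharpness score is a crude mean absolute horizontal gradient (real systems often use the variance of the Laplacian instead), and the caption mask marking caption pixels is assumed to come from elsewhere.

```python
def sharpness(img):
    """Crude sharpness score: mean absolute horizontal gradient."""
    total, count = 0, 0
    for row in img:
        for a, b in zip(row, row[1:]):
            total += abs(a - b)
            count += 1
    return total / count if count else 0.0

def caption_contrast(block, mask):
    """Contrast between caption pixels (mask True) and non-caption pixels
    within a caption-area image block."""
    cap = [p for rp, rm in zip(block, mask) for p, m in zip(rp, rm) if m]
    non = [p for rp, rm in zip(block, mask) for p, m in zip(rp, rm) if not m]
    if not cap or not non:
        return 0.0
    return abs(sum(cap) / len(cap) - sum(non) / len(non))

def pick_sharpest(frames):
    """frames: list of (timestamp, subtitle, image); image is a 2D int grid."""
    return max(frames, key=lambda f: sharpness(f[2]))
```

The second criterion would use `caption_contrast` analogously, taking `max` over the frames' caption-area blocks.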
According to a second aspect of embodiments herein, there is provided an apparatus for transmitting a caption splicing map, applied to a mobile terminal, including:
the first receiving module is configured to receive, from a playback device, a subtitle splicing map, in response to the playback device determining a start timestamp according to the time at which it receives a first signal, generating the subtitle splicing map from the played video stream starting at the start timestamp, and sending the subtitle splicing map to the mobile terminal.
In an embodiment, the apparatus further comprises at least one of the following modules:
a saving module configured to save the subtitle mosaic;
a forwarding module configured to forward the subtitle splicing map;
a publishing module configured to publish the subtitle mosaic.
In one embodiment, the apparatus further comprises:
the first sending module is configured to send a first signal to the playback device, triggering the playback device to determine a start timestamp according to the time at which the first signal is received, and to generate a subtitle splicing map from the played video stream starting at the start timestamp.
In one embodiment, the apparatus further includes a second sending module configured to send a second signal to the playback device; the second signal is used to trigger the playback device to determine an end timestamp according to the time at which the second signal is received;
wherein, generating a caption splicing map according to the playing video stream with the starting timestamp as a starting point comprises: and generating a subtitle splicing map according to the playing video stream taking the starting time stamp as a starting point and the ending time stamp as an ending point.
In an embodiment, the apparatus further includes a third sending module configured to send a third signal for instructing the playback device to send the subtitle splicing map, so that the playback device sends the subtitle splicing map to the mobile terminal after receiving the third signal.
According to a third aspect of embodiments herein, there is provided an apparatus for transmitting a subtitle splicing map, which is applied to a playing device, and includes:
a second receiving module configured to receive the first signal;
a start timestamp determination module configured to determine a start timestamp from a time at which the first signal was received;
the subtitle splicing map generating module is configured to generate a subtitle splicing map according to a playing video stream with the starting timestamp as a starting point;
and the fourth sending module is configured to send the subtitle splicing map to at least one mobile terminal.
In one embodiment, the apparatus further comprises:
a third receiving module configured to receive a second signal indicating an end of generating the subtitle mosaic;
a first end timestamp determination module configured to determine an end timestamp from a time at which the second signal was received;
the subtitle mosaic generating module is further configured to generate a subtitle mosaic according to a playing video stream with the starting timestamp as a starting point and the ending timestamp as an ending point.
In one embodiment, the apparatus further comprises:
a fourth receiving module configured to receive a third signal for instructing to send the subtitle splicing map to at least one mobile terminal;
the fourth sending module is further configured to send the subtitle splicing map to at least one mobile terminal after the fourth receiving module receives the third signal.
In one embodiment, the subtitle splicing map generation module includes a first generation module;
the first generation module is configured to generate a subtitle splicing map from a playing video stream starting from the start timestamp using the following method:
recording a playing video stream taking the starting time stamp as a starting point and taking the ending time stamp as an end point;
extracting a plurality of video segments from the recorded video stream, wherein all image frames in each video segment contain the same subtitle information, extracting a representative image frame from each video segment respectively, and generating a subtitle splicing image according to the extracted representative image frames.
In one embodiment, the subtitle splicing map generating module includes a second generating module;
the second generation module is configured to generate a subtitle splicing map from the played video stream starting from the start timestamp using the following method:
extracting video segments in real time from a playing video stream taking the starting timestamp as a starting point, wherein all image frames in each video segment contain the same subtitle information, and extracting a representative image frame from each video segment respectively;
and when the ending time stamp is determined, generating a subtitle splicing picture according to the extracted representative image frames.
In one embodiment, the subtitle splicing map generating module includes a third generating module;
the third generation module is configured to generate a subtitle splicing map from the playing video stream starting from the start timestamp using the following method:
extracting video segments in real time from a played video stream taking the starting timestamp as a starting point, wherein all image frames in each video segment contain the same subtitle information, respectively extracting one representative image frame from the extracted video segments, and generating a subtitle splicing image according to each extracted representative image frame;
and when the ending timestamp is determined, taking the subtitle splicing map generated for the last time as the subtitle splicing map generated finally.
In one embodiment, the apparatus further comprises: a second end timestamp determination module configured to determine an end timestamp using one of:
determining that the end timestamp is a timestamp corresponding to a fixed duration after the start timestamp;
and taking the moment when the number of the video segments extracted in real time from the playing video stream taking the starting timestamp as the starting point reaches the set number as an ending timestamp.
According to a fourth aspect of embodiments herein, there is provided an apparatus for transmitting a subtitle splicing map, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute executable instructions in the memory to implement the steps of the method.
According to a fifth aspect of embodiments herein, there is provided a non-transitory computer readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the steps of the method.
The technical solutions provided by the embodiments herein may have the following beneficial effects: the mobile terminal receives the generated subtitle splicing map directly from the playback device, so that while watching a video a user can conveniently obtain, through the playback device, a subtitle splicing map of the video clip that interests them, improving the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure.
Fig. 1 is a flow diagram illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 2 is a flow diagram illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 3 is a flow diagram illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 4 is a flow diagram illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 5 is a flowchart illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 6 is a schematic diagram illustrating a plurality of single frame images according to an exemplary embodiment.
Fig. 7 is a schematic diagram illustrating a subtitle splicing map according to an example embodiment.
Fig. 8 is a schematic diagram illustrating a subtitle splicing map according to an example embodiment.
Fig. 9 is a schematic diagram illustrating a subtitle splicing map according to an example embodiment.
Fig. 10 is a flowchart illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 11 is a flowchart illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 12 is a flowchart illustrating a method of transmitting a subtitle mosaic according to an example embodiment.
Fig. 13 is a block diagram illustrating an apparatus for transmitting a subtitle splicing map according to an example embodiment.
Fig. 14 is a block diagram illustrating an apparatus for transmitting a subtitle splicing map according to an example embodiment.
Fig. 15 is a block diagram illustrating an apparatus for transmitting a subtitle splicing map according to an example embodiment.
Fig. 16 is a block diagram illustrating an apparatus for transmitting a subtitle splicing map according to an example embodiment.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects herein, as detailed in the appended claims.
In practice, the following scenario arises: while a video playback device (e.g., a television) plays a video (e.g., a TV drama), a user sees an interesting plot segment and wants to save it from the playback device to a mobile terminal (e.g., a mobile phone) to share with family and friends, or even to record the pictures displayed by the playback device as a short video and save that to the mobile terminal for sharing. The existing technical scheme can only capture a single picture, and the operation is very inconvenient. A longer plot segment cannot be captured as a subtitle splicing map: to obtain one, the user must capture many single pictures and then save them to the mobile phone one by one, which is extremely cumbersome and makes for a poor experience.
The embodiment of the disclosure provides a method for transmitting a subtitle splicing map. The method is applied to the mobile terminal.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method of transmitting a subtitle mosaic according to an exemplary embodiment. As shown in fig. 1, the method includes:
step S11: responding to a starting time stamp determined by a playing device according to the time of receiving a first signal, generating a subtitle splicing map according to a playing video stream with the starting time stamp as a starting point, and sending the subtitle splicing map to a mobile terminal; receiving the subtitle splicing map from the playback device.
Wherein receiving the subtitle splicing map from the playback device in step S11 includes: receiving the subtitle splicing map through a wired or wireless connection with the playback device.
In this embodiment, the playback device generates the subtitle splicing map from the video played in real time, and the mobile terminal receives the generated subtitle splicing map directly from the playback device; while watching a video, the user can conveniently obtain, through the playback device, a subtitle splicing map of the video clip that interests them, improving the user experience.
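The disclosure does not fix a transport format for the wired or wireless connection. Purely as an illustration, a length-prefixed framing such as the following could carry the encoded splicing-map image bytes; the 4-byte big-endian length header is an assumption, not part of the disclosure.

```python
import struct

def frame_message(payload: bytes) -> bytes:
    """Prefix a payload (e.g. an encoded splicing-map image) with its length."""
    return struct.pack(">I", len(payload)) + payload

def parse_messages(buffer: bytes):
    """Split a received byte stream back into complete payloads."""
    payloads, offset = [], 0
    while offset + 4 <= len(buffer):
        (length,) = struct.unpack_from(">I", buffer, offset)
        if offset + 4 + length > len(buffer):
            break  # incomplete message; wait for more data
        payloads.append(buffer[offset + 4 : offset + 4 + length])
        offset += 4 + length
    return payloads

# Two images sent back-to-back arrive as one byte stream and are split apart.
stream = frame_message(b"PNG...") + frame_message(b"JPG...")
received = parse_messages(stream)
```

A real deployment would layer this over whatever the wired or wireless connection provides (e.g. a TCP socket), with `parse_messages` applied to the accumulated receive buffer.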
The embodiment of the present disclosure provides a method for transmitting a subtitle mosaic, where the method includes the method shown in fig. 1, and as shown in fig. 2, after step S11, further includes step S12: performing at least one of:
saving the subtitle splicing map;
forwarding the subtitle splicing map;
and publishing the subtitle splicing map.
In one example, after receiving the subtitle splicing map from the playback device, the mobile terminal saves it to a designated location for storing subtitle splicing maps, such as an album or gallery, simplifying the user's operation.
In one example, after receiving the subtitle splicing map from the playback device, the mobile terminal forwards it to the user's friends through a social application, making it convenient for the user and friends to communicate.
In one example, after receiving the subtitle splicing map from the playback device, the mobile terminal publishes it through a social application so that others can view it, adding social interest.
In this embodiment, after the mobile terminal receives the generated subtitle splicing map directly from the playback device, the map can be directly saved, forwarded, or published, meeting different usage needs of the user.
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map, where the method includes the method shown in fig. 1, and a manner in which a playback device receives a first signal includes one of:
firstly, receiving a trigger signal of a first setting key on the playing equipment.
For example: receiving a trigger signal of a key for increasing display brightness on a playing device
And secondly, receiving a touch signal of a first setting control on the playing equipment.
For example: when the playing device is a device with a touch screen, a touch signal of a setting control on the playing device is received.
And thirdly, receiving a trigger signal of at least one set key on the remote controller matched with the playing equipment.
For example: such as a signal that a menu key and a volume-up key on the remote controller are simultaneously pressed, or a signal that a menu key and a volume-down key on the remote controller are simultaneously pressed, etc.
And fourthly, receiving a signal which is sent by the remote controller matched with the playing equipment and responds to the NFC module to detect other NFC modules.
For example: when the mobile terminal with the NFC module touches the remote controller with the NFC module, the remote controller with the NFC module sends a first signal to the playing module after detecting other NFC modules.
In one embodiment, the method further comprises: and the playing equipment receives a second signal, and the second signal is used for triggering the playing equipment to determine an ending timestamp according to the time of receiving the second signal. The manner in which the playing device receives the second signal includes one of the following:
firstly, receiving a trigger signal of a second setting key on the playing equipment. The second setting key is the same as or different from the first setting key.
And secondly, receiving a touch signal of a second setting control on the playing equipment. The second setting control is the same as or different from the first setting control.
And thirdly, receiving a trigger signal of at least one set key on the remote controller matched with the playing equipment.
The at least one setting key is the same as or different from the at least one setting key corresponding to the first signal.
For example: when the at least one setting key is the same as the at least one setting key corresponding to the first signal, the triggering signal corresponding to the first signal is a signal that the menu key and the volume increasing key on the remote controller are pressed simultaneously, and the triggering signal corresponding to the second signal is a signal that the menu key and the volume increasing key on the remote controller are released simultaneously.
For example: when the at least one setting key is different from the at least one setting key corresponding to the first signal, the triggering signal corresponding to the first signal is a signal that the menu key and the volume increasing key on the remote controller are pressed simultaneously, and the triggering signal corresponding to the second signal is a signal that the menu key and the volume decreasing key on the remote controller are pressed simultaneously.
And fourthly, receiving a signal which is sent by the remote controller matched with the playing equipment and responds to the NFC module to detect other NFC modules.
In one embodiment, the method further comprises: and the playing equipment receives a third signal, and the third signal is used for triggering the playing equipment to send the subtitle splicing map according to the received third signal.
The manner of receiving the third signal by the playing device includes one of the following:
firstly, receiving a trigger signal of a third setting key on the playing equipment. The third setting key is the same as the first setting key, or the third setting key is the same as the second setting key, or the third setting key is different from the first setting key and the second setting key.
And secondly, receiving a touch signal of a third setting control on the playing equipment. The third setting control is the same as the first setting control, or the third setting control is the same as the second setting control, or the third setting control is different from the first setting control and the second setting control.
And thirdly, receiving a trigger signal of at least one set key on the remote controller matched with the playing equipment.
The at least one setting key is the same as the at least one setting key corresponding to the first signal, or the same as the at least one setting key corresponding to the second signal, or different from both.
And fourthly, receiving a signal sent by the remote controller paired with the playback device in response to its NFC module detecting another NFC module.
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map, where the method includes the method shown in fig. 1, and as shown in fig. 3, before step S11, the method further includes step S10-1: sending a first signal to the playback device to trigger it to determine a start timestamp according to the time at which the first signal is received and to generate a subtitle splicing map from the played video stream starting at the start timestamp.
In an embodiment, the first signal is triggered by a touch signal acting on a control that instructs sending of the first signal, the control being on the mobile terminal's interface for controlling the playback device.
In this embodiment, the mobile terminal determines the start time of the played video stream used to generate the subtitle splicing map: the user sends the first signal through the mobile terminal to indicate that start time to the playback device, so the user obtains a subtitle splicing map corresponding to the video segment of interest.
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map, where the method includes the method shown in fig. 3, and as shown in fig. 4, after step S10-1 and before step S11, the method further includes: step S10-2, sending a second signal to the playing terminal; the second signal is used for triggering the playing device to determine an end timestamp according to the time when the second signal is received;
wherein, generating a caption splicing map according to the playing video stream with the starting timestamp as a starting point comprises: and generating a subtitle splicing map according to the playing video stream taking the starting time stamp as a starting point and the ending time stamp as an ending point.
Step S11 specifically includes: receiving the subtitle splicing map from the playback device, in response to the playback device determining a start timestamp according to the time at which the first signal is received, determining an end timestamp according to the time at which the second signal is received, generating a subtitle splicing map from the played video stream that starts at the start timestamp and ends at the end timestamp, and sending the subtitle splicing map to the mobile terminal.
The method of transmitting the first signal and the second signal is described in various ways as follows:
the first method is as follows: different touch modes are performed on the same touch object.
For example: the mobile terminal is a mobile phone, and the playing device is a television. A first control is arranged on an interactive interface used for controlling the playing equipment on the mobile phone and used for sending a first signal and a second signal.
The user clicks the first control in the control interface of the mobile phone, and the mobile phone sends a first signal indicating the start of generating the subtitle splicing map; the television receives the first signal at 6:00, so the start timestamp is determined to be 6:00.
The user double-clicks the first control in the control interface of the mobile phone, and the mobile phone sends a second signal indicating the end of generating the subtitle splicing map; the television receives the second signal at 6:05, so the end timestamp is determined to be 6:05.
The second method comprises the following steps: the same touch manner is performed for different touch objects.
For example: the mobile terminal is a mobile phone, and the playing device is a television. A first control is arranged on an interactive interface used for controlling the playing equipment on the mobile phone and used for sending a first signal, and a second control is arranged on the interactive interface and used for sending a second signal.
The user clicks the first control in the control interface of the mobile phone, and the mobile phone sends a first signal indicating the start of generating the subtitle splicing map; the television receives the first signal at 6:00, so the start timestamp is determined to be 6:00.
The user clicks the second control in the control interface of the mobile phone, and the mobile phone sends a second signal indicating the end of generating the subtitle splicing map; the television receives the second signal at 6:05, so the end timestamp is determined to be 6:05.
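The interaction above can be sketched as a small controller on the playback-device side; the class name `SubtitleClipController` and the example times are illustrative assumptions, not terms from this disclosure.

```python
from datetime import datetime

class SubtitleClipController:
    """Illustrative: the playback device turns the received first and
    second signals into the clip's start and end timestamps."""

    def __init__(self):
        self.start_ts = None
        self.end_ts = None

    def on_signal(self, kind, received_at):
        # The timestamp is the moment the signal is *received* by the device.
        if kind == "first":      # e.g. single click on the first control
            self.start_ts = received_at
        elif kind == "second":   # e.g. double click, or the second control
            self.end_ts = received_at

ctrl = SubtitleClipController()
ctrl.on_signal("first", datetime(2020, 1, 1, 6, 0))
ctrl.on_signal("second", datetime(2020, 1, 1, 6, 5))
```

Either signalling mode (one control with two touch gestures, or two controls) reduces to the same pair of `on_signal` calls on the device side.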
In this embodiment, the mobile terminal sends a first signal to the playback device, and the playback device determines a start timestamp according to the first signal; the mobile terminal sends a second signal, and the playback device determines an end timestamp according to the second signal. A subtitle splicing map is then generated from the played video stream that starts at the start timestamp and ends at the end timestamp, and is finally sent to the mobile terminal. In this way, a user can, with a simple operation, control the playback device to obtain a fairly accurate subtitle splicing map of a video segment of interest, improving user experience.
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map, where the method includes the method shown in fig. 4, and as shown in fig. 5, after step S10-2 and before step S11, the method further includes: step S10-3, sending a third signal for instructing the playback device to send the subtitle splicing map, so that the playback device sends the subtitle splicing map to the mobile terminal after receiving the third signal.
Step S11 specifically includes: receiving the subtitle splicing map from the playback device, in response to the playback device determining a start timestamp according to the time at which the first signal is received, determining an end timestamp according to the time at which the second signal is received, generating a subtitle splicing map from the played video stream that starts at the start timestamp and ends at the end timestamp, and sending the subtitle splicing map to the mobile terminal after receiving the third signal.
The setting manner of the third signal is similar to that of the first signal and the second signal, and is not described herein again.
In one example, the mobile terminal is a mobile phone, and the playing device is a television.
A first control is arranged on an interactive interface used for controlling the playing equipment on the mobile phone and used for sending a first signal, a second control is arranged for sending a second signal, and a third control is arranged for sending a third signal.
The user clicks the first control in the control interface of the mobile phone, and the mobile phone sends a first signal indicating the start of generating the subtitle splicing map; if the television receives the first signal at 6:00, the start timestamp is set to 6:00.
The user clicks the second control in the control interface of the mobile phone, and the mobile phone sends a second signal indicating the end of generating the subtitle splicing map; if the television receives the second signal at 6:05, the end timestamp is determined to be 6:05.
And the user clicks a third control in a control interface of the mobile phone, the mobile phone sends a third signal for indicating sending of the subtitle splicing map, and the television sends the generated subtitle splicing map to the mobile phone after receiving the third signal.
In this embodiment, the mobile terminal controls the start point and end point of the video segment used to generate the subtitle splicing map, as well as the timing at which the playback device sends the map, so the user can tailor the segment to actual needs, improving user experience.
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map that refines the step, in fig. 1, of generating a subtitle splicing map from the played video stream starting at the start timestamp. Generating the subtitle splicing map from the played video stream starting at the start timestamp includes one of the following three ways:
the first method is as follows:
recording a playing video stream taking the starting time stamp as a starting point and taking the ending time stamp as an end point; and generating a subtitle splicing picture according to the recorded video stream. Wherein, generating a subtitle splicing map according to the recorded video stream comprises: and extracting a plurality of representative image frames from the recorded video stream, wherein each representative image frame contains different subtitle information, and generating a subtitle splicing image according to the representative image frames.
And secondly, extracting video segments in real time from the played video stream with the starting timestamp as a starting point, wherein all image frames in each video segment contain the same subtitle information, and extracting a representative image frame from each video segment respectively. And when the ending time stamp is determined, generating a subtitle splicing picture according to the extracted representative image frames.
And thirdly, extracting video segments in real time from the played video stream with the starting timestamp as a starting point, wherein all image frames in each video segment contain the same subtitle information, respectively extracting one representative image frame from the extracted video segments, and generating a subtitle splicing map according to the extracted representative image frames. And when the ending timestamp is determined, taking the subtitle splicing map generated for the last time as the subtitle splicing map generated finally.
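Ways two and three both rest on splitting the played stream into video segments whose image frames share the same subtitle information. A minimal offline sketch of that grouping follows; the frame data and the choice of the first frame as representative are illustrative, and the disclosure's actual selection criteria are described further below.

```python
from itertools import groupby

# Hypothetical decoded stream: (frame_id, subtitle_text) pairs.
frames = [
    (0, "hello"), (1, "hello"), (2, "hello"),
    (3, "world"), (4, "world"),
    (5, "bye"),
]

# Consecutive frames carrying the same subtitle form one video segment.
segments = [list(group) for _, group in groupby(frames, key=lambda f: f[1])]

# One representative image frame per segment (here simply the first frame).
representatives = [segment[0] for segment in segments]
```

In way two the splicing happens once the end timestamp is known; in way three the map is re-spliced as each new representative frame arrives, and the last version becomes the final map.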
The first signal in the first, second, and third modes may be used to trigger the playback device to determine the start timestamp, and may also be used to trigger the playback device to determine the end timestamp.
Determining the end timestamp includes one of the following:
the first: determining that the end timestamp is the timestamp a fixed duration after the start timestamp.
For example, if the fixed duration is 3 minutes and the playback device receives the first signal at 9:00, the end timestamp is automatically determined to be 9:03.
and the second: taking, as the end timestamp, the moment at which the number of video segments extracted in real time from the played video stream starting at the start timestamp reaches a set number.
For example, the set number is 8. The playback device receives the first signal at 9:00 and continuously extracts video segments, each corresponding to one piece of subtitle information; when the number of extracted segments reaches 8, the end timestamp is determined.
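The two end-timestamp rules can be written down directly; the fixed duration of 3 minutes and the set number of 8 are the example values from the text.

```python
from datetime import datetime, timedelta

FIXED_DURATION = timedelta(minutes=3)  # example value from the text
SET_NUMBER = 8                         # example value from the text

def end_by_duration(start_ts):
    # Rule 1: the end timestamp is a fixed duration after the start timestamp.
    return start_ts + FIXED_DURATION

def end_reached_by_count(num_segments):
    # Rule 2: end once the number of extracted segments reaches the set number.
    return num_segments >= SET_NUMBER

end_ts = end_by_duration(datetime(2020, 1, 1, 9, 0))  # first signal at 9:00
```

Under rule 1 the device can compute the end timestamp immediately; under rule 2 it checks the segment count after each extraction.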
In the first, second, and third modes, generating the subtitle mosaic from the representative image frame includes:
combining the extracted representative image frames into a caption splicing picture in a long-graph form in an end-to-end connection mode;
or, the first image frame in the extracted representative image frames is used as a top image block, and the image blocks in the subtitle area in other representative image frames are sequentially spliced below the top image block to form a subtitle splicing map.
In one example, as shown in fig. 6, the number of the determined representative image frames is 4, and the 4 representative image frames are combined into a caption splicing map in a long graph form in an end-to-end manner, and the caption splicing map is shown in fig. 7.
In another example, as shown in fig. 6, the number of the determined representative image frames is 4, the first image frame of the 4 representative image frames is taken as a top image block, and the subtitle region image blocks in the subsequent 3 representative image frames are sequentially spliced below the top image block to form a subtitle splicing map, which is shown in fig. 8.
In another example, fig. 9 shows a subtitle mosaic for a segment of a television episode generated by a television.
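The two splicing layouts can be sketched with images modeled as plain lists of pixel rows, so the sketch stays dependency-free; a real implementation would use an image library, and the dummy frames are illustrative.

```python
def stack_full_frames(frames):
    """Layout 1: concatenate whole representative frames end to end
    into one subtitle splicing map in long-image form."""
    mosaic = []
    for frame in frames:
        mosaic.extend(frame)
    return mosaic

def stack_subtitle_strips(frames, strip_height):
    """Layout 2: keep the first frame whole as the top image block, then
    append only the bottom subtitle strip of each following frame."""
    mosaic = list(frames[0])
    for frame in frames[1:]:
        mosaic.extend(frame[-strip_height:])  # subtitle area sits at the bottom
    return mosaic

frames = [[[i] * 4 for _ in range(6)] for i in range(4)]  # four dummy 6x4 frames
long_map = stack_full_frames(frames)           # 4 * 6 = 24 rows
strip_map = stack_subtitle_strips(frames, 2)   # 6 + 3 * 2 = 12 rows
```

Layout 2 yields a much shorter map when the clip contains many subtitles, since each extra frame contributes only its subtitle strip.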
A subtitle area image block is an image block of an image frame that contains the subtitle area. Its height is greater than the height of the subtitle area, and the difference between the two heights is smaller than a first set value; its length is greater than the length of the subtitle area, and the difference between the two lengths is smaller than a second set value, or the length of the subtitle area image block equals the length of the image frame.
The first set value may be a specific numerical value, or a value determined by multiplying the current height of the subtitle area by a set ratio, where the set ratio is, for example, 0.1. The second set value may likewise be a specific numerical value, or a value determined by multiplying the current length of the subtitle area by a set ratio, for example 0.1.
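A hedged sketch of the block-sizing rule, assuming the set values are derived from the 0.1 ratio in the example; `block_fits` is a hypothetical helper, not named in the disclosure.

```python
SET_RATIO = 0.1  # example ratio from the text

def block_fits(block_h, block_w, area_h, area_w, frame_w):
    """Check that a subtitle area image block obeys the sizing rule: it
    exceeds the subtitle area in each dimension by less than the
    ratio-derived set value, or spans the full frame width instead."""
    first_set_value = area_h * SET_RATIO    # height tolerance
    second_set_value = area_w * SET_RATIO   # length tolerance
    height_ok = area_h < block_h < area_h + first_set_value
    length_ok = (area_w < block_w < area_w + second_set_value
                 or block_w == frame_w)
    return height_ok and length_ok
```

For a 100-pixel-high, 900-pixel-wide subtitle area in a 1920-pixel-wide frame, a 105 x 1920 block passes, while a 120-pixel-high block fails the height tolerance.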
In the first, second and third modes, extracting a representative image frame from each video segment respectively includes one of the following methods:
the method comprises the following steps: one image frame with the highest definition is selected from the image frames included in each video segment as a representative image frame.
The method 1 ensures that the representative image frame is clear enough, ensures that the finally generated subtitle splicing image has higher definition and is convenient to view.
In one example, the image frame with the highest definition is selected from the image frames included in each video segment as the representative image frame, and the extracted representative image frames are combined end to end into a subtitle splicing map in long-image form, ensuring that the generated subtitle splicing map, and in particular the subtitles in it, are clear.
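Method 1 can be sketched with a simple definition proxy; the gradient-sum measure below is an assumption, as the disclosure does not fix a particular definition metric.

```python
def sharpness(frame):
    """Definition proxy: sum of absolute horizontal pixel gradients.
    (An assumption; a Laplacian-variance measure would also work.)"""
    return sum(abs(row[x + 1] - row[x])
               for row in frame for x in range(len(row) - 1))

def pick_sharpest(segment_frames):
    # Method 1: keep the frame with the highest definition in the segment.
    return max(segment_frames, key=sharpness)

blurry = [[10, 11, 12, 13]] * 4    # gentle gradients -> low score
crisp = [[0, 255, 0, 255]] * 4     # hard edges -> high score
representative = pick_sharpest([blurry, crisp])
```

Since every frame in a segment carries the same subtitle, any per-frame score only has to rank frames within that segment.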
The method 2 comprises the following steps: determining image frames included by each video segment, determining caption area image blocks of each image frame, selecting a caption area image block with the maximum contrast between a non-caption pixel set and a caption pixel set from the caption area image blocks, and taking the image frame corresponding to the selected caption area image block as a representative image frame.
The method 2 can ensure that the difference between the caption in the finally generated caption splicing picture and the background picture is obvious and is convenient to check.
In one example, the image frames included in each video segment are determined, the subtitle area image block of each image frame is determined, and the subtitle area image block with the maximum contrast between the non-subtitle pixel set and the subtitle pixel set is selected; the image frame corresponding to the selected block is taken as the representative image frame. The first of the extracted representative image frames is used as the top image block, and the subtitle area image blocks of the other representative image frames are spliced in sequence below it to form a subtitle splicing map, so that the subtitles in the map differ markedly from the background.
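Method 2 can be sketched as follows; the brightness threshold used to separate the subtitle pixel set from the non-subtitle pixel set is a hypothetical choice, as the disclosure does not specify how the two sets are formed.

```python
def subtitle_contrast(block, threshold=128):
    """Contrast between the subtitle pixel set (bright pixels, separated
    here by a hypothetical brightness threshold) and the non-subtitle
    pixel set, taken as the difference of their mean brightness."""
    pixels = [p for row in block for p in row]
    text = [p for p in pixels if p >= threshold]
    background = [p for p in pixels if p < threshold]
    if not text or not background:
        return 0
    return sum(text) / len(text) - sum(background) / len(background)

low = [[100, 140], [110, 150]]    # text barely brighter than background
high = [[10, 250], [20, 240]]     # text far brighter than background
best_block = max([low, high], key=subtitle_contrast)
```

The frame owning `best_block` would become the representative image frame for its segment.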
The embodiment of the disclosure provides a method for transmitting a subtitle splicing map, which is applied to a playing terminal, wherein the playing terminal is a television, a mobile phone, a tablet computer and the like.
Referring to fig. 10, fig. 10 is a flowchart illustrating a method of generating a subtitle mosaic according to an exemplary embodiment. As shown in fig. 10, the method includes:
step S101, receiving a first signal;
step S102, determining a starting time stamp according to the time when the first signal is received;
step S103, generating a subtitle splicing map according to the playing video stream taking the starting timestamp as a starting point;
and step S104, sending the subtitle splicing map to at least one mobile terminal.
In an embodiment, in step S104, the subtitle mosaic may be simultaneously sent to at least two mobile terminals, so that multiple mobile terminals obtain the same subtitle mosaic at the same time.
In this embodiment, during real-time video playback, the playback device determines the start timestamp according to the first signal after receiving it, automatically generates the subtitle splicing map, and sends the generated map to the mobile terminal. This raises the intelligence of the playback device: while watching, a user can obtain a subtitle splicing map of a video segment of interest with a simple operation, improving user experience.
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map, including the method shown in fig. 10, as shown in fig. 11, the method further includes, between step 102 and step 103, step S102-1: and receiving a second signal, wherein the second signal is used for triggering the playing equipment to determine an end timestamp according to the time when the second signal is received.
In step S103, generating a subtitle splicing map according to the played video stream starting from the start timestamp, including: and generating a subtitle splicing map according to the playing video stream taking the starting time stamp as a starting point and the ending time stamp as an ending point.
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map, including the method shown in fig. 11, as shown in fig. 12, and further including, between step S103 and step S104, step S103-1: and receiving a third signal, wherein the third signal is used for instructing the playing equipment to send the subtitle splicing map.
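The device-side flow of figs. 10 to 12 can be condensed into one sketch; the class and callback names are illustrative, and map generation (step S103) is stubbed out.

```python
class PlaybackDevice:
    """Sketch of the device-side flow: steps S101-S104 plus the optional
    second and third signals of figs. 11 and 12."""

    def __init__(self, send):
        self.send = send        # callback delivering the map to terminals
        self.start_ts = None
        self.end_ts = None
        self.mosaic = None

    def on_first_signal(self, now):    # S101/S102: fix the start timestamp
        self.start_ts = now

    def on_second_signal(self, now):   # S102-1: fix the end timestamp
        self.end_ts = now
        # S103, stubbed: generate the map for [start_ts, end_ts].
        self.mosaic = f"mosaic[{self.start_ts}..{self.end_ts}]"

    def on_third_signal(self):         # S103-1/S104: send only when asked
        if self.mosaic is not None:
            self.send(self.mosaic)

sent = []
device = PlaybackDevice(sent.append)
device.on_first_signal("6:00")
device.on_second_signal("6:05")
device.on_third_signal()
```

Passing `sent.append` as the send callback stands in for transmitting the map to one or more mobile terminals.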
The embodiment of the present disclosure provides a method for transmitting a subtitle splicing map that refines the step, in figs. 10, 11 and 12, of generating a subtitle splicing map from the played video stream starting at the start timestamp. Generating the subtitle splicing map from the played video stream starting at the start timestamp includes one of the following three ways:
the first method is as follows: recording a playing video stream taking the starting time stamp as a starting point and taking the ending time stamp as an end point; and generating a subtitle splicing picture according to the recorded video stream.
Wherein, generating a subtitle splicing map according to the recorded video stream comprises: and extracting a plurality of representative image frames from the recorded video stream, wherein each representative image frame contains different subtitle information, and generating a subtitle splicing image according to the representative image frames.
And secondly, extracting video segments in real time from the played video stream with the starting timestamp as a starting point, wherein all image frames in each video segment contain the same subtitle information, and extracting a representative image frame from each video segment respectively. And when the ending time stamp is determined, generating a subtitle splicing picture according to the extracted representative image frames.
And thirdly, extracting video segments in real time from the played video stream with the starting timestamp as a starting point, wherein all image frames in each video segment contain the same subtitle information, respectively extracting one representative image frame from the extracted video segments, and generating a subtitle splicing map according to the extracted representative image frames. And when the ending timestamp is determined, taking the subtitle splicing map generated for the last time as the subtitle splicing map generated finally.
The first signal in the first, second, and third modes may be used to trigger the playback device to determine the start timestamp, and may also be used to trigger the playback device to determine the end timestamp.
Determining the end timestamp includes one of the following:
the first: determining that the end timestamp is the timestamp a fixed duration after the start timestamp.
For example, if the fixed duration is 3 minutes and the playback device receives the first signal at 9:00, the end timestamp is automatically determined to be 9:03.
and the second: taking, as the end timestamp, the moment at which the number of video segments extracted in real time from the played video stream starting at the start timestamp reaches a set number.
For example, the set number is 8. The playback device receives the first signal at 9:00 and continuously extracts video segments, each corresponding to one piece of subtitle information; when the number of extracted segments reaches 8, the end timestamp is determined.
In the first, second, and third modes, generating the subtitle mosaic from the representative image frame includes:
combining the extracted representative image frames into a caption splicing picture in a long-graph form in an end-to-end connection mode;
or, the first image frame in the extracted representative image frames is used as a top image block, and the image blocks in the subtitle area in other representative image frames are sequentially spliced below the top image block to form a subtitle splicing map.
In one example, as shown in fig. 6, the number of the determined representative image frames is 4, and the 4 representative image frames are combined into a caption splicing map in a long graph form in an end-to-end manner, and the caption splicing map is shown in fig. 7.
In another example, as shown in fig. 6, the number of the determined representative image frames is 4, the first image frame of the 4 representative image frames is taken as a top image block, and the subtitle region image blocks of the subsequent 3 representative image frames are sequentially spliced below the top image block to form a subtitle splicing map, which is shown in fig. 8.
In another example, fig. 9 shows a subtitle mosaic for a segment of a television episode generated by a television.
A subtitle area image block is an image block of an image frame that contains the subtitle area. Its height is greater than the height of the subtitle area, and the difference between the two heights is smaller than a first set value; its length is greater than the length of the subtitle area, and the difference between the two lengths is smaller than a second set value, or the length of the subtitle area image block equals the length of the image frame.
The first set value may be a specific numerical value, or a value determined by multiplying the current height of the subtitle area by a set ratio, where the set ratio is, for example, 0.1. The second set value may likewise be a specific numerical value, or a value determined by multiplying the current length of the subtitle area by a set ratio, for example 0.1.
In the first, second and third modes, extracting a representative image frame from each video segment respectively includes one of the following methods:
the method comprises the following steps: one image frame with the highest definition is selected from the image frames included in each video segment as a representative image frame.
The method 1 ensures that the representative image frame is clear enough, ensures that the finally generated subtitle splicing image has higher definition and is convenient to view.
In one example, the image frame with the highest definition is selected from the image frames included in each video segment as the representative image frame, and the extracted representative image frames are combined end to end into a subtitle splicing map in long-image form, ensuring that the generated subtitle splicing map, and in particular the subtitles in it, are clear.
The method 2 comprises the following steps: determining image frames included by each video segment, determining caption area image blocks of each image frame, selecting a caption area image block with the maximum contrast between a non-caption pixel set and a caption pixel set from the caption area image blocks, and taking the image frame corresponding to the selected caption area image block as a representative image frame.
The method 2 can ensure that the difference between the caption in the finally generated caption splicing picture and the background picture is obvious and is convenient to check.
In one example, the image frames included in each video segment are determined, the subtitle area image block of each image frame is determined, and the subtitle area image block with the maximum contrast between the non-subtitle pixel set and the subtitle pixel set is selected; the image frame corresponding to the selected block is taken as the representative image frame. The first of the extracted representative image frames is used as the top image block, and the subtitle area image blocks of the other representative image frames are spliced in sequence below it to form a subtitle splicing map, so that the subtitles in the map differ markedly from the background.
The embodiment of the disclosure provides a method for transmitting a subtitle splicing map, which is applied between a mobile terminal and a playing device. The method comprises the following steps:
step 1, a mobile terminal sends a first signal for indicating the start of generating a subtitle splicing map to a playing device;
step 2, the playing device receives a first signal which is sent from the mobile terminal and used for indicating the start of generating a subtitle splicing map;
step 3, the playback device determines a start timestamp according to the first signal and generates a subtitle splicing map from the played video stream starting at the start timestamp;
step 4, the playing device sends the generated subtitle splicing map to the mobile terminal;
and 5, the mobile terminal receives the subtitle splicing map from the playing equipment.
In this embodiment, the mobile terminal sends a first signal to the playback device, the playback device determines a start timestamp according to the first signal, generates a subtitle splicing map according to a playback video stream that starts from the start timestamp, and sends the subtitle splicing map to the mobile terminal. By the method, the user can control the playing device to obtain the subtitle splicing map of the video clip which is interested by the user through simple operation, and the user experience is improved.
The embodiment of the disclosure provides a device for transmitting a subtitle splicing map, which is applied to a mobile terminal, wherein the mobile terminal is a mobile phone, a computer, a tablet computer and the like.
Referring to fig. 13, fig. 13 is a block diagram illustrating an apparatus for transmitting a subtitle mosaic according to an exemplary embodiment. As shown in fig. 13, this apparatus includes:
a first receiving module 1301, configured to, in response to a play device determining a start timestamp according to a time when the first signal is received, generate a subtitle splicing map according to a play video stream starting from the start timestamp, and send the subtitle splicing map to a mobile terminal; receiving the subtitle splicing map from the playback device.
An apparatus for transmitting a subtitle splicing map is provided in an embodiment of the present disclosure, including the apparatus shown in fig. 13, as shown in fig. 14, further including at least one of the following modules:
a saving module 1302 configured to save the subtitle mosaic;
a forwarding module 1303 configured to forward the subtitle mosaic;
a publishing module 1304 configured to publish the subtitle mosaic.
The embodiment of the present disclosure provides an apparatus for transmitting a subtitle splicing map, including the apparatus shown in fig. 13, where the apparatus further includes:
the first sending module is configured to send a first signal to the playing terminal, trigger the playing device to determine a start timestamp according to a time when the first signal is received, and generate a subtitle splicing map according to a playing video stream with the start timestamp as a starting point.
The apparatus for transmitting a subtitle splicing map provided in the embodiments of the present disclosure includes the apparatus shown in fig. 13, and further includes a second sending module configured to send a second signal to the playback terminal; the second signal is used for triggering the playing device to determine an end timestamp according to the time when the second signal is received;
wherein, generating a caption splicing map according to the playing video stream with the starting timestamp as a starting point comprises: and generating a subtitle splicing map according to the playing video stream taking the starting time stamp as a starting point and the ending time stamp as an ending point.
The apparatus for transmitting a caption splicing map provided in the embodiments of the present disclosure includes the apparatus shown in fig. 13, and the apparatus further includes a third sending module configured to send a third signal for instructing the playback device to send the caption splicing map, so that the playback device sends the caption splicing map to the mobile terminal after receiving the third signal.
The embodiment of the disclosure provides a device for transmitting a subtitle splicing chart, which is applied to playing equipment, wherein the playing equipment is a television, a mobile phone, a computer, a tablet computer and the like.
Referring to fig. 15, fig. 15 is a block diagram illustrating an apparatus for transmitting a subtitle mosaic according to an exemplary embodiment. As shown in fig. 15, this apparatus includes:
a second receiving module 1501 configured to receive the first signal;
a start timestamp determination module 1502 configured to determine a start timestamp from a time at which the first signal was received;
a subtitle splicing map generating module 1503 configured to generate a subtitle splicing map from a playing video stream starting from the start timestamp;
a fourth sending module 1504 configured to send the subtitle splicing map to at least one mobile terminal.
An apparatus for transmitting a subtitle splicing map is provided in an embodiment of the present disclosure, including the apparatus shown in fig. 15, and the apparatus further includes:
a third receiving module configured to receive a second signal indicating that generation of the subtitle splicing map should end;
a first end timestamp determination module configured to determine an end timestamp from the time at which the second signal was received;
the subtitle splicing map generating module is further configured to generate the subtitle splicing map from the played video stream starting at the start timestamp and ending at the end timestamp.
The embodiment of the present disclosure provides an apparatus for transmitting a subtitle splicing map, including the apparatus shown in fig. 15, where the apparatus further includes:
a fourth receiving module configured to receive a third signal for instructing to send the subtitle splicing map to at least one mobile terminal;
the fourth sending module is further configured to send the subtitle splicing map to at least one mobile terminal after the fourth receiving module receives the third signal.
The embodiment of the present disclosure provides an apparatus for transmitting a subtitle splicing map, including the apparatus shown in fig. 15, where the subtitle splicing map generating module includes a first generating module;
the first generation module is configured to generate a subtitle splicing map from a playing video stream starting from the start timestamp using the following method:
recording the played video stream starting at the start timestamp and ending at the end timestamp;
and generating the subtitle splicing map from the recorded video stream.
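The record-then-generate approach of the first generation module can be sketched as follows. This is a minimal illustration under the assumption that each frame already carries its detected subtitle text (the disclosure does not specify the detection method); frames are modeled as dictionaries, and the "splicing map" is the ordered list of representative frames that a real implementation would concatenate into one image:

```python
from itertools import groupby

def record_stream(frames, start_ts, end_ts):
    """Keep only frames whose timestamp lies in [start_ts, end_ts]."""
    return [f for f in frames if start_ts <= f["ts"] <= end_ts]

def build_splice_map(recorded):
    """Group consecutive frames sharing the same subtitle text into video
    segments and take one representative frame (here: the middle one) per
    segment. A real implementation would vertically concatenate the
    subtitle-region crops of these representatives into a single image."""
    segments = [list(g) for _, g in groupby(recorded, key=lambda f: f["sub"])]
    return [seg[len(seg) // 2] for seg in segments]

frames = [{"ts": t, "sub": s} for t, s in
          [(0, "A"), (1, "A"), (2, "B"), (3, "B"), (4, "B"), (5, "C"), (6, "C")]]
splice = build_splice_map(record_stream(frames, 0, 6))
subs = [f["sub"] for f in splice]  # one representative per subtitle segment
```

Because the whole stream is recorded first, segmentation can be done offline over the full buffer, in contrast to the real-time variants described below the fig. 15 embodiments.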
The embodiment of the present disclosure provides an apparatus for transmitting a subtitle splicing map, including the apparatus shown in fig. 15, where the subtitle splicing map generating module includes a second generating module;
the second generation module is configured to generate a subtitle splicing map from the played video stream starting from the start timestamp using the following method:
extracting video segments in real time from the played video stream starting at the start timestamp, wherein all image frames in each video segment contain the same subtitle information, and extracting one representative image frame from each video segment;
and, once the end timestamp is determined, generating the subtitle splicing map from the extracted representative image frames.
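The second generation module works in a streaming fashion: a segment is closed whenever the subtitle changes, and the splicing map is only assembled once the end timestamp is known. A minimal sketch, with hypothetical names and frames again modeled as dictionaries carrying pre-detected subtitle text:

```python
class StreamingSpliceBuilder:
    """Consume frames one at a time; a segment closes when the subtitle
    text changes, and one representative frame per closed segment is
    buffered. The splicing map is assembled only when the end timestamp
    is determined (finish())."""
    def __init__(self):
        self.current = []        # frames of the currently open segment
        self.reps = []           # one representative per closed segment

    def push(self, frame):
        if self.current and self.current[-1]["sub"] != frame["sub"]:
            self._close_segment()
        self.current.append(frame)

    def _close_segment(self):
        # representative: middle frame of the segment
        self.reps.append(self.current[len(self.current) // 2])
        self.current = []

    def finish(self):
        """Called when the end timestamp is determined."""
        if self.current:
            self._close_segment()
        return list(self.reps)

builder = StreamingSpliceBuilder()
for t, s in [(0, "A"), (1, "A"), (2, "B"), (3, "B"), (4, "B"), (5, "C")]:
    builder.push({"ts": t, "sub": s})
final_reps = builder.finish()
```

Unlike the record-then-generate module, this variant needs no recording buffer of the full stream, only the frames of the currently open segment.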
The embodiment of the present disclosure provides an apparatus for transmitting a subtitle splicing map, including the apparatus shown in fig. 15, where the subtitle splicing map generating module includes a third generating module;
the third generation module is configured to generate a subtitle splicing map from the playing video stream starting from the start timestamp using the following method:
extracting video segments in real time from the played video stream starting at the start timestamp, wherein all image frames in each video segment contain the same subtitle information, extracting one representative image frame from each extracted video segment, and regenerating the subtitle splicing map after each representative image frame is extracted;
and, once the end timestamp is determined, taking the most recently generated subtitle splicing map as the final subtitle splicing map.
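The third generation module differs from the second in that a full splicing map is regenerated after every completed segment, so a current map always exists; when the end timestamp arrives, the most recent one is simply kept. A hypothetical sketch as a generator, under the same frame-dictionary assumption as above:

```python
def incremental_splice_maps(frames):
    """Yield a regenerated splicing map after every completed segment.
    When the end timestamp is determined, the most recently yielded map
    is taken as the final subtitle splicing map."""
    reps, current = [], []
    for frame in frames:
        if current and current[-1]["sub"] != frame["sub"]:
            reps.append(current[len(current) // 2])  # close the segment
            current = []
            yield list(reps)                         # regenerated map so far
        current.append(frame)
    if current:                                      # end timestamp reached
        reps.append(current[len(current) // 2])
        yield list(reps)                             # final map

stream = [{"ts": t, "sub": s} for t, s in
          [(0, "A"), (1, "A"), (2, "B"), (3, "C")]]
maps = list(incremental_splice_maps(stream))
```

The trade-off is extra work per segment in exchange for always having an up-to-date map to send as soon as the third signal arrives.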
An apparatus for transmitting a subtitle splicing map is provided in an embodiment of the present disclosure, including the apparatus shown in fig. 15, and the apparatus further includes: a second end timestamp determination module configured to determine an end timestamp using one of:
determining the end timestamp as the timestamp a fixed duration after the start timestamp;
or taking, as the end timestamp, the moment at which the number of video segments extracted in real time from the played video stream starting at the start timestamp reaches a set number.
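The two end-timestamp strategies above amount to a duration cutoff and a segment-count cutoff. A minimal sketch with illustrative function names:

```python
def end_by_duration(start_ts, fixed_duration):
    """End timestamp is a fixed duration after the start timestamp."""
    return start_ts + fixed_duration

def end_by_segment_count(segment_times, limit):
    """End timestamp is the moment at which the number of video segments
    extracted in real time reaches the set number `limit`; returns None
    while fewer than `limit` segments have been extracted."""
    if len(segment_times) >= limit:
        return segment_times[limit - 1]
    return None
```

Either strategy lets the playback device finish the splicing map without waiting for an explicit second signal from the mobile terminal.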
The embodiment of the present disclosure provides an apparatus for transmitting a subtitle splicing map, including:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute executable instructions in the memory to implement the steps of the method.
Embodiments of the present disclosure provide a non-transitory computer-readable storage medium having stored thereon executable instructions that, when executed by a processor, implement the steps of the method.
Fig. 16 is a block diagram illustrating an apparatus 1600 for transmitting a subtitle splicing map according to an example embodiment. For example, the apparatus 1600 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 16, apparatus 1600 may include one or more of the following components: processing component 1602, memory 1604, power component 1606, multimedia component 1608, audio component 1610, input/output (I/O) interface 1612, sensor component 1614, and communications component 1616.
The processing component 1602 generally controls overall operation of the device 1600, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing component 1602 may include one or more processors 1620 to execute instructions to perform all or part of the steps of the methods described above. Further, the processing component 1602 can include one or more modules that facilitate interaction between the processing component 1602 and other components. For example, the processing component 1602 can include a multimedia module to facilitate interaction between the multimedia component 1608 and the processing component 1602.
The memory 1604 is configured to store various types of data to support operation at the device 1600. Examples of such data include instructions for any application or method operating on device 1600, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 1604 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disk.
Power component 1606 provides power to the various components of device 1600. Power components 1606 may include a power management system, one or more power sources, and other components associated with generating, managing, and distributing power for device 1600.
The multimedia component 1608 includes a screen that provides an output interface between the device 1600 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 1608 comprises a front-facing camera and/or a rear-facing camera. The front-facing camera and/or the back-facing camera may receive external multimedia data when device 1600 is in an operational mode, such as a capture mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 1610 is configured to output and/or input an audio signal. For example, audio component 1610 includes a Microphone (MIC) configured to receive external audio signals when apparatus 1600 is in an operational mode, such as a call mode, recording mode, and voice recognition mode. The received audio signal may further be stored in the memory 1604 or transmitted via the communications component 1616. In some embodiments, audio component 1610 further includes a speaker for outputting audio signals.
The I/O interface 1612 provides an interface between the processing component 1602 and peripheral interface modules, such as keyboards, click wheels, buttons, and the like. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
Sensor assembly 1614 includes one or more sensors for providing status assessment of various aspects to device 1600. For example, sensor assembly 1614 can detect an open/closed state of device 1600, the relative positioning of components, such as a display and keypad of apparatus 1600, a change in position of apparatus 1600 or a component of apparatus 1600, the presence or absence of user contact with apparatus 1600, orientation or acceleration/deceleration of apparatus 1600, and a change in temperature of apparatus 1600. The sensor assembly 1614 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 1614 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 1614 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communications component 1616 is configured to facilitate communications between the apparatus 1600 and other devices in a wired or wireless manner. The device 1600 may access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof. In an exemplary embodiment, the communication component 1616 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communications component 1616 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the apparatus 1600 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 1604 comprising instructions, executable by the processor 1620 of the apparatus 1600 to perform the above-described method is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
Other embodiments of the invention herein will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This application is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles herein and including such departures from the present disclosure as come within known or customary practice in the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings, and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (24)

1. A method for transmitting a subtitle splicing map, applied to a mobile terminal, characterized by comprising:
in response to a playback device determining a start timestamp according to the time at which a first signal is received, generating a subtitle splicing map from a played video stream starting at the start timestamp, and sending the subtitle splicing map to the mobile terminal: receiving the subtitle splicing map from the playback device.
2. The method of claim 1,
the method further comprises performing at least one of:
saving the subtitle splicing map;
forwarding the subtitle splicing map;
and publishing the subtitle splicing map.
3. The method of claim 1,
the method further comprises the following steps:
sending the first signal to the playback device, triggering the playback device to determine the start timestamp according to the time at which the first signal is received and to generate the subtitle splicing map from the played video stream starting at the start timestamp.
4. The method of claim 1,
the method further comprises: sending a second signal to the playback device, the second signal being used to trigger the playback device to determine an end timestamp according to the time at which the second signal is received;
wherein generating a subtitle splicing map from the played video stream starting at the start timestamp comprises: generating the subtitle splicing map from the played video stream starting at the start timestamp and ending at the end timestamp.
5. The method of claim 1,
before receiving the subtitle splicing map from the playback device, the method further comprises: sending a third signal instructing the playback device to send the subtitle splicing map, so that the playback device sends the subtitle splicing map to the mobile terminal after receiving the third signal.
6. The method of claim 1,
the generating of the subtitle splicing map according to the played video stream with the start timestamp as a starting point includes:
recording the played video stream starting at the start timestamp and ending at the end timestamp;
extracting a plurality of video segments from the recorded video stream, wherein all image frames in each video segment contain the same subtitle information, extracting one representative image frame from each video segment, and generating the subtitle splicing map from the extracted representative image frames.
7. The method of claim 1,
the generating of the subtitle splicing map according to the played video stream with the start timestamp as a starting point includes:
extracting video segments in real time from the played video stream starting at the start timestamp, wherein all image frames in each video segment contain the same subtitle information, and extracting one representative image frame from each video segment;
and, once the end timestamp is determined, generating the subtitle splicing map from the extracted representative image frames.
8. The method of claim 1,
the generating of the subtitle splicing map according to the played video stream with the start timestamp as a starting point includes:
extracting video segments in real time from the played video stream starting at the start timestamp, wherein all image frames in each video segment contain the same subtitle information, extracting one representative image frame from each extracted video segment, and regenerating the subtitle splicing map after each representative image frame is extracted;
and, once the end timestamp is determined, taking the most recently generated subtitle splicing map as the final subtitle splicing map.
9. The method of claim 6, 7 or 8,
the first signal is also used for triggering the playing equipment to determine an end timestamp;
the determining an end time stamp comprises one of:
determining the end timestamp as the timestamp a fixed duration after the start timestamp;
or taking, as the end timestamp, the moment at which the number of video segments extracted in real time from the played video stream starting at the start timestamp reaches a set number.
10. The method of claim 7 or 8,
the extracting a representative image frame from each video segment comprises one of the following methods:
selecting, from the image frames included in each video segment, the image frame with the highest definition as the representative image frame;
or determining the image frames included in each video segment, determining the subtitle-region image block of each image frame, selecting the subtitle-region image block with the maximum contrast between its non-subtitle pixel set and its subtitle pixel set, and taking the image frame corresponding to the selected subtitle-region image block as the representative image frame.
11. An apparatus for transmitting a caption splicing map, applied to a mobile terminal, comprising:
a first receiving module configured to, in response to a playback device determining a start timestamp according to the time at which a first signal is received, generating a subtitle splicing map from a played video stream starting at the start timestamp, and sending the subtitle splicing map to the mobile terminal: receive the subtitle splicing map from the playback device.
12. The apparatus of claim 11,
the apparatus further comprises at least one of the following modules:
a saving module configured to save the subtitle mosaic;
a forwarding module configured to forward the subtitle splicing map;
a publishing module configured to publish the subtitle mosaic.
13. The apparatus of claim 11,
the device further comprises:
the first sending module is configured to send the first signal to the playback device, triggering the playback device to determine the start timestamp according to the time at which the first signal is received and to generate the subtitle splicing map from the played video stream starting at the start timestamp.
14. The apparatus of claim 11,
the apparatus further comprises a second sending module configured to send a second signal to the playback device, the second signal being used to trigger the playback device to determine an end timestamp according to the time at which the second signal is received;
wherein generating a subtitle splicing map from the played video stream starting at the start timestamp comprises: generating the subtitle splicing map from the played video stream starting at the start timestamp and ending at the end timestamp.
15. The apparatus of claim 11,
the apparatus further includes a third sending module configured to send a third signal for instructing the playback device to send the subtitle splicing map, so that the playback device sends the subtitle splicing map to the mobile terminal after receiving the third signal.
16. An apparatus for transmitting a subtitle splicing map, applied to a playback device, comprising:
a second receiving module configured to receive the first signal;
a start timestamp determination module configured to determine a start timestamp from a time at which the first signal was received;
the subtitle splicing map generating module is configured to generate a subtitle splicing map according to a playing video stream with the starting timestamp as a starting point;
and the fourth sending module is configured to send the subtitle splicing map to at least one mobile terminal.
17. The apparatus of claim 16,
the device further comprises:
a third receiving module configured to receive a second signal indicating that generation of the subtitle splicing map should end;
a first end timestamp determination module configured to determine an end timestamp from the time at which the second signal was received;
the subtitle splicing map generating module is further configured to generate the subtitle splicing map from the played video stream starting at the start timestamp and ending at the end timestamp.
18. The apparatus of claim 16,
the device further comprises:
a fourth receiving module configured to receive a third signal for instructing to send the subtitle splicing map to at least one mobile terminal;
the fourth sending module is further configured to send the subtitle splicing map to at least one mobile terminal after the fourth receiving module receives the third signal.
19. The apparatus of claim 16,
the subtitle splicing map generation module comprises a first generation module;
the first generation module is configured to generate a subtitle splicing map from a playing video stream starting from the start timestamp using the following method:
recording the played video stream starting at the start timestamp and ending at the end timestamp;
extracting a plurality of video segments from the recorded video stream, wherein all image frames in each video segment contain the same subtitle information, extracting one representative image frame from each video segment, and generating the subtitle splicing map from the extracted representative image frames.
20. The apparatus of claim 16,
the subtitle splicing map generation module comprises a second generation module;
the second generation module is configured to generate a subtitle splicing map from the played video stream starting from the start timestamp using the following method:
extracting video segments in real time from the played video stream starting at the start timestamp, wherein all image frames in each video segment contain the same subtitle information, and extracting one representative image frame from each video segment;
and, once the end timestamp is determined, generating the subtitle splicing map from the extracted representative image frames.
21. The apparatus of claim 16,
the subtitle splicing map generation module comprises a third generation module;
the third generation module is configured to generate a subtitle splicing map from the playing video stream starting from the start timestamp using the following method:
extracting video segments in real time from the played video stream starting at the start timestamp, wherein all image frames in each video segment contain the same subtitle information, extracting one representative image frame from each extracted video segment, and regenerating the subtitle splicing map after each representative image frame is extracted;
and, once the end timestamp is determined, taking the most recently generated subtitle splicing map as the final subtitle splicing map.
22. The apparatus of claim 19, 20 or 21,
the device further comprises: a second end timestamp determination module configured to determine an end timestamp using one of:
determining the end timestamp as the timestamp a fixed duration after the start timestamp;
or taking, as the end timestamp, the moment at which the number of video segments extracted in real time from the played video stream starting at the start timestamp reaches a set number.
23. An apparatus for transmitting a subtitle splicing map, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to execute executable instructions in the memory to implement the steps of the method of any one of claims 1 to 10.
24. A non-transitory computer readable storage medium having stored thereon executable instructions, wherein the executable instructions, when executed by a processor, implement the steps of the method of any one of claims 1 to 10.
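Claim 10 names two ways to pick the representative image frame: highest definition, or maximum contrast between the subtitle pixel set and the non-subtitle pixel set of the subtitle-region block. The contrast option can be sketched as follows; the data model (flat pixel lists with a boolean subtitle mask) and all function names are illustrative assumptions, not the claimed implementation:

```python
def region_contrast(pixels, mask):
    """Contrast of a subtitle-region block: absolute difference between
    the mean grey level of subtitle pixels (mask True) and of
    non-subtitle pixels (mask False)."""
    sub = [p for p, m in zip(pixels, mask) if m]
    non = [p for p, m in zip(pixels, mask) if not m]
    return abs(sum(sub) / len(sub) - sum(non) / len(non))

def pick_representative(frames):
    """Select the frame whose subtitle-region block maximises the
    subtitle/background contrast (the second option of claim 10)."""
    return max(frames, key=lambda f: region_contrast(f["pixels"], f["mask"]))

frames = [
    {"id": 0, "pixels": [200, 200, 60, 60], "mask": [True, True, False, False]},
    {"id": 1, "pixels": [250, 250, 10, 10], "mask": [True, True, False, False]},
]
best = pick_representative(frames)
```

Maximising this contrast favours frames where the rendered subtitle is most legible against its background, which is the intuition behind the claimed selection criterion.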
CN202011137535.5A 2020-10-22 2020-10-22 Method, device and storage medium for transmitting subtitle splicing map Pending CN112261453A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011137535.5A CN112261453A (en) 2020-10-22 2020-10-22 Method, device and storage medium for transmitting subtitle splicing map


Publications (1)

Publication Number Publication Date
CN112261453A true CN112261453A (en) 2021-01-22

Family

ID=74265079


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113905192A (en) * 2021-08-27 2022-01-07 北京达佳互联信息技术有限公司 Subtitle editing method and device, electronic equipment and storage medium
CN114063863A (en) * 2021-11-29 2022-02-18 维沃移动通信有限公司 Video processing method and device and electronic equipment

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103369372A (en) * 2013-07-17 2013-10-23 广州珠江数码集团有限公司 Live television screen capturing system and live television screen capturing method
CN103634683A (en) * 2013-11-29 2014-03-12 乐视致新电子科技(天津)有限公司 Screen capturing method and device for intelligent televisions
CN106028092A (en) * 2016-05-25 2016-10-12 天脉聚源(北京)传媒科技有限公司 Television screenshot sharing method and device
CN106162359A (en) * 2016-07-21 2016-11-23 乐视控股(北京)有限公司 A kind of video pictures portion intercepts method and device
CN109729420A (en) * 2017-10-27 2019-05-07 腾讯科技(深圳)有限公司 Image processing method and device, mobile terminal and computer readable storage medium
CN110502117A (en) * 2019-08-26 2019-11-26 三星电子(中国)研发中心 Screenshot method and electric terminal in electric terminal
CN112929745A (en) * 2018-12-18 2021-06-08 腾讯科技(深圳)有限公司 Video data processing method, device, computer readable storage medium and equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination