CN106454547B - real-time caption broadcasting method and system - Google Patents
- Publication number
- CN106454547B, CN201510491214.8A, CN201510491214A
- Authority
- CN
- China
- Prior art keywords
- media stream
- subtitle
- stream
- segments
- segment
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/47—End-user applications
- H04N21/488—Data services, e.g. news ticker
- H04N21/4884—Data services, e.g. news ticker for displaying subtitles
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/4302—Content synchronisation processes, e.g. decoder synchronisation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/83—Generation or processing of protective or descriptive data associated with content; Content structuring
- H04N21/845—Structuring of content, e.g. decomposing content into time segments
- H04N21/8456—Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Abstract
The invention discloses a real-time subtitle playing method and system. The method comprises: segmenting a media stream to generate a plurality of media stream segments; extracting audio information of each of the plurality of media stream segments; generating subtitle streams according to the audio information; storing the plurality of subtitle streams; receiving a playing request sent by a terminal, wherein the playing request is used for searching for a first subtitle stream corresponding to a first streaming media segment; and, when it is determined that the first subtitle stream is stored, sending the first subtitle stream and the first streaming media segment to the terminal, so that the terminal parses the first subtitle stream to obtain first subtitle content and plays the first subtitle content and the first media stream segment synchronously.
Description
Technical Field
The invention relates to the field of multimedia communication, and in particular to a real-time caption broadcasting method and system.
Background
In recent years, with the rapid development of broadband network construction and media technology in China, streaming media services have gradually become one of the most representative applications on the Internet. Media distribution based on the segmentation idea offers fast start-up, adaptive bit-rate switching and good user experience, and has therefore been widely adopted.
At present, streaming media technology is relatively mature and is widely applied to Internet information services such as electronic commerce, news distribution, live video, video on demand, video conferencing and instant messaging. To give users a richer experience, subtitle services are provided in addition to high-quality video. For on-demand media, staff have enough time to post-process the video and add subtitles. For live broadcasts in which the subtitle display position is fixed or unchanging, such as sports lottery numbers or match scores, the subtitle information at the fixed position can be updated in real time from a subtitle template. However, for live broadcasts with a wider range of content, subtitles cannot be generated in real time; in live venues where conditions are relatively chaotic, or for viewers with hearing impairment, accurate information cannot be obtained, and user satisfaction is greatly reduced.
Disclosure of Invention
The invention aims to overcome the above defects in the prior art and provides a method for generating subtitle files by extracting the audio information of streaming media segments during live broadcast, so that a terminal can play subtitle content and streaming media segments synchronously.
To achieve the above object, in a first aspect, the present invention provides a real-time subtitle broadcasting method, comprising the following steps:
segmenting the media stream to generate a plurality of media stream segments;
extracting audio information of each of a plurality of media stream segments;
generating a subtitle stream according to the audio information, wherein the subtitle stream comprises a subtitle file and index information corresponding to the subtitle file;
storing a plurality of subtitle streams;
receiving a playing request sent by a terminal, wherein the playing request is used for searching for a first subtitle stream corresponding to a first streaming media segment;
and when it is determined that the first subtitle stream is stored, sending the first subtitle stream and the first streaming media segment to the terminal, so that the terminal parses the first subtitle stream to obtain first subtitle content and plays the first subtitle content and the first streaming media segment synchronously.
Preferably, the method further comprises deleting the stored first subtitle stream once the first subtitle stream has been sent.
Preferably, segmenting the media stream into a plurality of media stream segments further comprises:
determining whether a media stream segment among the plurality of media stream segments is the first media stream segment;
when a media stream segment among the plurality of media stream segments is the first media stream segment, generating index information corresponding to the first media stream segment.
Preferably, segmenting the media stream into a plurality of media stream segments further comprises:
when a media stream segment among the plurality of media stream segments is the Nth media stream segment, determining whether a subtitle stream corresponding to the (N-1)th media stream segment has been stored, wherein N is an integer greater than 1;
and when the subtitle stream corresponding to the (N-1)th media stream segment has been stored, generating index information corresponding to the Nth media stream segment.
In a second aspect, the present invention provides a real-time subtitle broadcasting method, comprising the following steps:
sending a playing request to a server, wherein the playing request is used for searching for a first subtitle stream corresponding to a first streaming media segment;
receiving the first streaming media segment and the first subtitle stream corresponding to the first streaming media segment sent by the server;
and parsing the first subtitle stream to obtain first subtitle content, and playing the first subtitle content and the first media stream segment synchronously.
In a third aspect, the invention provides a server, comprising an encoding module, a subtitle generating module and a processing module;
the encoding module is used for segmenting the media stream to generate a plurality of media stream segments;
the subtitle generating module is used for extracting the audio information of each media stream segment in the plurality of media stream segments; generating a subtitle stream according to the audio information, wherein the subtitle stream comprises a subtitle file and index information corresponding to the subtitle file;
the processing module is used for storing a plurality of subtitle streams;
the processing module is further configured to receive a play request sent by the terminal, and to send the first media stream segment and the first subtitle stream corresponding to the first media stream segment to the terminal according to the play request.
Preferably, the processing module is further configured to delete the stored first subtitle stream once the first subtitle stream has been sent.
Preferably, the encoding module is further configured to:
determining whether a media stream segment among the plurality of media stream segments is the first media stream segment;
when a media stream segment among the plurality of media stream segments is the first media stream segment, generating index information corresponding to the first media stream segment.
Preferably, the encoding module is further configured to:
when a media stream segment among the plurality of media stream segments is the Nth media stream segment, determining whether a subtitle stream corresponding to the (N-1)th media stream segment has been stored, wherein N is an integer greater than 1;
and when the subtitle stream corresponding to the (N-1)th media stream segment has been stored, generating index information corresponding to the Nth media stream segment.
In a fourth aspect, the invention provides a system comprising a terminal and a server;
the terminal includes: the device comprises a sending module, a receiving module, an analysis module and a display module;
the sending module is used for sending a playing request to the server, wherein the playing request is used for searching th subtitle stream corresponding to th streaming media segment;
the receiving module is used for receiving the th streaming media segment sent by the server and the th subtitle stream corresponding to the th media stream segment;
the analysis module is used for analyzing th subtitle stream to obtain th subtitle content;
the display module is used for synchronously playing th subtitle content and th media stream segment.
The invention generates the subtitle stream corresponding to each streaming media segment in real time by extracting the audio information of the streaming media segments during live broadcast, so that the terminal can play the subtitle content and the streaming media segments synchronously. Meanwhile, the invention transmits the subtitle stream and the streaming media segment separately, so the subtitle stream can be carried in a simple text format, which is simple and efficient; the subtitle structure is not restricted, so subtitles can be easily extended, and user experience is greatly improved.
Drawings
Fig. 1 is a schematic process diagram of a real-time subtitle playing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for generating multiple media stream segments according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for parsing a subtitle stream according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a real-time caption broadcasting system according to an embodiment of the present invention.
Detailed Description
To make the technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the present invention are described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of a real-time subtitle playing method provided by an embodiment of the present invention. As shown in fig. 1, the method includes:
step 110, segmenting the media stream to generate a plurality of media stream segments.
Specifically, as shown in fig. 2, step 110 of segmenting the media stream to generate a plurality of media stream segments further includes:
Step 111, determining whether a media stream segment among the plurality of media stream segments is the first media stream segment.
When the media stream segment is the first media stream segment, execute step 113;
when the media stream segment is the Nth media stream segment, where N is an integer greater than 1, execute step 112.
Step 112, determining whether the subtitle stream corresponding to the (N-1)th media stream segment has been stored, where N is an integer greater than 1.
When the subtitle stream corresponding to the (N-1)th media stream segment has been stored, execute step 113;
if the subtitle stream corresponding to the (N-1)th media stream segment has not yet been stored, wait until it has been stored, and then execute step 113.
Step 113, generating index information corresponding to the Nth media stream segment.
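For illustration only, the gating logic of steps 111 to 114 can be sketched as follows; this is a minimal sketch, and the `subtitle_store.has_subtitle` lookup, the polling interval and the segment URI naming are assumptions rather than anything specified by the patent.

```python
import time

def generate_segment_index(segment_no, subtitle_store, poll_interval=0.1):
    """Sketch of steps 111-114: index information for segment N is only
    generated once the subtitle stream of segment N-1 has been stored (N > 1)."""
    if segment_no > 1:
        # Step 112: wait until the subtitle stream of segment N-1 is stored.
        while not subtitle_store.has_subtitle(segment_no - 1):
            time.sleep(poll_interval)
    # Step 113: generate index information for the current media stream segment.
    return {"segment": segment_no, "uri": f"segment_{segment_no}.ts"}
```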
Step 120, extracting audio information of each media stream segment of the plurality of media stream segments.
If no media stream segment has been generated yet, wait until one is available.
Step 130, generating a subtitle stream according to the audio information, wherein the subtitle stream includes a subtitle file and index information corresponding to the subtitle file.
Specifically, when a subtitle is generated, the correspondence between the relative time of the audio and the current subtitle content must be recorded, together with the sequence number of the current audio segment within the whole live media stream, and the subtitle file is named using the media segment sequence number as the distinguishing feature. The index information points to the storage location of the subtitle file so that the terminal can correctly request the subtitle file.
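As a non-authoritative illustration of this naming and indexing scheme, the sketch below builds a plain-text subtitle file named by the media segment sequence number, together with the index entry that points to it; the cue layout, file suffix and helper names are all assumed here, since the patent does not prescribe a concrete subtitle format.

```python
def build_subtitle_stream(segment_no, cues):
    """cues: list of (t1, t2, text) tuples with times relative to the segment start.
    Returns the subtitle file name, its body, and the index entry pointing to it."""
    lines = []
    for ordinal, (t1, t2, text) in enumerate(cues, start=1):
        lines.append(str(ordinal))                  # ordinal of the subtitle in the file
        lines.append(f"{t1:.3f} --> {t2:.3f}")      # relative display times
        lines.append(text)
        lines.append("")                            # blank line separates cues
    file_name = f"subtitle_{segment_no}.txt"        # named by the segment sequence number
    index_entry = {"segment": segment_no, "subtitle_uri": file_name}
    return file_name, "\n".join(lines), index_entry
```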
Step 140, storing a plurality of subtitle streams.
When the Nth subtitle stream is stored, the newly generated index information is appended to the tail of the existing index file, and the index information at the front of the index file whose segments have already been requested is deleted.
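A minimal sketch of this sliding-window index maintenance, assuming the index file is held as an in-memory list of entries; the eviction rule (drop already-requested entries from the front) follows the text above, while the data layout is hypothetical.

```python
def update_index(index_entries, new_entry, requested_segments):
    """Step 140: append the newly generated index entry and drop entries at the
    front of the index whose segments have already been requested."""
    index_entries.append(new_entry)
    while index_entries and index_entries[0]["segment"] in requested_segments:
        index_entries.pop(0)
    return index_entries
```

Keeping the index short this way bounds both the index file size and the amount of stale subtitle data the server has to retain.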
Step 145, the terminal sends a play request to the server, where the play request is used to search for the first subtitle stream corresponding to the first streaming media segment.
Optionally, the play request may consist of requesting the first index file and then requesting the first subtitle file according to the information in the first index file.
Step 150, receiving a play request sent by the terminal, where the play request is used to search for the first subtitle stream corresponding to the first streaming media segment.
When it is determined that the first subtitle stream has been stored, execute step 160.
After receiving request-failure information, the terminal increases the frequency at which it requests the subtitle index file from the server and keeps requesting the index file until the request succeeds or times out, after which it moves on to the request process for the next segment.
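The terminal-side retry behaviour described above could look like the following sketch; `fetch_index`, the concrete intervals and the interval-halving rule are assumptions used only to make "increasing the request frequency" concrete.

```python
import time

def request_index_with_retry(fetch_index, timeout=10.0, interval=1.0, min_interval=0.2):
    """Keep requesting the subtitle index after a failure, shortening the polling
    interval each time, until the request succeeds or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        index = fetch_index()
        if index is not None:                        # request succeeded
            return index
        interval = max(min_interval, interval / 2)   # request more frequently
        time.sleep(interval)
    return None                                      # timed out; move on to the next segment
```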
Step 165, sending the first subtitle stream and the first streaming media segment to the terminal, so that the terminal can parse the first subtitle stream to obtain the first subtitle content and play the first subtitle content and the first streaming media segment synchronously.
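Steps 150 to 165 amount to the server-side handling sketched below; the storage lookups and the shape of the failure response are hypothetical and only serve to make the flow concrete.

```python
def handle_play_request(segment_no, subtitle_store, media_store):
    """Steps 150-165: if the subtitle stream for the requested segment is stored,
    return it together with the media stream segment; otherwise report failure."""
    if subtitle_store.has_subtitle(segment_no):
        return {
            "status": "ok",
            "subtitle_stream": subtitle_store.get(segment_no),
            "media_segment": media_store.get(segment_no),
        }
    return {"status": "request_failed"}   # terminal will retry as described above
```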
Step 170, the terminal receives the first streaming media segment and the first subtitle stream corresponding to the first streaming media segment sent by the server.
Step 180, parsing the first subtitle stream to obtain the first subtitle content.
Specifically, as shown in fig. 3, the steps for parsing the first subtitle stream are as follows:
step 181, parsing the subtitle format.
If the subtitle format is correct, go to step 182; if the subtitle format is incorrect, the operation stops.
Step 182, the subtitle file is parsed.
Specifically, the subtitle file name is parsed and the segment sequence number n is extracted; the ordinal number of each subtitle is parsed, which is the position of the current subtitle within the subtitle file; and the relative display times of each subtitle are parsed, the relative display times of the subtitle with a given ordinal number being denoted t1 and t2.
Step 183, generate absolute display timestamp.
The absolute display time is T1 = (n-1)·d + t1 and, in the same way, T2 = (n-1)·d + t2, where d is the media stream segment duration in seconds.
If the information parsed in step 182 is not sufficient to generate an absolute timestamp, the operation stops.
Step 184, updating the subtitle display time.
Specifically, the relative display time is replaced with the absolute display time.
Step 185, generating subtitle content.
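Steps 181 to 185 can be illustrated with the following sketch, which extracts the segment number n from the file name and applies T = (n-1)·d + t to every cue; the file layout mirrors the hypothetical subtitle format sketched earlier and is not mandated by the patent.

```python
def parse_subtitle_stream(file_name, body, segment_duration):
    """Steps 181-185: extract the segment number n from the file name, parse each
    cue's ordinal and relative times t1/t2, and convert them to absolute times."""
    n = int(file_name.rsplit("_", 1)[1].split(".")[0])   # step 182: segment number n
    cues = []
    for block in (b for b in body.split("\n\n") if b.strip()):
        lines = block.splitlines()
        ordinal = int(lines[0])
        t1, t2 = (float(x) for x in lines[1].split(" --> "))
        text = "\n".join(lines[2:])
        # Step 183: absolute display time T = (n - 1) * d + t, with d in seconds.
        cues.append({
            "ordinal": ordinal,
            "start": (n - 1) * segment_duration + t1,
            "end": (n - 1) * segment_duration + t2,
            "text": text,
        })
    return cues   # step 185: subtitle content with absolute display times
```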
Step 190, synchronously playing the first subtitle content and the first media stream segment.
First, the time position information of the first media stream segment is obtained and converted into a playback time value in seconds; the current playback time value is then compared with the absolute display times in the subtitle file, and when the playback time falls within the absolute display interval of a given ordinal number, the subtitle content under that ordinal number is displayed.
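A sketch of this display decision, assuming the player exposes its current position in seconds and the cue structure produced by the parsing sketch above.

```python
def subtitle_for_position(cues, playback_time_s):
    """Step 190: show the subtitle whose absolute display interval
    contains the current playback time (in seconds)."""
    for cue in cues:
        if cue["start"] <= playback_time_s <= cue["end"]:
            return cue["text"]
    return None   # no subtitle to display at this moment
```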
Step 195, deleting the stored first subtitle stream once the first subtitle stream has been sent.
The embodiment of the invention generates the subtitle stream corresponding to each streaming media segment in real time by extracting the audio information of the streaming media segments during live broadcast, so that the terminal can play the subtitle content and the streaming media segments synchronously. Meanwhile, the embodiment of the invention transmits the subtitle stream and the streaming media segment separately, so the subtitle stream can be carried in a simple text format, which is simple and efficient; the subtitle structure is not restricted, so subtitles can be easily extended, and user experience is greatly improved.
In a second aspect, fig. 4 is a structural block diagram of a real-time caption broadcasting system according to an embodiment of the present invention. As shown in fig. 4, the system includes a server 40 and a terminal 50.
The server 40 includes: an encoding module 41, a subtitle generating module 42, and a processing module 43.
The encoding module 41 is configured to segment the media stream to generate a plurality of media stream segments.
The subtitle generating module 42 is configured to extract audio information of each of the plurality of media stream segments; and generating a subtitle stream according to the audio information, wherein the subtitle stream comprises a subtitle file and index information corresponding to the subtitle file.
The processing module 43 is configured to store a plurality of subtitle streams.
The processing module 43 is further configured to receive a play request sent by the terminal 50, and to send the first media stream segment and the first subtitle stream corresponding to the first media stream segment to the terminal 50 according to the play request.
Optionally, the processing module 43 is further configured to delete the stored first subtitle stream once the first subtitle stream has been sent.
In particular, the encoding module 41 is further configured to:
determining whether a media stream segment among the plurality of media stream segments is the first media stream segment;
when a media stream segment among the plurality of media stream segments is the first media stream segment, generating index information corresponding to the first media stream segment.
In particular, the encoding module 41 is further configured to:
when a media stream segment among the plurality of media stream segments is the Nth media stream segment, determining whether a subtitle stream corresponding to the (N-1)th media stream segment has been stored, wherein N is an integer greater than 1;
and when the subtitle stream corresponding to the (N-1)th media stream segment has been stored, generating index information corresponding to the Nth media stream segment.
The terminal 50 includes: a sending module 51, a receiving module 52, a parsing module 53 and a display module 54.
The sending module 51 is configured to send a play request to the server 40, where the play request is used to find the first subtitle stream corresponding to the first streaming media segment.
The receiving module 52 is configured to receive the first streaming media segment sent by the server 40 and the first subtitle stream corresponding to the first streaming media segment.
The parsing module 53 is configured to parse the first subtitle stream to obtain the first subtitle content.
The display module 54 is configured to play the first subtitle content and the first media stream segment synchronously.
The embodiment of the invention generates the subtitle stream corresponding to each streaming media segment in real time by extracting the audio information of the streaming media segments during live broadcast, so that the terminal can play the subtitle content and the streaming media segments synchronously. Meanwhile, the embodiment of the invention transmits the subtitle stream and the streaming media segment separately, so the subtitle stream can be carried in a simple text format, which is simple and efficient; the subtitle structure is not restricted, so subtitles can be easily extended, and user experience is greatly improved.
It should further be appreciated that the exemplary elements and algorithm steps described in connection with the embodiments disclosed herein can be implemented as electronic hardware, computer software, or a combination of both; the exemplary components and steps have been described above generally in terms of functionality to clearly illustrate the interchangeability of hardware and software.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments, objects, technical solutions and advantages of the present invention have been described in detail. It should be understood that the above-mentioned embodiments are only illustrative and are not intended to limit the scope of the present invention; any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present invention shall be included in the scope of the present invention.
Claims (8)
1. A real-time caption broadcasting method, characterized in that the method comprises:
segmenting a media stream to generate a plurality of media stream segments, wherein it is determined whether a media stream segment among the plurality of media stream segments is the first media stream segment; when a media stream segment among the plurality of media stream segments is the first media stream segment, index information corresponding to the first media stream segment is generated; when a media stream segment among the plurality of media stream segments is the Nth media stream segment, it is determined whether a subtitle stream corresponding to the (N-1)th media stream segment has been stored, wherein N is an integer greater than 1; and when the subtitle stream corresponding to the (N-1)th media stream segment has been stored, index information corresponding to the Nth media stream segment is generated;
extracting audio information of each of the plurality of media stream segments;
generating a subtitle stream according to the audio information, wherein the subtitle stream comprises a subtitle file and index information corresponding to the subtitle file;
when the Nth subtitle stream is stored, appending the newly generated index information to the tail of the existing index file, and deleting the index information at the front of the index file that has already been requested;
receiving a playing request sent by a terminal, wherein the playing request is used for searching for a first subtitle stream corresponding to a first media stream segment;
and when it is determined that the first subtitle stream is stored, sending the first subtitle stream and the first media stream segment to the terminal, so that the terminal parses the first subtitle stream to obtain first subtitle content and plays the first subtitle content and the first media stream segment synchronously.
2. The method of claim 1, further comprising deleting the stored first subtitle stream once the first subtitle stream has been sent.
3. A real-time caption broadcasting method, characterized in that the method comprises:
sending a playing request to a server, wherein the playing request is used for searching for a first subtitle stream corresponding to a first media stream segment, the first media stream segment being one of a plurality of media stream segments, wherein, when a media stream segment among the plurality of media stream segments is the first media stream segment, index information corresponding to the first media stream segment is generated; when a media stream segment among the plurality of media stream segments is the Nth media stream segment, it is determined whether a subtitle stream corresponding to the (N-1)th media stream segment has been stored, wherein N is an integer greater than 1; and when the subtitle stream corresponding to the (N-1)th media stream segment has been stored, index information corresponding to the Nth media stream segment is generated;
receiving the first media stream segment sent by the server and the first subtitle stream corresponding to the first media stream segment, wherein, when the server stores the first subtitle stream, an index file is newly created and index information is added to it; when the server stores the Nth subtitle stream, the newly generated index information is appended to the tail of the existing index file while the index information at the front of the index file that has already been requested is deleted, and at the same time the subtitle file is stored at the location specified by the index and the subtitle file corresponding to the front-most index of the index file is deleted;
and parsing the first subtitle stream to obtain first subtitle content, and playing the first subtitle content and the first media stream segment synchronously.
4. A server, characterized by comprising an encoding module, a subtitle generating module and a processing module;
the encoding module is used for segmenting a media stream to generate a plurality of media stream segments, wherein it is determined whether a media stream segment among the plurality of media stream segments is the first media stream segment; when a media stream segment among the plurality of media stream segments is the first media stream segment, index information corresponding to the first media stream segment is generated; when a media stream segment among the plurality of media stream segments is the Nth media stream segment, it is determined whether a subtitle stream corresponding to the (N-1)th media stream segment has been stored, wherein N is an integer greater than 1; and when the subtitle stream corresponding to the (N-1)th media stream segment has been stored, index information corresponding to the Nth media stream segment is generated;
the subtitle generating module is used for extracting audio information of each media stream segment in the plurality of media stream segments; generating a subtitle stream according to the audio information, wherein the subtitle stream comprises a subtitle file and index information corresponding to the subtitle file;
the processing module is used for storing a plurality of subtitle streams, wherein an index file is newly created and index information is added to it when the first subtitle stream is stored; when the Nth subtitle stream is stored, the newly generated index information is appended to the tail of the existing index file, and the index information at the front of the index file that has already been requested is deleted;
the processing module is further configured to receive a play request sent by a terminal, and to send the first media stream segment and the first subtitle stream corresponding to the first media stream segment to the terminal according to the play request.
5. The server according to claim 4,
the processing module is further configured to delete the stored first subtitle stream once the first subtitle stream has been sent.
6. The server of claim 4, wherein the encoding module is further configured to:
determining whether a media stream segment among the plurality of media stream segments is the first media stream segment;
when a media stream segment among the plurality of media stream segments is the first media stream segment, generating index information corresponding to the first media stream segment.
7. The server of claim 4, wherein the encoding module is further configured to:
when a media stream segment among the plurality of media stream segments is the Nth media stream segment, determining whether a subtitle stream corresponding to the (N-1)th media stream segment has been stored, wherein N is an integer greater than 1;
and when the subtitle stream corresponding to the (N-1)th media stream segment has been stored, generating index information corresponding to the Nth media stream segment.
8. A real-time subtitle broadcasting system, characterized in that the system comprises a terminal and a server according to any one of claims 4 to 7;
the terminal includes: the device comprises a sending module, a receiving module, an analysis module and a display module;
the sending module is configured to send a play request to the server, where the play request is used for searching for a first subtitle stream corresponding to a first media stream segment;
the receiving module is configured to receive the first media stream segment sent by the server and the first subtitle stream corresponding to the first media stream segment;
the parsing module is configured to parse the first subtitle stream to obtain first subtitle content;
the display module is configured to play the first subtitle content and the first media stream segment synchronously.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510491214.8A CN106454547B (en) | 2015-08-11 | 2015-08-11 | real-time caption broadcasting method and system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510491214.8A CN106454547B (en) | 2015-08-11 | 2015-08-11 | real-time caption broadcasting method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106454547A CN106454547A (en) | 2017-02-22 |
CN106454547B true CN106454547B (en) | 2020-01-31 |
Family
ID=58093718
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510491214.8A Active CN106454547B (en) | 2015-08-11 | 2015-08-11 | real-time caption broadcasting method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106454547B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111955013B (en) * | 2018-04-04 | 2023-03-03 | 诺基私人有限公司 | Method and system for facilitating interactions during real-time streaming events |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8914534B2 (en) * | 2011-01-05 | 2014-12-16 | Sonic Ip, Inc. | Systems and methods for adaptive bitrate streaming of media stored in matroska container files using hypertext transfer protocol |
- 2015-08-11: application CN201510491214.8A (CN), granted as CN106454547B, status Active
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101540847A (en) * | 2008-03-21 | 2009-09-23 | 株式会社康巴思 | Caption creation system and caption creation method |
CN101382937A (en) * | 2008-07-01 | 2009-03-11 | 深圳先进技术研究院 | Multimedia resource processing method based on speech recognition and on-line teaching system thereof |
CN102802044A (en) * | 2012-06-29 | 2012-11-28 | 华为终端有限公司 | Video processing method, terminal and subtitle server |
CN103297709A (en) * | 2013-06-19 | 2013-09-11 | 江苏华音信息科技有限公司 | Device for adding Chinese subtitles to Chinese audio video data |
CN103561217A (en) * | 2013-10-14 | 2014-02-05 | 深圳创维数字技术股份有限公司 | Method and terminal for generating captions |
CN103544978A (en) * | 2013-11-07 | 2014-01-29 | 上海斐讯数据通信技术有限公司 | Multimedia file manufacturing and playing method and intelligent terminal |
Also Published As
Publication number | Publication date |
---|---|
CN106454547A (en) | 2017-02-22 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
TR01 | Transfer of patent right | ||
TR01 | Transfer of patent right |
Effective date of registration: 2021-08-04
Address after: Room 1601, 16th floor, East Tower, Ximei building, No. 6, Changchun Road, High-tech Industrial Development Zone, Zhengzhou, Henan 450001
Patentee after: Zhengzhou xinrand Network Technology Co.,Ltd.
Address before: No. 21 West Fourth Ring Road, Haidian District, Beijing 100190
Patentee before: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES