CN106454547A - Real-time subtitle playing method and real-time subtitle playing system - Google Patents

Real-time subtitle playing method and real-time subtitle playing system Download PDF

Info

Publication number
CN106454547A
Authority
CN
China
Prior art keywords
media stream
fragment
caption
stream
media
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510491214.8A
Other languages
Chinese (zh)
Other versions
CN106454547B (en)
Inventor
朱小勇
耿立宏
郭志川
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Xinrand Network Technology Co ltd
Original Assignee
Institute of Acoustics CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Acoustics CAS filed Critical Institute of Acoustics CAS
Priority to CN201510491214.8A priority Critical patent/CN106454547B/en
Publication of CN106454547A publication Critical patent/CN106454547A/en
Application granted granted Critical
Publication of CN106454547B publication Critical patent/CN106454547B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/488Data services, e.g. news ticker
    • H04N21/4884Data services, e.g. news ticker for displaying subtitles
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/4302Content synchronisation processes, e.g. decoder synchronisation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845Structuring of content, e.g. decomposing content into time segments
    • H04N21/8456Structuring of content, e.g. decomposing content into time segments by decomposing the content in the time domain, e.g. in time segments

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention discloses a real-time subtitle playing method and a real-time subtitle playing system. The method comprises the steps of: dividing a media stream into a plurality of media stream fragments; extracting the audio information of each media stream fragment; generating caption streams according to the audio information, each caption stream comprising a subtitle file and index information corresponding to the subtitle file; storing the plurality of caption streams; receiving a playing request sent by a terminal, the playing request being used to look up a first caption stream corresponding to a first media stream fragment; and, when the first caption stream has been stored, sending the first caption stream and the first media stream fragment to the terminal, so that the terminal parses the first caption stream to obtain first caption content and plays the first caption content and the first media stream fragment synchronously. The subtitle content and the media stream fragments are thus played synchronously during a live broadcast.

Description

Real-time subtitle playing method and system
Technical field
The present invention relates to the field of multimedia communication, and more particularly to a real-time subtitle playing method and system.
Background technology
In recent years, with the rapid development of broadband network deployment and media technology in China, streaming media services have increasingly become one of the most representative applications on the Internet. Media distribution methods based on segmentation offer fast startup, adaptive bitrate switching, and a good user experience, and have therefore been widely adopted.
At present, streaming media technology is relatively mature and is widely used in Internet information services such as e-commerce, news briefings, live video streaming, video on demand, video conferencing, and instant messaging. To provide users with a richer experience, a subtitle service is usually offered in addition to high-quality video. For on-demand media, staff have enough time to post-process the video and add subtitles. For live broadcasts in which the subtitle display position is fixed or changes little, such as sports lottery numbers or match scores, the subtitle information at the fixed position can be updated in real time based on a subtitle template. For more general live broadcasts, however, subtitles cannot be generated in real time. In chaotic live scenes, or for viewers with hearing impairments, a live broadcast without subtitles means that viewers cannot obtain accurate information, and user satisfaction drops significantly.
Content of the invention
The object of the present invention is to overcome the above deficiencies of the prior art by providing a method that extracts audio information from streaming media fragments during a live broadcast and generates subtitle files from it, so that a terminal can play the subtitle content and the streaming media fragments synchronously.
To achieve the above object, in a first aspect, the present invention provides a real-time subtitle playing method, comprising the following steps:
segmenting a media stream to generate a plurality of media stream fragments;
extracting audio information from each of the plurality of media stream fragments;
generating a caption stream according to the audio information, wherein the caption stream includes a subtitle file and index information corresponding to the subtitle file;
storing a plurality of caption streams;
receiving a playing request sent by a terminal, the playing request being used to look up a first caption stream corresponding to a first media stream fragment;
when it is determined that the first caption stream has been stored, sending the first caption stream and the first media stream fragment to the terminal, so that the terminal parses the first caption stream to obtain first caption content and plays the first caption content and the first media stream fragment synchronously.
Preferably, the method further includes: after the first caption stream has been sent, deleting the stored first caption stream.
Preferably, segmenting the media stream to generate the plurality of media stream fragments further includes:
determining whether one of the plurality of media stream fragments is the first segment of the media stream;
when one of the plurality of media stream fragments is the first segment of the media stream, generating index information corresponding to the first segment.
Preferably, segmenting the media stream to generate the plurality of media stream fragments further includes:
when one of the plurality of media stream fragments is the N-th media stream fragment, determining whether the caption stream corresponding to the (N-1)-th media stream fragment has been stored, where N is an integer greater than 1;
when the caption stream corresponding to the (N-1)-th media stream fragment has been stored, generating index information corresponding to the N-th media stream fragment.
In a second aspect, the present invention provides a real-time subtitle playing method, comprising the following steps:
sending a playing request to a server, the playing request being used to look up a first caption stream corresponding to a first media stream fragment;
receiving the first media stream fragment and the first caption stream corresponding to the first media stream fragment sent by the server;
parsing the first caption stream to obtain first caption content, and playing the first caption content and the first media stream fragment synchronously.
In a third aspect, the present invention provides a server, including: an encoding module, a caption generation module, and a processing module;
the encoding module is configured to segment a media stream to generate a plurality of media stream fragments;
the caption generation module is configured to extract audio information from each of the plurality of media stream fragments, and to generate a caption stream according to the audio information, wherein the caption stream includes a subtitle file and index information corresponding to the subtitle file;
the processing module is configured to store a plurality of caption streams;
the processing module is further configured to receive a playing request sent by a terminal and, according to the playing request, send a first media stream fragment and a first caption stream corresponding to the first media stream fragment to the terminal.
Preferably, the processing module is further configured to delete the stored first caption stream after the first caption stream has been sent.
Preferably, the encoding module is further configured to:
determine whether one of the plurality of media stream fragments is the first segment of the media stream;
when one of the plurality of media stream fragments is the first segment of the media stream, generate index information corresponding to the first segment.
Preferably, the encoding module is further configured to:
when one of the plurality of media stream fragments is the N-th media stream fragment, determine whether the caption stream corresponding to the (N-1)-th media stream fragment has been stored, where N is an integer greater than 1;
when the caption stream corresponding to the (N-1)-th media stream fragment has been stored, generate index information corresponding to the N-th media stream fragment.
In a fourth aspect, the present invention provides a system, including: a terminal and a server;
the terminal includes: a sending module, a receiving module, a parsing module, and a display module;
the sending module is configured to send a playing request to the server, the playing request being used to look up a first caption stream corresponding to a first media stream fragment;
the receiving module is configured to receive the first media stream fragment and the first caption stream corresponding to the first media stream fragment sent by the server;
the parsing module is configured to parse the first caption stream to obtain first caption content;
the display module is configured to play the first caption content and the first media stream fragment synchronously.
By extracting audio information from the streaming media fragments during a live broadcast, the present invention generates a caption stream for each streaming media fragment in real time, so that the terminal can play the caption content and the streaming media fragments synchronously. Meanwhile, because the present invention transmits the caption stream separately from the streaming media fragments, the caption stream can be transmitted in plain text form, which is simple and efficient; the structure of the subtitles is not restricted and can be extended easily, greatly improving the user experience.
Brief description of the drawings
Fig. 1 is a flowchart of a real-time subtitle playing method according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for generating a plurality of media stream fragments according to an embodiment of the present invention;
Fig. 3 is a flowchart of a method for parsing a caption stream according to an embodiment of the present invention;
Fig. 4 is a structural block diagram of a real-time subtitle playing system according to an embodiment of the present invention.
Specific embodiment
To make the technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions are described in further detail below with reference to the accompanying drawings and embodiments.
Fig. 1 is a flowchart of a real-time subtitle playing method according to an embodiment of the present invention. As shown in Fig. 1, the method includes:
Step 110: segment a media stream to generate a plurality of media stream fragments.
Specifically, as shown in Fig. 2, step 110, segmenting the media stream to generate the plurality of media stream fragments, further includes:
Step 111: determine whether a media stream fragment among the plurality of media stream fragments is the first segment of the media stream.
If the media stream fragment is the first segment of the media stream, execute step 113;
if the media stream fragment is the N-th segment, where N is an integer greater than 1, execute step 112.
Step 112: determine whether the caption stream corresponding to the (N-1)-th media stream fragment has been stored, where N is an integer greater than 1.
If the caption stream corresponding to the (N-1)-th media stream fragment has been stored, execute step 113;
if it has not been stored, suspend and wait until the caption stream corresponding to the (N-1)-th media stream fragment has been stored, then execute step 113.
Step 113: generate index information corresponding to the N-th media stream fragment.
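The gating of steps 111 to 113 can be pictured as a small server-side routine that only generates the index information for segment N once the caption stream for segment N-1 has been stored. The following is a minimal sketch under assumptions not taken from the patent: the CaptionStore helper, the wait/notify synchronization, and the live_<n>.srt naming are all illustrative.

```python
import threading

class CaptionStore:
    """Tracks which caption streams have been stored (illustrative helper, not from the patent)."""
    def __init__(self):
        self._stored = set()
        self._cond = threading.Condition()

    def mark_stored(self, n: int):
        # Called once the caption stream for segment n has been written to storage.
        with self._cond:
            self._stored.add(n)
            self._cond.notify_all()

    def wait_until_stored(self, n: int):
        # Suspend until the caption stream for segment n has been stored (step 112's wait).
        with self._cond:
            self._cond.wait_for(lambda: n in self._stored)

def generate_index(store: CaptionStore, n: int) -> dict:
    """Steps 111-113: produce index information for segment n."""
    if n > 1:                                   # step 112: gate on segment n-1
        store.wait_until_stored(n - 1)
    # Step 113: the index entry points at the subtitle file for segment n
    # (file name and URI layout are assumptions for illustration).
    return {"segment": n, "subtitle_uri": f"subtitles/live_{n}.srt"}
```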
Step 120: extract the audio information of each media stream fragment among the plurality of media stream fragments.
It should be noted that if no media stream fragment has been generated yet, this step suspends and waits.
Step 130: generate a caption stream according to the audio information, wherein the caption stream includes a subtitle file and index information corresponding to the subtitle file.
Specifically, when the subtitles are generated, the correspondence between the relative audio time and the current subtitle content is recorded, together with the sequence number of the current fragment within the whole live media stream, and the subtitle file is named with this fragment sequence number as its identifier. The index information points to the storage location of the subtitle file, so that the terminal can request the subtitle file correctly.
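As an illustration only, the plain-text subtitle file for fragment n could record each cue's relative display interval and text and be named after the fragment sequence number. The SRT-like layout and the live_<n>.srt naming below are assumptions, not a format mandated by the embodiment.

```python
def build_subtitle_file(n: int, cues: list[tuple[float, float, str]]) -> tuple[str, str]:
    """Serialize recognized speech for fragment n as a plain-text subtitle file.

    cues: (t1, t2, text) tuples, times relative to the start of fragment n.
    Returns (file_name, file_body); the layout is illustrative.
    """
    file_name = f"live_{n}.srt"                  # named by the fragment sequence number
    lines = []
    for seq, (t1, t2, text) in enumerate(cues, start=1):
        lines.append(f"{seq}")                   # subtitle sequence number inside the file
        lines.append(f"{t1:.3f} --> {t2:.3f}")   # relative display interval
        lines.append(text)
        lines.append("")                         # blank line between cues
    return file_name, "\n".join(lines)

# Example: build_subtitle_file(3, [(0.0, 2.4, "Hello"), (2.5, 5.0, "World")])
```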
Step 140: store the plurality of caption streams.
Specifically, when the first caption stream is stored, a new index file is created and the index information is added to it. When the N-th caption stream is stored, the newly generated index information is appended to the end of the existing index file, and the index information of fragments at the front of the index file that have already been requested is deleted. Meanwhile, the subtitle file is stored at the location specified by the index, and the subtitle file corresponding to the index entry deleted from the front of the index file is removed.
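Step 140's index maintenance behaves like a sliding-window playlist: append the newest entry at the end, drop already-requested entries from the front, and delete the corresponding subtitle files. A rough sketch follows; the in-memory CaptionIndex structure and its fields are assumptions for illustration, not the patent's concrete index format.

```python
import os
from collections import deque

class CaptionIndex:
    """Sliding-window index over stored caption streams (illustrative sketch)."""
    def __init__(self, subtitle_dir: str):
        self.subtitle_dir = subtitle_dir
        self.entries = deque()                   # oldest entry at the left / front

    def add(self, n: int, file_name: str):
        # Append the newly generated index information at the end.
        self.entries.append({"segment": n, "file": file_name, "requested": False})
        # Drop already-requested entries from the front and delete their subtitle files.
        while self.entries and self.entries[0]["requested"]:
            old = self.entries.popleft()
            try:
                os.remove(os.path.join(self.subtitle_dir, old["file"]))
            except FileNotFoundError:
                pass                             # file may already have been cleaned up

    def mark_requested(self, n: int):
        # Record that the terminal has fetched the caption stream for segment n.
        for entry in self.entries:
            if entry["segment"] == n:
                entry["requested"] = True
```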
Step 145: the terminal sends a playing request to the server, the playing request being used to look up the first caption stream corresponding to the first media stream fragment.
Optionally, the playing request may include: requesting a first index file, and requesting the first subtitle file according to the information in the first index file.
Step 150: receive the playing request sent by the terminal, the playing request being used to look up the first caption stream corresponding to the first media stream fragment.
Step 160: determine whether the first caption stream has been stored.
It should be noted that if the first caption stream has not been stored, the request is rejected. After the terminal receives the request-failure information, it increases the frequency at which it requests the caption index file from the server and keeps requesting this index file until the request succeeds or times out, after which it enters the request procedure for the next segment.
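On the terminal side, the retry behaviour described above could look like the sketch below: after a rejected request the client polls the caption index more frequently until it succeeds or a timeout expires. The fetch_index callable, the interval-halving policy, and the timeout value are assumptions for illustration.

```python
import time

def request_index_with_retry(fetch_index, base_interval: float = 1.0,
                             timeout: float = 10.0):
    """Poll the server for the caption index file until success or timeout."""
    deadline = time.monotonic() + timeout
    interval = base_interval
    while time.monotonic() < deadline:
        index = fetch_index()                    # e.g. an HTTP GET of the index file
        if index is not None:                    # request succeeded
            return index
        interval = max(interval / 2, 0.1)        # request more frequently after a failure
        time.sleep(interval)
    return None                                  # timed out; move on to the next segment
```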
Step 165: send the first caption stream and the first media stream fragment to the terminal, so that the terminal parses the first caption stream to obtain the first caption content and plays the first caption content and the first media stream fragment synchronously.
Step 170: the terminal receives the first media stream fragment and the first caption stream corresponding to the first media stream fragment sent by the server.
Step 180: parse the first caption stream to obtain the first caption content.
Specifically, as shown in Fig. 3, parsing the first caption stream includes the following steps:
Step 181: parse the subtitle format.
If the subtitle format is correct, execute step 182; if the subtitle format is incorrect, stop.
Step 182: parse the subtitle file.
Specifically, parse the subtitle file name and extract the segment sequence number n; parse the subtitle sequence number, which is the position of the current subtitle within the subtitle file; and parse the relative display times of the subtitles, e.g. the relative display times of the subtitle with a given sequence number are t1 and t2.
Step 183: generate absolute presentation timestamps.
The absolute display times are T1 = (n-1)·d + t1 and, likewise, T2 = (n-1)·d + t2, where d is the media stream fragment duration in seconds.
If the information parsed in step 182 is insufficient to generate the absolute timestamps, stop.
Step 184: update the subtitle display times.
Specifically, the relative display times are replaced with the absolute display times.
Step 185: generate the caption content.
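Steps 181 to 185 can be condensed into a small parser that extracts the fragment sequence number n from the file name, reads each cue's relative times, and rewrites them as absolute times T = (n-1)·d + t. The file layout matches the illustrative SRT-like format sketched earlier; it is an assumption, not the only possible subtitle format.

```python
import re

def parse_caption_stream(file_name: str, body: str, d: float) -> list[tuple[float, float, str]]:
    """Parse a subtitle file and return cues with absolute display times.

    d is the media stream fragment duration in seconds.
    """
    m = re.match(r"live_(\d+)\.srt$", file_name)          # step 182: extract n from the name
    if m is None:
        raise ValueError("unrecognized subtitle file name")  # steps 181/183: stop on bad input
    n = int(m.group(1))
    cues = []
    for block in body.strip().split("\n\n"):
        lines = block.splitlines()
        if len(lines) < 3:
            continue
        t1, t2 = (float(x) for x in lines[1].split(" --> "))  # relative display times
        text = "\n".join(lines[2:])
        # Steps 183/184: T1 = (n-1)*d + t1, T2 = (n-1)*d + t2
        cues.append(((n - 1) * d + t1, (n - 1) * d + t2, text))
    return cues
```

For example, with d = 10 s, a cue at relative times 2.5 to 5.0 s in fragment n = 3 is rewritten to absolute times 22.5 to 25.0 s.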
Step 190: play the first caption content and the first media stream fragment synchronously.
First, the time position information of the first media stream fragment is obtained and converted into a playback time value in seconds; the current playback time value is then compared with the absolute display time values in the subtitle file, and when the playback time value falls within the absolute display interval of a given sequence number, the subtitle content of that sequence number is displayed.
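Step 190 then reduces to comparing the current playback time, in seconds, against the absolute intervals produced above and showing the matching cue. The sketch below assumes the cue list returned by the parser sketched earlier.

```python
from typing import Optional

def subtitle_for_time(cues: list[tuple[float, float, str]], play_time: float) -> Optional[str]:
    """Return the subtitle text whose absolute display interval contains play_time, if any."""
    for t_start, t_end, text in cues:
        if t_start <= play_time <= t_end:
            return text
    return None

# Example: with the cue rewritten to 22.5-25.0 s above, play_time = 23.0 selects it.
```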
Step 195: after the first caption stream has been sent, delete the stored first caption stream.
By extracting audio information from the streaming media fragments during a live broadcast, the embodiment of the present invention generates a caption stream for each streaming media fragment in real time, so that the terminal can play the caption content and the streaming media fragments synchronously. Meanwhile, because the embodiment transmits the caption stream separately from the streaming media fragments, the caption stream can be transmitted in plain text form, which is simple and efficient; the structure of the subtitles is not restricted and can be extended easily, greatly improving the user experience.
In a second aspect, Fig. 4 is a structural block diagram of a real-time subtitle playing system according to an embodiment of the present invention. As shown in Fig. 4, the system includes a server 40 and a terminal 50.
The server 40 includes: an encoding module 41, a caption generation module 42, and a processing module 43.
The encoding module 41 is configured to segment a media stream to generate a plurality of media stream fragments.
The caption generation module 42 is configured to extract the audio information of each media stream fragment among the plurality of media stream fragments, and to generate a caption stream according to the audio information, wherein the caption stream includes a subtitle file and index information corresponding to the subtitle file.
The processing module 43 is configured to store the plurality of caption streams.
The processing module 43 is further configured to receive the playing request sent by the terminal 50 and, according to the playing request, send the first media stream fragment and the first caption stream corresponding to the first media stream fragment to the terminal 50.
Optionally, the processing module 43 is further configured to delete the stored first caption stream after the first caption stream has been sent.
Specifically, the encoding module 41 is further configured to:
determine whether one of the plurality of media stream fragments is the first segment of the media stream;
when one of the plurality of media stream fragments is the first segment of the media stream, generate index information corresponding to the first segment.
Specifically, the encoding module 41 is further configured to:
when one of the plurality of media stream fragments is the N-th media stream fragment, determine whether the caption stream corresponding to the (N-1)-th media stream fragment has been stored, where N is an integer greater than 1;
when the caption stream corresponding to the (N-1)-th media stream fragment has been stored, generate index information corresponding to the N-th media stream fragment.
The terminal 50 includes: a sending module 51, a receiving module 52, a parsing module 53, and a display module 54.
The sending module 51 is configured to send a playing request to the server 40, the playing request being used to look up the first caption stream corresponding to the first media stream fragment.
The receiving module 52 is configured to receive the first media stream fragment and the first caption stream corresponding to the first media stream fragment sent by the server 40.
The parsing module 53 is configured to parse the first caption stream to obtain the first caption content.
The display module 54 is configured to play the first caption content and the first media stream fragment synchronously.
By extracting audio information from the streaming media fragments during a live broadcast, the embodiment of the present invention generates a caption stream for each streaming media fragment in real time, so that the terminal can play the caption content and the streaming media fragments synchronously. Meanwhile, because the embodiment transmits the caption stream separately from the streaming media fragments, the caption stream can be transmitted in plain text form, which is simple and efficient; the structure of the subtitles is not restricted and can be extended easily, greatly improving the user experience.
Those skilled in the art should further appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the specific application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
The steps of the method or algorithm described in connection with the embodiments disclosed herein may be implemented in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable ROM, a register, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The specific embodiments described above further explain the objects, technical solutions, and beneficial effects of the present invention in detail. It should be understood that the foregoing is merely specific embodiments of the present invention and is not intended to limit the scope of protection of the present invention. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (10)

1. A real-time subtitle playing method, characterized in that the method includes:
segmenting a media stream to generate a plurality of media stream fragments;
extracting audio information from each of the plurality of media stream fragments;
generating a caption stream according to the audio information, wherein the caption stream includes a subtitle file and index information corresponding to the subtitle file;
storing a plurality of the caption streams;
receiving a playing request sent by a terminal, the playing request being used to look up a first caption stream corresponding to a first media stream fragment;
when it is determined that the first caption stream has been stored, sending the first caption stream and the first media stream fragment to the terminal, so that the terminal parses the first caption stream to obtain first caption content and plays the first caption content and the first media stream fragment synchronously.
2. The method according to claim 1, characterized in that the method further includes: after the first caption stream has been sent, deleting the stored first caption stream.
3. The method according to claim 1 or 2, characterized in that segmenting the media stream to generate the plurality of media stream fragments further includes:
determining whether one of the plurality of media stream fragments is the first segment of the media stream;
when one of the plurality of media stream fragments is the first segment of the media stream, generating index information corresponding to the first segment.
4. The method according to claim 3, characterized in that segmenting the media stream to generate the plurality of media stream fragments further includes:
when one of the plurality of media stream fragments is the N-th media stream fragment, determining whether the caption stream corresponding to the (N-1)-th media stream fragment has been stored, where N is an integer greater than 1;
when the caption stream corresponding to the (N-1)-th media stream fragment has been stored, generating index information corresponding to the N-th media stream fragment.
5. A real-time subtitle playing method, characterized in that the method includes:
sending a playing request to a server, the playing request being used to look up a first caption stream corresponding to a first media stream fragment;
receiving the first media stream fragment and the first caption stream corresponding to the first media stream fragment sent by the server;
parsing the first caption stream to obtain first caption content, and playing the first caption content and the first media stream fragment synchronously.
6. A server, characterized in that the server includes: an encoding module, a caption generation module, and a processing module;
the encoding module is configured to segment a media stream to generate a plurality of media stream fragments;
the caption generation module is configured to extract audio information from each of the plurality of media stream fragments, and to generate a caption stream according to the audio information, wherein the caption stream includes a subtitle file and index information corresponding to the subtitle file;
the processing module is configured to store a plurality of the caption streams;
the processing module is further configured to receive a playing request sent by a terminal and, according to the playing request, send a first media stream fragment and a first caption stream corresponding to the first media stream fragment to the terminal.
7. The server according to claim 6, characterized in that
the processing module is further configured to delete the stored first caption stream after the first caption stream has been sent.
8. The server according to claim 6 or 7, characterized in that the encoding module is further configured to:
determine whether one of the plurality of media stream fragments is the first segment of the media stream;
when one of the plurality of media stream fragments is the first segment of the media stream, generate index information corresponding to the first segment.
9. The server according to claim 8, characterized in that the encoding module is further configured to:
when one of the plurality of media stream fragments is the N-th media stream fragment, determine whether the caption stream corresponding to the (N-1)-th media stream fragment has been stored, where N is an integer greater than 1;
when the caption stream corresponding to the (N-1)-th media stream fragment has been stored, generate index information corresponding to the N-th media stream fragment.
10. A real-time subtitle playing system, characterized in that the system includes: a terminal and the server according to any one of claims 6-9;
the terminal includes: a sending module, a receiving module, a parsing module, and a display module;
the sending module is configured to send a playing request to the server, the playing request being used to look up a first caption stream corresponding to a first media stream fragment;
the receiving module is configured to receive the first media stream fragment and the first caption stream corresponding to the first media stream fragment sent by the server;
the parsing module is configured to parse the first caption stream to obtain first caption content;
the display module is configured to play the first caption content and the first media stream fragment synchronously.
CN201510491214.8A 2015-08-11 2015-08-11 real-time caption broadcasting method and system Active CN106454547B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510491214.8A CN106454547B (en) 2015-08-11 2015-08-11 real-time caption broadcasting method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510491214.8A CN106454547B (en) 2015-08-11 2015-08-11 real-time caption broadcasting method and system

Publications (2)

Publication Number Publication Date
CN106454547A true CN106454547A (en) 2017-02-22
CN106454547B CN106454547B (en) 2020-01-31

Family

ID=58093718

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510491214.8A Active CN106454547B (en) 2015-08-11 2015-08-11 real-time caption broadcasting method and system

Country Status (1)

Country Link
CN (1) CN106454547B (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101540847A (en) * 2008-03-21 2009-09-23 株式会社康巴思 Caption producing system and caption producing method
CN101382937A (en) * 2008-07-01 2009-03-11 深圳先进技术研究院 Multimedia resource processing method based on speech recognition and on-line teaching system thereof
US20120170642A1 (en) * 2011-01-05 2012-07-05 Rovi Technologies Corporation Systems and methods for encoding trick play streams for performing smooth visual search of media encoded for adaptive bitrate streaming via hypertext transfer protocol
CN102802044A (en) * 2012-06-29 2012-11-28 华为终端有限公司 Video processing method, terminal and subtitle server
CN103297709A (en) * 2013-06-19 2013-09-11 江苏华音信息科技有限公司 Device for adding Chinese subtitles to Chinese audio video data
CN103561217A (en) * 2013-10-14 2014-02-05 深圳创维数字技术股份有限公司 Method and terminal for generating captions
CN103544978A (en) * 2013-11-07 2014-01-29 上海斐讯数据通信技术有限公司 Multimedia file manufacturing and playing method and intelligent terminal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2019194742A1 (en) * 2018-04-04 2019-10-10 Nooggi Pte Ltd A method and system for promoting interaction during live streaming events
US11277674B2 (en) 2018-04-04 2022-03-15 Nooggi Pte Ltd Method and system for promoting interaction during live streaming events

Also Published As

Publication number Publication date
CN106454547B (en) 2020-01-31

Similar Documents

Publication Publication Date Title
CN108401192B (en) Video stream processing method and device, computer equipment and storage medium
CN109168078B (en) Video definition switching method and device
CN108566558B (en) Video stream processing method and device, computer equipment and storage medium
EP3334175A1 (en) Streaming media and caption instant synchronization displaying and matching processing method, device and system
CN103069769B (en) For the special-effect mode transmitted through the network crossfire of decoded video data
CN102713883B (en) Audio segmentation is carried out with the compulsory frame sign of codec
CN106993239B (en) Information display method in live broadcast process
CN106454493B (en) Currently playing TV program information querying method and smart television
US10981056B2 (en) Methods and systems for determining a reaction time for a response and synchronizing user interface(s) with content being rendered
US20150007218A1 (en) Method and apparatus for frame accurate advertisement insertion
CN109089127A (en) A kind of video-splicing method, apparatus, equipment and medium
JP2007528144A (en) Method and apparatus for generating and detecting a fingerprint functioning as a trigger marker in a multimedia signal
CN103763581A (en) Method and system for achieving back view of live program
KR101883018B1 (en) Method and device for providing supplementary content in 3d communication system
CN110049370A (en) Metadata associated with currently playing TV programme is identified using audio stream
CN101232612A (en) Assistant medium playing method triggered based on video contents
CN106792114A (en) The changing method and device of captions
CN106851326A (en) A kind of playing method and device
CN110519627B (en) Audio data synchronization method and device
CN112929730A (en) Bullet screen processing method and device, electronic equipment, storage medium and system
CN109525852B (en) Live video stream processing method, device and system and computer readable storage medium
JP6948934B2 (en) Content processing systems, terminals, and programs
TW201225669A (en) System and method for synchronizing with multimedia broadcast program and computer program product thereof
CN113099282B (en) Data processing method, device and equipment
CN106454547A (en) Real-time subtitle playing method and real-time subtitle playing system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210804

Address after: Room 1601, 16th floor, East Tower, Ximei building, No. 6, Changchun Road, high tech Industrial Development Zone, Zhengzhou, Henan 450001

Patentee after: Zhengzhou xinrand Network Technology Co.,Ltd.

Address before: 100190, No. 21 West Fourth Ring Road, Beijing, Haidian District

Patentee before: INSTITUTE OF ACOUSTICS, CHINESE ACADEMY OF SCIENCES

TR01 Transfer of patent right