CN103024517A - Method for synchronously playing streaming media audios and videos based on parallel processing - Google Patents

Method for synchronously playing streaming media audios and videos based on parallel processing

Info

Publication number
CN103024517A
CN103024517A (application numbers CN2012105460549A, CN201210546054A)
Authority
CN
China
Prior art keywords
video
timestamp
audio
queue
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2012105460549A
Other languages
Chinese (zh)
Inventor
吴小伟
刘念林
李汶隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Jiuzhou Electric Group Co Ltd
Original Assignee
Sichuan Jiuzhou Electric Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Jiuzhou Electric Group Co Ltd
Priority to CN2012105460549A priority Critical patent/CN103024517A/en
Publication of CN103024517A publication Critical patent/CN103024517A/en
Pending legal-status Critical Current


Landscapes

  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a method for synchronized playback of streaming-media audio and video based on parallel processing, in the field of audio/video image processing. The playback of streaming-media audio/video data, coming from one or more network service points or already available locally, is divided into steps: audio/video acquisition, audio/video splitting, video decoding, audio decoding, timestamp error analysis, video rendering, and audio playback. Based on the post-split timestamp, both video playback (performed after video rendering) and audio playback test whether a playback condition is satisfied, so that audio and video play in synchrony and the mismatch between video decoding speed and rendering speed is resolved. The steps can run on the same multiprocessor server or on different network-connected servers; started in parallel, they achieve parallel processing in time and/or space.

Description

Method for synchronized playback of streaming-media audio and video based on parallel processing
Technical field
The present invention relates to the field of streaming-media audio/video image processing, and in particular to a method for playing streaming-media audio and video.
Background technology
The mainstream video codec standards today include MJPEG (Motion JPEG, a video compression format in which every frame is independently JPEG-coded) and H.264. H.264 adopts the traditional hybrid coding structure, which combines temporal and spatial prediction, transform, quantization, and entropy coding, and marks coded video frames as different frame types: I-frames, B-frames, and P-frames. As a result, video decoding speed varies with frame type, while the video rendering frame rate is fixed, producing a mismatch between video decoding speed and rendering speed.
Because an audio/video stream must be played in synchrony, while the audio and video data are decoded and output asynchronously, audio and video playback will drift out of sync unless a synchronization output-control mechanism is adopted.
Summary of the invention
The object of the invention is to address the mismatch between decoding speed and rendering speed and/or the loss of audio/video synchronization in current streaming-media playback, by providing a method for synchronized playback of streaming-media audio and video based on parallel processing.
To achieve this object, the following technical scheme is adopted:
A method for synchronized playback of streaming-media audio and video based on parallel processing, in which the steps of audio/video acquisition, audio/video splitting, video decoding, audio decoding, timestamp error analysis, video rendering, and audio playback are processed in parallel in time or in space; the timestamp error is the audio/video timestamp error between the video decode timestamp and the audio decode timestamp.
Before the audio/video acquisition step, a buffer space is constructed. The buffer space comprises the audio/video mixing buffer queue, the video decoding buffer queue, the audio decoding buffer queue, the video rendering buffer queue, the video decode timestamp buffer queue, the audio play buffer queue, the audio decode timestamp buffer queue, and the audio/video error timestamp buffer queue.
The audio/video acquisition step inserts the mixed audio/video packets obtained from the streaming-media source, one by one, into the pre-constructed audio/video mixing buffer queue.
The audio/video splitting step removes the mixed audio/video packets from the mixing buffer queue in order and applies a splitting algorithm to each, extracting a video stream, an audio stream, and a post-split timestamp. It combines the video stream and the post-split timestamp into a video decode node and inserts it into the pre-constructed video decoding buffer queue, and combines the audio stream and the post-split timestamp into an audio decode node and inserts it into the pre-constructed audio decoding buffer queue.
The video decoding step removes video decode nodes from the video decoding buffer queue in order and decodes the video stream in each node, obtaining the decoded video stream and a video decode timestamp. It combines the decoded video stream and the post-split timestamp into a video render node and inserts it into the pre-constructed video rendering buffer queue, and combines the video decode timestamp and the post-split timestamp into a video decode timestamp node and inserts it into the pre-constructed video decode timestamp buffer queue.
The audio decoding step removes audio decode nodes from the audio decoding buffer queue in order and decodes the audio stream in each node, obtaining the decoded audio stream and an audio decode timestamp. It combines the decoded audio stream and the post-split timestamp into an audio play node and inserts it into the pre-constructed audio play buffer queue, and combines the audio decode timestamp and the post-split timestamp into an audio decode timestamp node and inserts it into the pre-constructed audio decode timestamp buffer queue.
The timestamp error analysis step uses the post-split timestamp as the key to look up the corresponding video decode timestamp node in the video decode timestamp buffer queue, and again uses the post-split timestamp as the key to look up the corresponding audio decode timestamp node in the audio decode timestamp buffer queue. From these it obtains the audio/video error timestamp = video decode timestamp − audio decode timestamp, then combines the error timestamp and the post-split timestamp into an audio/video error timestamp node and inserts it into the pre-constructed audio/video error timestamp buffer queue.
The video rendering step removes video render nodes from the video rendering buffer queue in order, obtaining the decoded video stream and the post-split timestamp. According to the post-split timestamp, it then takes the matching audio/video error timestamp node from the audio/video error timestamp buffer queue and obtains the audio/video error time. It tests whether the error time is greater than zero: if so, it plays immediately; otherwise it waits for the duration of the error time and then plays.
The audio playback step removes audio play nodes from the audio play buffer queue in order, obtaining the decoded audio stream and the post-split timestamp. According to the post-split timestamp, it then takes the matching audio/video error timestamp node from the audio/video error timestamp buffer queue and obtains the audio/video error time. It tests whether the error time is greater than zero: if so, it waits for the duration of the error time and then plays; otherwise it plays immediately.
With the method of the above steps, the whole playback process is decomposed into several steps that can form separate processing modules. By running these modules on the same multiprocessor server, or on different network-connected servers, parallel processing in time and/or space is achieved. By constructing the buffer space, the audio and video data needed by each step queue up in the buffer space and are handled in order; while one step is processing data, the other steps, once the data associated with that step has been handed off, can continue processing other parts, ensuring parallel, efficient playback of the streaming audio and video. By obtaining the post-split timestamp, the video decode timestamp, and the audio decode timestamp, and by judging the relation between the audio/video error time derived from the latter two and a preset value or configured parameter, the method decides whether to play the audio and/or video. In particular, by using the post-split timestamp as the reference for inserting, removing, and matching data, audio and video sharing the same post-split timestamp can be played in synchrony, which in turn keeps the whole stream synchronized.
Brief description of the drawings
Fig. 1 is a timing diagram of the parallel-processing-based method for synchronized playback of streaming-media audio and video of the present invention.
Fig. 2 is a flow chart of the parallel-processing-based method for synchronized playback of streaming-media audio and video of the present invention.
 
Embodiment
The invention is further described below by way of embodiments, in conjunction with the accompanying drawings.
As shown in Fig. 1, the invention divides the playback of streaming-media audio/video data, coming from one or more network service points or already available locally, into several steps: audio/video acquisition, audio/video splitting, video decoding, audio decoding, timestamp error analysis, video rendering, and audio playback. Each step can be implemented as a corresponding processing module on a computer. These modules may run on the same multiprocessor server, or on different network-connected servers; started in parallel, they achieve parallelism in time and/or in space. Parallelism in time means that at the same moment, different servers, or different processors of the same server, process the steps concurrently. Likewise, different servers connected over a network may sit at different locations and, started in parallel, realize spatial parallelism across those locations or positions.
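The step-per-module organization described above can be sketched as a thread-per-stage pipeline connected by FIFO queues. This is only an illustration: the stage functions and queue names below are stand-ins, not the patent's actual modules.

```python
import queue
import threading

def worker(fn, q_in, q_out):
    """Repeatedly take a node from q_in, process it, and pass it downstream."""
    while True:
        node = q_in.get()
        if node is None:          # sentinel: shut this stage down
            if q_out is not None:
                q_out.put(None)   # propagate shutdown to the next stage
            break
        result = fn(node)
        if q_out is not None:
            q_out.put(result)

# Two toy stages standing in for "split" and "decode".
split = lambda pkt: ("video", pkt)
decode = lambda node: ("decoded", node[1])

q_mix, q_vdec, q_rend = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=worker, args=(split, q_mix, q_vdec)),
    threading.Thread(target=worker, args=(decode, q_vdec, q_rend)),
]
for t in threads:
    t.start()
for pkt in range(3):              # feed three mixed packets
    q_mix.put(pkt)
q_mix.put(None)
for t in threads:
    t.join()

results = []
while True:
    item = q_rend.get()
    if item is None:
        break
    results.append(item)
print(results)   # three decoded items, in arrival order
```

Because every stage only blocks on its own input queue, all stages run concurrently; on separate servers the in-process queues would be replaced by the networked buffer-space interface described later.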
As shown in Fig. 2, the detailed procedure of each step is as follows:
1. Constructing the buffer space
The buffer space is built as a cluster of buffer queues comprising the audio/video mixing buffer queue, the video decoding buffer queue, the audio decoding buffer queue, the video rendering buffer queue, the video decode timestamp buffer queue, the audio play buffer queue, the audio decode timestamp buffer queue, and the audio/video error timestamp buffer queue.
Each queue in the buffer space processes its element nodes in FIFO (first-in, first-out) order.
The buffer queues may all reside on the same server or be distributed across network-connected servers, but they present a unified data interface to all parallel processing modules. The buffer-space server receives access requests to the buffer space from each parallel processing module and services them. The requests are: an insert request, which inserts data at the head of the named queue; a remove request, which removes data from the tail of the named queue; and a search request, which looks up an element node in the named queue by a given key and takes that node out of the queue.
Each parallel processing module operates on the element nodes of the buffer queues by sending requests (insert, remove, search) to the buffer-space server. Before sending a request, it builds a request packet containing the queue name, the operation type (remove, insert, or search), and the element node.
The element node differs according to the particular buffer queue.
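The unified interface just described (insert at the head, remove from the tail, search by key and take the node out) can be sketched minimally as follows; the class and method names are assumptions for illustration, not the patent's API.

```python
from collections import deque

class BufferSpaceServer:
    """Illustrative sketch of the buffer-space server's unified queue interface."""

    def __init__(self, queue_names):
        self.queues = {name: deque() for name in queue_names}

    def insert(self, name, node):
        """Insert request: data goes to the head of the named queue."""
        self.queues[name].appendleft(node)

    def remove(self, name):
        """Remove request: data is taken from the tail of the named queue."""
        return self.queues[name].pop()

    def search(self, name, key, keyfn):
        """Search request: find the node whose key matches and take it out."""
        q = self.queues[name]
        for node in q:
            if keyfn(node) == key:
                q.remove(node)
                return node
        return None

srv = BufferSpaceServer(["Q_vts"])
srv.insert("Q_vts", {"t_split": 1, "t_vdec": 40})
srv.insert("Q_vts", {"t_split": 2, "t_vdec": 80})
hit = srv.search("Q_vts", 1, lambda n: n["t_split"])
print(hit["t_vdec"])    # 40
```

In a distributed deployment, each call would be carried by the request packet described above (queue name, operation type, element node) rather than a direct method call.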
The element node of the audio/video mixing buffer queue contains: the mixed audio/video stream.
The element node of the video decoding buffer queue contains: the pre-decode video stream and the post-split timestamp.
The element node of the audio decoding buffer queue contains: the pre-decode audio stream and the post-split timestamp.
The element node of the video rendering buffer queue contains: the decoded video stream and the post-split timestamp.
The element node of the video decode timestamp buffer queue contains: the video decode timestamp and the post-split timestamp.
The element node of the audio play buffer queue contains: the decoded audio stream and the audio decode timestamp.
The element node of the audio decode timestamp buffer queue contains: the audio decode timestamp and the post-split timestamp.
The element node of the audio/video error timestamp buffer queue contains: the audio/video error timestamp and the post-split timestamp.
The data items of each element node above are produced, and inserted into the corresponding buffer queue, by the steps explained below.
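The per-queue node layouts listed above can be summarized as record types. The type and field names below are assumptions chosen only to mirror the data items in the text.

```python
from dataclasses import dataclass

@dataclass
class MixNode:            # mixing queue: the mixed audio/video stream
    mixed: bytes

@dataclass
class VideoDecodeNode:    # video decoding queue: pre-decode video + post-split timestamp
    video: bytes
    t_split: int

@dataclass
class AudioDecodeNode:    # audio decoding queue: pre-decode audio + post-split timestamp
    audio: bytes
    t_split: int

@dataclass
class VideoRenderNode:    # video rendering queue: decoded video + post-split timestamp
    frame: bytes
    t_split: int

@dataclass
class VideoTsNode:        # video decode timestamp queue: decode ts + post-split ts
    t_vdec: int
    t_split: int

@dataclass
class AudioPlayNode:      # audio play queue: decoded audio + audio decode timestamp
    samples: bytes
    t_adec: int

@dataclass
class AudioTsNode:        # audio decode timestamp queue: decode ts + post-split ts
    t_adec: int
    t_split: int

@dataclass
class ErrorTsNode:        # error timestamp queue: a/v error ts + post-split ts
    dt: int
    t_split: int

node = ErrorTsNode(dt=5, t_split=100)
print(node.dt)   # 5
```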
2. Audio/video acquisition
The acquisition step inserts the mixed audio/video packets obtained from the streaming-media source, one by one, into the pre-constructed audio/video mixing buffer queue. The streaming-media source may be one or more network service points or locally available streaming-media files.
3. Audio/video splitting
The splitting step removes the mixed audio/video packets from the mixing buffer queue in order and applies a splitting algorithm to each, extracting a video stream, an audio stream, and a post-split timestamp. It combines the video stream and the post-split timestamp into a video decode node and inserts it into the pre-constructed video decoding buffer queue, and combines the audio stream and the post-split timestamp into an audio decode node and inserts it into the pre-constructed audio decoding buffer queue.
The splitting methods include the TS-stream splitting algorithm and the PS-stream splitting algorithm, targeting MPEG-TS and MPEG-PS respectively. MPEG-PS (Program Stream) is mainly used for storing mixed audio/video streams of fixed duration and consists of three parts: a pack header, a system header, and PES packets. MPEG-TS (Transport Stream) is mainly used for real-time transmission of mixed audio/video streams; it is sent as packets, each consisting of a header and a payload. The TS splitting method mainly parses the TS packet header information to obtain the audio stream and the video stream; the PS splitting method mainly parses the PS pack header and system header information to obtain them.
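As a rough illustration of the TS-header parsing mentioned above (not the patent's splitting algorithm), a minimal demultiplexer can recover per-PID payloads from 188-byte TS packets. The PID values are arbitrary here, and the handling is deliberately simplified: adaptation fields and PES headers are ignored.

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def demux_ts(data, video_pid, audio_pid):
    """Split a TS byte stream into raw video and audio payload bytes by PID."""
    video, audio = bytearray(), bytearray()
    for off in range(0, len(data) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = data[off:off + TS_PACKET_SIZE]
        if pkt[0] != SYNC_BYTE:
            continue                          # lost sync: skip this packet
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2] # 13-bit PID from header bytes 1-2
        payload = pkt[4:]                     # 4-byte header, no adaptation field
        if pid == video_pid:
            video += payload
        elif pid == audio_pid:
            audio += payload
    return bytes(video), bytes(audio)

# Build two fake packets: PID 0x100 (video) and PID 0x101 (audio).
def make_pkt(pid, fill):
    hdr = bytes([SYNC_BYTE, (pid >> 8) & 0x1F, pid & 0xFF, 0x10])
    return hdr + bytes([fill]) * (TS_PACKET_SIZE - 4)

stream = make_pkt(0x100, 0xAA) + make_pkt(0x101, 0xBB)
v, a = demux_ts(stream, 0x100, 0x101)
print(len(v), len(a))   # 184 184
```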
4. Video decoding
The video decoding step removes video decode nodes from the video decoding buffer queue in order (the removal method is described earlier in this specification) and decodes the video stream in each node, obtaining the decoded video stream and a video decode timestamp. It combines the decoded video stream and the post-split timestamp into a video render node and inserts it into the pre-constructed video rendering buffer queue, and combines the video decode timestamp and the post-split timestamp into a video decode timestamp node and inserts it into the pre-constructed video decode timestamp buffer queue.
Video decoding mainly decompresses the video stream, typically converting an MJPEG-format or H.264-format bitstream into an RGB-format or YUV-format bitstream.
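The YUV-to-RGB conversion mentioned can be sketched per pixel with the common BT.601-style coefficients; this is shown for illustration only, as real decoders convert whole planes at once, and the exact coefficients depend on the color standard in use.

```python
def yuv_to_rgb(y, u, v):
    """Convert one 8-bit YUV pixel to RGB using common BT.601-style coefficients."""
    r = y + 1.402 * (v - 128)
    g = y - 0.344136 * (u - 128) - 0.714136 * (v - 128)
    b = y + 1.772 * (u - 128)
    clamp = lambda x: max(0, min(255, int(round(x))))  # keep results in 0..255
    return clamp(r), clamp(g), clamp(b)

print(yuv_to_rgb(128, 128, 128))   # (128, 128, 128): neutral grey maps to grey
```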
5. Audio decoding
The audio decoding step removes audio decode nodes from the audio decoding buffer queue in order and decodes the audio stream in each node, obtaining the decoded audio stream and an audio decode timestamp. It combines the decoded audio stream and the post-split timestamp into an audio play node and inserts it into the pre-constructed audio play buffer queue, and combines the audio decode timestamp and the post-split timestamp into an audio decode timestamp node and inserts it into the pre-constructed audio decode timestamp buffer queue.
6. Timestamp error analysis
The timestamp error analysis step uses the post-split timestamp as the key to look up the corresponding video decode timestamp node in the video decode timestamp buffer queue, and again uses the post-split timestamp as the key to look up the corresponding audio decode timestamp node in the audio decode timestamp buffer queue. From these it obtains the audio/video error timestamp = video decode timestamp − audio decode timestamp, then combines the error timestamp and the post-split timestamp into an audio/video error timestamp node and inserts it into the pre-constructed audio/video error timestamp buffer queue.
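The error-analysis step reduces to a keyed lookup in both timestamp queues followed by a subtraction. A sketch with dictionary nodes and assumed field names:

```python
def analyze(t_split, q_vts, q_ats, q_err):
    """Look up the decode timestamps matching t_split, form the error, enqueue it."""
    t_vdec = next(n["t_vdec"] for n in q_vts if n["t_split"] == t_split)
    t_adec = next(n["t_adec"] for n in q_ats if n["t_split"] == t_split)
    dt = t_vdec - t_adec                     # audio/video error timestamp
    q_err.append({"t_split": t_split, "dt": dt})
    return dt

q_vts = [{"t_split": 1, "t_vdec": 40}, {"t_split": 2, "t_vdec": 80}]
q_ats = [{"t_split": 1, "t_adec": 43}, {"t_split": 2, "t_adec": 75}]
q_err = []
print(analyze(1, q_vts, q_ats, q_err))   # -3  (video is behind audio here)
print(analyze(2, q_vts, q_ats, q_err))   # 5
```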
7. Video rendering
The video rendering step removes video render nodes from the video rendering buffer queue in order, obtaining the decoded video stream and the post-split timestamp. According to the post-split timestamp, it takes the matching audio/video error timestamp node from the audio/video error timestamp buffer queue and obtains the audio/video error time. It tests whether the error time is greater than zero: if so, it plays immediately; otherwise it waits for the duration of the error time and then plays.
In other embodiments, a non-zero decision parameter can be set as needed: when a magnitude relation, or some other logical relation, between the audio/video error time and a preset value holds, playback is judged immediate; otherwise playback waits until the condition holds.
8. Audio playback
The audio playback step removes audio play nodes from the audio play buffer queue in order, obtaining the decoded audio stream and the post-split timestamp. According to the post-split timestamp, it takes the matching audio/video error timestamp node from the audio/video error timestamp buffer queue and obtains the audio/video error time. It tests whether the error time is greater than zero: if so, it waits for the duration of the error time and then plays; otherwise it plays immediately.
Likewise, in other embodiments a non-zero decision parameter can be set as needed: when a magnitude relation, or some other logical relation, between the audio/video error time and a preset value holds, playback is judged immediate; otherwise playback waits until the condition holds.
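The two mirror-image playback decisions (video in step 7, audio in step 8) can be sketched as follows. The error time is assumed here to be expressed in seconds; the patent does not fix the unit, and the function names are illustrative.

```python
import time

def render_video(dt, sleep=time.sleep):
    """Video side: if the error time is positive, render at once;
    otherwise wait out the (non-positive) error time first."""
    if dt > 0:
        return "render now"
    sleep(-dt)
    return "render after wait"

def play_audio(dt, sleep=time.sleep):
    """Audio side: the mirror-image rule; if positive, wait dt, else play now."""
    if dt > 0:
        sleep(dt)
        return "play after wait"
    return "play now"

print(render_video(0.005))   # render now
print(play_audio(-0.005))    # play now
```

Because the two rules are complements of the same test, whichever stream is ahead waits for the other, which is exactly how the method keeps audio and video of the same post-split timestamp aligned.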
Thus, through the above steps, with the post-split timestamp as the reference, the mixed audio/video packets of the stream are inserted into the buffer queues, split, looked up, and removed in order. Video playback, after video rendering, obtains the audio/video error time based on the post-split timestamp and tests whether it satisfies the playback precondition; if satisfied (in one embodiment of the invention, the condition is that the error time is greater than zero), it plays, and otherwise it keeps waiting until the precondition is satisfied before playing. Audio playback likewise obtains the audio/video error time based on the post-split timestamp and tests the playback precondition; if satisfied (again, greater than zero in one embodiment), it plays, and otherwise it waits until satisfied.
In this way, because the above steps can be processed in parallel in time and/or space, the element nodes produced by splitting are processed concurrently without waiting for one another, and continuous playback only requires matching on the same post-split timestamp. In particular, the play-condition test that video playback performs after video rendering efficiently solves the matching problem between video decoding speed and rendering speed. Since both audio playback and post-rendering video playback decide whether to play based on the post-split timestamp of the same element node, the audio and video of that element node are played in synchrony, without the synchronization output control of the prior art.

Claims (9)

1. A method for synchronized playback of streaming-media audio and video based on parallel processing, characterized in that the steps of audio/video acquisition, audio/video splitting, video decoding, audio decoding, timestamp error analysis, video rendering, and audio playback are processed in parallel in time or in space, and the timestamp error is the audio/video timestamp error between the video decode timestamp and the audio decode timestamp.
2. The method for synchronized playback of streaming-media audio and video based on parallel processing as claimed in claim 1, characterized in that a buffer space is constructed before the audio/video acquisition step, the buffer space comprising the audio/video mixing buffer queue, the video decoding buffer queue, the audio decoding buffer queue, the video rendering buffer queue, the video decode timestamp buffer queue, the audio play buffer queue, the audio decode timestamp buffer queue, and the audio/video error timestamp buffer queue.
3. The method for synchronized playback of streaming-media audio and video based on parallel processing as claimed in claim 1 or 2, characterized in that the audio/video acquisition step inserts the mixed audio/video packets obtained from the streaming-media source, one by one, into the pre-constructed audio/video mixing buffer queue.
4. The method for synchronized playback of streaming-media audio and video based on parallel processing as claimed in claim 3, characterized in that the audio/video splitting step removes the mixed audio/video packets from the mixing buffer queue in order and applies a splitting algorithm to each, extracting a video stream, an audio stream, and a post-split timestamp; combines the video stream and the post-split timestamp into a video decode node and inserts it into the pre-constructed video decoding buffer queue; and combines the audio stream and the post-split timestamp into an audio decode node and inserts it into the pre-constructed audio decoding buffer queue.
5. The method for synchronized playback of streaming-media audio and video based on parallel processing as claimed in claim 4, characterized in that the video decoding step removes video decode nodes from the video decoding buffer queue in order and decodes the video stream in each node, obtaining the decoded video stream and a video decode timestamp; combines the decoded video stream and the post-split timestamp into a video render node and inserts it into the pre-constructed video rendering buffer queue; and combines the video decode timestamp and the post-split timestamp into a video decode timestamp node and inserts it into the pre-constructed video decode timestamp buffer queue.
6. The method for synchronously playing streaming media audio and video based on parallel processing as claimed in claim 5, wherein the audio decoding step takes audio decoding nodes out of the audio decoding buffer queue in order and decodes the audio stream in each node to obtain the decoded audio stream and the audio decoding timestamp; forms an audio play node from the decoded audio stream and the post-split timestamp and inserts it into the pre-built audio play buffer queue; and forms an audio decoding timestamp node from the audio decoding timestamp and the post-split timestamp and inserts it into the pre-built audio decoding timestamp buffer queue.
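The two decoding steps are mirror images of each other; a minimal sketch follows. The decoder itself is stubbed (a real player would call an H.264/AAC decoder here), the clock is injected for determinism, and the timestamp records are kept in plain dicts keyed by the split timestamp so the error-analysis step can look them up; all of these choices are assumptions of the sketch, not the patent's structures.

```python
import queue
from collections import namedtuple

Node = namedtuple("Node", "stream split_ts")

# Downstream queues/maps "built in advance" (illustrative names). Timestamp
# records are keyed by the split timestamp for the later error lookup.
render_q, play_q = queue.Queue(), queue.Queue()
video_ts, audio_ts = {}, {}

def video_decode_step(node, clock):
    decoded = node.stream.upper()          # stand-in for a real video decoder
    render_q.put((decoded, node.split_ts)) # video rendering node
    video_ts[node.split_ts] = clock()      # video decoding timestamp node

def audio_decode_step(node, clock):
    decoded = node.stream.upper()          # stand-in for a real audio decoder
    play_q.put((decoded, node.split_ts))   # audio play node
    audio_ts[node.split_ts] = clock()      # audio decoding timestamp node

fake_clock = iter([1.0, 1.5]).__next__     # deterministic clock for the demo
video_decode_step(Node(b"v0", 0.0), fake_clock)
audio_decode_step(Node(b"a0", 0.0), fake_clock)
```

Each decode step emits two nodes: one carrying the payload downstream and one recording when decoding finished, which is exactly the information the error-analysis step consumes.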
7. The method for synchronously playing streaming media audio and video based on parallel processing as claimed in claim 6, wherein the timestamp error analysis step uses the post-split timestamp as a key to look up the corresponding video decoding timestamp node in the video decoding timestamp buffer queue, and then uses the post-split timestamp as a key to look up the corresponding audio decoding timestamp node in the audio decoding timestamp buffer queue; obtains the audio-video error timestamp as the video decoding timestamp minus the audio decoding timestamp; and forms an audio-video error timestamp node from the audio-video error timestamp and the post-split timestamp and inserts it into the pre-built audio-video error timestamp buffer queue.
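The error-analysis step reduces to a keyed lookup and a subtraction. In this sketch the timestamp "queues" are modelled as dicts keyed by the split timestamp (an assumption for brevity; the patent describes buffer queues), and the sample values are invented:

```python
# split_ts -> decoding timestamps recorded by the decode steps (sample data)
video_ts = {0.0: 1.00, 0.04: 1.05}
audio_ts = {0.0: 1.02, 0.04: 1.04}
error_ts = {}   # split_ts -> audio-video error timestamp

def error_analysis(split_ts):
    # Look up both decoding timestamps by the shared split timestamp and
    # record their difference (video minus audio) as the error node.
    error_ts[split_ts] = video_ts[split_ts] - audio_ts[split_ts]

for ts in video_ts:
    error_analysis(ts)
```

A negative error means audio decoding finished after video decoding for that frame; the sign is what the rendering and playing steps use to decide which side waits.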
8. The method for synchronously playing streaming media audio and video based on parallel processing as claimed in claim 7, wherein the video rendering step takes video rendering nodes out of the video rendering buffer queue in order to obtain the video stream and the post-split timestamp; then, according to the post-split timestamp, takes the corresponding audio-video error timestamp node out of the audio-video error timestamp buffer queue to obtain the audio-video error time; and judges whether the magnitude relationship between the audio-video error time and a preset value holds: if it holds, the video is rendered immediately; otherwise rendering waits for the audio-video error time and then proceeds.
9. The method for synchronously playing streaming media audio and video based on parallel processing as claimed in claim 8, wherein the audio playing step takes audio play nodes out of the audio play buffer queue in order to obtain the audio stream and the post-split timestamp; then, according to the post-split timestamp, takes the corresponding audio-video error timestamp node out of the audio-video error timestamp buffer queue to obtain the audio-video error time; and judges whether the magnitude relationship between the audio-video error time and the preset value holds: if it holds, playing waits for the audio-video error time and then proceeds; otherwise the audio is played immediately.
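Claims 8 and 9 make mirror-image decisions from the same error node. The claims only say the error is compared against a preset value; the direction of that comparison (`err < preset`) and the preset itself are assumptions of this sketch:

```python
def schedule(err, preset=0.02):
    """Return (video_wait_s, audio_wait_s) for one frame.

    err is the audio-video error timestamp (video decoding timestamp minus
    audio decoding timestamp). The comparison direction is an assumption.
    """
    if err < preset:
        # Claim 8: condition holds -> render video immediately.
        # Claim 9: condition holds -> audio waits out the error first.
        return 0.0, max(-err, 0.0)
    # Claim 8: condition fails -> video waits the error time, then renders.
    # Claim 9: condition fails -> play audio immediately.
    return err, 0.0

print(schedule(0.05))   # video lags: video side absorbs the wait
print(schedule(-0.03))  # audio lags: audio side absorbs the wait
```

Whichever stream is ahead waits out the measured error, so the two play-out threads converge on the same presentation time without either one blocking the decode pipelines.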
CN2012105460549A 2012-12-17 2012-12-17 Method for synchronously playing streaming media audios and videos based on parallel processing Pending CN103024517A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2012105460549A CN103024517A (en) 2012-12-17 2012-12-17 Method for synchronously playing streaming media audios and videos based on parallel processing

Publications (1)

Publication Number Publication Date
CN103024517A true CN103024517A (en) 2013-04-03

Family

ID=47972570

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2012105460549A Pending CN103024517A (en) 2012-12-17 2012-12-17 Method for synchronously playing streaming media audios and videos based on parallel processing

Country Status (1)

Country Link
CN (1) CN103024517A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0987904A2 (en) * 1998-08-31 2000-03-22 Lucent Technologies Inc. Method and apparatus for adaptive synchronization of digital video and audio playback
CN101466044A (en) * 2007-12-19 2009-06-24 康佳集团股份有限公司 Method and system for synchronously playing stream medium audio and video
CN101902649A (en) * 2010-07-15 2010-12-01 浙江工业大学 Audio-video synchronization control method based on H.264 standard
CN101984672A (en) * 2010-11-03 2011-03-09 深圳芯邦科技股份有限公司 Method and device for multi-thread video and audio synchronous control

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104125493A (en) * 2013-04-24 2014-10-29 鸿富锦精密工业(深圳)有限公司 Audio-video synchronization system and method
CN107562003A (en) * 2016-06-30 2018-01-09 欧姆龙株式会社 Image processing apparatus, image processing method and image processing program
CN106385525A (en) * 2016-09-07 2017-02-08 天脉聚源(北京)传媒科技有限公司 Video play method and device
CN106600656A (en) * 2016-11-24 2017-04-26 合肥中科云巢科技有限公司 Graphic rendering method and device
CN107277614A (en) * 2017-06-27 2017-10-20 深圳市爱培科技术股份有限公司 Audio and video remote player method, storage device and the mobile terminal of drive recorder
CN108924631B (en) * 2018-06-27 2021-07-06 杭州叙简科技股份有限公司 Video generation method based on audio and video shunt storage
CN108924631A (en) * 2018-06-27 2018-11-30 杭州叙简科技股份有限公司 A kind of video recording generation method shunting storage based on audio-video
CN109168059A (en) * 2018-10-17 2019-01-08 上海赛连信息科技有限公司 A kind of labial synchronization method playing audio & video respectively on different devices
CN109168059B (en) * 2018-10-17 2021-06-18 上海赛连信息科技有限公司 Lip sound synchronization method for respectively playing audio and video on different devices
CN110072137B (en) * 2019-04-26 2021-06-08 湖南琴岛网络传媒科技有限公司 Data transmission method and device for live video
CN110072137A (en) * 2019-04-26 2019-07-30 湖南琴岛网络传媒科技有限公司 A kind of data transmission method and transmitting device of net cast
CN110087146A (en) * 2019-06-06 2019-08-02 成都德尚视云科技有限公司 The method and system that analysis and rendering to video file synchronize
CN110087146B (en) * 2019-06-06 2021-05-04 成都德尚视云科技有限公司 Method and system for synchronizing analysis and rendering of video file
CN114286149A (en) * 2021-12-31 2022-04-05 广东博华超高清创新中心有限公司 Method and system for synchronously rendering audio and video across equipment and system
CN114286149B (en) * 2021-12-31 2023-07-07 广东博华超高清创新中心有限公司 Audio and video synchronous rendering method and system of cross-equipment and system

Similar Documents

Publication Publication Date Title
CN103024517A (en) Method for synchronously playing streaming media audios and videos based on parallel processing
US11265562B2 (en) Transmitting method and receiving method
JP6793231B2 (en) Reception method
EP3096526B1 (en) Communication apparatus, communication data generation method, and communication data processing method
EP2997736B1 (en) Adaptive streaming transcoder synchronization
EP2721814B1 (en) Method and apparatus for transmitting/receiving media contents in multimedia system
CN102037731B (en) Signalling and extraction in compressed video of pictures belonging to interdependency tiers
KR101701182B1 (en) A method for recovering content streamed into chunk
US9510028B2 (en) Adaptive video transcoding based on parallel chunked log analysis
US8325821B1 (en) Video transcoder stream multiplexing systems and methods
CN106605409B (en) Transmission device, reception device, transmission method, and reception method
JPWO2012096372A1 (en) Content playback apparatus, content playback method, distribution system, content playback program, recording medium, and data structure
US9800880B2 (en) Configurable transcoder and methods for use therewith
CN110519635B (en) Audio and video media stream converging method and system of wireless cluster system
WO2014193996A2 (en) Network video streaming with trick play based on separate trick play files
EP3096533B1 (en) Communication apparatus, communication data generation method, and communication data processing method
CA2786812A1 (en) Method and arrangement for supporting playout of content
US20130219073A1 (en) Adaptive display streams
EP3096524B1 (en) Communication apparatus, communication data generation method, and communication data processing method
CN108122558B (en) Real-time capacity conversion implementation method and device for LATM AAC audio stream
JP2022075740A (en) Transmission method, reception method, transmission device, and reception device
CN104079975A (en) Image processing device, image processing method, and computer program
JP2019220974A (en) Decoder
RU2698779C2 (en) Transmission device, transmission method, receiving device and reception method
EP3096525B1 (en) Communication apparatus, communication data generation method, and communication data processing method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20130403