CN101188770A - An audio and video synchronization output method for multi-process control - Google Patents

An audio and video synchronization output method for multi-process control

Info

Publication number
CN101188770A
CN101188770A CNA2007101724161A CN200710172416A
Authority
CN
China
Prior art keywords
audio
video
data
frame
pts
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CNA2007101724161A
Other languages
Chinese (zh)
Inventor
张钰
于玥
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Central Academy of SVA Group Co Ltd
Original Assignee
Central Academy of SVA Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Central Academy of SVA Group Co Ltd
Priority to CNA2007101724161A
Publication of CN101188770A
Legal status: Pending (current)

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention provides an audio/video synchronization output method for multi-process control. Five processes are created: system-layer demultiplexing, video decoding, audio decoding, video synchronization output and audio synchronization output. In the system-layer demultiplexing process the transport stream is demultiplexed and unpacked into audio and video elementary streams and time information; the video and audio elementary streams are delivered to the video decoding process and the audio decoding process respectively for decoding, and the time information is used to update the local system clock. The video decoding process and the audio decoding process pass the time information, together with the storage-space information of the decoded data, to the video synchronization output process and the audio synchronization output process. The synchronization output processes compare the time information of the decoded data with the local clock and select the data that matches the output time for output. The method needs no large-capacity buffers for the coded data and the decoded picture and audio frames, which saves the memory resources of the system; it is developed on an embedded operating system and has low implementation complexity.

Description

An audio and video synchronization output method for multi-process control
Technical field
The invention belongs to the field of digital audio and video technology, and in particular relates to an audio and video synchronization output method for multi-process control.
Background technology
Synchronized output of audio and video is a very important technology in the digital television field. Popular video coding standards all adopt a hybrid coding/decoding process that exploits the temporal and spatial redundancy of video frames through prediction, transform, quantization and entropy coding, and encodes video frames into different frame types: intra-predicted frames (I frames), unidirectionally predicted frames (P frames) and bidirectionally predicted frames (B frames). This makes the coded video data temporally inconsistent during transmission and decoding. Moreover, audio and video are coded and transmitted separately, yet they must be played back synchronously; without a reasonable control method it is easy for the audio and video outputs to lose synchronization. To solve this problem, the usual approach is to use large buffers for the coded data and the decoded picture and audio frames, relying on the data queued in the buffers to absorb the timing differences. This, however, consumes a great deal of the system's storage resources, and controlling the decoding and output of audio and video separately is relatively difficult: although video and audio can each be output normally on their own, it is hard to control their synchronization effectively.
Summary of the invention
In order to overcome the above problems, the invention provides an audio and video synchronization output method for multi-process control. The method uses the message queue mechanism of the operating system together with the PCR (Program Clock Reference) and PTS (Presentation Time Stamp) defined in the MPEG-2 system layer, so that audio and video are output according to their PTS under a unified system time, thereby achieving synchronized output.
To achieve the above purpose, the invention provides an audio and video synchronization output method for multi-process control, in which an operating system processes and outputs an audio/video transport stream with multiple processes. The system comprises a ring buffer and a frame buffer, and the method is realized by the following steps:
Step 1: five processes are created under the operating system: a system-layer demultiplexing process, a video decoding process, an audio decoding process, a video synchronization output process and an audio synchronization output process (a task-creation sketch is given after step 4 below);
Step 2: the system-layer demultiplexing process demultiplexes the audio/video transport stream packets and unpacks the transport stream data into a video elementary stream, an audio elementary stream and time information; the video elementary stream and the audio elementary stream are delivered to the video decoding process and the audio decoding process respectively for decoding, and the time information is used to update the local system clock;
Step 3: the audio decoding process and the video decoding process decode the data, and send the time information together with the storage-space information of the decoded data to the video synchronization output process and the audio synchronization output process;
Step 4: the video synchronization output process and the audio synchronization output process compare the time information of the decoded data with the system clock, and select and output the data that matches the output time.
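For illustration only, the five-process structure of steps 1-4 can be sketched with POSIX threads standing in for the processes of the embedded operating system named in the embodiment; every task and function name below is an assumption, and the task bodies are placeholders.

```c
#include <pthread.h>
#include <stdio.h>

static void *demux_task(void *arg)        { (void)arg; /* unpack TS into ES + time info */ return NULL; }
static void *video_decode_task(void *arg) { (void)arg; /* decode video ES */              return NULL; }
static void *audio_decode_task(void *arg) { (void)arg; /* decode audio ES */              return NULL; }
static void *video_output_task(void *arg) { (void)arg; /* output frames by PTS vs. STC */ return NULL; }
static void *audio_output_task(void *arg) { (void)arg; /* output PCM by PTS vs. STC */    return NULL; }

int main(void)
{
    pthread_t tid[5];
    void *(*entry[5])(void *) = { demux_task, video_decode_task, audio_decode_task,
                                  video_output_task, audio_output_task };

    /* step 1: create the five cooperating tasks */
    for (int i = 0; i < 5; i++)
        if (pthread_create(&tid[i], NULL, entry[i], NULL) != 0) {
            perror("pthread_create");
            return 1;
        }
    for (int i = 0; i < 5; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```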
The video ES (elementary stream) and the audio ES described in step 2 above are obtained by filtering according to the PID (packet identifier) and are stored in the ring buffer. The updating of the local system clock is realized by the following steps: (a) the PCR data is searched for in the demultiplexed video ES and audio ES according to the PID of the PCR; (b) when PCR data is found, it is checked whether a local PCR already exists; if not, the base value and the extension value of the found PCR are stored; if it does exist, the current PCR value is calculated from the existing PCR data and the system time; (c) the current PCR value calculated in step (b) is compared with the PCR data obtained in step (a), and the local system clock is updated only when the difference between them exceeds a threshold specified by the system.
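A minimal sketch, assuming 90 kHz clock units and an arbitrary drift threshold, of how the clock update in steps (a)-(c) could be coded; the structure and helper names are not taken from the patent.

```c
#include <stdint.h>

/* Local system time clock (STC) state; names and units are illustrative. */
typedef struct {
    int      has_pcr;      /* has a PCR been latched yet?            */
    uint64_t pcr_base;     /* latched PCR base (90 kHz ticks)        */
    uint32_t pcr_ext;      /* latched PCR extension                  */
    uint64_t latch_time;   /* local clock value when the PCR latched */
    uint64_t stc;          /* current local system clock value       */
} local_clock_t;

#define PCR_DRIFT_THRESHOLD 450u   /* assumed threshold, about 5 ms at 90 kHz */

/* Called whenever the demultiplexer finds PCR data on the PCR PID;
 * 'now' is the current local clock value in the same 90 kHz units. */
void update_local_clock(local_clock_t *clk, uint64_t pcr_base, uint32_t pcr_ext,
                        uint64_t now)
{
    if (!clk->has_pcr) {                 /* step (b): no local PCR stored yet */
        clk->pcr_base   = pcr_base;
        clk->pcr_ext    = pcr_ext;
        clk->latch_time = now;
        clk->stc        = pcr_base;
        clk->has_pcr    = 1;
        return;
    }

    /* step (b): extrapolate the current PCR from the latched one */
    uint64_t expected = clk->pcr_base + (now - clk->latch_time);

    /* step (c): re-latch the clock only if the received PCR drifts too far */
    uint64_t drift = (pcr_base > expected) ? pcr_base - expected
                                           : expected - pcr_base;
    if (drift > PCR_DRIFT_THRESHOLD) {
        clk->pcr_base   = pcr_base;
        clk->pcr_ext    = pcr_ext;
        clk->latch_time = now;
        clk->stc        = pcr_base;
    }
}
```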
Further, the local system clock value is accumulated by clock timer interrupts: the timer generates an interrupt every 10 milliseconds, each interrupt adds 90 to the clock value, and when the system clock value is greater than or equal to the maximum value of the PCR it is reset to zero and the accumulation restarts.
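A sketch of the tick-driven clock just described, using the numbers stated in the text (an interrupt every 10 ms adds 90 to the clock value) and, as an assumption, the PCR maximum quoted later in the embodiment; the ISR hookup itself is platform-specific and omitted.

```c
#include <stdint.h>

#define PCR_MAX        0xFB0E0500u  /* maximum PCR value given in the embodiment  */
#define TICKS_PER_IRQ  90u          /* the text adds 90 per 10 ms timer interrupt */

static volatile uint32_t system_clock;   /* local system clock value */

/* Body of the 10 ms timer interrupt handler. */
void timer_tick_10ms(void)
{
    uint32_t next = system_clock + TICKS_PER_IRQ;
    if (next >= PCR_MAX)      /* wrap: reset to zero and restart accumulation */
        next = 0;
    system_clock = next;
}
```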
The video decoding process described in step 3 above is implemented as follows: (a) the video decoding process blocks on the message queue that the demultiplexing process sends to the video decoding process, and is activated by a sent message; (b) the current decoding time is obtained from the system PCR and the local clock; (c) the ES data whose PTS matches the decoding time is searched for in the video ES ring buffer according to the PTS of each ES packet; if matching ES data is found, it is handed to the decoder for decoding; if not, the process continues to block on the message queue until a new message carries a packet that matches the system decoding time; (d) the raw video frame data obtained after decoding is stored in the frame buffer, and the buffer address together with the PTS carried by the frame is saved in a global video PTS array and passed to the video synchronization output process.
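Step (c), selecting from the ring buffer the ES packet whose PTS matches the current decoding time, might look like the following; the descriptor layout and the matching rule (the most recent packet whose PTS is not later than the decoding time) are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

/* One entry of the video ES ring buffer; the layout is illustrative. */
typedef struct {
    uint32_t pts;      /* PTS carried by this ES packet    */
    size_t   start;    /* start offset of the ES data      */
    size_t   end;      /* end offset of the ES data        */
    int      used;     /* already consumed by the decoder? */
} es_entry_t;

/* Return the index of the ES packet that matches the decode time, or -1 if
 * none does; the caller then keeps blocking on the message queue. */
int find_es_for_decode_time(const es_entry_t *ring, int count, uint32_t decode_time)
{
    int best = -1;
    uint32_t best_delta = UINT32_MAX;

    for (int i = 0; i < count; i++) {
        if (ring[i].used)
            continue;
        if (ring[i].pts > decode_time)           /* not due yet */
            continue;
        uint32_t delta = decode_time - ring[i].pts;
        if (delta < best_delta) {                /* most recent packet that is due */
            best_delta = delta;
            best = i;
        }
    }
    return best;
}
```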
The audio decoding process described in step 3 above is implemented as follows: (a) the audio decoding process blocks on the message queue that the demultiplexing process sends to the audio decoding process, and is activated by a sent message; (b) the current decoding time is obtained from the system PCR and the local clock; (c) the ES data whose PTS matches the decoding time is searched for in the audio ES ring buffer according to the PTS of each ES packet; if matching ES data is found, it is handed to the decoder for decoding; if not, the process continues to block on the message queue until a new message carries a packet that matches the system decoding time; (d) the audio PCM (pulse code modulation) data obtained after decoding is stored in the frame buffer, and the buffer address together with the PTS carried by the frame is saved in a global audio PTS array and passed to the audio synchronization output process.
Further, the video PTS array and the audio PTS array are ring data structures; each element of the ring structure carries a flag indicating whether it is valid, and the data of an element can be updated only when its flag is invalid.
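One possible layout for the ring-shaped PTS array with a per-element valid flag, as used by both the video and the audio paths; the capacity and field names are assumptions.

```c
#include <stdint.h>
#include <stddef.h>

#define PTS_RING_SIZE 16           /* assumed capacity */

typedef struct {
    uint32_t pts;                  /* PTS carried by the decoded frame       */
    void    *frame_addr;           /* address of the frame in the buffer     */
    volatile int valid;            /* element may be overwritten only when 0 */
} pts_entry_t;

static pts_entry_t video_pts_ring[PTS_RING_SIZE];

/* Decoder side: store a decoded frame only into an invalid (free) slot. */
int pts_ring_put(pts_entry_t *ring, uint32_t pts, void *frame_addr)
{
    for (int i = 0; i < PTS_RING_SIZE; i++) {
        if (!ring[i].valid) {
            ring[i].pts        = pts;
            ring[i].frame_addr = frame_addr;
            ring[i].valid      = 1;
            return i;
        }
    }
    return -1;                     /* no free slot: output has fallen behind */
}

/* Output side: after the frame has been presented, free its slot. */
void pts_ring_release(pts_entry_t *ring, int index)
{
    ring[index].valid = 0;
}
```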
Further, the message queue should be checked for overflow according to its length; if the message queue is exhausted during sending, a P frame or a B frame should be discarded according to the video frame type, and the message queue entries corresponding to that frame should be cleared.
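A sketch of the overflow rule for the message queue: overflow is detected from the queue length and P and B frames are discarded; the queue type, its capacity and the handling of other frame types are assumptions not fixed by the text.

```c
#include <stdint.h>

typedef enum { FRAME_I, FRAME_P, FRAME_B } frame_type_t;

typedef struct {
    frame_type_t type;
    uint32_t     pts;
} frame_msg_t;

#define QUEUE_CAPACITY 8
typedef struct {
    frame_msg_t slots[QUEUE_CAPACITY];
    int         count;
} frame_queue_t;

/* Try to enqueue a decode message.  On overflow, the rule from the text is
 * applied: P and B frames are simply discarded; for other frame types the
 * caller is told to retry once the queue has drained (behaviour assumed). */
int frame_queue_send(frame_queue_t *q, const frame_msg_t *msg)
{
    if (q->count >= QUEUE_CAPACITY) {           /* overflow detected by length */
        if (msg->type == FRAME_P || msg->type == FRAME_B)
            return 0;                           /* drop the P/B frame          */
        return -1;                              /* I frame: caller must retry  */
    }
    q->slots[q->count++] = *msg;
    return 0;
}
```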
The time information described in step 3 above is the PTS of the ES, and the storage-space information comprises the storage start position, end position and start code of the decoded video ES and audio ES data.
The output procedure of the video synchronization output process described in step 4 above is realized by the following steps: (a) the video synchronization output process blocks on a video output event; (b) since each output video frame generates a hardware interrupt, a frame-output-finished event is sent in the video interrupt service routine (ISR) to notify the video synchronization output process and release it from blocking; (c) the video synchronization output process searches the global video PTS array for the frame whose PTS is closest to the play time produced by the current system clock; (d) the raw video data of that frame is located according to the found PTS and output; (e) the data structure storing the frame address and PTS of that frame is marked invalid.
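Step (c), picking the frame whose PTS is closest to the current play time, could be coded as follows (the pts_entry_t layout repeats the earlier sketch, and the absolute-difference notion of "closest" is an assumption).

```c
#include <stdint.h>

typedef struct { uint32_t pts; void *frame_addr; volatile int valid; } pts_entry_t;

/* Find the valid entry whose PTS is closest to the current play time.
 * Returns its index, or -1 if no frame is ready yet. */
int pick_frame_for_output(const pts_entry_t *ring, int count, uint32_t play_time)
{
    int best = -1;
    uint32_t best_delta = UINT32_MAX;

    for (int i = 0; i < count; i++) {
        if (!ring[i].valid)
            continue;
        uint32_t delta = (ring[i].pts > play_time) ? ring[i].pts - play_time
                                                   : play_time - ring[i].pts;
        if (delta < best_delta) {
            best_delta = delta;
            best = i;
        }
    }
    return best;   /* the caller outputs ring[best].frame_addr, then marks the slot invalid */
}
```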
Further, the blocking time of the video synchronization output process can be calculated from the output frame rate.
The output procedure of the audio synchronization output process described in step 4 above is realized by the following steps: (a) the audio synchronization output process blocks on an output event of the sampled PCM data; (b) hardware interrupts are generated at the rate of the audio hardware output interface, and a frame-output-finished event is sent in the audio interrupt service routine (ISR) to notify the audio output process and release it from blocking; (c) the audio synchronization output process searches the global audio PTS array for the frame whose PTS is closest to the play time produced by the current system clock; (d) the raw PCM data of that frame is located according to the found PTS and output; (e) the data structure storing the frame address and PTS of that frame is marked invalid.
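The event pattern of steps (a)-(b), an interrupt handler posting a "frame output finished" event that unblocks the synchronization output task, is sketched here with a POSIX semaphore standing in for the event object of the embedded operating system; the simulated interrupt calls in main are for demonstration only.

```c
#include <semaphore.h>
#include <pthread.h>
#include <stdio.h>

static sem_t frame_done;     /* event: one audio/video frame finished output */

/* Stand-in for the output ISR: on real hardware this runs at the output
 * interface rate and only posts the event. */
static void output_isr_body(void)
{
    sem_post(&frame_done);
}

/* Synchronization output task: block until the ISR signals, then select and
 * present the frame whose PTS is closest to the current play time. */
static void *sync_output_task(void *arg)
{
    (void)arg;
    for (int n = 0; n < 3; n++) {            /* bounded loop for the demo     */
        sem_wait(&frame_done);               /* step (a): block on the event  */
        /* steps (c)-(e): search the PTS array, output the chosen frame,
         * then mark its slot invalid */
        printf("frame output slot %d\n", n);
    }
    return NULL;
}

int main(void)
{
    pthread_t tid;
    sem_init(&frame_done, 0, 0);
    pthread_create(&tid, NULL, sync_output_task, NULL);
    for (int n = 0; n < 3; n++)
        output_isr_body();                   /* simulate three output interrupts */
    pthread_join(tid, NULL);
    sem_destroy(&frame_done);
    return 0;
}
```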
By adopting the above technical scheme, the invention not only achieves synchronized output of audio and video effectively, but also needs no large-capacity buffers for the coded data and the decoded picture and audio frames, which greatly saves the storage resources of the system; the method can be developed on an embedded operating system platform and has low implementation complexity.
Embodiment
The audio and video synchronization output method for multi-process control of the present invention is described in further detail below.
The audio and video synchronization output method for multi-process control of the present invention is implemented on ADI's DSP Blackfin BF533 platform running an embedded operating system; the specific implementation steps are as follows:
Step 1: five processes are created under the operating system: a system-layer demultiplexing process, a video decoding process, an audio decoding process, a video synchronization output process and an audio synchronization output process;
Step 2: the system-layer demultiplexing process demultiplexes the audio/video transport stream packets and unpacks the transport stream data into two parts: (1) a video ES (elementary stream) and an audio ES, and (2) time information; the video ES and the audio ES are delivered to the video decoding process and the audio decoding process respectively for decoding, and the time information is used to update the local system clock;
Step 3: the audio decoding process and the video decoding process decode the data, and send the time information together with the storage-space information of the decoded data to the video synchronization output process and the audio synchronization output process;
Step 4: the video synchronization output process and the audio synchronization output process compare the time information of the decoded data with the system clock, and select and output the data that matches the output time.
The video ES and the audio ES described in step 2 above are obtained by filtering according to the PID (packet identifier) and are stored in the ring buffer. The updating of the local system clock is realized by the following steps: (a) the PCR data is searched for in the demultiplexed video ES and audio ES according to the PID of the PCR; (b) when PCR data is found, it is checked whether a local PCR already exists; if not, the base value and the extension value of the found PCR are stored; if it does exist, the current PCR value is calculated from the existing PCR data and the system time; (c) the current PCR value calculated in step (b) is compared with the PCR data obtained in step (a), and the local system clock is updated only when the difference between them exceeds a threshold specified by the system.
Further, the local system clock value is accumulated by clock timer interrupts: the timer generates an interrupt every 10 milliseconds, each interrupt adds 90 to the clock value, and when the system clock value is greater than or equal to the maximum value of the PCR it is reset to zero and the accumulation restarts; in the present embodiment the maximum value of the PCR is 0xFB0E0500.
The video decoding process described in step 3 above is implemented as follows: (a) the video decoding process blocks on the message queue that the demultiplexing process sends to the video decoding process, and is activated by a sent message; (b) the current decoding time is obtained from the system PCR and the local clock; (c) the ES data whose PTS matches the decoding time is searched for in the video ES ring buffer according to the PTS of each ES packet; if matching ES data is found, it is handed to the decoder for decoding; if not, the process continues to block on the message queue until a new message carries a packet that matches the system decoding time; (d) the raw video frame data obtained after decoding is stored in the frame buffer, and the buffer address together with the PTS carried by the frame is saved in a global video PTS array and passed to the video synchronization output process.
The audio decoding process described in step 3 above is implemented as follows: (a) the audio decoding process blocks on the message queue that the demultiplexing process sends to the audio decoding process, and is activated by a sent message; (b) the current decoding time is obtained from the system PCR and the local clock; (c) the ES data whose PTS matches the decoding time is searched for in the audio ES ring buffer according to the PTS of each ES packet; if matching ES data is found, it is handed to the decoder for decoding; if not, the process continues to block on the message queue until a new message carries a packet that matches the system decoding time; (d) the audio PCM (pulse code modulation) data obtained after decoding is stored in the frame buffer, and the buffer address together with the PTS carried by the frame is saved in a global audio PTS array and passed to the audio synchronization output process.
Further, the video PTS array and the audio PTS array are ring data structures; each element of the ring structure carries a flag indicating whether it is valid, and the data of an element can be updated only when its flag is invalid.
Further, the message queue should be checked for overflow according to its length; if the message queue is exhausted during sending, a P frame or a B frame should be discarded according to the video frame type, and the message queue entries corresponding to that frame should be cleared.
The time information described in step 3 above is the PTS of the ES, and the storage-space information comprises the storage start position, end position and start code of the decoded video ES and audio ES data.
The output procedure of the video synchronization output process described in step 4 above is realized by the following steps: (a) the video synchronization output process blocks on a video output event; (b) since each output video frame generates a hardware interrupt, a frame-output-finished event is sent in the video interrupt service routine (ISR) to notify the video synchronization output process and release it from blocking; (c) the video synchronization output process searches the global video PTS array for the frame whose PTS is closest to the play time produced by the current system clock; (d) the raw video data of that frame is located according to the found PTS and output; (e) the data structure storing the frame address and PTS of that frame is marked invalid.
Further, the blocking time of the video synchronization output process can be calculated from the output frame rate; in the present embodiment PAL (Phase Alternating Line) video output is adopted, the frame rate is 29.97 frames per second, and the video output frame interval works out to approximately 40 ms.
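The blocking timeout can be computed directly from the configured output frame rate, for example with a helper like the one below; the rounding is an assumption.

```c
#include <stdint.h>

/* Blocking timeout for the video synchronization output process, derived
 * from the configured output frame rate (frames per second). */
static uint32_t output_block_timeout_ms(double frame_rate)
{
    return (uint32_t)(1000.0 / frame_rate + 0.5);   /* e.g. 25 fps -> 40 ms */
}
```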
The output procedure of the audio synchronization output process described in step 4 above is realized by the following steps: (a) the audio synchronization output process blocks on an output event of the sampled PCM data; (b) hardware interrupts are generated at the rate of the audio hardware output interface, and a frame-output-finished event is sent in the audio interrupt service routine (ISR) to notify the audio output process and release it from blocking; (c) the audio synchronization output process searches the global audio PTS array for the frame whose PTS is closest to the play time produced by the current system clock; (d) the raw PCM data of that frame is located according to the found PTS and output; (e) the data structure storing the frame address and PTS of that frame is marked invalid.
The present invention makes use of the basic principles of the audio/video system layer, works with the message and event mechanism of the operating system, and further exploits the characteristics of the embedded system hardware to accomplish synchronized audio and video output; it has been verified in set-top box applications and has the advantages of a clear software implementation approach and low code complexity.

Claims (12)

1. An audio and video synchronization output method for multi-process control, in which an operating system processes and outputs an audio/video transport stream with multiple processes, the system comprising a ring buffer and a frame buffer, characterized in that the method is realized by the following steps:
Step 1: five processes are created under the operating system: a system-layer demultiplexing process, a video decoding process, an audio decoding process, a video synchronization output process and an audio synchronization output process;
Step 2: the system-layer demultiplexing process demultiplexes the audio/video transport stream packets and unpacks the transport stream data into a video elementary stream, an audio elementary stream and time information; the video elementary stream and the audio elementary stream are delivered to the video decoding process and the audio decoding process respectively for decoding, and the time information is used to update the local system clock;
Step 3: the audio decoding process and the video decoding process decode the data, and send the time information together with the storage-space information of the decoded data to the video synchronization output process and the audio synchronization output process;
Step 4: the video synchronization output process and the audio synchronization output process compare the time information of the decoded data with the system clock, and select and output the data that matches the output time.
2. The audio and video synchronization output method for multi-process control according to claim 1, characterized in that the video elementary stream and the audio elementary stream in step 2 are obtained by filtering according to the packet identifier (PID) and are stored in the ring buffer.
3. The audio and video synchronization output method for multi-process control according to claim 1, characterized in that the updating of the local system clock in step 2 is realized by the following steps: (a) the PCR data is searched for in the demultiplexed video elementary stream and audio elementary stream according to the PID of the program clock reference (PCR); (b) when PCR data is found, it is checked whether a local PCR already exists; if not, the base value and the extension value of the found PCR are stored; if it does exist, the current PCR value is calculated from the existing PCR data and the system time; (c) the current PCR value calculated in step (b) is compared with the PCR data obtained in step (a), and the local system clock is updated only when the difference between them exceeds a threshold specified by the system.
4. The audio and video synchronization output method for multi-process control according to claim 3, characterized in that the local system clock value is accumulated by clock timer interrupts: the timer generates an interrupt every 10 milliseconds, each interrupt adds 90 to the clock value, and when the system clock value is greater than or equal to the maximum value of the PCR it is reset to zero and the accumulation restarts.
5. The audio and video synchronization output method for multi-process control according to claim 1, characterized in that the video decoding process described in step 3 is implemented as follows: (a) the video decoding process blocks on the message queue that the demultiplexing process sends to the video decoding process, and is activated by a sent message; (b) the current decoding time is obtained from the system PCR and the local clock; (c) the elementary stream data whose presentation time stamp (PTS) matches the decoding time is searched for in the video elementary stream ring buffer according to the PTS of each elementary stream packet; if matching data is found, it is handed to the decoder for decoding; if not, the process continues to block on the message queue until a new message carries a packet that matches the system decoding time; (d) the raw video frame data obtained after decoding is stored in the frame buffer, and the buffer address together with the PTS carried by the frame is saved in a global video PTS array and passed to the video synchronization output process.
6. The audio and video synchronization output method for multi-process control according to claim 1, characterized in that the audio decoding process described in step 3 is implemented as follows: (a) the audio decoding process blocks on the message queue that the demultiplexing process sends to the audio decoding process, and is activated by a sent message; (b) the current decoding time is obtained from the system PCR and the local clock; (c) the elementary stream data whose PTS matches the decoding time is searched for in the audio elementary stream ring buffer according to the PTS of each elementary stream packet; if matching data is found, it is handed to the decoder for decoding; if not, the process continues to block on the message queue until a new message carries a packet that matches the system decoding time; (d) the audio pulse code modulation (PCM) data obtained after decoding is stored in the frame buffer, and the buffer address together with the PTS carried by the frame is saved in a global audio PTS array and passed to the audio synchronization output process.
7. The audio and video synchronization output method for multi-process control according to claim 5 or 6, characterized in that the video PTS array and the audio PTS array are ring data structures, each element of the ring structure carries a flag indicating whether it is valid, and the data of an element can be updated only when its flag is invalid.
8. The audio and video synchronization output method for multi-process control according to claim 5 or 6, characterized in that the message queue is checked for overflow according to its length; if the message queue is exhausted during sending, a unidirectionally predicted frame or a bidirectionally predicted frame is discarded according to the video frame type, and the message queue entries corresponding to that frame are cleared.
9. The audio and video synchronization output method for multi-process control according to claim 1, characterized in that the time information described in step 3 is the PTS of the elementary stream, and the storage-space information comprises the storage start position, end position and start code of the decoded video elementary stream and audio elementary stream data.
10. The audio and video synchronization output method for multi-process control according to claim 1, characterized in that the output procedure of the video synchronization output process described in step 4 is realized by the following steps: (a) the video synchronization output process blocks on a video output event; (b) since each output video frame generates a hardware interrupt, a frame-output-finished event is sent in the video interrupt service routine (ISR) to notify the video synchronization output process and release it from blocking; (c) the video synchronization output process searches the global video PTS array for the frame whose PTS is closest to the play time produced by the current system clock; (d) the raw video data of that frame is located according to the found PTS and output; (e) the data structure storing the frame address and PTS of that frame is marked invalid.
11. The audio and video synchronization output method for multi-process control according to claim 10, characterized in that the blocking time of the video synchronization output process can be calculated from the output frame rate.
12. The audio and video synchronization output method for multi-process control according to claim 1, characterized in that the output procedure of the audio synchronization output process described in step 4 is realized by the following steps: (a) the audio synchronization output process blocks on an output event of the sampled PCM data; (b) hardware interrupts are generated at the rate of the audio hardware output interface, and a frame-output-finished event is sent in the audio interrupt service routine (ISR) to notify the audio output process and release it from blocking; (c) the audio synchronization output process searches the global audio PTS array for the frame whose PTS is closest to the play time produced by the current system clock; (d) the raw PCM data of that frame is located according to the found PTS and output; (e) the data structure storing the frame address and PTS of that frame is marked invalid.
CNA2007101724161A 2007-12-17 2007-12-17 An audio and video synchronization output method for multi-process control Pending CN101188770A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CNA2007101724161A CN101188770A (en) 2007-12-17 2007-12-17 An audio and video synchronization output method for multi-process control

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CNA2007101724161A CN101188770A (en) 2007-12-17 2007-12-17 An audio and video synchronization output method for multi-process control

Publications (1)

Publication Number Publication Date
CN101188770A (en) 2008-05-28

Family

ID=39480915

Family Applications (1)

Application Number Title Priority Date Filing Date
CNA2007101724161A Pending CN101188770A (en) 2007-12-17 2007-12-17 An audio and video synchronization output method for multi-process control

Country Status (1)

Country Link
CN (1) CN101188770A (en)

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101674486B (en) * 2009-09-29 2013-05-08 深圳市融创天下科技股份有限公司 Streaming media audio and video synchronization method and system
CN101710958B (en) * 2009-12-02 2015-11-25 北京中星微电子有限公司 The method and apparatus of a kind of audio frequency and video equipment complex and audio-visual synchronization thereof
CN101710958A (en) * 2009-12-02 2010-05-19 北京中星微电子有限公司 Audio and video composite device and method and device for synchronizing audio and video thereof
CN101984672A (en) * 2010-11-03 2011-03-09 深圳芯邦科技股份有限公司 Method and device for multi-thread video and audio synchronous control
CN101984672B (en) * 2010-11-03 2012-10-17 深圳芯邦科技股份有限公司 Method and device for multi-thread video and audio synchronous control
CN103325398A (en) * 2012-03-23 2013-09-25 腾讯科技(深圳)有限公司 Animation playing method and device
CN103325398B (en) * 2012-03-23 2016-03-23 腾讯科技(深圳)有限公司 A kind of animation playing method and device
CN103491426A (en) * 2013-08-31 2014-01-01 中山大学 Video-on-demand system of IPTV
CN103581742A (en) * 2013-10-28 2014-02-12 南京熊猫电子股份有限公司 Method for converting encrypted audio stream into PCM codes on high-definition set top box
CN103747316B (en) * 2013-12-23 2018-04-06 乐视致新电子科技(天津)有限公司 A kind of audio and video synchronization method and electronic equipment
CN103747316A (en) * 2013-12-23 2014-04-23 乐视致新电子科技(天津)有限公司 Audio and video synchronizing method and electronic device
CN105323596B (en) * 2014-06-30 2019-04-05 惠州市伟乐科技股份有限公司 It is most according to the system and method re-synchronized in a kind of TS stream program
CN105306948A (en) * 2015-11-28 2016-02-03 讯美电子科技有限公司 Method and system for decoding multi-process video
US10886948B2 (en) 2016-06-14 2021-01-05 Huawei Technologies Co., Ltd. Method for determining a decoding task and apparatus
WO2017215516A1 (en) * 2016-06-14 2017-12-21 华为技术有限公司 Method and apparatus for determining decoding task
CN106875952A (en) * 2016-12-23 2017-06-20 伟乐视讯科技股份有限公司 The soft encoding mechanism of MCVF multichannel voice frequency based on FPGA embedded systems
CN108882010A (en) * 2018-06-29 2018-11-23 深圳市九洲电器有限公司 A kind of method and system that multi-screen plays
CN109618198A (en) * 2018-12-10 2019-04-12 网易(杭州)网络有限公司 Live content reports method and device, storage medium, electronic equipment
CN110290422A (en) * 2019-06-13 2019-09-27 浙江大华技术股份有限公司 Timestamp stacking method, device, filming apparatus and storage device
CN110290422B (en) * 2019-06-13 2021-09-10 浙江大华技术股份有限公司 Timestamp superposition method and device, shooting device and storage device
CN113014981A (en) * 2019-12-19 2021-06-22 海信视像科技股份有限公司 Video playing method and device, electronic equipment and readable storage medium
CN112770165A (en) * 2020-12-28 2021-05-07 杭州电子科技大学 Distributed synchronization method for audio and video streams

Similar Documents

Publication Publication Date Title
CN101188770A (en) An audio and video synchronization output method for multi-process control
JP5133567B2 (en) Codec change method and apparatus
US7120168B2 (en) System and method for effectively performing an audio/video synchronization procedure
CA2278376C (en) Method and apparatus for adaptive synchronization of digital video and audio playback in a multimedia playback system
CN101889451B (en) Systems and methods of reducing media stream delay through independent decoder clocks
JP4837744B2 (en) Multiplexer, integrated circuit, multiplexing method, multiplexed program, computer-readable recording medium recording the multiplexed program, and computer-readable recording medium recording the multiplexed stream
JP2012532568A5 (en)
CN201781583U (en) Multichannel server video playback synchronous control system
DK2665268T3 (en) A method and apparatus for encoding video, and method and apparatus for decoding video
WO2013185515A1 (en) Video coding system and method
KR20040037147A (en) Robust method for recovering a program time base in MPEG-2 transport streams and achieving audio/video synchronization
CN103458271A (en) Audio-video file splicing method and audio-video file splicing device
CN106303379A (en) A kind of video file backward player method and system
KR100744309B1 (en) Digital Video Stream Transmitting System Adapting SVC and Its Transmitting Method
CN106470291A (en) Recover in the interruption in time synchronized from audio/video decoder
WO1999014955A1 (en) Seamless splicing of compressed video programs
JP6051847B2 (en) Video information reproduction method and system
GB2515362A (en) Decoding frames
JP2009049506A5 (en)
US20100332591A1 (en) Media distribution switching method, receiving device and transmitting device
CN1589014A (en) Video frequency decoding control method and device
JP4373283B2 (en) Video / audio decoding method, video / audio decoding apparatus, video / audio decoding program, and computer-readable recording medium recording the program
JP2008135989A (en) Video reproduction system, synchronization method for video reproduction and video reproduction terminal
JP4318838B2 (en) Transmission system control method and apparatus
CN100459716C (en) Decoding error restoring method for MPEG4 decoder

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Open date: 20080528