CN114286149A - Method and system for synchronously rendering audio and video across equipment and system


Info

Publication number
CN114286149A
CN114286149A (application CN202111656282.7A)
Authority
CN
China
Prior art keywords
rendering
audio
video
rts
time stamp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111656282.7A
Other languages
Chinese (zh)
Other versions
CN114286149B (en)
Inventor
龙仕强
张伟民
肖铁军
陈智敏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Bohua Ultra Hd Innovation Center Co ltd
Original Assignee
Guangdong Bohua Ultra Hd Innovation Center Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Bohua Ultra Hd Innovation Center Co ltd filed Critical Guangdong Bohua Ultra Hd Innovation Center Co ltd
Priority to CN202111656282.7A priority Critical patent/CN114286149B/en
Publication of CN114286149A publication Critical patent/CN114286149A/en
Application granted granted Critical
Publication of CN114286149B publication Critical patent/CN114286149B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • Y — GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 — TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D — CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 — Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method and system for synchronous audio and video rendering across devices and systems. A code stream distribution server provides Network Time Protocol (NTP) time service to the connected audio and video decoding and rendering devices, distributes the audio-video code stream to the video decoding and rendering device, and simultaneously extracts the audio code stream and distributes it to the audio decoding and rendering device. After the video decoding and rendering device starts decoding, it generates a Rendering Time Stamp (RTS) and transmits it to the audio decoding and rendering device through the code stream distribution server; upon receiving the RTS, the audio decoding and rendering device renders synchronously according to the RTS information. On the basis of the transport-stream Decoding Time Stamp (DTS) and display time stamp (PTS), the invention adds an RTS parameter: the video rendering device transmits the RTS parameter over the network to the audio rendering device, and the audio rendering device adjusts its rendering time according to the received video RTS parameter, so that audio and video can be rendered synchronously on different systems and devices.

Description

Method and system for synchronously rendering audio and video across equipment and system
Technical Field
The invention belongs to the field of digital television broadcasting and provides a method and system for synchronous audio and video rendering across devices and systems, for the case in which digital-television video and audio are rendered by different devices.
Background
With the development of audio-visual transmission technology and changes in the media ecology, audiences demand ever greater clarity and realism from video, and video technology is evolving from high definition to 4K/8K ultra-high definition. With the growth of ultra-high-definition and internet technology, and to better serve audiences' cultural consumption in the new media ecology, more audio-video presentation technologies are needed to meet their diversified demands. In February 2021, the 8K live-broadcast test of the central radio and television station used a variety of existing presentation technologies and modes to broadcast ultra-high-definition 8K programs live on indoor and outdoor public large screens, earning praise from the industry and audiences.
A public large screen is located in an open space where ambient noise is relatively complex. While the large-screen playback system gives audiences visual enjoyment, each audience member needs an independent sound-presentation channel to improve the auditory experience, so the audio system of a public large screen provides a function for mobile phones to receive synchronously broadcast audio. Video is presented on the public large screen, while audio is distributed over the Internet to mobile phones or other dedicated audio rendering devices. In existing home-television, cinema, and sound systems, audio and video decoding and rendering are completed in the same device, or in several locally interconnected devices, and the audio-video synchronization mechanism is handled by the decoding device. A public large-screen system, however, uses heterogeneous, independent audio and video rendering devices. The system connection is shown in fig. 2: a code stream distribution system (server) distributes the audio and video streams over different networks to a video decoder and an audio decoder respectively, so the audio decoder and the video decoder sit on different networks; RTSD denotes rendering timestamp data and NTP denotes the Network Time Protocol. Current synchronization mechanisms cannot satisfy the requirement of audio-video synchronization across heterogeneous networks, and a new method is needed to solve the cross-network synchronization problem.
The difficulty of the above problem is as follows: the audio-video synchronization mechanism of existing decoders synchronizes according to the PTS in the code stream. In a heterogeneous cross-device system, because audio and video are distributed independently, the PTS timelines of the audio and video devices are mutually independent and the devices cannot communicate with each other; decoding and rendering proceed in an independent, free-running mode, so the sound that audiences receive through a mobile phone or other listening device is out of sync with the picture on the large screen.
The significance of solving the problem is as follows: synchronized audio and video content gives audiences a better audio-visual experience, which requires a cross-device, cross-network audio-video synchronization method.
Disclosure of Invention
The invention provides a method for synchronous audio and video rendering across devices and systems. On the basis of the transport-stream Decoding Time Stamp (DTS) and display time stamp (PTS), a Rendering Time Stamp (RTS) parameter is added; the video rendering device transmits the RTS parameter over the network to the audio rendering device, and the audio rendering device adjusts its rendering time according to the received video RTS parameter, so that audio and video can be rendered synchronously on different systems and devices.
The technical scheme of the invention is as follows:
According to one aspect of the invention, a method for synchronous audio and video rendering across devices and systems is provided, comprising the following steps. S1: the code stream distribution server provides time service to the accessed audio rendering devices and video rendering devices. S2: the video rendering device decodes the distributed video, places the decoded data in a rendering buffer, and records the display time stamp (PTS) of the currently decoded frame. S3: each time it renders one frame of image, the video rendering device generates Rendering Time Stamp (RTS) information and a Rendering Time Stamp Data (RTSD) network message, and sends the RTSD message at a specified period. S4: the code stream distribution server forwards the RTSD network message to all audio rendering devices. S5: each time the audio rendering device receives an RTSD network message, it calculates the difference D_rts between the Rendering Time Stamp (RTS) and the display time stamp (PTS) contained in the message, and updates and stores it. S6: after the audio rendering device starts decoding, it caches the decoded audio data together with the corresponding display time stamp (PTS). S7: the audio rendering device calculates, from D_rts and the local time stamp, the rendering time stamp of the audio to be rendered, and looks up the corresponding audio data in the buffer for rendering. S8: the audio rendering device updates the local D_rts according to the latest rendering timestamp data (RTSD) and performs synchronous audio rendering according to the latest D_rts.
Preferably, in the method for synchronous audio and video rendering across devices and systems, in step S1 the time service uses the Network Time Protocol (NTP): after receiving the NTP time, the audio and video rendering devices parse the NTP message and update their local time according to the NTP time information.
Preferably, in the method for synchronous audio and video rendering across devices and systems, in step S3, each time the video rendering device renders one frame of image, a Rendering Timestamp (RTS) is generated from the device Local Timestamp (LTS) information, and at the same time a rendering timestamp data (RTSD) network message containing the display timestamp (PTS) and Rendering Timestamp (RTS) information of the currently rendered frame is generated; the video rendering device returns the RTSD message of the latest rendered frame to the code stream distribution server at regular intervals.
Preferably, in the method for synchronous audio and video rendering across devices and systems, in step S3, the Rendering Timestamp (RTS) may adopt the same data format and timestamp accuracy as the display timestamp (PTS).
Preferably, in the method for synchronous audio and video rendering across devices and systems, in step S8, each time the audio rendering device receives a rendering timestamp data (RTSD) network message it performs step S5 once to calculate D_rts and compares it with the locally stored D_rts information; when the D_rts calculated from the RTSD and the local D_rts differ beyond a specified range, the D_rts information of the audio rendering device is updated and the audio rendering device is notified to re-execute steps S6 and S7.
According to another aspect of the present invention, an application system for synchronous audio and video rendering across devices and systems is provided, comprising a code stream distribution server, a video decoding and rendering device, and an audio decoding and rendering device. The code stream distribution server provides Network Time Protocol (NTP) time service to the accessed audio and video decoding and rendering devices, distributes the audio-video code stream to the video decoding and rendering device, and extracts the audio code stream and distributes it to the audio decoding and rendering device. After the video decoding and rendering device starts decoding, it generates a Rendering Time Stamp (RTS) and transmits it to the audio decoding and rendering device through the code stream distribution server; after receiving the RTS, the audio decoding and rendering device renders synchronously according to the RTS information.
According to the technical scheme of the invention, the beneficial effects are as follows:
the invention provides an audio and video synchronous rendering method aiming at a use scene that the same audio and video code stream is decoded and rendered by independent audio and video decoding equipment on different networks, in particular to a public large-screen audio and video rendering scene, wherein videos are presented through a public large screen, and audios are presented through a mobile phone or other audio receiving equipment.
For a better understanding and appreciation of the concepts, principles of operation, and effects of the invention, reference will now be made in detail to the following examples, taken in conjunction with the accompanying drawings, in which:
drawings
To more clearly illustrate the technical solutions of the invention and of the prior art, the drawings needed in the detailed description are briefly introduced below.
FIG. 1 is a detailed flow diagram of the method of audio video synchronized rendering across devices and systems of the present invention;
fig. 2 is a diagram of an application system of the present invention.
Detailed Description
In order to make the objects, technical means and advantages of the present invention more apparent, the present invention will be described in detail with reference to the accompanying drawings and specific examples. These examples are merely illustrative and not restrictive of the invention.
Fig. 1 is a detailed flowchart of a method for audio-video synchronous rendering across devices and systems of the present invention, comprising the following steps.
S1: The code stream distribution server provides time service to the accessed audio and video rendering devices. The time service uses the Network Time Protocol (NTP): after receiving the NTP time, the audio and video rendering devices parse the NTP message and update their local time according to the NTP time information.
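The offset arithmetic behind this NTP step can be sketched as follows; the function names and sample timestamps are illustrative, not part of the patent.

```python
# Hypothetical sketch of the clock-correction part of S1: given the four
# timestamps of one NTP exchange (as in RFC 5905), compute the offset the
# rendering device applies to its local clock.

def ntp_offset(t0: float, t1: float, t2: float, t3: float) -> float:
    """t0: client send, t1: server receive, t2: server send, t3: client
    receive. Returns the estimated server-clock offset relative to the
    client, assuming a symmetric network path."""
    return ((t1 - t0) + (t2 - t3)) / 2.0

def ntp_delay(t0: float, t1: float, t2: float, t3: float) -> float:
    """Round-trip network delay of the same exchange."""
    return (t3 - t0) - (t2 - t1)

# Example: server clock 0.5 s ahead of the device, ~40 ms each way.
t0, t1, t2, t3 = 100.000, 100.540, 100.541, 100.081
offset = ntp_offset(t0, t1, t2, t3)   # ≈ 0.5 s
local_time = t3 + offset              # device's corrected local time
```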
S2: the video rendering apparatus decodes the distributed video, puts the decoded data into a rendering buffer, and records a display time stamp (PTS) of a currently decoded frame (i.e., image frame).
S3: Each time it renders one frame of image, the video rendering device generates RTS information and a Rendering Time Stamp Data (RTSD) network message, and sends the RTSD message at a specified period. Specifically, for every rendered frame an RTS is generated from the device Local Timestamp (LTS) information, and at the same time an RTSD network message containing the PTS and RTS information of the currently rendered frame is generated; the video rendering device returns the RTSD message of the latest rendered frame to the code stream distribution server at regular intervals. The RTS may adopt a data format and timestamp accuracy identical to the PTS.
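The patent specifies what an RTSD message carries (the PTS and RTS of the latest rendered frame) but does not fix a wire layout, so the sketch below assumes a minimal layout of its own invention:

```python
# Illustrative RTSD packing only: the magic value, field widths, and
# byte order are assumptions, not taken from the patent.
import struct

RTSD_FMT = "!4sQQ"  # magic, PTS (90 kHz ticks), RTS (90 kHz ticks)

def pack_rtsd(pts: int, rts: int) -> bytes:
    """Build one RTSD network message for the frame just rendered."""
    return struct.pack(RTSD_FMT, b"RTSD", pts, rts)

def unpack_rtsd(msg: bytes):
    """Parse an RTSD message back into its (pts, rts) pair."""
    magic, pts, rts = struct.unpack(RTSD_FMT, msg)
    assert magic == b"RTSD"
    return pts, rts

msg = pack_rtsd(pts=900_000, rts=905_400)  # e.g. frame rendered 60 ms late
assert unpack_rtsd(msg) == (900_000, 905_400)
```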
S4: and the code stream distribution server forwards the RTSD network message to all the audio rendering devices. Specifically, each time the code stream distribution server receives an RTSD network packet, the code stream distribution server performs broadcast, multicast or other network communication protocols to all the accessed audio rendering devices.
S5: when the audio rendering equipment receives one RTSD network message, the difference D between RTS and PTS in the RTSD network message is calculatedrtsAnd update and save, i.e. D after calculationrtsThe update is saved to the audio rendering device (i.e., the update is saved locally).
Taking the PTS format as an example, D_rts is calculated as follows:
① VT_pts = ((D_pts1 & 0x0e) << 29) + ((D_pts2 & 0xff) << 22) + ((D_pts3 & 0xfe) << 15) + ((D_pts4 & 0xff) << 7) + ((D_pts5 & 0xfe) >> 1)
where VT_pts is the video display time calculated from the PTS value, D_pts1 to D_pts5 are the byte contents corresponding to the PTS in the transport stream, and the prefix 0x indicates hexadecimal notation.
② VT_rts = ((D_rts1 & 0x0e) << 29) + ((D_rts2 & 0xff) << 22) + ((D_rts3 & 0xfe) << 15) + ((D_rts4 & 0xff) << 7) + ((D_rts5 & 0xfe) >> 1)
where VT_rts is the video rendering time calculated from the RTS value, and D_rts1 to D_rts5 are the corresponding byte contents in the RTSD network message.
③ D_rts = VT_pts − VT_rts
that is, D_rts is the difference between VT_pts and VT_rts.
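The three formulas above transcribe directly into code. The sketch below assumes, as the surrounding text implies, that each masked byte is shifted before the terms are summed; the helper names and sample byte values are illustrative.

```python
# Direct transcription of formulas ①-③ as given in the text.

def timestamp_value(b):
    """Apply the document's byte formula to a 5-byte PTS/RTS field."""
    b1, b2, b3, b4, b5 = b
    return (((b1 & 0x0e) << 29) + ((b2 & 0xff) << 22) +
            ((b3 & 0xfe) << 15) + ((b4 & 0xff) << 7) +
            ((b5 & 0xfe) >> 1))

def d_rts(pts_bytes, rts_bytes):
    """Formula ③: display time minus rendering time."""
    return timestamp_value(pts_bytes) - timestamp_value(rts_bytes)

# Identical bytes on both sides -> the difference is zero.
assert d_rts([0x0e, 0xff, 0xfe, 0xff, 0xfe],
             [0x0e, 0xff, 0xfe, 0xff, 0xfe]) == 0
```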
S6: after the audio rendering device starts decoding, the decoded audio data and the corresponding PTS are buffered.
S7: and the audio rendering device calculates the PTS of the audio to be rendered according to the Drts and the local timestamp LTS, and inquires corresponding audio data in the buffer area for rendering. The calculation formula taking the PTS format as an example is as follows:
① T_lts = ((D_lts1 & 0x0e) << 29) + ((D_lts2 & 0xff) << 22) + ((D_lts3 & 0xfe) << 15) + ((D_lts4 & 0xff) << 7) + ((D_lts5 & 0xfe) >> 1)
where T_lts is the local time calculated from the LTS value, and D_lts1 to D_lts5 are the byte contents corresponding to the local time.
② AT_pts = T_lts + D_rts
where AT_pts is the audio rendering time calculated from the local time T_lts and the difference D_rts.
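A minimal sketch of this lookup step, under the assumption that the audio buffer holds (PTS, frame) pairs and that frames farther than some tolerance from AT_pts are skipped; the tolerance value and names are assumptions, not from the patent.

```python
# Hedged sketch of S7: compute AT_pts = T_lts + D_rts and pick the buffered
# audio frame whose PTS is closest to it.

def audio_frame_to_render(buffer, t_lts, d_rts, max_skew=4500):
    """Return the buffered frame nearest AT_pts, or None when every
    candidate is farther than max_skew ticks (50 ms at 90 kHz, assumed)."""
    at_pts = t_lts + d_rts                      # formula ② above
    if not buffer:
        return None
    pts, frame = min(buffer, key=lambda e: abs(e[0] - at_pts))
    return frame if abs(pts - at_pts) <= max_skew else None

buf = [(90_000, "frame-a"), (93_600, "frame-b"), (97_200, "frame-c")]
assert audio_frame_to_render(buf, t_lts=95_000, d_rts=-1_000) == "frame-b"
```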
S8: the audio rendering device updates the local D according to the latest RTSDrtsAnd according to the latest DrtsAnd performing synchronous audio rendering. Specifically, the audio rendering device executes the step S5 once to calculate D each time it receives an RTSD network packetrtsAnd with locally stored DrtsInformation is compared, when D is calculated according to RTSDrts(i.e., D of network message RTSD)rts) And local DrtsOut of the specified range, updating D of the audio rendering devicertsInformation while notifying the rendering apparatus to re-execute steps S6 and S7.
A typical application system for synchronous audio and video rendering across devices and systems according to the invention, shown in fig. 2, comprises: a code stream distribution server (the server (code stream distribution system) in fig. 2), a video decoding and rendering device (the video rendering device/system in fig. 2), and audio decoding and rendering devices (audio rendering device/system 1, 2, and 3 in fig. 2). The code stream distribution server provides NTP time service to the accessed audio and video decoding and rendering devices, distributes the audio-video code stream to the video decoding and rendering device, and extracts the audio code stream and distributes it to the audio decoding and rendering devices. After the video decoding and rendering device starts decoding, an RTS is generated and transmitted to the audio decoding and rendering devices through the code stream distribution server; after receiving the RTS, the audio decoding and rendering devices render synchronously according to the RTS information.
The method for synchronous audio and video rendering across devices and systems adds a Rendering Time Stamp (RTS) parameter on the basis of the transport-stream Decoding Time Stamp (DTS) and display time stamp (PTS); the video rendering device transmits the RTS parameter over the network to the audio rendering device, and the audio rendering device adjusts its rendering time according to the received video RTS parameter, so that audio and video can be rendered synchronously on different systems and devices.
The foregoing description is of the preferred embodiment of the concepts and principles of operation in accordance with the invention. The above-described embodiments should not be construed as limiting the scope of the claims, and other embodiments and combinations of implementations according to the inventive concept are within the scope of the invention.

Claims (6)

1. A method for synchronous audio and video rendering across devices and systems, characterized by comprising the following steps:
s1: the code stream distribution server provides time service to the accessed audio rendering devices and video rendering devices;
s2: the video rendering device decodes the distributed video, places the decoded data into a rendering buffer area, and records a display time stamp (PTS) of a current decoded frame;
s3: the video rendering equipment generates Rendering Time Stamp (RTS) information and Rendering Time Stamp Data (RTSD) network messages every time one frame of image is rendered, and sends the Rendering Time Stamp Data (RTSD) network messages according to a specified period;
s4: the code stream distribution server transmits the rendering timestamp data (RTSD) network message to all audio rendering equipment;
s5: each time the audio rendering device receives a Rendering Time Stamp Data (RTSD) network message, it calculates the difference D_rts between the Rendering Time Stamp (RTS) and the display time stamp (PTS) in the message, and updates and stores it;
s6: after the audio rendering device starts decoding, caching the decoded audio data and a corresponding display time stamp (PTS);
s7: the audio rendering device calculates, from D_rts and the local time stamp, the rendering time stamp of the audio to be rendered, and looks up the corresponding audio data in the buffer for rendering; and
s8: the audio rendering device updates the local D_rts according to the latest rendering timestamp data (RTSD), and performs synchronous audio rendering according to the latest D_rts.
2. The method for synchronous audio and video rendering across devices and systems according to claim 1, wherein in step S1 the time service uses the Network Time Protocol (NTP), and after receiving the NTP time the audio and video rendering devices parse the NTP message and update the local time according to the NTP time information.
3. The method for synchronous audio and video rendering across devices and systems according to claim 1, wherein in step S3, each time the video rendering device renders one frame of image, a Rendering Timestamp (RTS) is generated from the device Local Timestamp (LTS) information, and at the same time a rendering timestamp data (RTSD) network message containing the display timestamp (PTS) and Rendering Timestamp (RTS) information of the currently rendered frame is generated; the video rendering device returns the RTSD message of the latest rendered frame to the code stream distribution server at regular intervals.
4. The method for synchronous audio and video rendering across devices and systems according to claim 1, wherein in step S3 the Rendering Timestamp (RTS) may adopt the same data format and timestamp accuracy as the display timestamp (PTS).
5. The method for synchronous audio and video rendering across devices and systems according to claim 1, wherein in step S8, each time the audio rendering device receives a rendering timestamp data (RTSD) network message it performs step S5 once to calculate D_rts and compares it with the locally stored D_rts information; when the D_rts calculated from the RTSD and the local D_rts differ beyond a specified range, the D_rts information of the audio rendering device is updated and the audio rendering device is notified to re-execute steps S6 and S7.
6. An application system for synchronous audio and video rendering across devices and systems, comprising a code stream distribution server, a video decoding and rendering device, and an audio decoding and rendering device, wherein the code stream distribution server provides Network Time Protocol (NTP) time service to the accessed audio and video decoding and rendering devices, distributes the audio-video code stream to the video decoding and rendering device, and extracts the audio code stream and distributes it to the audio decoding and rendering device; after the video decoding and rendering device starts decoding, it generates a Rendering Time Stamp (RTS) and transmits it to the audio decoding and rendering device through the code stream distribution server; and after receiving the Rendering Time Stamp (RTS), the audio decoding and rendering device renders synchronously according to the RTS information.
CN202111656282.7A 2021-12-31 2021-12-31 Audio and video synchronous rendering method and system of cross-equipment and system Active CN114286149B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111656282.7A CN114286149B (en) 2021-12-31 2021-12-31 Audio and video synchronous rendering method and system of cross-equipment and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111656282.7A CN114286149B (en) 2021-12-31 2021-12-31 Audio and video synchronous rendering method and system of cross-equipment and system

Publications (2)

Publication Number Publication Date
CN114286149A true CN114286149A (en) 2022-04-05
CN114286149B CN114286149B (en) 2023-07-07

Family

ID=80878727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111656282.7A Active CN114286149B (en) 2021-12-31 2021-12-31 Audio and video synchronous rendering method and system of cross-equipment and system

Country Status (1)

Country Link
CN (1) CN114286149B (en)


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120230389A1 (en) * 2011-03-11 2012-09-13 Anthony Laurent Decoder and method at the decoder for synchronizing the rendering of contents received through different networks
CN103024517A (en) * 2012-12-17 2013-04-03 四川九洲电器集团有限责任公司 Method for synchronously playing streaming media audios and videos based on parallel processing
CN105245976A (en) * 2015-09-30 2016-01-13 合一网络技术(北京)有限公司 Method and system for synchronously playing audio and video
WO2016008131A1 (en) * 2014-07-17 2016-01-21 21 Vianet Group, Inc. Techniques for separately playing audio and video data in local networks
CN109088887A (en) * 2018-09-29 2018-12-25 北京金山云网络技术有限公司 A kind of decoded method and device of Streaming Media
CN109361945A (en) * 2018-10-18 2019-02-19 广州市保伦电子有限公司 The meeting audiovisual system and its control method of a kind of quick transmission and synchronization
CN109889907A (en) * 2019-04-08 2019-06-14 北京东方国信科技股份有限公司 A kind of display methods and device of the video OSD based on HTML5
CN111314764A (en) * 2020-03-04 2020-06-19 南方电网科学研究院有限责任公司 Synchronization method of cross-screen animation in distributed rendering environment
CN113225598A (en) * 2021-05-07 2021-08-06 上海一谈网络科技有限公司 Method, device and equipment for synchronizing audio and video of mobile terminal and storage medium


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113965786A (en) * 2021-09-29 2022-01-21 杭州当虹科技股份有限公司 Method for accurately controlling video output and playing
CN113965786B (en) * 2021-09-29 2024-03-26 杭州当虹科技股份有限公司 Method for precisely controlling video output playing
CN115243088A (en) * 2022-07-21 2022-10-25 苏州金螳螂文化发展股份有限公司 Multi-host video frame-level synchronous rendering method

Also Published As

Publication number Publication date
CN114286149B (en) 2023-07-07

Similar Documents

Publication Publication Date Title
CN103200461B (en) A kind of multiple stage playback terminal synchronous playing system and player method
JP4649091B2 (en) Communication terminal, server device, relay device, broadcast communication system, broadcast communication method, and program
CN114286149B (en) Audio and video synchronous rendering method and system of cross-equipment and system
CN103167320B (en) The live client of audio and video synchronization method, system and mobile phone
CN101127917B (en) A method and system for synchronizing Internet stream media format video and audio
JP5827896B2 (en) Dynamic application insertion for MPEG stream switching
EP1487216A2 (en) Device and method for receiving and transmitting digital multimedia broadcasting
CN109168059B (en) Lip sound synchronization method for respectively playing audio and video on different devices
KR20140130218A (en) Frame capture and buffering at source device in wireless display system
KR101841313B1 (en) Methods for processing multimedia flows and corresponding devices
JP2009284282A (en) Content server, information processing apparatus, network device, content distribution method, information processing method, and content distribution system
CN108366283A (en) The media sync playback method of more equipment rooms
JPWO2006027969A1 (en) Transmitting apparatus, relay apparatus, receiving apparatus, and network system including them
JP2012049836A (en) Video/audio output apparatus, video/audio output system and master apparatus
WO2013083133A1 (en) System for multimedia broadcasting
JP2018074480A (en) Reception terminal and program
CN114339290A (en) Large screen management subsystem, large screen synchronous playing system and method
CN113691847A (en) Multi-screen frame synchronization method and device
CN105898233B (en) A kind of audio and video playing method and device in video monitoring
CN203387627U (en) Live broadcast and order system of mobile streaming media
CN111669605B (en) Method and device for synchronizing multimedia data and associated interactive data thereof
US11503385B2 (en) Live broadcast IP latency compensation
US20190116390A1 (en) Method for a primary device
CN112738551A (en) Method and device for smoothly playing video
US20190230414A1 (en) Primary device and companion device communication

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant