CN103546662A - Audio and video synchronizing method in network monitoring system - Google Patents

Audio and video synchronizing method in network monitoring system

Info

Publication number
CN103546662A
Authority
CN
China
Prior art keywords
video
data
audio
audio frequency
rtp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310437082.1A
Other languages
Chinese (zh)
Inventor
孟利民
蒋维
周凯
司徒涨勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201310437082.1A priority Critical patent/CN103546662A/en
Publication of CN103546662A publication Critical patent/CN103546662A/en
Pending legal-status Critical Current

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

An audio and video synchronization method for a network monitoring system includes the following steps: (1) video is compressed into H.264 data by the hardware encoder built into the TW2835 chip, audio is compressed in software into the G.729 format, and both streams are passed to an RTP (Real-time Transport Protocol) library to be packetized and sent; (2) after the IP (Internet Protocol) and UDP (User Datagram Protocol) headers of each packet received from the network are removed, the RTP packet is placed into the audio or video buffer according to the payload type in its RTP header, and the payload data are inserted at the correct position in the buffer according to the sequence-number field of the RTP packet; once reception of network data begins, a suitable number of packets are pre-stored in the audio and video buffers; when both the audio and video streams have filled their pre-storage areas, playback starts; the audio stream serves as the timing master, and audio-video synchronization is achieved by adjusting the video. The method is simple to implement and offers high synchronization accuracy.

Description

Audio and video synchronization method in a network monitoring system
Technical field
The present invention relates to the field of network monitoring, and in particular to an audio and video synchronization method for a network monitoring system.
Background technology
With the advance of video surveillance technology and its widespread use in many fields, audio surveillance has begun to attract attention in a variety of settings. Whether in public security organs or in key facilities such as airports, railways, and banks, a growing number of high-quality security projects urgently require surveillance systems with clear, lifelike synchronized audio and video, and audio surveillance has become a new highlight of the security industry. Adding audio ends the "silent movie" era of video-only monitoring and allows incidents to be controlled comprehensively, assessed accurately, and handled properly. Applying audio to monitoring fills a large gap in the security field and has been a major direction of network monitoring in recent years. However, the audio and video objects in multimedia data have strict temporal relationships, and network transmission destroys this original time-domain relationship, so that the sound and the images in the system cannot be played back in synchrony. Research on, and implementation of, audio-video synchronization technology in network monitoring systems is therefore particularly important. Existing synchronization techniques, however, suffer from problems such as complicated implementation and low synchronization accuracy.
Summary of the invention
To overcome the complicated implementation and low synchronization accuracy of audio-video synchronization in existing network monitoring systems, the present invention provides an audio and video synchronization method for network monitoring systems that is simple to implement and offers higher synchronization accuracy.
The technical solution adopted by the present invention to solve the technical problem is as follows:
An audio and video synchronization method, comprising the following steps:
(1) Video is compressed into H.264 data by the hardware encoder built into the TW2835 chip, and audio is compressed in software into the G.729 format; both streams are then passed to the RTP library to be packetized and sent.
The RTP header contains a sequence number and a timestamp; during transmission, the sequence number of each RTP packet sent increases by one, and the timestamp identifies the capture time of the audio or video data.
(2) After the IP and UDP headers of each packet received from the network are removed, the RTP packet is first placed into the audio or video buffer according to the payload type in its RTP header; the payload data are then inserted at the correct position in the buffer according to the sequence-number field of the RTP packet.
Once reception of network data begins, a suitable number of packets are pre-stored in both the audio and video buffers; when the audio and video streams have filled their pre-storage areas, playback of both starts simultaneously; synchronization uses the audio stream as the timing master, and audio-video synchronization is achieved by adjusting the video.
Further, in step (2), the audio timestamp is used as the relative reference time. After playback starts, audio data are taken from the buffer at a constant rate and fed to the decoder, and the timestamp A_t of the first block of data in the audio buffer is recorded; A_t is then compared with the timestamp V_t of the first block of data in the video buffer, and the difference A_t - V_t determines the push rate of the video data and the playback rate of the video, as follows:
2.1) When -100 ms ≤ A_t - V_t ≤ 100 ms, the audio and video buffers push data at normal speed and the playback speed remains unchanged;
2.2) When 100 ms ≤ |A_t - V_t| ≤ 160 ms, a synchronization adjustment is needed:
① If 100 ms ≤ A_t - V_t ≤ 160 ms, the audio leads the video: the pushing of data from the video buffer is accelerated and the video playback rate is increased, so that the audio and video timestamps converge;
② If -160 ms ≤ A_t - V_t ≤ -100 ms, the audio lags the video: the push rate of the video buffer is reduced and the video playback rate is lowered, so that the audio and video timestamps converge;
2.3) When |A_t - V_t| ≥ 160 ms, re-synchronization is needed:
① If A_t - V_t ≥ 160 ms, the audio severely leads the video: the oldest video packets in the video buffer are discarded until A_t = V_t, and playback then restarts at normal speed;
② If A_t - V_t ≤ -160 ms, the audio severely lags the video: the oldest audio packets in the audio buffer are discarded until A_t = V_t, and playback then restarts at normal speed.
Further, the buffer is a linked list of data nodes. Each media stream has two kinds of data nodes: idle data nodes (FreeDatanode) and in-use data nodes (BusyDatanode). When a new RTP packet is received, a FreeDatanode is requested and becomes a BusyDatanode; the media payload of the RTP packet and the RTP sequence number are written into it, and the BusyDatanode is inserted at the correct position in the buffer according to that sequence number, so that the original temporal relationship of the media data is restored in the buffer. After the data in a BusyDatanode have been fed to the decoder and played, the BusyDatanode becomes a FreeDatanode again. When all FreeDatanodes are in use and the buffer is full, the data in the oldest BusyDatanode are deleted and that node automatically reverts to a FreeDatanode.
The technical concept of the present invention is as follows: a complete network monitoring system comprises the capture and compression of audio/video data, packetization and transmission, network transmission, network reception, and real-time synchronization. According to where these functions reside, the system can be divided into three parts: the device end (capture, compression, packetization, and transmission), the network server (network transmission), and the receiving end (network reception and synchronization), as shown in Fig. 1. The basic procedure is as follows: the receiving end sends a control command that instructs the server to forward the audio/video data of the device end; on receiving the command, the transmitting end captures audio/video data with the audio/video capture chips, compresses them separately (video as H.264, audio as G.729), packs them into RTP packets according to the RTP protocol standard, and sends them to the network server; the network server forwards the data to the receiving end, which uses dynamic buffering, DirectShow, and related techniques to achieve real-time synchronized playback of the on-site audio and video.
The beneficial effects of the present invention are mainly: 1) audio and video are compressed with H.264 and G.729 respectively, two widely used high-compression-ratio algorithms that effectively save bandwidth and improve network transmission efficiency; 2) the RTP transport protocol provides strong real-time behavior, so the remote scene can be reproduced in real time over the network; 3) audio-video synchronization adjustments are short and synchronization accuracy is high, with a maximum out-of-sync interval of 160 ms; 4) the synchronization scheme has low implementation complexity, which greatly simplifies the audio-video synchronization algorithm and improves efficiency.
Brief description of the drawings
Fig. 1 is the architecture diagram of the network monitoring system.
Fig. 2 is a schematic diagram of the audio/video data encapsulation process.
Fig. 3 is a schematic diagram of the dynamic buffer.
Fig. 4 is a schematic diagram of the audio playback thread.
Fig. 5 is a schematic diagram of the video playback thread.
Detailed description of the embodiments
The invention is further described below with reference to the accompanying drawings.
With reference to Figs. 1-5, an audio and video synchronization method in a network monitoring system comprises the following steps:
(1) Video is compressed into H.264 data by the hardware encoder built into the TW2835 chip, and audio is compressed in software into the G.729 format; both streams are then passed to the RTP library to be packetized and sent.
The RTP header contains a sequence number and a timestamp; during transmission, the sequence number of each RTP packet sent increases by one, and the timestamp identifies the capture time of the audio or video data.
(2) After the IP and UDP headers of each packet received from the network are removed, the RTP packet is first placed into the audio or video buffer according to the payload type in its RTP header; the payload data are then inserted at the correct position in the buffer according to the sequence-number field of the RTP packet.
Once reception of network data begins, a suitable number of packets are pre-stored in both the audio and video buffers; when the audio and video streams have filled their pre-storage areas, playback of both starts simultaneously; synchronization uses the audio stream as the timing master, and audio-video synchronization is achieved by adjusting the video.
At the monitoring site, the audio and video data of the device end are captured by the WM8731 chip and the TW2835 chip, respectively. The video is then compressed into H.264 data by the hardware encoder built into the TW2835 chip, and the audio is compressed in software into the G.729 format. Both streams are then passed to the RTP library to be packetized and sent; the encapsulation process is shown in Fig. 2.
In the synchronization procedure, the sequence-number and timestamp fields of the RTP header are particularly important. During transmission, the sequence number of each RTP packet sent increases by one, which allows the receiving end to sort the packets and restore their original temporal order, thereby overcoming the effects of network congestion, server delay, and the like on the packets. The timestamp is even more important: it identifies the capture time of the audio/video data and is the most important quantity in synchronization control. All of this information is filled in when the audio/video data are encapsulated into RTP packets.
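For concreteness, the following is a minimal sketch of how the RTP fixed header defined in RFC 3550 might be parsed at the receiving end to obtain the payload type, sequence number, and timestamp used by this method. The structure and the helper function rtp_parse_header() are illustrative assumptions; the patent itself does not specify an implementation.

#include <stdint.h>
#include <stddef.h>

typedef struct {
    uint8_t  version;          /* always 2 for RTP                                */
    uint8_t  payload_type;     /* PT value: selects the audio or the video buffer */
    uint16_t sequence_number;  /* increases by one for each packet sent           */
    uint32_t timestamp;        /* capture time of the audio/video data            */
    uint32_t ssrc;             /* synchronization source identifier               */
} RtpHeader;

/* Parse the 12-byte RTP fixed header from a received UDP payload.
 * Returns 0 on success, -1 if the buffer is too short or not RTP version 2. */
static int rtp_parse_header(const uint8_t *buf, size_t len, RtpHeader *h)
{
    if (len < 12)
        return -1;
    h->version         = (uint8_t)(buf[0] >> 6);
    h->payload_type    = (uint8_t)(buf[1] & 0x7F);   /* marker bit stripped */
    h->sequence_number = (uint16_t)((buf[2] << 8) | buf[3]);
    h->timestamp       = ((uint32_t)buf[4] << 24) | ((uint32_t)buf[5] << 16) |
                         ((uint32_t)buf[6] << 8)  |  (uint32_t)buf[7];
    h->ssrc            = ((uint32_t)buf[8] << 24) | ((uint32_t)buf[9] << 16) |
                         ((uint32_t)buf[10] << 8) |  (uint32_t)buf[11];
    return (h->version == 2) ? 0 : -1;
}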
The main role of the network server in the synchronization procedure is to relay signaling and data.
Under the influence of network delay, packet loss, and other factors, RTP packets may arrive at the receiving end out of order; a video packet may even arrive while its corresponding audio packet is still in transit. For this reason, the receiving end maintains one dynamic buffer for audio and another for video, as shown in Fig. 3. After the IP and UDP headers of each packet received from the network are removed, the RTP packet is first placed into the audio or video buffer according to the payload type (PT value) in its RTP header, and the payload data are then inserted at the correct position in the buffer according to the sequence-number field of the RTP packet. In the actual design, the buffer is a linked list of nodes, implemented as follows: 1) each media stream has two kinds of data nodes, idle data nodes (FreeDatanode) and in-use data nodes (BusyDatanode); 2) when a new RTP packet is received, a FreeDatanode is requested and becomes a BusyDatanode, the media payload and the RTP sequence number are written into it, and the node is inserted at the correct position in the buffer according to that sequence number, restoring the original temporal relationship of the media data; 3) after the data in a BusyDatanode have been fed to the decoder and played, the BusyDatanode becomes a FreeDatanode again; when all FreeDatanodes are in use and the buffer is full, the data in the oldest BusyDatanode are deleted and that node automatically reverts to a FreeDatanode. Fig. 3 also shows the concrete structure of a Datanode: in addition to the data field, it carries four fields, Len, Key, SequNum, and Timestamp, which respectively indicate the length of the data segment, whether the data belong to a video key frame, the sequence number of the packet, and the timestamp.
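The sketch below illustrates one way the dynamic buffer just described could be implemented in C, using the Datanode fields named above. The list layout, buffer size, and function name are assumptions rather than the patent's own code.

#include <stdint.h>
#include <string.h>

#define MAX_PAYLOAD 1500   /* assumed maximum RTP payload size */

typedef struct Datanode {
    uint16_t Len;                  /* length of the data segment                */
    uint8_t  Key;                  /* 1 if the data belong to a video key frame */
    uint16_t SequNum;              /* RTP sequence number                       */
    uint32_t Timestamp;            /* RTP timestamp                             */
    uint8_t  data[MAX_PAYLOAD];    /* media payload                             */
    struct Datanode *next;
} Datanode;

typedef struct {
    Datanode *free_list;           /* pool of FreeDatanode                      */
    Datanode *busy_list;           /* BusyDatanode list, ordered by SequNum     */
} MediaBuffer;

/* Move a FreeDatanode to the busy list: fill it from an RTP packet and insert
 * it at the position given by its sequence number.  If no FreeDatanode is
 * available (buffer full), the oldest BusyDatanode is recycled first. */
static int buffer_insert(MediaBuffer *b, uint16_t seq, uint32_t ts, uint8_t key,
                         const uint8_t *payload, uint16_t len)
{
    if (b->free_list == NULL) {
        Datanode *oldest = b->busy_list;
        if (oldest == NULL)
            return -1;
        b->busy_list = oldest->next;        /* oldest BusyDatanode ...          */
        oldest->next = b->free_list;
        b->free_list = oldest;              /* ... reverts to a FreeDatanode    */
    }

    Datanode *n = b->free_list;             /* FreeDatanode becomes BusyDatanode */
    b->free_list = n->next;
    n->SequNum   = seq;
    n->Timestamp = ts;
    n->Key       = key;
    n->Len       = (uint16_t)((len > MAX_PAYLOAD) ? MAX_PAYLOAD : len);
    memcpy(n->data, payload, n->Len);

    /* Insert in ascending sequence-number order (wrap-around safe) so the
     * original temporal relationship of the media data is restored. */
    Datanode **p = &b->busy_list;
    while (*p != NULL && (int16_t)(uint16_t)((*p)->SequNum - seq) < 0)
        p = &(*p)->next;
    n->next = *p;
    *p = n;
    return 0;
}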
The design of the dynamic buffer is the key step for synchronization at the receiving end: it not only restores the normal playback order within each media stream but also governs the synchronization between the streams. Once reception of network data begins, a suitable number of packets are pre-stored in both the audio and video buffers. The length of the pre-storage area must be large enough to absorb jitter in the media streams yet small enough to meet the real-time requirement and avoid excessive waiting; it is therefore generally limited to 500 ms. When the audio and video streams have filled their pre-storage areas, playback of both starts simultaneously. A suitable reference stream must be chosen for synchronized playback. Because human hearing is more sensitive than vision, pauses or speed changes in fixed-frequency sound are hard to tolerate, and the audio stream also occupies far less bandwidth than the video stream. Synchronization therefore uses the audio stream as the timing master and adjusts the video to achieve audio-video synchronization. Here the audio timestamp is used as the relative reference time. After playback starts, audio data are taken from the buffer at a constant rate and fed to the decoder, and the timestamp A_t of the first block of data in the audio buffer is recorded. A_t is then compared with the timestamp V_t of the first block of data in the video buffer, and the difference A_t - V_t determines the push rate of the video data and the playback rate of the video, as follows:
1) When -100 ms ≤ A_t - V_t ≤ 100 ms, people cannot perceive any audio-video asynchrony; this is the synchronized region. In this case the audio and video buffers push data at normal speed and the playback speed remains unchanged.
2) When 100 ms ≤ |A_t - V_t| ≤ 160 ms, this is the critical synchronization region and an adjustment is needed.
① If 100 ms ≤ A_t - V_t ≤ 160 ms, the audio leads the video: the pushing of data from the video buffer is accelerated and the video playback rate is increased, so that the audio and video timestamps converge.
② If -160 ms ≤ A_t - V_t ≤ -100 ms, the audio lags the video: the push rate of the video buffer is reduced and the video playback rate is lowered, so that the audio and video timestamps converge.
3) When |A_t - V_t| ≥ 160 ms, people can clearly perceive the audio-video asynchrony; this is the out-of-sync region and re-synchronization is needed.
① If A_t - V_t ≥ 160 ms, the audio severely leads the video: the oldest video packets in the video buffer are discarded until A_t = V_t, and playback then restarts at normal speed.
② If A_t - V_t ≤ -160 ms, the audio severely lags the video: the oldest audio packets in the audio buffer are discarded until A_t = V_t, and playback then restarts at normal speed.
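The decision logic above can be summarized in a small function. The sketch below assumes the timestamps have already been converted to milliseconds; the action names are illustrative, not part of the patent.

/* Classification of the difference d = A_t - V_t (in milliseconds). */
typedef enum {
    SYNC_NORMAL,       /*  |d| <= 100 ms  : synchronized region, play normally        */
    SYNC_SPEED_UP,     /*  100 < d < 160  : audio leads, push/play video faster        */
    SYNC_SLOW_DOWN,    /* -160 < d < -100 : audio lags, push/play video slower         */
    SYNC_DROP_VIDEO,   /*  d >= 160 ms    : drop oldest video packets until A_t = V_t  */
    SYNC_DROP_AUDIO    /*  d <= -160 ms   : drop oldest audio packets until A_t = V_t  */
} SyncAction;

static SyncAction sync_decide(long at_ms, long vt_ms)
{
    long d = at_ms - vt_ms;                  /* A_t - V_t */

    if (d >= -100 && d <= 100)
        return SYNC_NORMAL;                  /* synchronized region          */
    if (d > 100 && d < 160)
        return SYNC_SPEED_UP;                /* critical region, audio leads */
    if (d < -100 && d > -160)
        return SYNC_SLOW_DOWN;               /* critical region, audio lags  */
    return (d >= 160) ? SYNC_DROP_VIDEO      /* out-of-sync region:          */
                      : SYNC_DROP_AUDIO;     /* re-synchronize by dropping   */
}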
During system operation, the above synchronization control method maintains the temporal relationship between the audio and video streams well; even over long runs, the audio neither leads nor lags, and no discontinuities appear in the playback of either stream.
The audio playback thread is shown in Fig. 4. When the audio buffer reaches the pre-storage length, the audio playback flag is set to true and the program starts reading data from the buffer. Because the transmitting end samples audio at 8 kHz with 16-bit quantization, audio data are read at 16000 bytes per second; at the same time, the timestamp of each block of audio data is assigned to the variable A_t, which is compared with the video timestamp, and the audio data are fed directly to the decoder for playback.
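A minimal sketch of such a pacing loop is shown below, assuming 8 kHz, 16-bit audio consumed at 16000 bytes per second in 20 ms blocks; audio_buffer_pop() and decoder_feed() are assumed helper functions and the thread structure is illustrative.

#include <stdint.h>
#include <unistd.h>     /* usleep */

#define AUDIO_BYTES_PER_SEC 16000   /* 8 kHz sampling, 16-bit samples */
#define AUDIO_BLOCK_BYTES     320   /* 20 ms of audio at that rate    */

/* Assumed helpers: pop the next in-order block from the audio buffer and
 * hand the data to the decoder for playback; both are placeholders here. */
extern int  audio_buffer_pop(uint8_t *out, int max_len, uint32_t *timestamp);
extern void decoder_feed(const uint8_t *data, int len);

static volatile uint32_t At;        /* audio reference timestamp, read by the video thread */

static void *audio_play_thread(void *arg)
{
    uint8_t block[AUDIO_BLOCK_BYTES];
    (void)arg;

    for (;;) {
        uint32_t ts;
        int len = audio_buffer_pop(block, (int)sizeof block, &ts);
        if (len > 0) {
            At = ts;                /* record the timestamp of this block    */
            decoder_feed(block, len);
        }
        usleep(20000);              /* 320 bytes every 20 ms = 16000 bytes/s */
    }
    return NULL;                    /* not reached */
}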
The video playback thread is similar in flow to the audio playback thread. The difference is that audio playback only needs adjustment during re-synchronization, whereas the adjustment of video playback runs through the entire synchronized playback process: whenever the time difference between the audio and video streams changes, the playback rate of the video stream is adjusted to maintain audio-video synchronization.

Claims (3)

1. An audio and video synchronization method in a network monitoring system, characterized in that the method comprises the following steps:
(1) Video is compressed into H.264 data by the hardware encoder built into the TW2835 chip, and audio is compressed in software into the G.729 format; both streams are then passed to the RTP library to be packetized and sent.
The RTP header contains a sequence number and a timestamp; during transmission, the sequence number of each RTP packet sent increases by one, and the timestamp identifies the capture time of the audio or video data.
(2) After the IP and UDP headers of each packet received from the network are removed, the RTP packet is first placed into the audio or video buffer according to the payload type in its RTP header; the payload data are then inserted at the correct position in the buffer according to the sequence-number field of the RTP packet.
Once reception of network data begins, a suitable number of packets are pre-stored in both the audio and video buffers; when the audio and video streams have filled their pre-storage areas, playback of both starts simultaneously; synchronization uses the audio stream as the timing master, and audio-video synchronization is achieved by adjusting the video.
2. The audio and video synchronization method in a network monitoring system as claimed in claim 1, characterized in that: in step (2), the audio timestamp is used as the relative reference time; after playback starts, audio data are taken from the buffer at a constant rate and fed to the decoder, and the timestamp A_t of the first block of data in the audio buffer is recorded; A_t is then compared with the timestamp V_t of the first block of data in the video buffer, and the difference A_t - V_t determines the push rate of the video data and the playback rate of the video, as follows:
2.1) When -100 ms ≤ A_t - V_t ≤ 100 ms, the audio and video buffers push data at normal speed and the playback speed remains unchanged;
2.2) When 100 ms ≤ |A_t - V_t| ≤ 160 ms, a synchronization adjustment is needed:
① If 100 ms ≤ A_t - V_t ≤ 160 ms, the audio leads the video: the pushing of data from the video buffer is accelerated and the video playback rate is increased, so that the audio and video timestamps converge;
② If -160 ms ≤ A_t - V_t ≤ -100 ms, the audio lags the video: the push rate of the video buffer is reduced and the video playback rate is lowered, so that the audio and video timestamps converge;
2.3) When |A_t - V_t| ≥ 160 ms, re-synchronization is needed:
① If A_t - V_t ≥ 160 ms, the audio severely leads the video: the oldest video packets in the video buffer are discarded until A_t = V_t, and playback then restarts at normal speed;
② If A_t - V_t ≤ -160 ms, the audio severely lags the video: the oldest audio packets in the audio buffer are discarded until A_t = V_t, and playback then restarts at normal speed.
3. The audio and video synchronization method in a network monitoring system as claimed in claim 1 or 2, characterized in that: the buffer is a linked list of data nodes; each media stream has two kinds of data nodes, idle data nodes (FreeDatanode) and in-use data nodes (BusyDatanode); when a new RTP packet is received, a FreeDatanode is requested and becomes a BusyDatanode, the media payload of the RTP packet and the RTP sequence number are written into it, and the BusyDatanode is inserted at the correct position in the buffer according to that sequence number, so that the original temporal relationship of the media data is restored in the buffer; after the data in a BusyDatanode have been fed to the decoder and played, the BusyDatanode becomes a FreeDatanode again; when all FreeDatanodes are in use and the buffer is full, the data in the oldest BusyDatanode are deleted and that node automatically reverts to a FreeDatanode.
CN201310437082.1A 2013-09-23 2013-09-23 Audio and video synchronizing method in network monitoring system Pending CN103546662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310437082.1A CN103546662A (en) 2013-09-23 2013-09-23 Audio and video synchronizing method in network monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310437082.1A CN103546662A (en) 2013-09-23 2013-09-23 Audio and video synchronizing method in network monitoring system

Publications (1)

Publication Number Publication Date
CN103546662A true CN103546662A (en) 2014-01-29

Family

ID=49969689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310437082.1A Pending CN103546662A (en) 2013-09-23 2013-09-23 Audio and video synchronizing method in network monitoring system

Country Status (1)

Country Link
CN (1) CN103546662A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101902649A (en) * 2010-07-15 2010-12-01 浙江工业大学 Audio-video synchronization control method based on H.264 standard
CN103137191A (en) * 2011-11-28 2013-06-05 国际商业机器公司 Programming of phase-change memory cells
CN102547482A (en) * 2011-12-30 2012-07-04 北京锐安科技有限公司 Synchronous playing method of multi-path IP (Internet Protocol) audio-video stream
CN203137191U (en) * 2013-03-27 2013-08-21 谢静琦 Anti-trip vibration prompting shoe

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
司徒涨勇, 孟利民, 黄成君: "Research and Implementation of Multimedia Synchronization Control in a Network Monitoring System", 《电声技术》 (Audio Engineering) *
方立华, 骆似骏: "A Method of Audio and Video Synchronization in a Network Monitoring System", 《电声技术》 (Audio Engineering) *
方立华: "Research and Design of Real-Time Audio/Video Stream Synchronization Technology in Network Monitoring Systems", 《中国优秀硕士论文全文数据库》 (China Master's Theses Full-text Database) *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618786A (en) * 2014-12-22 2015-05-13 深圳市腾讯计算机系统有限公司 Audio/video synchronization method and device
CN104618786B (en) * 2014-12-22 2018-01-05 深圳市腾讯计算机系统有限公司 Audio and video synchronization method and device
CN104581422A (en) * 2015-02-05 2015-04-29 成都金本华科技股份有限公司 Method and device for processing network data transmission
CN104581422B (en) * 2015-02-05 2017-09-15 成都金本华科技股份有限公司 A kind of method and apparatus transmitted for network data
CN104597456A (en) * 2015-02-27 2015-05-06 南通航大电子科技有限公司 Multi-board-card synchronous control method of GNSS signal simulation system
CN104597456B (en) * 2015-02-27 2018-03-30 南通航大电子科技有限公司 A kind of more board synchronisation control means of GNSS signal analogue system
CN106162293A (en) * 2015-04-22 2016-11-23 无锡天脉聚源传媒科技有限公司 A kind of video sound and the method and device of image synchronization
CN106162293B (en) * 2015-04-22 2019-11-08 无锡天脉聚源传媒科技有限公司 A kind of method and device of video sound and image synchronization
CN104869461A (en) * 2015-05-22 2015-08-26 南京创维信息技术研究院有限公司 Video data processing system and method
CN105228028B (en) * 2015-09-18 2018-05-11 南京大学镇江高新技术研究院 A kind of video stream media data distribution based on udp broadcast and pre-cache method
CN105228028A (en) * 2015-09-18 2016-01-06 南京大学镇江高新技术研究院 A kind of video stream media Data dissemination based on udp broadcast and pre-cache method
CN105245976B (en) * 2015-09-30 2016-11-23 合一网络技术(北京)有限公司 Voice & Video synchronizes the method and system play
CN105245976A (en) * 2015-09-30 2016-01-13 合一网络技术(北京)有限公司 Method and system for synchronously playing audio and video
CN105744334A (en) * 2016-02-18 2016-07-06 海信集团有限公司 Method and equipment for audio and video synchronization and synchronous playing
CN108200481B (en) * 2017-12-07 2020-12-15 北京佳讯飞鸿电气股份有限公司 RTP-PS stream processing method, device, equipment and storage medium
CN108200481A (en) * 2017-12-07 2018-06-22 北京佳讯飞鸿电气股份有限公司 A kind of RTP-PS method for stream processing, device, equipment and storage medium
CN108337230A (en) * 2017-12-26 2018-07-27 武汉烽火众智数字技术有限责任公司 A kind of real-time retransmission method of audio and video based on smart mobile phone and system
CN108282685A (en) * 2018-01-04 2018-07-13 华南师范大学 A kind of method and monitoring system of audio-visual synchronization
US11343560B2 (en) 2018-02-11 2022-05-24 Zhejiang Xinsheng Electronic Technology Co., Ltd. Systems and methods for synchronizing audio and video
WO2019153960A1 (en) * 2018-02-11 2019-08-15 Zhejiang Dahua Technology Co., Ltd. Systems and methods for synchronizing audio and video
CN108599774A (en) * 2018-04-26 2018-09-28 郑州云海信息技术有限公司 a kind of compression method, system, device and computer readable storage medium
CN108616767A (en) * 2018-04-28 2018-10-02 青岛海信电器股份有限公司 A kind of audio data transmission method and device
CN108616767B (en) * 2018-04-28 2020-12-29 海信视像科技股份有限公司 Audio data transmission method and device
CN111988674A (en) * 2020-08-18 2020-11-24 广州极飞科技有限公司 Multimedia data transmission method, device, equipment and storage medium
CN112511885A (en) * 2020-11-20 2021-03-16 深圳乐播科技有限公司 Audio and video synchronization method and device and storage medium
CN112511886A (en) * 2020-11-25 2021-03-16 杭州当虹科技股份有限公司 Audio and video synchronous playing method based on audio expansion and contraction
CN113099310A (en) * 2021-04-08 2021-07-09 李蕊男 Real-time media internal video and audio coordination method based on android platform
CN114285513A (en) * 2021-11-22 2022-04-05 杭州当虹科技股份有限公司 Time delay device and method supporting lossless long-time delay of IP signals
CN114285513B (en) * 2021-11-22 2023-10-27 杭州当虹科技股份有限公司 Delay device and method for supporting long-time delay of lossless IP signal
CN115297337A (en) * 2022-08-05 2022-11-04 深圳市野草声学有限公司 Audio transmission method and system during video live broadcasting based on data receiving and transmitting cache
CN115297337B (en) * 2022-08-05 2024-05-28 深圳市野草声学有限公司 Audio transmission method and system based on data transceiving cache during live video broadcast

Similar Documents

Publication Publication Date Title
CN103546662A (en) Audio and video synchronizing method in network monitoring system
CN103237191B (en) The method of synchronized push audio frequency and video in video conference
WO2023024834A9 (en) Game data processing method and apparatus, and storage medium
CN100579238C (en) Synchronous playing method for audio and video buffer
CN101488967B (en) Video transmission method, embedded monitoring terminal and monitoring platform server
US10034037B2 (en) Fingerprint-based inter-destination media synchronization
CA3078998C (en) Embedded appliance for multimedia capture
CN102932676B (en) Self-adaptive bandwidth transmitting and playing method based on audio and video frequency synchronization
CA2951065A1 (en) Synchronizing playback of segmented video content across multiple video playback devices
CN105491393A (en) Method for implementing multi-user live video business
CN109168059B (en) Lip sound synchronization method for respectively playing audio and video on different devices
JP2004525545A (en) Webcast method and system for synchronizing multiple independent media streams in time
CN103856787B (en) Commentary video passing-back live system based on public network and live method of commentary video passing-back live system based on public network
CN109361945A (en) The meeting audiovisual system and its control method of a kind of quick transmission and synchronization
CN101902649A (en) Audio-video synchronization control method based on H.264 standard
CN106791271B (en) A kind of audio and video synchronization method
CN109565466A (en) More equipment room labial synchronization method and apparatus
CN105791735B (en) Method and system for video calling code stream dynamic adjustment
CN103607664B (en) A kind of audio and video synchronization method of embedded multimedia playing system
CN104079870A (en) Video monitoring method and system for single-channel video and multiple-channel audio frequency
CN101202613A (en) Terminal for clock synchronising
WO2017071670A1 (en) Audio and video synchronization method, device and system
KR20070008069A (en) Appratus and method for synchronizing audio/video signal
CN103596033A (en) Method for solving problem of audio and video non-synchronization in multimedia system terminal playback
CN117255236A (en) Audio and video synchronization method for digital visual intercom

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140129