CN102364952A - Method for processing audio and video synchronization in simultaneous playing of a plurality of paths of audio and video - Google Patents

Method for processing audio and video synchronization in simultaneous playing of a plurality of paths of audio and video Download PDF

Info

Publication number
CN102364952A
CN102364952A (application CN201110327166A / CN2011103271660A)
Authority
CN
China
Prior art keywords
audio
video
compressed packet
channel
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2011103271660A
Other languages
Chinese (zh)
Other versions
CN102364952B (en)
Inventor
胡开荆
李群巍
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Wanpeng Digital Intelligence Technology Co ltd
Original Assignee
ZHEJIANG WANPENG NETWORK TECHNOLOGY Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by ZHEJIANG WANPENG NETWORK TECHNOLOGY Co Ltd filed Critical ZHEJIANG WANPENG NETWORK TECHNOLOGY Co Ltd
Priority to CN 201110327166 priority Critical patent/CN102364952B/en
Publication of CN102364952A publication Critical patent/CN102364952A/en
Application granted granted Critical
Publication of CN102364952B publication Critical patent/CN102364952B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to a method for processing audio and video synchronization when a plurality of paths of audio and video are played simultaneously. Conventional audio and video synchronization technology cannot meet the requirement of multi-user communication applications to synchronize a plurality of paths of audio and video at the same time. In the method provided by the invention, each user acquires its own audio and video data, compresses the acquired data into audio and video compressed packets, marks the packets with timestamps, and transmits them to a server; the server decompresses and mixes the audio compressed packets received from each user, records in the mixing result the timestamps corresponding to all the audio compressed packets participating in the mixing, compresses the mixing result into mixed compressed packets, transmits the mixed compressed packets to the clients, and forwards the video compressed packets to the clients directly; and after receiving the mixed compressed packets and the video compressed packets, each client decompresses the mixed compressed packets, plays the decompressed audio data in order, and displays the video frames in the corresponding video compressed packets according to the principle of audio-driven video. With this method, the synchronization relationships between all the audio and the video are preserved intact.

Description

Method for processing audio and video synchronization when multiple channels of audio and video are played simultaneously
Technical field
The invention belongs to the technical field of computer multimedia and relates to a method for processing multiple channels of audio and video after network transmission, specifically a method for handling audio and video synchronization when multiple channels of audio and video are played simultaneously.
Background technology
With the rapid development of Internet broadband technology and multimedia information technology, networked multimedia applications have become an important part of Internet use. In network teleconferencing in particular, because interaction among many participants is involved, multiple channels of audio and video must be played simultaneously. Each audio and video channel then needs to be synchronized, otherwise "lip synchronization" cannot be achieved and the fluency of communication suffers. The traditional audio and video synchronization technique marks each audio and video packet with a timestamp and synchronizes playback according to that timestamp. This approach works only for one channel of audio and one channel of video; it cannot operate normally with multiple audio channels and multiple video channels, and therefore cannot meet the requirement of multi-party communication applications such as video conferencing to synchronize multiple channels of audio and video simultaneously.
Summary of the invention
The object of the invention is to address the deficiencies of the prior art by providing a multi-channel video synchronization method driven by audio playback.
The concrete steps of the method are as follows:
Step (1). Each user acquires its own audio and video data and compresses the audio and video separately. The captured audio data is divided into audio data units of 10 to 120 milliseconds each; every audio data unit is compressed into an audio compressed packet, and every audio compressed packet is marked with the timestamp of the client machine at the moment of capture. Every frame of the video data is compressed into a video compressed packet, and every video compressed packet is marked with the timestamp of the client machine at the moment of capture. All the audio compressed packets and video compressed packets are sent to the server.
The ways in which each user obtains its own audio and video data include capture from a device and reading from a media file. If the data is captured from a device, the timestamp is the moment of capture. If the data is read from a media file, the media-file playback or decompression component can set a timestamp for the data; this timestamp is relative to the start of the media file and is converted into a timestamp referenced to the current machine clock.
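The timestamp handling just described can be illustrated with a short sketch. The Python below is only an illustration under the patent's description; the function and variable names (for example `playback_start_wall_ms`) are hypothetical and not taken from the patent.

```python
import time

def capture_timestamp_ms() -> int:
    """Timestamp for data captured from a device: the client machine's clock at capture time."""
    return int(time.time() * 1000)

def file_timestamp_to_wall_clock_ms(media_relative_ms: int, playback_start_wall_ms: int) -> int:
    """Timestamp for data read from a media file.

    The playback/decompression component supplies a timestamp relative to the start
    of the media file; it is converted to a timestamp referenced to the current
    machine clock by adding the wall-clock time at which playback of the file began.
    """
    return playback_start_wall_ms + media_relative_ms

# Example: an audio unit 2500 ms into the media file, playback having started at wall time t0.
t0 = capture_timestamp_ms()
ts = file_timestamp_to_wall_clock_ms(2500, t0)
```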
Step (2). The server decompresses the audio compressed packets received from each user and mixes them; it then records in the mixing result the timestamps corresponding to all the audio compressed packets that participated in the mixing, compresses the result into mixed compressed packets, and sends them to the clients. The video compressed packets are forwarded to the clients directly.
For N users U1, U2, …, UN, each user contributes one audio channel, giving N audio channels A1, A2, …, AN in total. The server must mix N+1 output channels:
Channel 0: all audio channels, denoted M0;
Channel 1: all audio channels except A1, denoted M1;
Channel 2: all audio channels except A2, denoted M2;
…
Channel N: all audio channels except AN, denoted MN.
Every generated audio channel must have the timestamps of its N or N−1 source audio channels written into it, so each mixed audio packet carries N or N−1 timestamps together with the source audio channel to which each timestamp belongs.
After the N+1 audio channels have been generated, M0 is sent to every user who did not send audio, M1 is sent to U1, M2 to U2, and so on; the audio sent to each user never contains that user's own audio (see the data-structure sketch below).
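As a data-structure sketch only (the patent does not specify a codec, a mixing algorithm, or a packet format; `mix_pcm` and `encode` below are hypothetical placeholders), the following Python shows one way the server could build the N+1 mixes while recording, for each mixed packet, the timestamps of every source audio packet that took part in it:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional, Tuple

@dataclass
class MixedPacket:
    """A mixed, re-compressed audio packet: the payload plus the timestamps of
    every source audio packet that participated in the mix, keyed by user."""
    payload: bytes
    source_timestamps: List[Tuple[str, int]] = field(default_factory=list)  # (user, audio timestamp)

def mix_pcm(channels: List[bytes]) -> bytes:
    # Placeholder mixer: a real implementation would sum PCM samples.
    return b"".join(channels)

def encode(pcm: bytes) -> bytes:
    # Placeholder audio encoder; the patent does not name a codec.
    return pcm

def build_mixes(decoded: Dict[str, Tuple[bytes, int]]) -> Dict[str, MixedPacket]:
    """decoded maps each user to (decoded audio, capture timestamp).

    Returns the N+1 mixes: key "M0" holds every channel and goes to users who
    sent no audio; key u holds everything except user u's own audio.
    """
    def make_mix(excluded: Optional[str]) -> MixedPacket:
        parts = [(u, pcm, ts) for u, (pcm, ts) in decoded.items() if u != excluded]
        payload = encode(mix_pcm([pcm for _, pcm, _ in parts]))
        return MixedPacket(payload, [(u, ts) for u, _, ts in parts])

    mixes: Dict[str, MixedPacket] = {"M0": make_mix(None)}
    for u in decoded:
        mixes[u] = make_mix(u)
    return mixes
```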
Step (3). After receiving the mixed compressed packets and the video compressed packets, each client decompresses the mixed compressed packets, plays the decompressed audio in order, and then displays the video frames in the corresponding video compressed packets according to the principle of audio-driven video.
Each client receives one channel of mixed compressed packets plus the N channels of video compressed packets forwarded by the server. Playback is driven by the audio: every time an audio compressed packet is played, the client records all the timestamps (U, A) contained in that packet. When playing user X's video, the client takes the video timestamp (UX, VX) of the video frame to be played on that channel and, at the same time, the timestamp (UX, AX) of the most recently played audio frame of the same user, and compares VX with AX. If VX is greater than or equal to AX, the video content lies at or after the audio content and the frame can be played. If VX is less than AX then, according to the audio-driven-video principle, the frame's playback moment has not yet arrived, and the client waits for the next playback decision to determine whether the frame can be played.
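A minimal client-side sketch of this audio-driven decision, assuming the mixed packet exposes its (user, timestamp) pairs as described above (all names here are illustrative, not from the patent):

```python
from typing import Dict, Iterable, Optional, Tuple

last_audio_ts: Dict[str, int] = {}   # AX per user, updated as mixed audio is played

def on_mixed_audio_played(source_timestamps: Iterable[Tuple[str, int]]) -> None:
    # Record every (U, A) pair carried by the mixed compressed packet just played.
    for user, audio_ts in source_timestamps:
        last_audio_ts[user] = audio_ts

def can_display_frame(user: str, video_ts: int) -> bool:
    """Decide whether user X's pending video frame (timestamp VX) may be shown.

    Following the comparison described above: the frame is shown when VX >= AX,
    where AX is the most recently played audio timestamp for the same user;
    otherwise the decision is deferred to the next audio playback.
    """
    ax: Optional[int] = last_audio_ts.get(user)
    if ax is None:
        return False   # no audio from this user has been played yet
    return video_ts >= ax
```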
The method uses the audio timestamps as the thread that ties the multiple video channels to the audio, so that every video channel can be "lip synchronized" with the audio. When the server mixes the audio, it does not mark each mixed compressed packet with a single timestamp; instead it preserves the timestamps of all the audio channels that took part in that mixed packet and uses them as the timestamps of the mixed compressed packet, thereby keeping the synchronization relationships between all the audio and video channels intact.
Embodiment
A method for processing audio and video synchronization when multiple channels of audio and video are played simultaneously; its concrete steps are:
Step (1). Each user acquires its own audio and video data and compresses the audio and video separately. The captured audio data is divided into audio data units of 10 to 120 milliseconds each; every audio data unit is compressed into an audio compressed packet, and every audio compressed packet is marked with the timestamp of the client machine at the moment of capture. Every frame of the video data is compressed into a video compressed packet, and every video compressed packet is marked with the timestamp of the client machine at the moment of capture. All the audio compressed packets and video compressed packets are sent to the server.
The ways in which each user obtains its own audio and video data include capture from a device and reading from a media file. If the data is captured from a device, the timestamp is the moment of capture. If the data is read from a media file, the media-file playback or decompression component can set a timestamp for the data; this timestamp is relative to the start of the media file and is converted into a timestamp referenced to the current machine clock.
Video is processed frame by frame: each input frame is compressed with a video encoder and, according to network conditions, cut into pieces of a size suitable for transmission (generally 400 to 1400 bytes), which are sent to the server together with the timestamp of the frame. To let the receiving end reorder packets and detect whether packet loss has occurred during transmission, every audio and video packet carries a sequence number. The sequence number is a 2-byte incrementing counter that restarts from 0 after exceeding its maximum value. To improve the user experience when bandwidth is poor, audio and video data are sent over different connections; when bandwidth is insufficient, the audio connection carries relatively little data compared with the video connection and is therefore more easily protected. Since audio is the main means of interaction and video is, in general, a supplementary one, this keeps the audio relatively smooth and reduces the impact on the user.
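The packetization rules above (frames cut to a transmittable size, a 2-byte incrementing sequence number that wraps to 0) could look like the following sketch; the 1400-byte default and the class name are assumptions, not values fixed by the patent beyond the stated 400-1400-byte range:

```python
from typing import Iterator, List, Tuple

MAX_SEQ = 0xFFFF   # 2-byte sequence number; restarts from 0 after the maximum

def chunk_frame(frame: bytes, chunk_size: int = 1400) -> Iterator[bytes]:
    """Cut one compressed video frame into pieces of a size suitable for
    transmission (roughly 400-1400 bytes, chosen from network conditions)."""
    for offset in range(0, len(frame), chunk_size):
        yield frame[offset:offset + chunk_size]

class Packetizer:
    """Attaches an incrementing 2-byte sequence number and the frame's timestamp
    to every piece; the receiver uses the sequence number to reorder packets
    and to detect loss."""
    def __init__(self) -> None:
        self.seq = 0

    def packetize(self, frame: bytes, timestamp_ms: int) -> List[Tuple[int, int, bytes]]:
        packets = []
        for piece in chunk_frame(frame):
            packets.append((self.seq, timestamp_ms, piece))
            self.seq = (self.seq + 1) & MAX_SEQ   # wrap around after the 2-byte maximum
        return packets
```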
Step (2). The server decompresses the audio compressed packets received from each user and mixes them; it then records in the mixing result the timestamps corresponding to all the audio compressed packets that participated in the mixing, compresses the result into mixed compressed packets, and sends them to the clients. The video compressed packets are forwarded to the clients directly.
For N users U1, U2, …, UN, each user contributes one audio channel, giving N audio channels A1, A2, …, AN in total. The server must mix N+1 output channels:
Channel 0: all audio channels, denoted M0;
Channel 1: all audio channels except A1, denoted M1;
Channel 2: all audio channels except A2, denoted M2;
…
Channel N: all audio channels except AN, denoted MN.
Every generated audio channel must have the timestamps of its N or N−1 source audio channels written into it, so each mixed audio packet carries N or N−1 timestamps together with the source audio channel to which each timestamp belongs. For example, M0 carries (U1, A1), (U2, A2), …, (UN, AN), while M1 carries (U2, A2), (U3, A3), …, (UN, AN).
After the N+1 audio channels have been generated, M0 is sent to every user who did not send audio, M1 is sent to U1, M2 to U2, and so on; the audio sent to each user never contains that user's own audio, which prevents echo from being produced in that user's loudspeaker.
Step (3). After receiving the mixed compressed packets and the video compressed packets, each client decompresses the mixed compressed packets, plays the decompressed audio in order, and then displays the video frames in the corresponding video compressed packets according to the principle of audio-driven video.
Each client receives one channel of mixed compressed packets plus the N channels of video compressed packets forwarded by the server. Playback is driven by the audio: every time an audio compressed packet is played, the client records all the timestamps (U, A) contained in that packet. When playing user X's video, the client takes the video timestamp (UX, VX) of the video frame to be played on that channel and, at the same time, the timestamp (UX, AX) of the most recently played audio frame of the same user, and compares VX with AX. If VX is greater than or equal to AX, the video content lies at or after the audio content and the frame can be played. If VX is less than AX then, according to the audio-driven-video principle, the frame's playback moment has not yet arrived, and the client waits for the next playback decision to determine whether the frame can be played.
Network transmission is highly unpredictable, mainly in two respects: packet reordering and uncertain receive delay. When data is sent over TCP, data sent on different connections may be received in a different order from the order in which it was sent; when data is sent over UDP, the order in which individual packets arrive is likewise not guaranteed. This is the out-of-order characteristic of packets. Whether TCP or UDP is used, the time a packet takes to reach the other computer is uncertain and varies with network transmission quality; it generally fluctuates between 1 millisecond and 500 milliseconds and may even reach several seconds when the network is poor. Because of these two characteristics, the received audio and video data must be sorted and buffered separately. Sorting is based on the sequence number in each packet, and the buffering time is determined by the network delay. The smaller the network delay, the better the network condition, so the amount of buffered audio data can be reduced for better real-time performance. The larger the network delay, the worse the network condition; playback is then paused until the duration of buffered audio data equals the network delay. Although this sacrifices some real-time performance, it improves playback fluency and reduces the stuttering that occurs when the buffer is too short and no data is available once the buffered data has been played.
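The buffering behaviour described here, reorder by sequence number and pause until the buffered audio duration covers the network delay, can be sketched as follows (the packet duration, class name, and method names are assumptions; sequence-number wraparound is ignored for brevity):

```python
import heapq
from typing import List, Optional, Tuple

class AudioJitterBuffer:
    """Reorders incoming audio packets by sequence number and pauses playback
    until the buffered duration equals the measured network delay."""
    def __init__(self, packet_duration_ms: int = 20) -> None:
        self.packet_duration_ms = packet_duration_ms
        self.heap: List[Tuple[int, bytes]] = []   # (sequence number, payload)
        self.paused = True

    def push(self, seq: int, payload: bytes) -> None:
        heapq.heappush(self.heap, (seq, payload))

    def buffered_ms(self) -> int:
        return len(self.heap) * self.packet_duration_ms

    def pop_for_playback(self, network_delay_ms: int) -> Optional[bytes]:
        # A small delay (good network) means little buffering and low latency;
        # a large delay pauses playback until enough audio has accumulated.
        if self.paused and self.buffered_ms() < network_delay_ms:
            return None
        self.paused = False
        if not self.heap:
            self.paused = True   # underrun: buffer again before resuming playback
            return None
        return heapq.heappop(self.heap)[1]
```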

Claims (1)

1. A method for processing audio and video synchronization when multiple channels of audio and video are played simultaneously, characterized in that the concrete steps of the method are:
Step (1). Each user acquires its own audio and video data and compresses the audio and video separately; the captured audio data is divided into audio data units of 10 to 120 milliseconds each, every audio data unit is compressed into an audio compressed packet, and every audio compressed packet is marked with the timestamp of the client machine at the moment of capture; every frame of the video data is compressed into a video compressed packet, and every video compressed packet is marked with the timestamp of the client machine at the moment of capture; all the audio compressed packets and video compressed packets are sent to the server;
the ways in which each user obtains its own audio and video data include capture from a device and reading from a media file; if the data is captured from a device, the timestamp is the moment of capture; if the data is read from a media file, the media-file playback or decompression component can set a timestamp for the data, the timestamp being relative to the start of the media file and converted into a timestamp referenced to the current machine clock;
Step (2). The server decompresses the audio compressed packets received from each user and mixes them; it then records in the mixing result the timestamps corresponding to all the audio compressed packets that participated in the mixing, compresses the result into mixed compressed packets, and sends them to the clients; the video compressed packets are forwarded to the clients directly;
for N users U1, U2, …, UN, each user contributes one audio channel, giving N audio channels A1, A2, …, AN in total; the server must mix N+1 output channels, namely:
Channel 0: all audio channels, denoted M0;
Channel 1: all audio channels except A1, denoted M1;
Channel 2: all audio channels except A2, denoted M2;
…
Channel N: all audio channels except AN, denoted MN;
every generated audio channel must have the timestamps of its N or N−1 source audio channels written into it, so that each mixed audio packet carries N or N−1 timestamps together with the source audio channel to which each timestamp belongs;
after the N+1 audio channels have been generated, M0 is sent to every user who did not send audio, M1 is sent to U1, M2 to U2, and so on, so that the audio sent to each user does not contain that user's own audio;
Step (3). After receiving the mixed compressed packets and the video compressed packets, each client decompresses the mixed compressed packets, plays the decompressed audio in order, and then displays the video frames in the corresponding video compressed packets according to the principle of audio-driven video;
each client receives one channel of mixed compressed packets plus the N channels of video compressed packets forwarded by the server; playback is driven by the audio, that is, every time an audio compressed packet is played, the client records all the timestamps (U, A) contained in that packet; when playing user X's video, the client takes the video timestamp (UX, VX) of the video frame to be played on that channel and, at the same time, the timestamp (UX, AX) of the most recently played audio frame of the same user, and compares VX with AX; if VX is greater than or equal to AX, the video content lies at or after the audio content and the frame can be played; if VX is less than AX then, according to the audio-driven-video principle, the frame's playback moment has not yet arrived, and the client waits for the next playback decision to determine whether the frame can be played.
CN 201110327166 2011-10-25 2011-10-25 Method for processing audio and video synchronization in simultaneous playing of plurality of paths of audio and video Active CN102364952B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN 201110327166 CN102364952B (en) 2011-10-25 2011-10-25 Method for processing audio and video synchronization in simultaneous playing of plurality of paths of audio and video

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN 201110327166 CN102364952B (en) 2011-10-25 2011-10-25 Method for processing audio and video synchronization in simultaneous playing of plurality of paths of audio and video

Publications (2)

Publication Number Publication Date
CN102364952A true CN102364952A (en) 2012-02-29
CN102364952B CN102364952B (en) 2013-12-25

Family

ID=45691502

Family Applications (1)

Application Number Title Priority Date Filing Date
CN 201110327166 Active CN102364952B (en) 2011-10-25 2011-10-25 Method for processing audio and video synchronization in simultaneous playing of plurality of paths of audio and video

Country Status (1)

Country Link
CN (1) CN102364952B (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997021310A2 (en) * 1995-12-07 1997-06-12 Philips Electronics N.V. A method and device for encoding, transferring and decoding a non-pcm bitstream between a digital versatile disc device and a multi-channel reproduction apparatus
CN1878315A (en) * 2006-07-14 2006-12-13 杭州国芯科技有限公司 Video-audio synchronization method
CN101232623A (en) * 2007-01-22 2008-07-30 李会根 System and method for transmitting stereo audio and video numerical coding based on transmission stream

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103428535B (en) * 2013-08-02 2016-11-30 汎惪股份有限公司 Real-time audio-video transmits and transmits and synchronous broadcast method, system and transmitting element thereof
CN103428535A (en) * 2013-08-02 2013-12-04 汎惪股份有限公司 Real-time video transmission method, real-time video transmission and synchronous playing method and system and sending unit for real-time video transmission method
CN103702013A (en) * 2013-11-28 2014-04-02 北京航空航天大学 Frame synchronization method for multiple channels of real-time videos
CN103702013B (en) * 2013-11-28 2017-02-01 北京航空航天大学 Frame synchronization method for multiple channels of real-time videos
CN105187883A (en) * 2015-09-11 2015-12-23 广东威创视讯科技股份有限公司 Data processing method and client equipment
CN105187883B (en) * 2015-09-11 2018-05-29 广东威创视讯科技股份有限公司 A kind of data processing method and client device
US9979997B2 (en) 2015-10-14 2018-05-22 International Business Machines Corporation Synchronization of live audio and video data streams
CN105516090A (en) * 2015-11-27 2016-04-20 刘军 Media play method, device and music teaching system
CN105516090B (en) * 2015-11-27 2019-01-22 刘军 Media playing method, equipment and music lesson system
CN106658030A (en) * 2016-12-30 2017-05-10 上海寰视网络科技有限公司 Method and device for playing composite video comprising single-path audio and multipath videos
CN106658030B (en) * 2016-12-30 2019-07-30 上海寰视网络科技有限公司 A kind of playback method and equipment of the composite video comprising SCVF single channel voice frequency multi-channel video
CN107195308A (en) * 2017-04-14 2017-09-22 苏州科达科技股份有限公司 Sound mixing method, the apparatus and system of audio/video conference system
CN106941613A (en) * 2017-04-14 2017-07-11 武汉鲨鱼网络直播技术有限公司 A kind of compacting of audio frequency and video interflow and supplying system and method
CN108021675A (en) * 2017-12-07 2018-05-11 北京慧听科技有限公司 A kind of automatic segmentation alignment schemes of more equipment recording
CN108021675B (en) * 2017-12-07 2021-11-09 北京慧听科技有限公司 Automatic segmentation and alignment method for multi-equipment recording
CN109120974A (en) * 2018-07-25 2019-01-01 深圳市异度信息产业有限公司 A kind of method and device that audio-visual synchronization plays
CN109600649A (en) * 2018-08-01 2019-04-09 北京微播视界科技有限公司 Method and apparatus for handling data
WO2020024960A1 (en) * 2018-08-01 2020-02-06 北京微播视界科技有限公司 Method and device for processing data
CN109361886A (en) * 2018-10-24 2019-02-19 杭州叙简科技股份有限公司 A kind of conference video recording labeling system based on sound detection
CN111277885A (en) * 2020-03-09 2020-06-12 北京三体云时代科技有限公司 Audio and video synchronization method and device, server and computer readable storage medium
CN111277885B (en) * 2020-03-09 2023-01-10 北京世纪好未来教育科技有限公司 Audio and video synchronization method and device, server and computer readable storage medium
CN113259762A (en) * 2021-04-07 2021-08-13 广州虎牙科技有限公司 Audio processing method and device, electronic equipment and computer readable storage medium
CN113259762B (en) * 2021-04-07 2022-10-04 广州虎牙科技有限公司 Audio processing method and device, electronic equipment and computer readable storage medium
CN114760274A (en) * 2022-06-14 2022-07-15 北京新唐思创教育科技有限公司 Voice interaction method, device, equipment and storage medium for online classroom
CN114760274B (en) * 2022-06-14 2022-09-02 北京新唐思创教育科技有限公司 Voice interaction method, device, equipment and storage medium for online classroom

Also Published As

Publication number Publication date
CN102364952B (en) 2013-12-25

Similar Documents

Publication Publication Date Title
CN102364952B (en) Method for processing audio and video synchronization in simultaneous playing of plurality of paths of audio and video
CN105430537B (en) Synthetic method, server and music lesson system are carried out to multichannel data
TWI568230B (en) Method and system for synchronizing audio and video streams in media relay conferencing
US7814515B2 (en) Digital data delivery system and method of the same
CN102893542B (en) Method and apparatus for synchronizing data in a vehicle
JP4702397B2 (en) Content server, information processing apparatus, network device, content distribution method, information processing method, and content distribution system
CN101465996B (en) Method, equipment and system for displaying network television time
WO2012119465A1 (en) Method and system for transmitting and broadcasting media data in telepresence technology
CN105100954A (en) Interactive response system and method based on Internet communication and streaming media live broadcast
JP2004525545A (en) Webcast method and system for synchronizing multiple independent media streams in time
US20050062843A1 (en) Client-side audio mixing for conferencing
CN104426832A (en) Multi-terminal multichannel independent playing method and device
CN109361945A (en) The meeting audiovisual system and its control method of a kind of quick transmission and synchronization
WO2011050690A1 (en) Method and system for recording and replaying multimedia conference
EP2472799B1 (en) Method, apparatus and system for rapid acquisition of multicast realtime transport protocol sessions
CN105992040A (en) Multichannel audio data transmitting method, audio data synchronization playing method and devices
JP2007274019A5 (en)
CN110267064A (en) Audio broadcast state processing method, device, equipment and storage medium
JP6197211B2 (en) Audiovisual distribution system, audiovisual distribution method, and program
CN108111872B (en) Audio live broadcasting system
CN101237289A (en) Broadcast terminal and method of controlling vibration of broadcast terminal
WO2008028361A1 (en) A method for synchronous playing video and audio data in mobile multimedia broadcasting
WO2017071670A1 (en) Audio and video synchronization method, device and system
WO2008031293A1 (en) A method for quickly playing the multimedia broadcast channels
JP5428734B2 (en) Network device, information processing apparatus, stream switching method, information processing method, program, and content distribution system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
C56 Change in the name or address of the patentee

Owner name: ZHEJIANG WINUPON EDUCATIONAL TECHNOLOGY CO., LTD.

Free format text: FORMER NAME: ZHEJIANG WINUPON NETWORK TECHNOLOGY CO., LTD.

CP03 Change of name, title or address

Address after: Xihu District Hangzhou City, Zhejiang Province, 310013 West Road Wensan No. 118 Hangzhou electronic commerce building room 1406

Patentee after: ZHEJIANG WANPENG EDUCATION SCIENCE AND TECHNOLOGY STOCK CO.,LTD.

Address before: The electronic commerce building, No. 118 Hangzhou West Road, Zhejiang province 310013 city 15 Floor

Patentee before: ZHEJIANG WANPENG NETWORK TECHNOLOGY Co.,Ltd.

CP02 Change in the address of a patent holder
CP02 Change in the address of a patent holder

Address after: 310051 12 / F, building 8, No. 19, Jugong Road, Xixing street, Binjiang District, Hangzhou City, Zhejiang Province

Patentee after: ZHEJIANG WANPENG EDUCATION SCIENCE AND TECHNOLOGY STOCK Co.,Ltd.

Address before: Xihu District Hangzhou City, Zhejiang Province, 310013 West Road Wensan No. 118 Hangzhou electronic commerce building room 1406

Patentee before: ZHEJIANG WANPENG EDUCATION SCIENCE AND TECHNOLOGY STOCK Co.,Ltd.

CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: 12 / F, building 8, No. 19, Jugong Road, Xixing street, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Patentee after: Zhejiang Wanpeng Digital Intelligence Technology Co.,Ltd.

Address before: 12 / F, building 8, No. 19, Jugong Road, Xixing street, Binjiang District, Hangzhou City, Zhejiang Province, 310051

Patentee before: ZHEJIANG WANPENG EDUCATION SCIENCE AND TECHNOLOGY STOCK CO.,LTD.