CN115691516A - Low-delay audio matrix configuration method and server - Google Patents


Info

Publication number
CN115691516A
Authority
CN
China
Prior art keywords
audio
audio data
delay
equipment
output
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202211362509.1A
Other languages
Chinese (zh)
Other versions
CN115691516B (en)
Inventor
黄志军
王子兵
林弟
张常华
朱正辉
赵定金
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Baolun Electronics Co Ltd
Original Assignee
Guangzhou Baolun Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Baolun Electronics Co Ltd filed Critical Guangzhou Baolun Electronics Co Ltd
Priority to CN202211362509.1A
Publication of CN115691516A
Application granted
Publication of CN115691516B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D30/00: Reducing energy consumption in communication networks
    • Y02D30/70: Reducing energy consumption in communication networks in wireless communication networks

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a low-delay audio matrix configuration method and a server. The method comprises the following steps. S1: drive the input device to establish a connection with the output device, and exchange audio data between the input device and the output device. S2: acquire the hardware operation rate, current network delay and audio data packet throughput of each playback device, calculate the amount of audio data to intercept, and drive the output device to intercept the corresponding audio data before sending it to the playback device. In the invention, the hardware operation rate, current network delay and packet throughput of each playback device are fully considered, and the amount to intercept is calculated in real time, so that the data to be processed by devices under different network delay conditions is dynamically intercepted to different degrees. Multiple devices can thus be synchronized, the requirement of synchronous audio playback in professional scenes is met, and the sense of delay caused by the network environment is eliminated.

Description

Low-delay audio matrix configuration method and server
Technical Field
The present invention relates to the field of audio processing technologies, and in particular, to a low-delay audio matrix configuration method and a server.
Background
With the progress of social informatization, audio output requirements for audio matrices in professional scenes (such as medium and large stage conferences, broadcasting rooms, and broadcasting halls) keep rising across industries, including but not limited to accurate output-source routing, freedom from delay, high-quality mixing effects, and complex audio preemption logic.
At present, the prior art only splits audio data packets: a large packet of audio data is divided into multiple small packets, which reduces delay to a certain extent, but delay caused by the network and device environment, such as the network equipment itself, remains unaddressed.
Disclosure of Invention
In order to overcome the disadvantages of the prior art, one object of the present invention is to provide a low-delay audio matrix configuration method that solves the problem of significantly delayed audio data transmission caused by the network environment in professional scenes.
Another object of the present invention is to provide a low-delay audio matrix configuration server that solves the same problem of audio data transmission delay caused by the network environment in professional scenes.
In order to achieve one of the purposes, the technical scheme adopted by the invention is as follows:
a low-delay audio matrix configuration method, comprising the steps of:
s1: driving the input equipment to establish connection with the output equipment, and performing audio data interaction between the input equipment and the output equipment;
s2: acquiring the hardware operation rate, the current network delay condition and the audio data packet processing amount of each playing device, calculating the audio data interception amount, driving the output device to intercept the corresponding audio data, and then sending the audio data to the playing device.
Preferably, S2 is specifically realized by the following steps:
s21: acquiring the current network delay condition of each playing device;
s22: judging whether the current network delay condition is greater than or equal to a preset delay value, if not, driving the output equipment to send the audio data to the playing equipment, and if so, executing S23;
s23: acquiring the hardware operation rate and the audio data packet processing capacity of the playing equipment, and calculating the audio data interception quantity;
s24: and driving the output equipment to intercept the corresponding audio data and then sending the audio data to the playing equipment.
Preferably, the S1 is specifically realized by the following steps:
and sending audio configuration information to the input equipment and the output equipment, and driving the input equipment and the output equipment to establish connection through the audio configuration information to carry out audio data interaction.
Preferably, the following steps are further included between S1 and S2:
a1: driving the input equipment to clear the echo from the audio frame, and performing noise reduction processing, identification and background noise filtration;
a2: judging whether the input equipment acquires audio data currently, if not, encoding to generate a blank encoded frame, and transmitting the blank encoded frame to the output equipment, and if so, executing A3;
a3: and judging whether to perform sound mixing processing, if so, attenuating the voice through an attenuation factor, otherwise, analyzing the audio data into PCM audio stream data, and performing noise detection processing.
In order to achieve the second purpose, the technical scheme adopted by the invention is as follows:
a low-delay audio matrix configuration server comprising a memory and a processor;
a memory for storing program instructions;
a processor for executing the program instructions to perform the low-delay audio matrix configuration method as described above.
Compared with the prior art, the invention has the following beneficial effects: the hardware operation rate, current network delay and audio packet throughput of each playback device are fully considered, and the amount of data to intercept is calculated in real time, so that the data bound for devices under different network delay conditions is dynamically intercepted to different degrees. Multiple devices can thus stay synchronized, meeting the requirement of synchronous audio playback in professional scenes and eliminating the delay caused by the network environment.
Drawings
Fig. 1 is a flowchart of a low-delay audio matrix configuration method according to the present invention.
Detailed Description
The preferred embodiments of the present invention will be described in conjunction with the accompanying drawings, and it will be understood that they are described herein for the purpose of illustration and explanation and not limitation.
In the description of the present invention, it should be noted that the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc., indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, and are only for convenience of description and simplicity of description, but do not indicate or imply that the device or element being referred to must have a particular orientation, be constructed and operated in a particular orientation, and thus, should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meanings of the above terms in the present invention can be understood in a specific case to those of ordinary skill in the art.
The invention will be further described with reference to the accompanying drawings and the detailed description below:
in the present invention, a host (equivalent to a server), an input device and an output device are constructed as an audio matrix, the input device can be a TX-8770 encoder, the output device can be a TX-8704 decoder, and the host, the input device and the output device are all connected to the same network.
The first embodiment is as follows:
as shown in fig. 1, a method for configuring a low-delay audio matrix includes the following steps:
s1: the input equipment is driven to be connected with the output equipment, and audio data interaction is carried out between the input equipment and the output equipment;
specifically, by driving the input device to establish a connection with the output device, the input device transmits audio data collected from an audio source to the output device, and the output device transmits the audio data to the playback device, in this embodiment, the S1 is specifically implemented by the following steps:
and sending audio configuration information to the input equipment and the output equipment, and driving the input equipment and the output equipment to establish connection through the audio configuration information to carry out audio data interaction.
Preferably, after the service of the host (server) starts normally, it sends a UDP broadcast to the network environment. On receiving the broadcast, the input and output devices send an online request protocol to the host. After receiving each device's online request, the host records the device configuration information and maintains healthy heartbeat communication with each device. Device information, matrix configuration, preemption logic configuration and audio configuration are set on the host. Based on the addresses of the input and output devices, the host sends audio configuration information (equivalent to configuration protocol signals) to them: the input device configures itself according to the received information and outputs audio data to the output device, while the output device configures and verifies itself according to the received information and receives the audio data transmitted by the input device. Each channel of the output device then outputs the received audio data to the playback devices.
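The discovery handshake above (host broadcasts, devices answer with an online request, host records their configuration and starts heartbeats) can be sketched as follows. The JSON message format, field names and port number are illustrative assumptions; the patent names only the protocol steps, and the real system would carry these messages over UDP sockets.

```python
import json

# Message formats, field names and the port below are illustrative assumptions;
# the patent describes only the protocol steps, not the wire format.
DISCOVERY_PORT = 49000

def discover_message():
    """Host's UDP broadcast announcing that its service has started."""
    return json.dumps({"type": "discover"}).encode()

def online_request(device_id, role):
    """A device's online request, sent after it hears the broadcast."""
    return json.dumps({"type": "online", "id": device_id, "role": role}).encode()

def register_device(registry, packet, addr):
    """Host side: record the device's configuration info and mark it
    for heartbeat communication."""
    msg = json.loads(packet)
    if msg.get("type") == "online":
        registry[msg["id"]] = {"role": msg["role"], "addr": addr, "heartbeat": True}
    return registry

# Typical exchange: host broadcasts, a TX-8770 encoder replies, host registers it.
registry = {}
register_device(registry, online_request("TX-8770-01", "input"),
                ("192.168.1.20", DISCOVERY_PORT))
```

In a deployment, `discover_message()` would be sent with a broadcast socket and `register_device` called for each datagram received; the dictionary stands in for the host's device table.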
If the host goes offline, the input and output devices can still exchange audio data according to the stored audio configuration information. If an input or output device suffers an abnormal condition such as power failure or network failure, the host resends the audio configuration information after the device restarts, and audio data exchange between the input and output devices returns to normal.
Further, the following steps are also included between S1 and S2:
a1: driving the input equipment to clear the echo from the audio frame, and performing noise reduction processing, identification and background noise filtration;
specifically, the audio data is subjected to conventional audio processing such as echo cancellation, noise suppression, silence detection and the like through an input device, and preferably, according to the audio frame which is just played, operations similar to cancellation are performed in the collected audio frame, so that echoes are removed from the collected audio frame; when noise reduction processing is carried out, the part belonging to the background noise is identified and filtered from the audio frame.
A2: judging whether the input equipment acquires audio data currently, if not, encoding to generate a blank encoded frame, and transmitting the blank encoded frame to the output equipment, and if so, executing A3;
specifically, by combining the silence detection algorithm with the noise suppression algorithm, it can be recognized that no audio source is currently input to the input device, and if not, the input device can output a special encoded frame, such as a blank encoded frame, to be transmitted to the output device, so as to save bandwidth and flow.
A3: and judging whether to perform sound mixing processing, if so, attenuating the voice through an attenuation factor, otherwise, analyzing the audio data into PCM audio stream data, and performing noise detection processing.
Specifically, mixing uses a normalized mixing algorithm (adaptive weighted mixing). Samples of the audio data are represented with more bits (32-bit) during accumulation, and after mixing the amplitude is scaled down so that the result still fits within the 16-bit range. To avoid overflow, the speech is attenuated by a variable attenuation factor, which represents the weight of the speech; because the factor varies with the audio data, this is called adaptive weighted mixing. When overflow occurs, the attenuation factor becomes small, so that the overflowing data falls back within the critical value after attenuation; when no overflow occurs, the factor increases slowly, so that the data changes more gradually.
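The adaptive weighted mixing described above can be sketched as follows: samples are summed in a 32-bit accumulator, and a variable attenuation factor `f` shrinks immediately on overflow and recovers slowly otherwise. The recovery step size is an assumed tuning value; the patent does not specify one.

```python
import numpy as np

INT16_MAX = 32767
INT16_MIN = -32768

def adaptive_weighted_mix(frames, f=1.0, step=0.005):
    """Mix several int16 frames. `f` is the variable attenuation factor
    (the weight of the speech); `step` is an assumed recovery speed."""
    # Accumulate in 32 bits so the raw sum cannot wrap around.
    mixed = np.sum([fr.astype(np.int32) for fr in frames], axis=0)
    out = np.empty(len(mixed), dtype=np.int16)
    for i, s in enumerate(mixed):
        v = s * f
        if v > INT16_MAX:
            f = INT16_MAX / s        # shrink f so attenuated samples fit again
            v = INT16_MAX
        elif v < INT16_MIN:
            f = INT16_MIN / s
            v = INT16_MIN
        else:
            f = min(1.0, f + step)   # no overflow: let f climb back slowly
        out[i] = int(v)
    return out, f
```

Returning `f` lets the caller carry the factor over to the next frame, which is what makes the weighting adaptive across the stream rather than per-frame.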
In the non-mixing path, a deep-learning-based AEC algorithm is adopted, combining traditional signal processing with deep learning, so that residual echo is further suppressed after the adaptive filtering stage of echo cancellation. The program parses the audio into PCM audio stream data, performs a series of noise detection steps, and adaptively gains the sound quality into a stable frequency range, reducing delay in transmission and improving audio quality.
S2: acquiring the hardware operation rate, the current network delay condition and the audio data packet processing amount of each playing device, calculating the audio data interception amount, driving the output device to intercept the corresponding audio data, and then sending the audio data to the playing device.
Specifically, before sending the audio data, the audio data interception amount is calculated from the network delay to each playback device and the device's data processing rate, and low-delay sending is synchronized in real time. In this embodiment, step S2 is specifically implemented by the following steps:
s21: acquiring the current network delay condition of each playing device;
specifically, the current network delay condition may be obtained by directly ping the network delay degree of each terminal device (playing device) stored in the server.
S22: judging whether the current network delay condition is greater than or equal to a preset delay value, if not, driving the output equipment to send the audio data to the playing equipment, and if so, executing S23;
specifically, whether the audio data needs to be processed is measured by judging whether the current network delay condition is greater than a preset delay value, preferably, if the current network delay condition is less than 1ms, the audio data is considered to be normal, the current second delay condition of each terminal is recorded, and if the current network delay condition is greater than 1ms, the audio data needs to be intercepted.
S23: acquiring the hardware operation rate and the audio data packet processing amount of the playing equipment, and calculating the audio data interception amount;
specifically, the calculation formula for calculating the obtained operation rate of the chip of the current device is as follows, because a specific private protocol exists between the device and the server, the heartbeat data can be kept in real time, and the heartbeat packet stores basic information of the current device, such as network conditions, temperature, current processing rate, and rate: data size (byte/second) = sampling rate (Hz) × sampling number of bits (bit) × chip sampling channel number/8, wherein 1byte =8bit,12 bit sampling number of bits, 1 second data size of AD chip for 4 channels and 200K sampling frequency is: 200000 × 12 × 4/8=1200000byte =1171.875k =1.14441m; that is, the transmission rate of one second is 1200000Byte/s, 1200Byte/ms is transmitted every millisecond, and as other data is received, the parameter of the data amount calculated to remain and be processed is read and stored in the server, and then the condition of the data packet per second of the audio is calculated as follows: setting audio sampling rate =48000, sampling channel =2, bit depth =16, sampling interval =20ms, i.e. the total amount of data per second, sampling interval 20ms, which needs to be acquired 50 times per second (1s = 1000ms), then the total amount of data is calculated as:
total data volume in one second =44100 × 2 × 16/8=192000 (Byte)
Audio data size per frame =192000/50=3840 (Byte)
Number of samples per channel =320/2=160 (Byte)
Amount of data per millisecond =192000/1000=192 (Byte)
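The data-rate arithmetic above can be checked with a small helper (a direct transcription of the formula, not part of the patent):

```python
def bytes_per_second(sample_rate_hz, channels, bit_depth):
    """data size (bytes/second) = sampling rate x bit depth x channels / 8"""
    return sample_rate_hz * channels * bit_depth // 8

# The 48 kHz / 2-channel / 16-bit stream of the embodiment:
rate = bytes_per_second(48000, 2, 16)      # 192000 bytes per second
frame = rate // 50                         # 20 ms frames: 3840 bytes each
per_ms = rate // 1000                      # 192 bytes per millisecond
samples_per_channel = frame // 2 // 2      # 960 samples per channel per frame

# The 4-channel, 12-bit, 200 kHz AD chip example:
ad_rate = bytes_per_second(200000, 4, 12)  # 1200000 bytes per second
```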
Then, each playback device's hardware operation rate is combined with its current network delay. For example, playback device A reports a remaining processing capacity of 5000 bytes/ms and a current network delay of 1 ms; since 5000 bytes/ms exceeds 192 bytes/ms, its data is transmitted normally.
Playback device B reports a remaining processing capacity of 100 bytes/ms and a network delay of 2 ms, i.e. 1 ms of extra delay; its processing rate of 100 bytes/ms is below 192 bytes/ms, so the total audio data to be intercepted is 192 + 192 - 100 = 284 bytes.
Playback device C reports a remaining processing capacity of 1200 bytes/ms and a network delay of 3 ms, i.e. 2 ms of extra delay, so the backlogged audio data is 192 × 2 = 384 bytes; since its processing rate of 1200 bytes/ms exceeds 192 bytes/ms, the total audio data to be intercepted is 384 bytes.
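One consistent reading of the three worked examples is: every millisecond of delay beyond the 1 ms baseline adds one millisecond of audio (192 bytes) to intercept, plus any shortfall of the device's per-millisecond processing rate below the stream rate. The formula below is that interpretation, not an equation stated explicitly in the patent (and it uses the arithmetic 192 × 2 = 384 for device C):

```python
def interception_amount(delay_ms, rate, per_ms=192, baseline_ms=1):
    """Audio bytes to intercept for one playback device: backlog from
    delay beyond the baseline, plus the processing-rate shortfall.
    This formula is an interpretation of the patent's worked examples."""
    backlog = per_ms * max(0, delay_ms - baseline_ms)
    shortfall = max(0, per_ms - rate)
    return backlog + shortfall

# The three devices of the embodiment:
a = interception_amount(delay_ms=1, rate=5000)   # 0: send normally
b = interception_amount(delay_ms=2, rate=100)    # 284 bytes intercepted
c = interception_amount(delay_ms=3, rate=1200)   # 384 bytes intercepted
```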
in summary, as a result of the finally transmitted data, the 192000byte/s data is divided into 512byte data, and the 512byte data is transmitted 375 times a second, and in this embodiment, in a general lan environment, the maximum data transmitted by udp is 1472 bytes best (fragmentation reassembly is avoided). However, in network programming, the routers in the Internet may be set to different values (less than the default value), and the standard MTU value on the Internet is 576, so the data length is preferably within 576-20-8=548 bytes in UDP programming of the Internet, and because 512 is the power of 9 of 2, the computer is binary. Therefore, all multiplication and division related to 2 can be converted into displacement operation, and the calculation burden can be greatly reduced.
S24: and driving the output equipment to intercept the corresponding audio data and then sending the audio data to the playing equipment.
In this embodiment, playback device A sends its 192000 bytes normally, so its first packet is sent unchanged. Playback device B sends 192000 - 284 = 191716 bytes in the second: the leading 284 bytes are intercepted, transmission starts from byte 284 through byte 192000, and the first packet carries 512 - 284 = 228 bytes. Playback device C sends 192000 - 384 = 191616 bytes: the leading 384 bytes are intercepted, transmission starts from byte 384 through byte 192000, and the first packet carries 512 - 384 = 128 bytes. After the audio data is intercepted in this way, the timing errors between playback devices A, B and C are resolved and their data output is synchronized, with about 1 ms of error between sound input and sound output, which does not affect normal use. Although a device receives slightly less audio data, this acts as a fade-in (the sound rises from quiet to full), and the time is entirely sufficient for the audio to be transmitted in sync. Subsequent audio data packets are adjusted and recalculated in real time as the network fluctuates and device conditions change, and in the professional scene of a broadcasting room, the delay from sound source to sound output is kept within 1 ms.
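The truncation bookkeeping for devices A, B and C can be sketched as follows (using the arithmetic 192 × 2 = 384 for device C; the helper name is illustrative):

```python
def first_packet(total=192000, intercept=0, packet=512):
    """Start offset, bytes sent this second, and first-packet size after
    intercepting `intercept` leading bytes of the per-second stream."""
    remainder = intercept % packet
    first_len = packet - remainder if remainder else packet
    return intercept, total - intercept, first_len

# Devices A, B and C from the embodiment:
a = first_packet(intercept=0)      # full 512-byte first packet
b = first_packet(intercept=284)    # starts at byte 284, 228-byte first packet
c = first_packet(intercept=384)    # starts at byte 384, 128-byte first packet
```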
Further, the audio data is uniquely identified, and the data is tagged and salt-encrypted, so that it cannot be captured in transit by packet sniffing and used to interfere with the continuous audio output of the target device. When audio preemption is not configured, the audio of each channel is mixed or passed through unmixed.
In addition, considering that specific scenes require switching audio at any time and outputting different audio, the system also provides IO-triggered scene switching that works offline. Each required scene matrix point is configured on the host and mapped to a different IO trigger port of a device encoder (input device), which makes offline switching convenient for the user: in a given venue, when a scene must be switched promptly, the host can be bypassed and the IO port triggered directly on the encoder, applying the device scene configuration, switching the audio processing between different encoders and decoders, applying different audio source inputs and outputting different audio. The process is as follows:
scene 1: the method comprises an encoder 1, an encoder 2, a decoder 1 and a decoder 2, wherein after the output channel and the input channel corresponding to each matrix point are configured, the output channel and the input channel are stored, in the encoder 1, the channel 1 triggered by IO is linked to a scene 2, and the storage is carried out
Scene 2: the method comprises an encoder 1, an encoder 3, a decoder 1 and a decoder 2, wherein after the output channel and the input channel corresponding to each matrix point are configured, the output channel and the input channel are stored, and in the encoder 1, the channel 2 triggered by IO is linked to a scene 1 for storage.
On the project site, scene switching is performed according to actual requirements by short-circuiting the corresponding IO channel port of encoder 1 to trigger the previously saved scene, or from a computer terminal; the scene's audio is applied and the designated audio source is routed to input and output.
The second embodiment:
a low-delay audio matrix configuration server comprising a memory and a processor;
a memory for storing program instructions;
a processor for executing the program instructions to perform the low-delay audio matrix configuration method according to the first embodiment.
Those skilled in the art may make various other modifications and changes based on the above technical solutions and concepts, and all such modifications and changes shall fall within the protection scope of the claims of the present invention.

Claims (5)

1. A method for configuring a low-delay audio matrix, comprising the steps of:
s1: the input equipment is driven to be connected with the output equipment, and audio data interaction is carried out between the input equipment and the output equipment;
s2: acquiring the hardware operation rate, the current network delay condition and the audio data packet processing amount of each playing device, calculating the audio data interception amount, driving the output device to intercept the corresponding audio data, and then sending the audio data to the playing device.
2. The method for configuring a low-delay audio matrix according to claim 1, wherein the step S2 is implemented by:
s21: acquiring the current network delay condition of each playing device;
s22: judging whether the current network delay condition is greater than or equal to a preset delay value, if not, driving the output equipment to send the audio data to the playing equipment, and if so, executing S23;
s23: acquiring the hardware operation rate and the audio data packet processing capacity of the playing equipment, and calculating the audio data interception quantity;
s24: and driving the output equipment to intercept the corresponding audio data and then sending the audio data to the playing equipment.
3. The method for configuring a low-delay audio matrix according to claim 1, wherein S1 is implemented by:
and sending audio configuration information to the input equipment and the output equipment, and driving the input equipment and the output equipment to establish connection through the audio configuration information to carry out audio data interaction.
4. The method for configuring the low-delay audio matrix according to claim 1, further comprising the following steps between S1 and S2:
a1: driving the input equipment to clear the echo from the audio frame, and performing noise reduction processing, identification and background noise filtration;
a2: judging whether the input equipment acquires audio data currently, if not, encoding to generate a blank encoded frame, and transmitting the blank encoded frame to the output equipment, and if so, executing A3;
a3: and judging whether to carry out sound mixing processing, if so, attenuating the voice through an attenuation factor, otherwise, analyzing the audio data into PCM audio stream data, and carrying out noise detection processing.
5. A low-delay audio matrix configuration server, characterized by comprising a memory and a processor;
a memory for storing program instructions;
a processor for executing said program instructions to perform the low-delay audio matrix configuration method of any of claims 1 to 4.
CN202211362509.1A 2022-11-02 2022-11-02 Low-delay audio matrix configuration method and server Active CN115691516B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211362509.1A CN115691516B (en) 2022-11-02 2022-11-02 Low-delay audio matrix configuration method and server

Publications (2)

Publication Number Publication Date
CN115691516A true CN115691516A (en) 2023-02-03
CN115691516B CN115691516B (en) 2023-09-05

Family

ID=85047457

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211362509.1A Active CN115691516B (en) 2022-11-02 2022-11-02 Low-delay audio matrix configuration method and server

Country Status (1)

Country Link
CN (1) CN115691516B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140323036A1 (en) * 2013-04-29 2014-10-30 Motorola Mobility Llc Systems and Methods for Syncronizing Multiple Electronic Devices
US20150363411A1 (en) * 2014-06-12 2015-12-17 Huawei Technologies Co., Ltd. Synchronous Audio Playback Method, Apparatus and System
CN105992040A (en) * 2015-02-15 2016-10-05 深圳市民展科技开发有限公司 Multichannel audio data transmitting method, audio data synchronization playing method and devices
US20170019870A1 (en) * 2015-07-16 2017-01-19 Samsung Electronics Co., Ltd. Method and apparatus for synchronization in a network
US20200204854A1 (en) * 2018-12-20 2020-06-25 Qingdao Hisense Electronics Co., Ltd Audio Playing And Transmitting Methods And Apparatuses
US20200252678A1 (en) * 2019-02-06 2020-08-06 Bose Corporation Latency negotiation in a heterogeneous network of synchronized speakers
CN112086095A (en) * 2020-09-10 2020-12-15 深圳前海微众银行股份有限公司 Data processing method, device, equipment and storage medium
WO2021119090A1 (en) * 2019-12-09 2021-06-17 Vowel, Inc. Leveraging a network of microphones for inferring room location and speaker identity for more accurate transcriptions and semantic context across meetings
CN113300797A (en) * 2021-03-29 2021-08-24 广州市保伦电子有限公司 Multi-device self-adaptive audio synchronous playing method and processing terminal
CN114245197A (en) * 2022-02-24 2022-03-25 广州市保伦电子有限公司 Audio and video interaction low-delay transmission method and background server


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
JINGFENG ZHANG et al.: "Using timestamp to realize audio-video synchronization in Real-Time streaming media transmission", 2008 International Conference on Audio, Language and Image Processing, pages 1073-1075 *
WANG Yinglan et al.: "A multi-channel real-time streaming media synchronization and composition scheme for Internet applications", Journal of Donghua University (Natural Science Edition), no. 01, pages 109-114 *

Also Published As

Publication number Publication date
CN115691516B (en) 2023-09-05


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 56 Nanli East Road, Shiqi Town, Panyu District, Guangzhou City, Guangdong Province, 510000

Applicant after: Guangdong Baolun Electronics Co.,Ltd.

Address before: No.19 Chuangyuan Road, Zhongcun street, Panyu District, Guangzhou, Guangdong 510000

Applicant before: GUANGZHOU ITC ELECTRONIC TECHNOLOGY Co.,Ltd.

GR01 Patent grant