CN112152975A - Audio data processing method and device - Google Patents
Audio data processing method and device
- Publication number
- CN112152975A (application number CN201910575418.8A)
- Authority
- CN
- China
- Prior art keywords
- audio data
- terminal
- data packet
- data forwarding
- relation table
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/75—Media network packet handling
- H04L65/765—Media network packet handling intermediate
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L12/00—Data switching networks
- H04L12/02—Details
- H04L12/16—Arrangements for providing special services to substations
- H04L12/18—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast
- H04L12/185—Arrangements for providing special services to substations for broadcast or conference, e.g. multicast with management of multicast group membership
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L45/00—Routing or path finding of packets in data switching networks
- H04L45/16—Multipoint routing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/40—Support for services or applications
- H04L65/403—Arrangements for multi-party communication, e.g. for conferences
- H04L65/4038—Arrangements for multi-party communication, e.g. for conferences with floor control
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Telephonic Communication Services (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The embodiment of the invention provides a method and a device for processing audio data. In the method, after receiving an audio data packet sent by the main speaking terminal, the data forwarding server corresponding to the main speaking terminal modifies the defined synchronization source SSRC in the audio data packet according to the update times of the data forwarding relation table, and then sends the modified audio data packet to the other terminals according to the link information of data forwarding maintained in the data forwarding relation table.
Description
Technical Field
The present invention relates to the field of multimedia technologies, and in particular, to a method and an apparatus for processing audio data.
Background
An audio group comprises at least three terminals. Each terminal can send audio data packets to the other terminals in the audio group through its respective data forwarding server, and after receiving an audio data packet, a receiving terminal determines whether to play it according to the defined Synchronization Source (SSRC) and Sequence (SEQ) number carried in the packet.
For example, take a voice group including terminal A, terminal B, and terminal C. When terminal A is the main speaking terminal, it sends audio data packets with an SSRC of 0x10 to terminal B and terminal C, with the corresponding SEQ numbers starting from 1 and running up to 100. Accordingly, after receiving these packets, both terminal B and terminal C set the SSRC in their caches to 0x10 and the cached SEQ number to 100. When the main speaking role changes to terminal B, suppose terminal B also sends audio data packets with an SSRC of 0x10 whose SEQ numbers again start from 1. After terminal C receives a new audio data packet from terminal B, it compares the packet's SSRC with its own cached SSRC; since the cached SSRC is also 0x10, it further checks whether the packet's SEQ number is greater than the cached SEQ number. Because the new SEQ numbers start from 1 and are smaller than 100, terminal C judges the packets to be duplicates and discards them, and during this period terminal C outputs no sound, resulting in low fluency of audio data playing. Terminal C cannot play the new audio data packets until the SEQ number of the packets sent by terminal B exceeds 100.
Therefore, with the existing audio data processing method, since the SSRC of the new audio data packet received by the terminal C is the same as the SSRC cached by the terminal C, and the SEQ number of the new audio data packet is smaller than the SEQ number cached by the terminal C, the terminal C will discard the new audio data packet, and during the time period, the terminal C does not output sound, thereby resulting in low fluency of audio data playing.
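For ease of understanding, the receiver-side check described above can be summarized by the following sketch; the class and method names are illustrative assumptions rather than part of any existing implementation.

```python
# Illustrative sketch of the receiver-side duplicate check; names are assumptions.
class ReceiverCache:
    def __init__(self):
        self.ssrc = None   # SSRC of the most recently accepted stream
        self.seq = 0       # highest SEQ number accepted so far

    def should_play(self, ssrc, seq):
        """Return True if the packet is played, False if it is discarded as a repeat."""
        if ssrc == self.ssrc and seq <= self.seq:
            return False   # same SSRC and not newer: treated as a duplicate
        self.ssrc = ssrc
        self.seq = seq
        return True

# Terminal C after listening to terminal A (SSRC 0x10, SEQ numbers up to 100):
cache_c = ReceiverCache()
cache_c.should_play(0x10, 100)        # True; cache now holds SSRC 0x10, SEQ 100
print(cache_c.should_play(0x10, 1))   # False: terminal B's new packets are dropped
```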
Disclosure of Invention
The audio data processing method and device provided by the embodiment of the invention improve the fluency of audio data playing.
The embodiment of the invention provides a method for processing audio data, which comprises the following steps:
receiving an audio data packet sent by a main speaking terminal;
modifying the defined synchronous source SSRC in the audio data packet according to the updating times of a data forwarding relation table to obtain a modified audio data packet, wherein link information of data forwarding is stored in the data forwarding relation table;
and sending the modified audio data packet to other terminals except the main speaking terminal according to the link information of data forwarding, wherein the main speaking terminal and the other terminals are in the same audio conference.
In a possible implementation manner, the modifying the defined synchronization source SSRC in the audio data packet according to the number of updates of the data forwarding relation table to obtain a modified audio data packet includes:
determining the change times of the role of the main speaking terminal according to the update times of the data forwarding relation table;
and modifying the defined synchronous source SSRC in the audio data packet according to the change times of the role of the main speaking terminal to obtain a modified audio data packet.
In a possible implementation manner, the modifying the defined synchronization source SSRC in the audio data packet according to the number of changes of the role of the main speech terminal to obtain a modified audio data packet includes:
and modifying the SSRC in the audio data packet into the change times of the role of the main speaking terminal to obtain the modified audio data packet.
In a possible implementation manner, the determining the number of times of changing the role of the main speaking terminal according to the number of times of updating the data forwarding relation table includes:
and determining the updating times of the data forwarding relation table as the changing times of the role of the main speaking terminal.
In a possible implementation manner, the sending the modified audio data packet to other terminals except the main speaking terminal includes:
and sending the modified audio data packet to a data forwarding server corresponding to the other terminal, so that the data forwarding server forwards the modified audio data packet to the other terminal.
In one possible implementation manner, the audio data processing method may further include:
receiving an updating request message sent by a control node server;
and updating the data forwarding relation table according to the updating request message.
In a second aspect, an embodiment of the present invention further provides an apparatus for processing audio data, where the apparatus for processing audio data may include:
the receiving unit is used for receiving the audio data packet sent by the main speaking terminal;
the processing unit is used for modifying the defined synchronization source SSRC in the audio data packet according to the updating times of a data forwarding relation table to obtain a modified audio data packet, and the data forwarding relation table stores link information for data forwarding;
and the sending unit is used for sending the modified audio data packet to other terminals except the main speaking terminal according to the link information of data forwarding, and the main speaking terminal and the other terminals are in the same audio conference.
In a possible implementation manner, the processing unit is specifically configured to determine the number of times of changing the role of the main speaking terminal according to the number of times of updating the data forwarding relation table; and modify the defined synchronization source SSRC in the audio data packet according to the change times of the role of the main speaking terminal to obtain a modified audio data packet.
In a possible implementation manner, the processing unit is specifically configured to modify the SSRC in the audio data packet to the number of times of changing the role of the main speaking terminal, so as to obtain the modified audio data packet.
In a possible implementation manner, the processing unit is specifically configured to determine the number of updates of the data forwarding relation table as the number of changes of the role of the main speaking terminal.
In a possible implementation manner, the sending unit is specifically configured to send the modified audio data packet to a data forwarding server corresponding to the other terminal, so that the data forwarding server forwards the modified audio data packet to the other terminal.
In a possible implementation manner, the receiving unit is further configured to receive an update request message sent by a control node server;
the processing unit is further configured to update the data forwarding relation table according to the update request message.
In a third aspect, an embodiment of the present invention further provides a processing apparatus, which may include a processor and a memory, wherein,
the memory is to store program instructions;
the processor is configured to read the program instructions in the memory, and execute the audio data processing method according to any one of the possible implementation manners of the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer storage medium, which includes instructions that, when executed by one or more processors, cause a processing apparatus to perform the method for processing audio data according to any one of the possible implementation manners of the first aspect.
Therefore, according to the audio data processing method and device provided by the embodiment of the invention, after receiving the audio data packet sent by the main speaking terminal, the data forwarding server corresponding to the main speaking terminal modifies the defined synchronization source SSRC in the audio data packet according to the update times of the data forwarding relation table, and then sends the modified audio data packet to other terminals according to the link information of data forwarding maintained in the data forwarding relation table.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed to be used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a schematic view of an application scenario provided in an embodiment of the present invention;
fig. 2 is a flowchart illustrating a method for processing audio data according to an embodiment of the present invention;
fig. 3 is a flowchart illustrating another audio data processing method according to an embodiment of the invention;
fig. 4 is a schematic structural diagram of an apparatus for processing audio data according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of another audio data processing apparatus according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In the embodiments of the present invention, "at least one" means one or more, "a plurality" means two or more. "and/or" describes the association relationship of the associated objects, meaning that there may be three relationships, e.g., a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone, wherein A and B can be singular or plural. In the description of the present invention, the character "/" generally indicates that the former and latter associated objects are in an "or" relationship.
Fig. 1 is a schematic view of an application scenario provided by an embodiment of the present invention. Referring to fig. 1, the voice group in this scenario may include three terminals, terminal A, terminal B, and terminal C, and the three terminals may send audio data packets to the other terminals in the voice group through their respective data forwarding servers. Taking terminal A as the main speaking terminal as an example, terminal A sends an audio data packet to its corresponding data forwarding server A through transmission path P1. The data forwarding server A passes the audio data packet through transmission path I1 to the data forwarding server B corresponding to terminal B, which then passes it through transmission path P2 to terminal B; meanwhile, the data forwarding server A passes the audio data packet through transmission path I2 to the data forwarding server C corresponding to terminal C, which then passes it through transmission path P3 to terminal C, so that terminal B and terminal C play the audio data packet. After receiving the audio data packet, terminal B and terminal C each determine whether its SSRC is the same as the SSRC in their own caches. If terminal C determines that the SSRC of the audio data packet is the same as its cached SSRC and the SEQ number of the packet is smaller than its cached SEQ number, terminal C discards the audio data packet and outputs no sound during that period, resulting in low fluency of audio data playing.
To solve the problem of low fluency of audio data playing in the prior art, an embodiment of the invention provides an audio data processing method applied to the data forwarding server corresponding to the main speaking terminal. After receiving an audio data packet sent by the main speaking terminal, the data forwarding server modifies the defined synchronization source SSRC in the audio data packet according to the update times of the data forwarding relation table to obtain a modified audio data packet, and then sends the modified packet to the other terminals except the main speaking terminal according to the link information of data forwarding maintained in the data forwarding relation table. Because the update times of the forwarding relation table differ after each update, the SSRC in the modified audio data packet also differs each time, which avoids the situation in which a terminal outputs no sound because the SSRC of a new audio data packet equals its cached SSRC while the packet's SEQ number is smaller than the cached SEQ number, thereby improving the fluency of audio data playing. When the modified audio data packet is sent to the other terminals except the main speaking terminal, the other terminals are terminals in the same audio conference as the main speaking terminal.
The technical solution of the present invention will be described in detail below with specific examples. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 2 is a schematic flowchart of an audio data processing method according to an embodiment of the present invention. The method may be executed by an audio data processing device, which may be deployed independently or configured in the data forwarding server corresponding to the main speaking terminal. For example, referring to fig. 2, the audio data processing method may include:
S201, receiving an audio data packet sent by the main speaking terminal.
Illustratively, the audio data packet carries an SSRC and a SEQ number. During one speaking session, the audio data packets sent by the main speaking terminal normally carry the same SSRC but different SEQ numbers, and the SEQ number increases for each successive packet sent by the terminal. For example, the SEQ number may be incremented by 1 each time, or by 2 each time; the increment may be set according to actual needs, and the embodiment of the present invention does not further limit the manner in which the SEQ number increases.
When speaking, the main speaking terminal sends the audio data packets corresponding to the voice data to the data forwarding server corresponding to the main speaking terminal in real time, so that the data forwarding server executes the following S202 after receiving the audio data packet:
S202, according to the updating times of the data forwarding relation table, modifying the defined synchronization source SSRC in the audio data packet to obtain a modified audio data packet.
The data forwarding relation table stores link information for data forwarding, and the link information for data forwarding is used for indicating a transmission path of the audio data packet.
Each data forwarding server in the same audio conference maintains a data forwarding relation table. When the role of the main speaking terminal changes, for example when the main speaking terminal changes from terminal A to terminal B, every data forwarding server updates its data forwarding relation table, so the update times of the tables maintained by the different servers stay the same. Because the SSRC written into the audio data packet increases with each role change, when the main speaking terminal later changes to terminal B or terminal C, the data forwarding server corresponding to terminal B or terminal C modifies the defined synchronization source SSRC in the audio data packet according to the update times of its own data forwarding relation table, and the SSRC in the modified audio data packet differs from the SSRC previously written by the data forwarding server of terminal A. This avoids the situation in which a terminal outputs no sound because the SSRCs are the same, thereby improving the fluency of audio data playing.
It can be seen that, in the embodiment of the present invention, unlike the prior art, after receiving the audio data packet the data forwarding server corresponding to the main speaking terminal does not transparently pass it on. Instead, it modifies the defined synchronization source SSRC in the audio data packet according to the number of updates of the data forwarding relation table it maintains to obtain a modified audio data packet, and can then send the modified audio data packet to the other terminals except the main speaking terminal according to the link information of data forwarding, that is, the following S203 is executed:
S203, sending the modified audio data packet to other terminals except the main speaking terminal according to the link information of data forwarding.
The main speaking terminal and other terminals are in the same audio conference.
Optionally, when the data forwarding server corresponding to the main speaking terminal sends the modified audio data packet to other terminals except the main speaking terminal, the modified audio data packet may be sent to the data forwarding servers corresponding to the other terminals, so that the data forwarding servers corresponding to the other terminals forward the modified audio data packet to the other terminals after receiving the modified audio data packet, so that the other terminals play the audio data packet.
It can be understood that, when the data forwarding servers corresponding to other terminals forward the modified audio data packet to other terminals, the transmission of the modified audio data packet is a transparent transmission process, that is, the data forwarding servers corresponding to other terminals do not modify the modified audio data packet, but forward the modified audio data packet to other terminals.
For example, as shown in fig. 1, taking terminal A as the current main speaking terminal, the link information of data forwarding maintained in the data forwarding server A corresponding to terminal A may be: I1->P2 and I2->P3. When the data forwarding server A sends the modified audio data packet to the other terminals except the main speaking terminal, it can therefore transmit the packet through transmission path I1 to the data forwarding server B corresponding to terminal B, which, after receiving it, transmits it through transmission path P2 to terminal B; meanwhile, the data forwarding server A passes the packet through transmission path I2 to the data forwarding server C corresponding to terminal C, which, after receiving it, passes it through transmission path P3 to terminal C, so that terminal B and terminal C play the audio data packet.
Therefore, in the embodiment of the present invention, after receiving the audio data packet sent by the main speaking terminal, the corresponding data forwarding server does not pass the packet on unchanged. It modifies the defined synchronization source SSRC in the audio data packet according to the update times of the data forwarding relation table to obtain a modified audio data packet, and then sends the modified packet to the other terminals according to the link information of data forwarding maintained in the table. Because the update times of the forwarding relation table are different after each update, the SSRC in the modified audio data packet is inevitably different from the SSRC cached by a receiving terminal. This avoids the situation in which a terminal has no sound output because the SSRC of a new audio data packet equals its cached SSRC while the packet's SEQ number is smaller than the cached SEQ number, thereby improving the fluency of audio data playing.
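For ease of understanding, steps S201 to S203 can be sketched as follows. The sketch assumes the simplest variant, in which the SSRC is set directly to the update times of the data forwarding relation table; the class, method and attribute names are illustrative assumptions and do not limit the embodiment.

```python
# Illustrative sketch of S201-S203 at the data forwarding server of the main
# speaking terminal; all names and structures are assumptions.
class DataForwardingServer:
    def __init__(self, name):
        self.name = name
        self.update_times = 0   # update times of the data forwarding relation table
        self.links = {}         # link information of data forwarding: peer server -> path

    def update_relation_table(self, update_times, links):
        # Applied when the control node server signals a main-speaker change.
        self.update_times = update_times
        self.links = links

    def on_packet_from_main_speaker(self, packet):
        # S201: packet received; S202: rewrite the SSRC from the update times;
        # S203: fan the modified packet out over the maintained links.
        modified = dict(packet, ssrc=self.update_times)
        for peer, path in self.links.items():
            peer.forward_transparently(modified, path)

    def forward_transparently(self, packet, path):
        # Peer servers forward the already-modified packet without changing it.
        print(f"{self.name}: SSRC={packet['ssrc']} SEQ={packet['seq']} via {path}")

# Example wiring for fig. 1 with terminal A speaking:
server_a, server_b, server_c = (DataForwardingServer(n) for n in ("A", "B", "C"))
server_a.update_relation_table(1, {server_b: "I1->P2", server_c: "I2->P3"})
server_a.on_packet_from_main_speaker({"ssrc": 0x10, "seq": 1, "payload": b"..."})
```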
Based on the embodiment shown in fig. 2, in order to more clearly illustrate how, in the embodiment of the present invention, according to the number of updates of the data forwarding relationship table, a defined synchronization source SSRC in an audio data packet is modified to obtain a modified audio data packet, for example, please refer to fig. 3, where fig. 3 is a schematic flow diagram of another audio data processing method provided in the embodiment of the present invention, and the audio data processing method may further include:
S301, determining the change times of the role of the main speaking terminal according to the update times of the data forwarding relation table.
Optionally, when the data forwarding server corresponding to the main speaking terminal determines the change times of the role of the main speaking terminal according to the update times of the data forwarding relation table, the update times of the data forwarding relation table may be directly determined as the change times of the role of the main speaking terminal; alternatively, 1 may be added to the update times of the data forwarding relation table and the result determined as the change times, or 2 may be added, which may be set according to actual needs and is not particularly limited in the embodiment of the present invention.
It should be noted that the data forwarding server corresponding to the main speaking terminal and the data forwarding servers corresponding to the other terminals determine the change times of the role of the main speaking terminal from the update times of the data forwarding relation table in the same manner. In this way, when the data forwarding server of a subsequent main speaking terminal modifies the defined synchronization source SSRC in the audio data packet according to the change times of the role, the terminals will not be left without sound output due to identical SSRCs, thereby improving the fluency of audio data playing.
After the change times of the role of the main speaking terminal are determined according to the update times of the data forwarding relation table, the following S302 may be performed:
S302, according to the change times of the role of the main speaking terminal, modifying the defined synchronization source SSRC in the audio data packet to obtain a modified audio data packet.
Optionally, when the data forwarding server corresponding to the main speaking terminal modifies the defined synchronization source SSRC in the audio data packet according to the change times of the role of the main speaking terminal, the SSRC in the audio data packet may be directly modified to the change times; alternatively, 1 may be added to the change times and the SSRC modified to the result, or 2 may be added, which may be set according to actual needs and is not particularly limited in the embodiment of the present invention.
It should be noted that the data forwarding server corresponding to the main speaking terminal and the data forwarding servers corresponding to the other terminals modify the SSRC in the same way, so that when the data forwarding server of a subsequent main speaking terminal modifies the defined synchronization source SSRC in the audio data packet according to the change times of the role, the terminals will not be left without sound output due to identical SSRCs, thereby improving the fluency of audio data playing.
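The notes above amount to requiring that every data forwarding server in the conference apply identical mappings in S301 and S302. A minimal sketch under that assumption, using the offsets 0, 1 or 2 mentioned in the description and hypothetical function names:

```python
# Sketch of S301/S302 as a pair of mappings; every data forwarding server in the
# conference must use the same fixed offsets, otherwise receivers could still see
# colliding SSRCs. Function names are hypothetical.
def role_change_times(table_update_times, offset=0):
    # S301: derive the change times of the role of the main speaking terminal.
    return table_update_times + offset

def rewrite_ssrc(packet, change_times, offset=0):
    # S302: set the SSRC according to the change times (again with a fixed offset).
    return dict(packet, ssrc=change_times + offset)
```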
To facilitate understanding, the following example illustrates how the defined synchronization source SSRC in the audio data packet is modified according to the number of updates of the data forwarding relation table to obtain a modified audio data packet. Referring to fig. 1, when terminal A becomes the main speaking terminal, the data forwarding server A corresponding to terminal A updates its data forwarding relation table and sets the update times to 1; the data forwarding server B corresponding to terminal B and the data forwarding server C corresponding to terminal C likewise update their tables and set the update times to 1. Because the data forwarding servers cannot sense the change of the role of the main speaking terminal themselves, they only receive the update times 1 of the data forwarding relation table sent by the control node server. After the data forwarding server A receives an audio data packet sent by terminal A, it therefore determines that the change times of the role of the main speaking terminal is 1 and modifies the defined synchronization source SSRC in the audio data packet accordingly; during modification, the SSRC in the audio data packet may be directly set to the change times 1. The modified audio data packet is then sent to the data forwarding server B and the data forwarding server C, which forward it to terminal B and terminal C respectively, so that terminal B and terminal C play the audio data packet.
When the main speaking terminal changes from terminal A to terminal B, that is, when the role of the main speaking terminal changes, the data forwarding server B updates its data forwarding relation table and sets the update times to 2, and the data forwarding server A and the data forwarding server C do the same. After the data forwarding server B receives an audio data packet sent by terminal B, it receives the update times 2 of the data forwarding relation table sent by the control node server, determines that the change times of the role of the main speaking terminal is 2, and modifies the defined synchronization source SSRC in the audio data packet accordingly; during modification, the SSRC may be directly set to the change times 2. The modified audio data packet is then sent to the data forwarding server A and the data forwarding server C, which forward it to terminal A and terminal C respectively, so that terminal A and terminal C play the audio data packet.
Further, when the main speaking terminal changes from terminal B to terminal C, the data forwarding server C updates its data forwarding relation table and sets the update times to 3, and the data forwarding server A and the data forwarding server B do the same. After the data forwarding server C receives an audio data packet sent by terminal C, it receives the update times 3 of the data forwarding relation table sent by the control node server, determines that the change times of the role of the main speaking terminal is 3, and modifies the defined synchronization source SSRC in the audio data packet accordingly; during modification, the SSRC may be directly set to the change times 3. The modified audio data packet is then sent to the data forwarding server A and the data forwarding server B, which forward it to terminal A and terminal B respectively, so that terminal A and terminal B play the audio data packet.
Therefore, when the data forwarding server corresponding to the main speaking terminal modifies the defined synchronization source SSRC in the audio data packet according to the update times of the data forwarding relation table, the update times increase monotonically, so the SSRC in the modified audio data packet differs from the SSRC cached by the receiving terminals. This avoids the situation in which a terminal has no sound output because the SSRC of a new audio data packet equals its cached SSRC while the packet's SEQ number is smaller than the cached SEQ number, and the fluency of audio data playing is improved.
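Combining the ReceiverCache sketch from the background section with the SSRC values of this example gives the following illustrative trace; the concrete values are hypothetical and continue the assumptions of the earlier sketches.

```python
# Continuing the ReceiverCache sketch from the background section:
cache_b = ReceiverCache()                    # terminal B as a listener
print(cache_b.should_play(ssrc=1, seq=100))  # packets from terminal A (update times = 1) -> True
print(cache_b.should_play(ssrc=3, seq=1))    # later packets from terminal C (update times = 3) -> True,
                                             # accepted even though the SEQ numbers restart from 1
```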
In addition, in the embodiments shown in fig. 2 or fig. 3, the data forwarding server corresponding to each terminal does not update the data forwarding relation table it maintains on its own initiative. Instead, when the control node server senses that the role of the main speaking terminal has changed, it controls the data forwarding server corresponding to each terminal in the audio conference to update the data forwarding relation table that the server maintains, so that each data forwarding relation table is always the latest one.
Optionally, when the control node server controls the data forwarding server corresponding to each terminal in the audio conference to update the stored data forwarding relationship table, the control node server may send an update request message to the data forwarding server, so that the data forwarding server updates the data forwarding relationship table according to the update request message after receiving the update request message.
For example, the update request message may include the change times of the role of the main speaking terminal and the link information of data forwarding. Correspondingly, the data forwarding server updates the data forwarding relation table it maintains according to the update request message, so that the table contains the change times of the role of the main speaking terminal and the link information of data forwarding.
It can be understood that, in the embodiment of the present invention, updating the data forwarding relation table may mean adding to the table the change times of the role of the main speaking terminal and the link information of data forwarding included in the new update request message; it may also mean replacing the change times and link information previously maintained in the table with those included in the new update request message, that is, removing the previously maintained change times of the role of the main speaking terminal and link information of data forwarding.
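A minimal sketch of such a table update, assuming the update request message carries the change times of the role of the main speaking terminal and the new link information and using the replacement reading described above; the field names are hypothetical:

```python
# Sketch of how a data forwarding server might apply an update request message,
# using the "replace the old entries" reading; field names are hypothetical.
def handle_update_request(relation_table, request):
    relation_table.clear()                                   # drop the previously maintained entries
    relation_table["role_change_times"] = request["role_change_times"]
    relation_table["links"] = dict(request["links"])         # e.g. {"I1": "P2", "I2": "P3"}
    return relation_table
```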
Fig. 4 is a schematic structural diagram of an audio data processing apparatus 40 according to an embodiment of the present invention, for example, please refer to fig. 4, where the audio data processing apparatus 40 may include:
the receiving unit 401 is configured to receive an audio data packet sent by a main terminal.
The processing unit 402 is configured to modify the defined synchronization source SSRC in the audio data packet according to the update times of the data forwarding relation table, so as to obtain a modified audio data packet, where link information for data forwarding is stored in the data forwarding relation table.
A sending unit 403, configured to send the modified audio data packet to other terminals except the main speaking terminal according to the link information of data forwarding, where the main speaking terminal and the other terminals are in the same audio conference.
Optionally, the processing unit 402 is specifically configured to determine the change times of the role of the main speaking terminal according to the update times of the data forwarding relation table; and modify the defined synchronization source SSRC in the audio data packet according to the change times of the role of the main speaking terminal to obtain the modified audio data packet.
Optionally, the processing unit 402 is specifically configured to modify the SSRC in the audio data packet to the change times of the role of the main speaking terminal, so as to obtain a modified audio data packet.
Optionally, the processing unit 402 is specifically configured to determine the update times of the data forwarding relation table as the change times of the role of the main speaking terminal.
Optionally, the sending unit 403 is specifically configured to send the modified audio data packet to a data forwarding server corresponding to another terminal, so that the data forwarding server forwards the modified audio data packet to the other terminal.
Optionally, the receiving unit 401 is further configured to receive an update request message sent by the control node server.
The processing unit 402 is further configured to update the data forwarding relation table according to the update request message.
The audio data processing apparatus 40 shown in the embodiment of the present invention may execute the technical solution of the audio data processing method in the embodiment shown in fig. 2 or fig. 3, and the implementation principle and the beneficial effect thereof are similar to those of the audio data processing method, and are not described herein again.
Fig. 5 is a schematic structural diagram of a processing device 50 according to an embodiment of the present invention, for example, as shown in fig. 5, the processing device 50 may include a processor 501 and a memory 502, wherein,
the memory 502 is used to store program instructions;
the processor 501 is configured to read the program instruction in the memory 502, and execute the technical solution of the audio data processing method in the embodiment shown in fig. 2 or fig. 3 according to the program instruction in the memory 502, and the implementation principle and the beneficial effect of the technical solution are similar to those of the audio data processing method, and are not described here again.
An embodiment of the present invention further provides a computer storage medium, which includes instructions, and when the instructions are executed by one or more processors, the processing apparatus is enabled to execute the technical solution of the audio data processing method in the embodiment shown in fig. 2 or fig. 3, and the implementation principle and the beneficial effects of the method are similar to those of the audio data processing method, and are not described herein again.
An embodiment of the present invention further provides a chip, where a computer program is stored on the chip, and when the computer program is executed by a processor, the technical solution of the method for processing audio data in the embodiment shown in fig. 2 or fig. 3 is executed, and an implementation principle and beneficial effects of the method for processing audio data are similar to those of the method for processing audio data, and are not described herein again.
The processor in the above embodiments may be a general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), an off-the-shelf programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor. The software modules may be located in a Random Access Memory (RAM), a flash memory, a read-only memory (ROM), a programmable ROM, an electrically erasable programmable memory, a register, or other storage media that are well known in the art. The storage medium is located in a memory, and a processor reads instructions in the memory and combines hardware thereof to complete the steps of the method.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment. In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
Those of ordinary skill in the art will understand that: all or a portion of the steps of implementing the above-described method embodiments may be performed by hardware associated with program instructions. The program may be stored in a computer-readable storage medium. When executed, the program performs steps comprising the method embodiments described above; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (14)
1. A method of processing audio data, comprising:
receiving an audio data packet sent by a main speaking terminal;
modifying the defined synchronous source SSRC in the audio data packet according to the updating times of a data forwarding relation table to obtain a modified audio data packet, wherein link information of data forwarding is stored in the data forwarding relation table;
and sending the modified audio data packet to other terminals except the main speaking terminal according to the link information of data forwarding, wherein the main speaking terminal and the other terminals are in the same audio conference.
2. The method according to claim 1, wherein the modifying the defined synchronization source SSRC in the audio data packet according to the number of updates of the data forwarding relation table to obtain a modified audio data packet comprises:
determining the change times of the role of the main speaking terminal according to the update times of the data forwarding relation table;
and modifying the defined synchronous source SSRC in the audio data packet according to the change times of the role of the main speaking terminal to obtain a modified audio data packet.
3. The method according to claim 2, wherein the modifying the defined synchronization source SSRC in the audio data packet according to the change times of the role of the main speaking terminal to obtain the modified audio data packet comprises:
and modifying the SSRC in the audio data packet into the change times of the role of the main speaking terminal to obtain the modified audio data packet.
4. The method according to claim 2 or 3, wherein the determining the change times of the role of the main speaking terminal according to the update times of the data forwarding relation table comprises:
and determining the updating times of the data forwarding relation table as the changing times of the role of the main speaking terminal.
5. The method according to any one of claims 1 to 3, wherein the sending the modified audio data packet to a terminal other than the main speaking terminal comprises:
and sending the modified audio data packet to a data forwarding server corresponding to the other terminal, so that the data forwarding server forwards the modified audio data packet to the other terminal.
6. The method according to any one of claims 1-3, further comprising:
receiving an updating request message sent by a control node server;
and updating the data forwarding relation table according to the updating request message.
7. An apparatus for processing audio data, comprising:
the receiving unit is used for receiving the audio data packet sent by the main speaking terminal;
the processing unit is used for modifying the defined synchronization source SSRC in the audio data packet according to the updating times of a data forwarding relation table to obtain a modified audio data packet, and the data forwarding relation table stores link information for data forwarding;
and the sending unit is used for sending the modified audio data packet to other terminals except the main speaking terminal according to the link information of data forwarding, and the main speaking terminal and the other terminals are in the same audio conference.
8. The apparatus of claim 7,
the processing unit is specifically configured to determine the change times of the role of the main speaking terminal according to the update times of the data forwarding relation table; and modify the defined synchronization source SSRC in the audio data packet according to the change times of the role of the main speaking terminal to obtain a modified audio data packet.
9. The apparatus of claim 8,
the processing unit is specifically configured to modify the SSRC in the audio data packet to the change times of the role of the main speaking terminal, so as to obtain the modified audio data packet.
10. The apparatus according to claim 8 or 9,
the processing unit is specifically configured to determine the update times of the data forwarding relation table as the change times of the role of the main speaking terminal.
11. The apparatus according to any one of claims 7 to 9,
the sending unit is specifically configured to send the modified audio data packet to a data forwarding server corresponding to the other terminal, so that the data forwarding server forwards the modified audio data packet to the other terminal.
12. The apparatus according to any one of claims 7 to 9,
the receiving unit is further configured to receive an update request message sent by the control node server;
the processing unit is further configured to update the data forwarding relation table according to the update request message.
13. A processing apparatus comprising a processor and a memory, wherein,
the memory is to store program instructions;
the processor is used for reading the program instructions in the memory and executing the audio data processing method of any one of the claims 1-6 according to the program instructions in the memory.
14. A computer storage medium comprising instructions that, when executed by one or more processors, cause a processing device to perform the method of processing audio data according to any one of claims 1-6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910575418.8A CN112152975B (en) | 2019-06-28 | 2019-06-28 | Audio data processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910575418.8A CN112152975B (en) | 2019-06-28 | 2019-06-28 | Audio data processing method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112152975A true CN112152975A (en) | 2020-12-29 |
CN112152975B CN112152975B (en) | 2022-11-08 |
Family
ID=73869314
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910575418.8A Active CN112152975B (en) | 2019-06-28 | 2019-06-28 | Audio data processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112152975B (en) |
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1564992A1 (en) * | 2004-02-13 | 2005-08-17 | Seiko Epson Corporation | Method and system for recording videoconference data |
CN103220258A (en) * | 2012-01-20 | 2013-07-24 | 华为技术有限公司 | Conference sound mixing method, terminal and media resource server (MRS) |
CN104079870A (en) * | 2013-03-29 | 2014-10-01 | 杭州海康威视数字技术股份有限公司 | Video monitoring method and system for single-channel video and multiple-channel audio frequency |
CN105706425A (en) * | 2013-10-01 | 2016-06-22 | 奥兰治 | Method for distributing identifiers of multicast sources |
CN106331847A (en) * | 2015-07-06 | 2017-01-11 | 成都鼎桥通信技术有限公司 | Audio and video playing method and device |
CN109327415A (en) * | 2017-07-31 | 2019-02-12 | 成都鼎桥通信技术有限公司 | Transmission method, device and the server of voice group data |
WO2019051089A2 (en) * | 2017-09-06 | 2019-03-14 | Texas Instruments Incorporated | Bluetooth media device time synchronization |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114268911A (en) * | 2021-09-23 | 2022-04-01 | 珠海市杰理科技股份有限公司 | Communication method and device, readable storage medium and TWS system |
Also Published As
Publication number | Publication date |
---|---|
CN112152975B (en) | 2022-11-08 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111539726B (en) | Block chain consensus system and method | |
US9621483B2 (en) | Ethercat packet forwarding with distributed clocking | |
US11252111B2 (en) | Data transmission | |
US8572615B2 (en) | Parallel computing system, synchronization device, and control method of parallel computing system | |
CN107087038A (en) | A kind of method of data syn-chronization, synchronizer, device and storage medium | |
EP4213037A1 (en) | Data storage and reconciliation method and system | |
KR20220080198A (en) | Audio data processing method, server, and storage medium | |
CN110602338B (en) | Audio processing method, device, system, storage medium and equipment | |
CN112152975B (en) | Audio data processing method and device | |
KR20230150878A (en) | Data transmission methods and devices, and servers, storage media, and program products | |
JP2001313678A (en) | Method for synchronizing reproduction of audio data in computer network | |
CN107483628B (en) | DPDK-based one-way proxy method and system | |
CN111813795B (en) | Method and apparatus for confirming transactions in a blockchain network | |
CN109889922A (en) | Method, device, equipment and storage medium for forwarding streaming media data | |
CN112702146B (en) | Data processing method and device | |
CN110445578B (en) | SPI data transmission method and device | |
WO2010109761A1 (en) | Parallel processing system, parallel processing method, network switch device, and recording medium for parallel processing program | |
CN108900422B (en) | Multicast forwarding method and device and electronic equipment | |
CN105824707A (en) | Merging back-source method and device for multiple processes of streaming media service | |
CN113132300B (en) | Audio data transmission method and device | |
WO2022206480A1 (en) | Data packet sending method and device | |
CN113381938B (en) | Data packet sending method and device, storage medium and electronic equipment | |
CN111800337B (en) | Data center-based method and device, electronic equipment and storage medium | |
CN115314643A (en) | Method, system, equipment and storage medium for realizing net switching | |
CN113141620B (en) | Flexe service processing method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||