CN112218019B - Audio data transmission method and device - Google Patents


Info

Publication number
CN112218019B
Authority
CN
China
Prior art keywords
channels
data
channel
paths
subdata
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910615836.5A
Other languages
Chinese (zh)
Other versions
CN112218019A (en)
Inventor
李见
黄飞
Current Assignee
Hisense Visual Technology Co Ltd
Original Assignee
Hisense Visual Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hisense Visual Technology Co Ltd filed Critical Hisense Visual Technology Co Ltd
Priority to CN201910615836.5A priority Critical patent/CN112218019B/en
Priority to PCT/CN2020/070887 priority patent/WO2021004045A1/en
Priority to PCT/CN2020/070891 priority patent/WO2021004047A1/en
Priority to PCT/CN2020/070890 priority patent/WO2021004046A1/en
Priority to PCT/CN2020/070902 priority patent/WO2021004048A1/en
Priority to PCT/CN2020/070929 priority patent/WO2021004049A1/en
Publication of CN112218019A publication Critical patent/CN112218019A/en
Application granted granted Critical
Publication of CN112218019B publication Critical patent/CN112218019B/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00: Details of television systems
    • H04N5/44: Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N5/60: Receiver circuitry for the sound signals
    • H04N5/607: Receiver circuitry for more than one sound signal, e.g. stereo, multilanguages

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Time-Division Multiplex Systems (AREA)
  • Stereophonic System (AREA)

Abstract

An embodiment of the invention provides an audio data transmission method and device, relating to the field of audio data processing. The embodiment makes it possible to transmit data for more sound channels over a limited number of I2S channels. The method comprises: acquiring audio data of m channels; and transmitting the audio data of the m channels to a data receiving end using n I2S channels, where m > n. The invention is applied to audio data processing.

Description

Audio data transmission method and device
Technical Field
The present invention relates to the field of audio data processing, and in particular, to an audio data transmission method and apparatus.
Background
At present, most televisions adopt the two-channel stereo standard: audio data to be played is processed and then sent to the television's two speakers for output. For example, current televisions often use one speaker on each of the left and right sides to output the audio data of the left and right channels, respectively. Televisions using this playback mode have a limited audio effect. To achieve a more realistic stereo surround effect, the number of speakers must be increased so that more sound channels can be played.
Taking the 7.1-channel standard as an example, to achieve a 7.1-channel surround effect the television must be able to play 8 channels: front left, front right, center, surround left, surround right, top left, top right, and subwoofer.
Because one I2S bus under the existing I2S standard comprises two I2S channels, one I2S bus can transmit data for only two sound channels. To play 7.1-channel data, at least 4 I2S buses are therefore needed between the master chip and the slave device (for example, between the main decoding chip and the power-amplifier chip).
A conventional television main control board usually supports only 3 I2S buses or even fewer, which greatly limits the number of channels the television can play. To achieve multi-channel audio playback in the true sense, the number of I2S buses between the master chip and the slave device would have to be increased, which requires redesigning the main control board at considerable cost.
Disclosure of Invention
Embodiments of the present invention provide an audio data transmission method and apparatus, which can transmit more channel data by using limited I2S channel resources.
In a first aspect, the present invention provides an audio data transmission method, including: acquiring audio data of m channels; transmitting the audio data of the m channels to a data receiving end by using n channels of I2S channels; wherein m > n.
In a second aspect, the present invention provides another audio data transmission method, including: receiving transmission data sent by n paths of I2S channels; converting transmission data sent by n paths of I2S channels into audio data of m paths of sound channels; wherein m > n.
In a third aspect, an embodiment of the present invention provides an audio data transmission apparatus, including: an acquisition unit, configured to acquire audio data of m channels; and a transmitting unit, configured to transmit the audio data of the m channels to the data receiving end using the n I2S channels; wherein m > n.
In a fourth aspect, an embodiment of the present invention provides another audio data transmission apparatus, including: the receiving unit is used for receiving transmission data sent by n paths of I2S channels; the conversion unit is used for converting the transmission data sent by the n paths of I2S channels into audio data of m paths of sound channels; wherein m > n.
In a fifth aspect, an embodiment of the present invention provides another audio data transmission apparatus, including: a processor, a memory, a bus, and a communication interface; the memory is used for storing computer-executable instructions, the processor is connected with the memory through the bus, and when the audio data transmission device runs, the processor executes the computer-executable instructions stored in the memory, so that the audio data transmission device executes the audio data transmission method provided by the first aspect.
In a sixth aspect, an embodiment of the present invention provides another audio data transmission apparatus, including: a processor, a memory, a bus, and a communication interface; the memory is used for storing computer execution instructions, the processor is connected with the memory through the bus, and when the audio data transmission device runs, the processor executes the computer execution instructions stored in the memory, so that the audio data transmission device executes the audio data transmission method provided by the second aspect.
In a seventh aspect, an embodiment of the present invention provides a computer storage medium, which includes instructions that, when run on an audio data transmission apparatus, cause the audio data transmission apparatus to execute the audio data transmission method provided in the first aspect.
In an eighth aspect, an embodiment of the present invention provides a computer storage medium, which includes instructions that, when run on an audio data transmission apparatus, cause the audio data transmission apparatus to execute the audio data transmission method provided in the second aspect.
In a ninth aspect, an embodiment of the present invention provides a television that, when running, executes the audio data transmission method according to the first aspect and any implementation thereof, and/or the second aspect and any implementation thereof.
The audio data transmission method and device provided in the embodiments of the present invention recognize that, when I2S channels are used for audio transmission, one I2S channel need not be restricted to carrying the audio data of a single sound channel as the I2S protocol prescribes; instead, the inventive concept decouples the number of I2S channels from the number of sound channels. By sending the audio data of m sound channels to the data receiving end over n I2S channels, the invention avoids the problem that audio data whose sound-channel count exceeds the number of available I2S channels cannot be transmitted because of the protocol's limitation on the number of channels.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below.
Fig. 1 is a schematic signal timing diagram of an I2S channel according to an embodiment of the present invention;
fig. 2 is a second schematic signal timing diagram of an I2S channel according to an embodiment of the present invention;
fig. 3 is an external view of a television according to an embodiment of the present invention;
fig. 4 is a second schematic external view of a television according to an embodiment of the present invention;
fig. 5 is a schematic structural diagram of a television according to an embodiment of the present invention;
fig. 6 is a flowchart illustrating an audio data transmission method according to an embodiment of the present invention;
fig. 7 is a third schematic signal timing diagram of an I2S channel according to an embodiment of the present invention;
fig. 8 is a schematic structural diagram of transmission data according to an embodiment of the present invention;
fig. 9 is a second flowchart illustrating an audio data transmission method according to an embodiment of the present invention;
fig. 10 is a second schematic structural diagram of transmission data according to an embodiment of the present invention;
fig. 11 is a fourth schematic signal timing diagram of an I2S channel according to an embodiment of the present invention;
fig. 12 is a third schematic structural diagram of transmission data according to an embodiment of the present invention;
fig. 13 is a third flowchart illustrating an audio data transmission method according to an embodiment of the present invention;
fig. 14 is a fourth schematic structural diagram of transmission data according to an embodiment of the present invention;
fig. 15 is a fifth schematic structural diagram of transmission data according to an embodiment of the present invention;
fig. 16 is a second schematic structural diagram of a television according to an embodiment of the present invention;
fig. 17 is a schematic diagram of a protocol architecture of a device according to an embodiment of the present invention;
fig. 18 is a fourth flowchart illustrating an audio data transmission method according to an embodiment of the present invention;
fig. 19 is a fifth flowchart illustrating an audio data transmission method according to an embodiment of the present invention;
fig. 20 is a sixth schematic structural diagram of transmission data according to an embodiment of the present invention;
fig. 21 is a seventh schematic structural diagram of transmission data according to an embodiment of the present invention;
fig. 22 is a schematic structural diagram of an audio data transmission apparatus according to an embodiment of the present invention;
fig. 23 is a second schematic structural diagram of an audio data transmission device according to an embodiment of the present invention;
fig. 24 is a third schematic structural diagram of an audio data transmission apparatus according to an embodiment of the present invention;
fig. 25 is a fourth schematic structural diagram of an audio data transmission device according to an embodiment of the present invention;
fig. 26 is a fifth schematic structural diagram of an audio data transmission device according to an embodiment of the present invention;
fig. 27 is a sixth schematic structural diagram of an audio data transmission device according to an embodiment of the present invention.
Detailed Description
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings.
The terms "first" and "second", etc. in the description and drawings of the present invention are used for distinguishing different objects, and are not used for describing a particular order of the objects.
Furthermore, to the extent that the terms "includes" and "having," and any variations thereof, are used in the description of the present invention, it is intended to cover a non-exclusive inclusion. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements but may alternatively include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be noted that, in the embodiments of the present invention, words such as "exemplary" or "for example" are used to indicate examples, illustrations or explanations. Any embodiment or design described as "exemplary" or "e.g.," an embodiment of the present invention is not necessarily to be construed as preferred or advantageous over other embodiments or designs. Rather, use of the word "exemplary" or "such as" is intended to present relevant concepts in a concrete fashion.
The term "and/or" in the present invention covers either one or both of the associated items.
In the description of the present invention, the meaning of "a plurality" means two or more unless otherwise specified.
First, technical terms involved in the embodiments of the present invention are explained:
The I2S (Inter-IC Sound) bus, also called the integrated-circuit built-in audio bus, is a bus standard established by Philips for audio data transmission between digital audio devices. In the I2S protocol used by the I2S bus, one I2S bus includes two I2S channels for data transmission: data of one channel is transferred when the frame clock signal WS is "0", and data of the other channel when WS is "1". In the existing I2S protocol, the two I2S channels of one bus carry the data of different sound channels (WS = 0 indicates that the data being transmitted belongs to the left channel, and WS = 1 to the right channel).
Specifically, one I2S bus mainly includes: MCLK, BCLK, SDATA, WS, etc., wherein:
BCLK, the serial clock. BCLK has 1 pulse for each bit of digital audio.
WS, the frame clock. WS switches between the data of the left and right channels: when WS is "0", data of the left channel is being transmitted, and when WS is "1", data of the right channel. The frequency of WS equals the sampling frequency of the audio data. For example, when the sampling frequency of some audio data is 48 KHz, 48,000 sub-data items of the left channel and 48,000 of the right channel must be played each second, so the frequency of WS must be 48 KHz to transmit the audio data normally.
SDATA, serial data. This line transmits the audio data in two's-complement form.
MCLK, the master clock, also called the system clock, is used to keep the data-sending and data-receiving systems synchronized. Typically, the frequency of MCLK is 256 or 384 times the sampling frequency. For example, if the sampling frequency of the audio data is 48 KHz, MCLK = 48 KHz × 256 = 12.288 MHz.
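The clock relationships above can be sketched as a small computation (a hypothetical helper for illustration; the function name and the 256× default ratio are assumptions consistent with the text):

```python
def i2s_clocks(sample_rate_hz, slot_bits, mclk_ratio=256):
    """Derive the I2S clock frequencies from the sampling setup.

    WS (the frame clock) runs at the sampling frequency; BCLK carries one
    pulse per bit for both channels of a frame; MCLK is typically 256x or
    384x the sampling frequency.
    """
    ws_hz = sample_rate_hz
    bclk_hz = 2 * sample_rate_hz * slot_bits
    mclk_hz = sample_rate_hz * mclk_ratio
    return ws_hz, bclk_hz, mclk_hz

# 48 KHz audio with 32-bit slots, the configuration used in the examples
# below, gives WS = 48 KHz, BCLK = 3.072 MHz, MCLK = 12.288 MHz.
```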
As the technology has developed, a variety of data formats have emerged for transferring data over an I2S bus. Depending on the position of SDATA relative to WS and BCLK, they are classified into the I2S standard format (the Philips-specified format), left-aligned, and right-aligned. Illustratively, as shown in fig. 1, when the I2S bus adopts the 16-bit right-aligned transmission mode, within one sampling period T, after WS changes from "0" to "1", the 16 bits of data are transmitted in sequence from the high bit to the low bit over SDATA in the last 16 BCLK periods of that half-frame, sampled on the rising edge of BCLK. When the 20-bit right-aligned mode is used, after WS changes from "0" to "1" within a sampling period T, the 20 bits of data are transmitted from high bit to low bit in the last 20 BCLK periods, again sampled on the BCLK rising edge. When the 24-bit right-aligned mode is used, the 24 bits of data are likewise transmitted from high bit to low bit over SDATA in the last 24 BCLK periods.
As shown in fig. 1, when the I2S bus adopts a 24-bit left-aligned transmission mode, after WS changes from "0" to "1" in a sampling period T, 24-bit data is sequentially transmitted from the high bit to the low bit of the data by SDATA in a manner of BCLK rising edge acquisition from the first BCLK period after WS change.
For example, fig. 2 shows a transmission diagram of the I2S standard format, where data transfer begins at the second BCLK pulse after WS changes (for example, the second rising edge of BCLK as shown). The figure shows the transmission process in the 24-bit, 20-bit, and 16-bit modes respectively.
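The three alignments differ only in where a sample's bits land inside the slot. This sketch (function name and return convention are our own illustration, not from the patent) computes the slot word as transmitted MSB-first in a 32-bit slot:

```python
def pack_slot(sample, data_width, slot_width=32, fmt="right"):
    """Position a data_width-bit sample inside a slot_width-bit I2S slot.

    "left":  MSB sent on the first BCLK period after WS changes.
    "i2s":   Philips standard format: left-aligned but delayed one BCLK.
    "right": sample occupies the last data_width BCLK periods of the slot.
    """
    s = sample & ((1 << data_width) - 1)
    if fmt == "left":
        return s << (slot_width - data_width)
    if fmt == "i2s":
        return s << (slot_width - data_width - 1)
    if fmt == "right":
        return s
    raise ValueError(fmt)
```

For a 16-bit sample in a 32-bit slot, the right-aligned form leaves the upper 16 bit positions idle, which is exactly the waste the re-encoding scheme below exploits.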
The embodiment of the invention is applied to a scene of audio data transmission by using the I2S bus.
The inventive principle of the present invention is as follows. At present, most televisions adopt the two-channel stereo standard: audio data to be played is processed and then sent to the television's two speakers for output. For example, as shown in fig. 3, current televisions usually use one speaker on each of the left and right sides to output the audio data of the left and right channels respectively, and a television using this playback mode has a limited audio effect. Meanwhile, the commonly used audio standards keep rising: more and more audio data adopts the 5.1-channel, 7.1-channel, or even higher standard, and such data yields multiple channels of sound-channel data after decoding, while the television has only two speakers. The number of speakers must therefore be increased so that more sound channels can be played, achieving a more realistic stereo surround effect.
Taking the 7.1-channel standard as an example, as shown in fig. 4, to realize a 7.1-channel surround effect the television must be able to play 8 channels: front left, front right, center, surround left, surround right, top left, top right, and subwoofer (the subwoofer is usually located on the back of the television and is not shown in the figure).
Because one path of I2S bus in the existing I2S standard comprises two I2S channels, one path of I2S bus can only transmit data of two channels. If the data of 7.1 sound channels needs to be played, at least 4 paths of I2S buses are needed between the master chip and the slave device (for example, between the master decoding chip and the power amplifier end chip) for data transmission.
A conventional television main control board usually supports only 3 I2S buses or even fewer, which greatly limits the number of channels the television can play. To truly realize multi-channel audio playback, the number of I2S buses between the master chip and the slave device would have to be increased, which requires redesigning the main control board at considerable cost.
In view of this technical problem, the embodiments of the present invention use the limited I2S bus resources to transmit data for more sound channels, so that multi-channel stereo playback can be achieved without substantially changing the structure of the existing television main control board.
Based on this inventive principle, an embodiment of the invention provides a data transmission method applied to the main control board of a television. Fig. 5 is a schematic structural diagram of a television main control board according to an embodiment of the present invention. The main control board 01 includes a master chip 10 and a slave device 20. The master chip 10 includes a decoder 101, a Digital Audio Player (DAP) 102, and a re-encoder 103. The slave device 20 includes an audio coprocessor 201 and a plurality of power amplifier units 202 (for example, as shown in the figure, five units: 202a, 202b, 202c, 202d, and 202e). The audio coprocessor 201 may be an XMOS chip.
When audio data needs to be played, the master chip 10 reads the audio data to be played and decodes it with the decoder 101 to generate each channel of sound-channel data to be played. Taking 7.1 channels as an example, the decoding performed by the decoder 101 may yield original audio data for 8 channels: top right (TopR), top left (TopL), surround left (SL), surround right (SR), front left (L), front right (R), subwoofer (Woofer), and center (Center).
After the decoder 101 obtains each path of original audio data to be played, the digital audio player 102 performs processing such as AVI (audio video interface), DRC (dynamic range compression), EQ (Equalizer) parameter setting on each path of original audio data, and generates processed audio data of 8 channels.
Further, the re-encoder 103 processes the 8 channels of processed audio data according to the method provided by the embodiment of the present invention and transmits them to the slave device 20 using 6 I2S channels (specifically, the 3 I2S buses in the figure, which together include 6 I2S channels).
On the slave device 20 side, after the audio coprocessor 201 receives the data sent over the 3 I2S buses, it recovers the 8 channels of sound-channel data according to the audio data transmission method provided by the embodiment of the present invention and passes them to the power amplifier units 202, which drive the speakers to play the corresponding channel data.
It should be noted that fig. 5 only illustrates one application scenario of the audio data transmission method provided by the embodiment of the present invention. In practical applications, those skilled in the art may apply the embodiment to other scenarios that address the same technical problem and achieve the same technical effect. For example, in some application scenarios, the digital audio player 102 may be omitted from the master chip 10: after the decoder 101 decodes the multiple channels of sound-channel data, the re-encoder 103 directly re-encodes them according to the audio data transmission method provided by the embodiment of the present invention and sends the re-encoded data to the slave device 20 over the I2S channels. Alternatively, those skilled in the art may apply the audio data transmission method to devices other than a television to solve the same technical problem. The invention is not limited in this respect.
The first embodiment is as follows:
as shown in fig. 6, an embodiment of the present invention provides an audio data transmission method, which specifically includes:
s301, decoding the audio data to be processed to generate m paths of original channel data.
For example, in the main control board shown in fig. 5, step S301 may be performed by the decoder 101.
S302, preprocessing the m paths of original sound channel data respectively to generate audio data of the m paths of sound channels.
The method for preprocessing the m paths of original channel data specifically comprises the following steps: and respectively modulating the AVI, DRC, EQ and other parameters of the m paths of original channel data to generate the audio data of the m paths of channels.
For example, in the main control board shown in fig. 5, step S302 may be performed by the digital audio player 102.
In some application scenarios, when the audio data of the m channels can be acquired in other ways, S301 and S302 need not be executed. The embodiments of the present invention are not limited in this respect.
S303, acquiring audio data of m channels.
For example, in fig. 5, the re-encoder 103 acquires audio data of m channels.
And S304, transmitting the audio data of the m channels to a data receiving end by using the n channels of the I2S channels. Wherein m > n.
In one implementation, as shown in fig. 6, before S304 is performed the method further includes: S305, comparing m with n; if m > n, S304 is executed; otherwise, the audio data of the m channels is transmitted by the existing transmission method. The embodiment of the present invention takes into account that when a device (such as a television) plays and transmits audio data, different audio data contain different numbers of channels. Therefore, before the audio data is transmitted, S305 can determine whether the audio data needs to be re-encoded as described in S304; when re-encoding is not required, S304 is skipped and the audio data is transmitted directly by the existing method, improving transmission efficiency.
The embodiment of the present invention recognizes that if the number of I2S channels is decoupled from the number of sound channels of the audio data to be processed, then instead of the prior-art approach in which one I2S channel carries the audio data of only one sound channel, one I2S channel can carry the data of more than one sound channel; that is, a limited number of I2S channels can transmit audio data with a larger number of sound channels. Sound-channel data for more channels can thus be transmitted without increasing the number of I2S channels, enabling stereo surround playback.
In one implementation, the embodiment of the present invention observes that when audio data is currently transmitted over an I2S channel, the limited data volume of the original audio data means that the audio data of one sound channel cannot fully occupy the transmission bits of the I2S channel, wasting its transmission resources. For example, in the timing diagram of the I2S channel shown in fig. 7, SDATA carries data only in part of the BCLK periods after WS switches (the shaded portion of the figure), and no data is transmitted on the remaining data bits, so transmission resources of the I2S channel are wasted.
Specifically, assume the sampling frequency of the original audio data is 48 KHz with a sampling bit width of 16 bits, and that the I2S channel used has WS = 48 KHz and BCLK = 2 × 48 KHz × 32 bit = 3.072 MHz. Then, within one frame clock cycle, 32 bits of left-channel data can be transmitted while WS = 0 and 32 bits of right-channel data while WS = 1. Since the sample bit width of the original audio data is only 16 bits, 16 bits are wasted in each half of the frame clock cycle.
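The capacity arithmetic behind this observation can be stated directly (a sketch under the 32-bit-slot, 16-bit-sample assumptions above; the helper name is ours):

```python
def channels_supported(n_buses, slot_bits=32, sample_bits=16):
    """Maximum number of sample_bits-wide channel sub-data words that fit
    into the slots of n_buses I2S buses within one sampling period
    (each bus contributes two slot_bits-wide I2S channels)."""
    return (n_buses * 2 * slot_bits) // sample_bits

# With the prior-art one-sound-channel-per-slot mapping, 3 buses carry only
# 6 channels and half of every 32-bit slot stays idle; bit-packing the same
# slots could carry up to channels_supported(3) = 12 16-bit channels.
```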
In view of this, the embodiment of the present invention uses the otherwise idle data bits, increasing the transmission efficiency of the I2S channel so that more sound-channel data can be carried over a limited number of I2S channels. In one implementation, S304 specifically includes:
s304a1, the m channel sub-data is encoded to generate n channels of encoded data.
Each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period.
S304a2, within a preset frame clock period, sending the n paths of encoded data to the data receiving end over the n I2S channels.
Illustratively, take the original audio data of 7.1 channels as an example. As shown in fig. 8, suppose the original audio data is 16 bit @ 48 KHz (i.e., the audio sampling frequency is 48 KHz and the sampling bit width is 16 bits). Under the existing transmission method, 5 I2S buses would be required. Specifically, as shown in fig. 8, one I2S channel of the first bus (i.e., WS at one of "0" or "1") transmits the data of channel Ch0; the other I2S channel of the first bus (i.e., WS at the other of "0" or "1") transmits the data of channel Ch1; and so on, until the data of channel Ch8 is transmitted in one I2S channel of the fifth bus, whose other I2S channel may be idle.
If the WS of the I2S channel used here is 48 KHz, then BCLK = 2 × 48 KHz × 32 bit = 3.072 MHz. That is, within one frame clock period, at most 32 bits of left-channel data and 32 bits of right-channel data can be transmitted.
Therefore, as shown in fig. 8, the channel sub-data of the six channels Ch0, Ch2, Ch3, Ch5, Ch6, and Ch8 within the same audio sampling period can be placed in the upper bits of the six I2S channels included in the three I2S buses, and the channel sub-data of the remaining channels Ch1, Ch4, and Ch7 can be split and padded into the remaining bits of those six I2S channels. In this way, the audio data of all the channels is transmitted over three I2S buses without changing the clock frequency of the I2S channels.
Furthermore, in this embodiment of the present invention, S304a1 may specifically include: and respectively taking the n channel subdata in the m channel subdata as high-order data of the n-path coded data, and supplementing the channel subdata of the rest channels in the m channel subdata after the n channel subdata to generate the n-path coded data.
Specifically, supplementing the channel sub data of the remaining channels of the m channel sub data after the n channel sub data specifically includes:
splitting the channel subdata of each channel in the rest channels into at least one data segment; then, at least one data segment is supplemented after the n channel sub data, respectively.
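The packing described in S304a1 above can be sketched as follows. This is a minimal illustration under assumed parameters (16-bit samples, 32-bit slots, segments distributed round-robin into the slots' spare low bits); the patent does not fix a particular segment placement.

```python
def pack_slots(samples, n_lanes=3, slot_bits=32, sample_bits=16):
    """Pack m 16-bit channel sub-data words (one audio sampling period)
    into the 2*n_lanes slots of n_lanes I2S paths: the first 2*n_lanes
    words occupy the high-order bits of each slot; each remaining word
    is split into 8-bit segments padded into the spare low bits."""
    n_slots = 2 * n_lanes
    high, rest = samples[:n_slots], samples[n_slots:]
    slots = [s << (slot_bits - sample_bits) for s in high]
    # split each remaining channel word into 8-bit segments, MSB first
    segs = [(s >> sh) & 0xFF
            for s in rest
            for sh in range(sample_bits - 8, -1, -8)]
    per_slot = (slot_bits - sample_bits) // 8  # spare byte positions per slot
    for i, seg in enumerate(segs):
        slot, pos = i % n_slots, i // n_slots
        assert pos < per_slot, "remaining channels do not fit in spare bits"
        slots[slot] |= seg << (slot_bits - sample_bits - 8 * (pos + 1))
    return slots

# Nine channels of 16-bit data (as in Fig. 8) -> 6 slots on 3 I2S paths.
samples = [0x1101 + i for i in range(9)]
slots = pack_slots(samples)
assert len(slots) == 6
assert slots[0] >> 16 == samples[0]                 # sub-data in high bits
assert (slots[0] >> 8) & 0xFF == samples[6] >> 8    # first 8-bit segment
```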
In the embodiment of the invention, without increasing the number of I2S paths and without reducing the total amount of the m channel sub-data, the m channel sub-data are sent within one frame clock period of the n I2S paths by changing the data structure of the m channel sub-data, so that more realistic stereo surround sound can be played.
In one implementation, it is considered that operation errors are more likely to occur in the process of splitting and recombining data, which in turn causes data loss. For example, in FIG. 8, Ch1, Ch4, and Ch7 are each divided into two 8-bit data segments, and the two 8-bit segments need to be recombined at the data receiving side; data corruption is easily introduced in this process. Therefore, in the embodiment of the present invention, the n channel sub-data serving as the high-order data of the n channels of encoded data specifically include:
channel subdata of n channels of highest importance among the m channels.
Specifically, for example, the 7.1 channels include a top right channel TopR, a top left channel TopL, a surround left channel SL, a surround right channel SR, a front left channel L, a front right channel R, a subwoofer channel Woofer, and a center channel Center. The channel sub-data of the n channels with the highest importance may then be those of the 6 channels TopR, TopL, SL, SR, L, and R, while the subwoofer channel Woofer and the center channel Center are appended after the channel sub-data of those 6 channels. The advantage of this is that when the clock signal is disturbed or abnormal, the main channels and the collected surround and height sound are not lost or corrupted. Of course, the splitting mode and the receiving/recombining mode are not fixed.
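The importance-based selection above can be sketched as a simple sort. The priority order here merely restates the paragraph's example; it is illustrative, not a ranking defined by the patent.

```python
# Hypothetical importance ranking, following the example in the text.
PRIORITY = ["TopR", "TopL", "SL", "SR", "L", "R", "Woofer", "Center"]

def split_by_importance(channels, n):
    """Return (n most-important channels for the high-order slots,
    remaining channels whose sub-data get split and appended)."""
    ordered = sorted(channels, key=PRIORITY.index)
    return ordered[:n], ordered[n:]

hi, rest = split_by_importance(
    ["Center", "L", "Woofer", "R", "SL", "SR", "TopL", "TopR"], 6)
assert rest == ["Woofer", "Center"]
assert hi[0] == "TopR"
```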
In one implementation, it is considered that in some application scenarios the number of remaining bits of the I2S channels is too small, so that even after splitting, the channel sub-data cannot be completely filled into the remaining bits. For example, taking the original audio data of 7.1 channels at 32bit@48KHz as an example, if 3 I2S paths that each transmit at most 32-bit left and right channel data within one frame clock period are used, the 6 I2S channels included in the 3 I2S paths can accommodate exactly 6 channels of data, with no bits remaining. In this case, in order to transmit the m channel sub-data through the n I2S paths, the maximum amount of data transmittable in one frame clock period may be increased by raising the serial clock frequency BCLK of the I2S paths or by changing the sampling mode based on BCLK. Accordingly, in the embodiment of the present invention, as shown in FIG. 9, before S304a1 is executed, the embodiment further includes:
and S304a3, increasing the frequency of the serial clock of the n paths of I2S channels. Or the sampling mode of the serial data of the n paths of I2S channels is switched from the serial clock single-edge acquisition mode to the serial clock double-edge acquisition mode.
Specifically, as shown in FIG. 10, assume the original audio data is 12-channel audio data (specifically including Ch0 through Ch11 in the figure) with a sampling frequency of 48KHz and a sampling bit width of 16 bits, transmitted over 3 I2S paths that each carry at most 16-bit left and right channel data within one frame clock period T. Then only 6 channels of channel sub-data (Ch0, Ch1, Ch4, Ch5, Ch8, Ch10 in FIG. 10) can be transmitted within one frame clock period T. In this case, in order to ensure that the channel sub-data of all 12 channels can be transmitted at the normal playback rate, the frequency of the serial clock BCLK of the n I2S paths may be increased, or the sampling mode of the serial data SDATA of the n I2S paths may be switched from BCLK single-edge acquisition to BCLK double-edge acquisition, so as to increase the amount of data transmitted within one frame clock period.
Specifically, taking FIG. 11 as an example, assume that before the serial clock frequency or sampling mode is changed, the timing of the serial clock is as shown by BCLK_1, with the corresponding SDATA sampled on the BCLK rising edge. The timing of the serial data is then shown as SDATA_1 in the figure: within one frame clock period T, BCLK_1 has 8 rising-edge triggers, correspondingly transferring 8 bits of SDATA_1 data.
Then, if the signal frequency of the serial clock BCLK is doubled, the timing of the serial data is shown as SDATA_2 in the figure, and BCLK_2 has 16 rising-edge triggers within one frame clock period T, correspondingly transferring 16 bits of SDATA_2 data. The amount of data transmittable in the same time is thus doubled.
Alternatively, the sampling mode of the serial data SDATA can be changed to increase the amount of data transmitted. Specifically, as shown in FIG. 11, the frequency of the serial clock BCLK_3 is unchanged relative to BCLK_1, but the sampling mode of the transmitted data SDATA is changed from BCLK rising-edge sampling to BCLK double-edge sampling. In this case, 16 bits of SDATA_3 data can be transmitted within one frame clock period T.
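The throughput effect of the two options described above can be sketched with a small helper; the numbers below reproduce the 8-bit and 16-bit figures from the BCLK_1/BCLK_2/BCLK_3 example (a sketch, not a timing model).

```python
def bits_per_frame(bclk_hz, ws_hz, double_edge=False):
    """Serial data bits transferable on one I2S line per WS frame:
    one bit per sampling edge; double-edge sampling doubles the edges."""
    edges = bclk_hz // ws_hz
    return edges * (2 if double_edge else 1)

# BCLK_1: 8 rising edges per frame -> 8 bits
assert bits_per_frame(8 * 48_000, 48_000) == 8
# BCLK_2: doubled clock -> 16 bits
assert bits_per_frame(16 * 48_000, 48_000) == 16
# BCLK_3: same clock, double-edge sampling -> 16 bits
assert bits_per_frame(8 * 48_000, 48_000, double_edge=True) == 16
```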
Continuing with the example shown in FIG. 10, when the frequency of the serial clock BCLK of the n I2S paths is doubled, or the sampling mode of the serial data SDATA is switched from single-edge to double-edge acquisition, the amount of data transmittable in one frame clock (WS) period T′ may be doubled (the length of period T′ is the same as that of period T). Taking channel I2S D0 in the figure as an example, before changing the BCLK frequency or the SDATA sampling mode, the channel can transmit the channel sub-data of the two channels Ch0 and Ch1 within one frame clock period T; afterwards, the channel sub-data of the four channels Ch0, Ch1, Ch2, and Ch3 can be transmitted.
In one implementation, as shown in fig. 9, before performing S304a1 or performing S304a3, the method includes:
S304a4, determining, according to the current sampling mode of the serial clock BCLK and the serial data SDATA, whether the data capacity of the serial data within one frame clock period can accommodate the m channel sub-data. If yes, S304a1 is executed directly; if not, S304a3 is executed first.
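The capacity check in S304a4 amounts to comparing the payload against the frame capacity of the n paths. A minimal sketch (parameter names are assumptions, not from the patent):

```python
def frame_capacity_bits(bclk_hz, ws_hz, n_lanes, double_edge=False):
    """Total serial-data bits the n I2S paths carry in one WS frame."""
    per_lane = (bclk_hz // ws_hz) * (2 if double_edge else 1)
    return n_lanes * per_lane

def needs_speedup(m, sample_bits, bclk_hz, ws_hz, n_lanes, double_edge=False):
    """S304a4: True if the m channel sub-data exceed one frame's capacity,
    in which case S304a3 (raise BCLK or use double-edge sampling) applies."""
    return m * sample_bits > frame_capacity_bits(
        bclk_hz, ws_hz, n_lanes, double_edge)

# 12 channels of 16-bit data over 3 paths carrying 2x16 bits per frame:
assert needs_speedup(12, 16, 32 * 48_000, 48_000, 3)
# After doubling BCLK the sub-data just fit:
assert not needs_speedup(12, 16, 64 * 48_000, 48_000, 3)
```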
In another implementation manner, as shown in fig. 9, in an embodiment of the present invention, S304 may further include:
and S304b, transmitting the m channel subdata to a data receiving end by using a plurality of frame clock cycles of the n paths of I2S channels.
Each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period.
Illustratively, as shown in FIG. 12, when the channel data of 12 channels needs to be transmitted using 3 I2S paths, each I2S path may transmit the channel sub-data of two channels in the first frame clock period T1 (as shown in the figure, in the T1 period the I2S D0 channel transmits Ch0 and Ch1, the I2S D1 channel transmits Ch4 and Ch5, and the I2S D2 channel transmits Ch8 and Ch9), and the channel sub-data of the remaining channels may be transmitted in the next frame clock period T2 (as shown in the figure, in the T2 period the I2S D0 channel transmits Ch2 and Ch3, the I2S D1 channel transmits Ch6 and Ch7, and the I2S D2 channel transmits Ch10 and Ch11).
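The Fig. 12 arrangement above can be written as a channel-to-slot mapping. The layout here follows my reading of the figure description (four consecutive channels per lane, two per frame period) and is an assumption, not a mapping fixed by the patent.

```python
def schedule_fig12(ch, channels_per_lane=4):
    """Map a channel index to (lane, frame period, WS slot) under the
    assumed Fig. 12 layout: each lane carries channels_per_lane channels,
    two per frame clock period."""
    lane, within = divmod(ch, channels_per_lane)
    frame, slot = divmod(within, 2)
    return lane, frame, slot

assert schedule_fig12(0) == (0, 0, 0)    # Ch0: I2S D0, period T1
assert schedule_fig12(4) == (1, 0, 0)    # Ch4: I2S D1, period T1
assert schedule_fig12(11) == (2, 1, 1)   # Ch11: I2S D2, period T2
```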
The second embodiment:
an embodiment of the present invention further provides a data transmission method, as shown in fig. 13, which specifically includes:
S401, obtaining audio data of m channels, where 4 < m ≤ 8.
With regard to the specific execution steps and the effects of S401, reference may be made to the contents of S303 in the above embodiment.
S402, coding the m channel subdata to generate 4 channels of coded data.
Regarding the specific execution steps and the effects of S402, reference may be made to the contents of S304a1 in the above embodiment.
S403, in a preset frame clock period, transmitting the 4 channels of encoded data to the data receiving end through the 4 I2S paths.
Specifically, 4 channel subdata of the m channel subdata are respectively used as high-order data of the 4-channel encoded data, and channel subdata of the rest channels of the m channel subdata are supplemented after the 4 channel subdata to generate the 4-channel encoded data.
Specifically, the channel sub-data of the remaining channels in the m channel sub-data may be sequentially supplemented after different channel sub-data in the 4 channel sub-data.
Illustratively, as shown in FIG. 14, the channel sub-data of the four channels L, C, Rs (or Rrs), and Lfh (or Lrh) are respectively placed in the high-order bits of the four I2S channels of the two I2S paths (I2S D0 and I2S D1 in the figure). Then, the channel sub-data of the four channels R, LFE, Ls (or Lrs), and Rfh are sequentially appended after the channel sub-data of the four channels L, C, Rs (or Rrs), and Lfh (or Lrh).
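For the second embodiment, where the slot count equals the channel count to be appended, the packing degenerates to pairing one high-order word with one appended word per slot. A sketch with dummy sample values (channel names/order are only the figure's example):

```python
def pack_two_lane(high_chs, low_chs, sample_bits=16):
    """Second-embodiment sketch: four channel sub-data occupy the high
    16 bits of the four 32-bit I2S slots on two paths; the other four
    are appended in the low 16 bits."""
    return [(h << sample_bits) | (l & 0xFFFF)
            for h, l in zip(high_chs, low_chs)]

# e.g. [L, C, Rs, Lfh] in the high bits, [R, LFE, Ls, Rfh] appended
slots = pack_two_lane([0x0001, 0x0002, 0x0003, 0x0004],
                      [0x0005, 0x0006, 0x0007, 0x0008])
assert slots[0] == 0x00010005
```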
For other specific steps and effects of S403, reference may be made to the content of S304a2 in the above embodiment.
In one implementation, after the audio data of the m channels is acquired at S401, the method further includes:
s404, the m channel subdata is sent to a data receiving end by using a plurality of frame clock cycles of the 4 paths of I2S channels.
Illustratively, as shown in FIG. 15, in the first frame clock period T1, the channel sub-data of the four channels L, R, Rs (or Rrs), and Ls (or Lrs) are transmitted through the 4 I2S channels included in the two I2S paths (I2S D0 and I2S D1 in the figure). Then, in the second frame clock period T2, the remaining four channels (C, LFE, and the others) are transmitted.
Regarding other specific steps performed in S404 and the effects thereof, reference may be made to the content of S304b in the above embodiment.
The following introduces applications of the first embodiment and the second embodiment with reference to practical application scenarios:
as shown in fig. 16, when a device such as a television needs to play audio data. First, the first processing chip decodes the sound source data by a Decoder (Decoder) (corresponding to step S301 in the above embodiment), and then pre-processes the decoded audio data of each channel by a Digital Audio Player (DAP) in the second processing chip (corresponding to step S302 in the above embodiment). In an implementation manner, the functions of the first processing chip and the second processing chip may also be implemented by one or more DSP chips.
The preprocessed audio data then needs to be sent to a power amplifier for playback. In one implementation, the audio data of multiple channels may be mixed into two-channel data by an audio mixing (MIX) technique and then played through a power amplifier or other device, but this degrades the quality of the audio data. Therefore, in another implementation, as shown in the figure, a third processing chip processes the audio data in the manner of the first and second embodiments, achieving lossless transmission of the audio data over fewer I2S channels.
When the above embodiments of the present invention are applied to devices such as a television, the following describes the application of the first embodiment and the second embodiment in conjunction with the working process of the devices:
1. When the target audio data needs to be played, the upper layer opens the multi-channel APK application and notifies the middleware and the driver layer of the mode. Specifically, the protocol architecture of the device is shown in FIG. 17.
2. After receiving the upper layer instruction, the middleware opens the multi-channel related code.
3. The driver layer enumerates the audio devices connected to the underlying hardware, for example a USB device, and opens the USB channel.
4. Audio files are input into the audio device and decoded by the chip (Decode), after which the source data resides at the bottom layer.
5. The driver layer requests the middle layer to enumerate the USB device and asks whether the data needs to be sent out.
6. The middleware requests the upper layer to receive the AUDIO data and determine whether to send the AUDIO data.
7. And the upper layer responds to inform data transmission.
8. The response instruction is passed to the middle layer, which sends the multi-channel data processing mode (Dolby ATMOS, DRC, etc.) to the driver layer to post-process the data at the bottom layer.
9. After sound-effect post-processing, the driver layer receives the data compressed by the middle layer, packages it into the USB data format, and transmits the processed data to the power amplifier through the USB channel.
10. If an I2S device is enumerated instead, after the middle layer receives the sending instruction, it sends the multi-channel data processing mode (Dolby ATMOS, DRC, etc.) and the audio data transmission method provided by the above embodiments of the present invention to the driver layer, so that the driver layer splits and recombines the bottom-layer data according to the audio data transmission method provided by the above embodiments, and then transmits the data directly through I2S.
11. The 16bit 8ch audio data is split and arranged at the driver layer; at the middle layer, the 16-bit subwoofer (SW) and center (C) data are split into SwH 8bit, SwL 8bit, CH 8bit, and CL 8bit.
From the above process it can be seen that, in one implementation, in order not to redesign the main control board, the transmission of audio data may be implemented through the USB interface. That method, however, requires the audio data to be packaged according to the USB interface protocol before transmission, which introduces a large delay and is not suitable for audio data that must be played synchronously. By contrast, the audio data transmission method provided by the embodiment of the present invention saves system cost and reduces delay.
Example three:
an embodiment of the present invention further provides an audio data transmission method, which is specifically shown in fig. 18. The method comprises the following steps:
and S501, receiving transmission data sent by n paths of I2S channels.
In one implementation, after the audio data transmission method provided in the first embodiment or the second embodiment is performed, the content included in S501 is specifically configured to be performed after S304.
Illustratively, in the slave device 20 shown in fig. 5, the content of steps S501 and S502 below is executed by the audio coprocessor 201.
S502, converting the transmission data sent by the n I2S paths into audio data of m channels, where m > n.
Specifically, in order to enable the audio coprocessor 201 to convert the transmission data sent by the n channels of I2S channels into the audio data of the m channels, before performing S502, the method further includes:
s503, receiving the coding information sent by the data sending end. So as to split the audio data of m channels from the transmission data sent by the n channels of I2S channels according to the decoding mode corresponding to the coding information.
In one implementation, S501 specifically includes: and receiving n paths of coded data sent by n paths of I2S channels in a preset frame clock period. S502 specifically includes: the n channels of encoded data are decoded to generate m channels of sub data.
Each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period.
In one implementation, as shown in fig. 19, S502 specifically includes:
s502a1, acquiring n channel subdata from the high-order data of n channels of coded data in sequence.
S502a2, obtaining channel sub-data of the m channel sub-data except the n channel sub-data from the remaining data of the n channels of encoded data.
The n channel sub-data specifically includes: channel subdata of n channels of highest importance among the m channels.
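Steps S502a1 and S502a2 above invert the sender's packing. The sketch below assumes the same layout as the hypothetical packing sketch given for the first embodiment (16-bit words, 32-bit slots, 8-bit segments distributed round-robin); the actual layout must follow the coding information received in S503.

```python
def unpack_slots(slots, m, slot_bits=32, sample_bits=16):
    """Recover m channel sub-data words: the high-order bits of each
    slot hold one word (S502a1); the remaining words are rebuilt from
    the 8-bit segments in the slots' spare low bits (S502a2)."""
    n_slots = len(slots)
    high = [s >> (slot_bits - sample_bits) for s in slots]
    n_rest = m - n_slots
    bytes_per_word = sample_bits // 8
    segs = []
    for i in range(n_rest * bytes_per_word):
        slot, pos = i % n_slots, i // n_slots
        shift = slot_bits - sample_bits - 8 * (pos + 1)
        segs.append((slots[slot] >> shift) & 0xFF)
    rest = []
    for c in range(n_rest):
        word = 0
        for b in segs[c * bytes_per_word:(c + 1) * bytes_per_word]:
            word = (word << 8) | b
        rest.append(word)
    return high + rest
```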
Specifically, the above-mentioned S502 can be used as a reverse decoding process of the steps of S304a1, S304a2, etc. in the above-mentioned first embodiment, and the technical problems to be solved and the advantageous effects to be achieved are the same as those in the above-mentioned first embodiment. This is not described in detail.
In another implementation manner, as shown in fig. 19, S502 specifically includes:
s502b1, receiving transmission data of a plurality of frame clock periods sent by n paths of I2S channels;
s502b2, converting the transmission data sent by the n channels of I2S channels into audio data of m channels, which specifically includes:
s502b3, generating m pieces of channel sub data by using the transmission data of a plurality of frame clock cycles.
Each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period.
Specifically, the step S502 may be performed as a reverse process of the steps S304a1, S304a2, S304b, etc. in the first embodiment, and the technical problems to be solved and the beneficial effects to be achieved are the same as those in the first embodiment. This is not described in detail.
In one implementation, when the n I2S paths have independent clock signals (including BCLK, WS, and MCLK), decoding the n channels of encoded data to generate the m channel sub-data while simultaneously receiving the data transmitted by the n I2S paths would occupy more system resources and thus affect the normal operation of the device.
Furthermore, in this embodiment of the present invention, if the n I2S channels have independent clock signals, the decoding is performed on the n encoded data to generate m channel sub-data, which specifically includes:
and storing the data received from the n paths of I2S channels in the audio sampling period in a cache, and decoding the encoded data in the cache after the audio sampling period is ended to generate m channel subdata.
Furthermore, if the n I2S channels use a common clock signal, in order to increase the utilization rate of the CPU, the n I2S channels are used to receive data and decode the received data. Specifically, decoding the received data includes: the received data is split and combined to generate m pieces of channel subdata.
The following describes the audio data transmission method provided in the third embodiment with reference to an example:
specifically, the internal I2S architecture modes of different chip schemes are different, and four groups of signals including MCLK, BCLK, WS and SDATA are arranged in an I2S protocol, wherein MCLK is a main clock signal. BCLK clock signal, WS channel select, SDATA is channel data. The chip architecture is different, and the form of I2S transmission is also different.
In one approach, the n I2S channels may use the same clock signal. At this time, when receiving transmission data sent by n I2S channels, a form of splitting while receiving is adopted.
Specifically, the output channels of the I2S channels of different chips are different, for example, MSD858, the core end may support 2I 2S channel outputs, but the 2I 2S channels share one MCLK, BCLK, and WS, that is, the single CLK collects, the single CLK is sent from the core end to the external audio coprocessor 201, and the audio coprocessor 201 collects, according to the single CLK, the audio data of m channels sent from the core end (specifically, the main chip 10 shown in fig. 5) according to different coding arrangement forms. The audio coprocessor 201 may be an XMOS chip.
Specifically, taking 16bit@48KHz → 24bit@48KHz as an example, assume that audio data of 8 channels is transmitted over the 3 I2S paths. The operation of the XMOS chip serving as the audio coprocessor 201 is described below:
1. and after the data are sent, the data reach an XMOS terminal, and the XMOS chip starts to rearrange the data after receiving the data.
2. Because the 3 I2S paths are sent to the receiving end simultaneously, the XMOS platform driver software disassembles the data according to the coding information provided by the core end.
3. As shown in FIG. 20, the front 16-bit data of the left and right channels is intercepted from each of the 3 I2S paths (i.e., the front 16 bits of the two I2S channels included in each path are intercepted), restoring the 16-bit data corresponding to the Ch0, Ch1, Ch2, Ch3, Ch4, and Ch5 channels, which are respectively placed in the 6 channels included in the first 3 I2S paths of the XMOS chip.
4. As shown in FIG. 20, the last 8 bits received on the two I2S channels of the I2S_0 path are combined into the data of Ch6 and placed in one channel included in the fourth I2S path of the XMOS.
5. As shown in FIG. 20, the last 8 bits received on the two I2S channels of the I2S_1 path are combined into the data of Ch7 and placed in the other channel included in the fourth I2S path of the XMOS.
For another example, taking 16bit@48KHz → 16bit@96KHz as an example, assume that audio data of 12 channels is transmitted over the 3 I2S paths. The operation of the XMOS chip serving as the audio coprocessor 201 is described below:
First, in order to raise the sampling frequency to 96KHz, one way is to increase the frequency of BCLK; another is to perform data acquisition using BCLK double-edge sampling without changing the original clock frequency. Specifically:
1. and after the data are sent, the data reach an XMOS terminal, and the XMOS chip starts to rearrange the data after receiving the data.
2. Because the 3 paths of I2S channel data are simultaneously sent to the receiving end, the XMOS platform driving software is disassembled according to the multichannel data coding sequence provided by the core end.
3. As shown in FIG. 21, the I2S channel data on each path is disassembled according to the clock waveform supplied by the data transmitting end. The XMOS end also adopts double-edge sampling, extracting the I2S_0 data into 4ch 16bit data, which is stored into the outputs I2S_a and I2S_b respectively.
4. As shown in FIG. 21, the data on I2S_1 is likewise double-edge sampled, and 4ch 16bit data is extracted and placed into the outputs I2S_c and I2S_d respectively.
5. As shown in FIG. 21, the data on I2S_2 is likewise double-edge sampled, and 4ch 16bit data is extracted and placed into the outputs I2S_e and I2S_f respectively.
The XMOS stores the received and split data in the output I2S channels (specifically, I2S_a, I2S_b, I2S_c, I2S_d, I2S_e, and I2S_f in FIG. 21), generates a 48KHz clock signal internally, and under that clock signal transmits the multi-channel data to the back-end power amplifier chip for sound conversion.
For another example, consider the Nova 72671 and subsequent products. The core chip supports 3 I2S outputs. If the 3 I2S paths share one CLK, then during data transmission, once the shared CLK goes wrong, the received data on all 3 I2S paths is wrong, and abnormal sound and other problems may occur. Sources of CLK errors include software delays, interference from peripheral signals, and the hardware circuit.
For example, when the XMOS chip receives and splits 24-bit data from the data transmitting end, abnormal jitter of the CLK may cause data errors. If the CLK jitters during the low level of a 16-bit word and a bit is read as 1 instead of 0, the subsequent splitting of the 24-bit and 16-bit data by the XMOS becomes abnormal, and the data is no longer the original multi-channel data. With a shared CLK, all 3 I2S data paths are affected at the same time and the multi-channel output exhibits abnormal sound; this architecture therefore has that disadvantage, although its advantage is saving cost inside the chip.
In this patent, the internal structure of the 3 I2S outputs of the NT72673 is adopted, in which each group of I2S channels has independent MCLK, WS, BCLK, and SDATA; more importantly, the MCLK of the 3 I2S paths is kept synchronized.
(1) One method is to wait before transmission, i.e., to wait until the data of all 3 I2S paths has been processed and then transmit it simultaneously.
(2) Another is to initialize and resend: the first bit of the 3 paths of initialized audio data is collected, and that first valid bit is used as the MCLK reference for the waiting audio.
Since the 3 I2S paths no longer share one CLK, the transmission data is carried independently on the 3 I2S paths; data errors caused by an exception on one CLK path are therefore greatly reduced.
With reference to the following example, the following describes a process of storing received data in a buffer and encoding encoded data in the buffer after a sampling period is finished in the embodiment of the present invention:
take 16bit @48hkz- - -24bit @48hkz as an example:
1. the data transmitting end (specifically, an SOC chip) transmits all the 24-bit data transmitted by each channel in the 3 paths of I2S channels to the XMOS chip, the XMOS chip does not perform simultaneous collection and disassembly, and the XMOS chip caches the 24-bit data in the buffer.
2. And the data transmitting end provides the XMOS chip with the coding arrangement mode that the 8ch data of the data transmitting end is changed into the 6ch data. The XMOS chip receives the encoded information transmitted by the data transmitting end, as described in S503.
3. After the three groups of data are cached, due to the fact that clock signals exist inside the XMOS end, the first bit of the 3 paths of data is collected to be effective MCLK signals respectively, and the first 16bit effective data of the 24bit data are collected. Taking fig. 20 as an example, the collected 16-bit effective data are combined to obtain audio data of Ch0, ch1, ch2, ch3, ch4, ch5, and 6 channels.
4. As shown in fig. 20, after acquiring 24-bit first-order valid data of 3I 2S channels as valid MCLK signals, the last 8-bit data of the 24-bit data is acquired, and data combination is performed again to obtain audio data of Ch6, ch7, and 2 channels.
5. The restored original multichannel 5.1.2 data is transmitted to a power amplification end through 4 paths of I2S channels (specifically including I2S _ a, I2S _ b, I2S _ c and I2S _ d in the figure), so that the aim of the multichannel sound effect is fulfilled.
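Steps 3 and 4 above can be sketched as one recombination pass over the buffered 24-bit slot words. Which slot contributes the high byte of Ch6/Ch7 is my assumption from the figure description, not stated explicitly.

```python
def split_24bit_slots(lanes):
    """lanes: three (left, right) pairs of 24-bit slot words.
    The first 16 bits of each slot give Ch0..Ch5; the trailing 8 bits
    of the two slots on I2S_0 and I2S_1 recombine into 16-bit Ch6 and
    Ch7 (byte order assumed, per the Fig. 20 description)."""
    main = [w >> 8 for pair in lanes for w in pair]   # six 16-bit channels
    ch6 = ((lanes[0][0] & 0xFF) << 8) | (lanes[0][1] & 0xFF)
    ch7 = ((lanes[1][0] & 0xFF) << 8) | (lanes[1][1] & 0xFF)
    return main + [ch6, ch7]

lanes = [((0x1111 << 8) | 0xAB, (0x2222 << 8) | 0xCD),
         ((0x3333 << 8) | 0xEF, (0x4444 << 8) | 0x01),
         ((0x5555 << 8) | 0x00, (0x6666 << 8) | 0x00)]
out = split_24bit_slots(lanes)
assert out[:6] == [0x1111, 0x2222, 0x3333, 0x4444, 0x5555, 0x6666]
assert out[6] == 0xABCD and out[7] == 0xEF01
```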
Taking 16bit@48KHz → 16bit@96KHz as an example, i.e., the sampling frequency is raised to 96KHz by raising the frequency of BCLK or by adopting double-edge sampling. Specifically:
1. The data transmitting end (specifically, the SOC chip) transmits all the data sent on each channel of the 3 I2S paths to the XMOS chip. The XMOS chip does not receive and disassemble the data at the same time; instead, it buffers the data.
2. The data transmitting end provides the coding arrangement and the double-edge sampling mode to the XMOS chip. The XMOS chip receives the coding information transmitted by the data transmitting end, as described in S503.
3. The XMOS chip takes the first valid bit of the 3 I2S paths as the reference MCLK, with the sampling frequency set to 48KHz.
4. As shown in FIG. 21, the data transmitted by the 3 I2S paths is placed into a stack simultaneously; with the sampling frequency set to 48KHz and double-edge sampling adopted, the 16-bit data corresponding to each of the 12 channels Ch0, Ch1, ..., Ch11 is sampled.
5. The 12ch 16bit data is sent to 6 I2S paths (specifically I2S_a through I2S_f in the figure) according to the original data distribution format.
6. The XMOS chip internally waits automatically and aligns the clock synchronization of the 6 I2S paths. After synchronization, the XMOS chip sends the data to the power amplifier end through the 6 I2S paths simultaneously.
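The 12-channel extraction in steps 4 and 5 above can be sketched as follows. With double-edge sampling, each slot yields twice the bits, so a slot carries two 16-bit channel words; the word order and the channel-to-output-lane pairing are assumptions from the figure description.

```python
def extract_12ch(lanes):
    """Fig. 21 sketch: each of the 6 slots on the 3 input paths carries
    two 16-bit channel words after double-edge sampling; split them out
    and route the 12 channels to six output lanes (I2S_a .. I2S_f),
    two channels per lane (pairing assumed)."""
    chans = []
    for left, right in lanes:          # 3 paths, 32-bit slot words
        for w in (left, right):
            chans += [w >> 16, w & 0xFFFF]
    return [(chans[i], chans[i + 1]) for i in range(0, 12, 2)]

vals = [0x2001 + i for i in range(12)]
lanes = [((vals[4 * i] << 16) | vals[4 * i + 1],
          (vals[4 * i + 2] << 16) | vals[4 * i + 3]) for i in range(3)]
pairs = extract_12ch(lanes)
assert pairs[0] == (vals[0], vals[1])     # output lane I2S_a
assert pairs[5] == (vals[10], vals[11])   # output lane I2S_f
```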
Example four:
the invention provides an audio data transmission device, which is used for executing the audio data transmission methods provided in the first embodiment and the second embodiment of the invention, and as shown in fig. 22, is a schematic diagram of a possible structure of an audio data transmission device 60 provided in the embodiment of the invention. Wherein, the device includes: acquisition section 601 and transmission section 602.
An obtaining unit 601, configured to obtain audio data of m channels.
A sending unit 602, configured to send the audio data of the m channels to a data receiving end by using n channels of I2S channels; wherein m is more than n.
Optionally, the sending unit 602 specifically includes an encoding subunit 6021 and a sending subunit 6022. Wherein:
an encoding subunit 6021, configured to encode the m channel sub-data to generate n channels of encoded data; each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period;
a sending subunit 6022, configured to send the n channels of encoded data to the data receiving end through the n channels of I2S in the preset frame clock period by using the n channels of I2S channels.
Optionally, the sending subunit 6022 is specifically configured to respectively use n channel sub-data of the m channel sub-data as high-order data of the n-channel encoded data, and supplement the channel sub-data of the remaining channels of the m channel sub-data after the n channel sub-data, so as to generate the n-channel encoded data.
Optionally, the n channel sub-data specifically includes: channel sub-data of n channels having the highest degree of importance among the m channels.
Optionally, the audio data transmission apparatus 60 further includes: a sampling frequency adjustment unit 603.
The sampling frequency adjusting unit 603 is configured to increase the frequency of the serial clock of the n paths of I2S channels before the sending unit 602 sends the n paths of encoded data to the data receiving end by using the n paths of I2S channels; or the sampling mode of the serial data of the n paths of I2S channels is switched from the serial clock single-edge acquisition mode to the serial clock double-edge acquisition mode.
Optionally, the sending subunit 6022 is specifically configured to send the m pieces of channel sub-data to the data receiving end over multiple frame clock cycles of the n I2S channels; each piece of channel sub-data includes the channel data of one of the m channels within one audio sampling period.
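The multi-frame alternative can be sketched as a simple time-division schedule. The two-slots-per-line figure (left/right word select) below is an assumption of a standard stereo I2S frame, and the function name is hypothetical:

```python
import math

def frame_schedule(m, n, slots_per_line=2):
    """Spread m channels' sub-data over consecutive frame clock cycles
    of n I2S lines; each frame clock cycle carries n * slots_per_line
    samples, so ceil(m / (n * slots_per_line)) frames are needed per
    audio sampling period."""
    per_frame = n * slots_per_line
    frames = [list(range(start, min(start + per_frame, m)))
              for start in range(0, m, per_frame)]
    assert len(frames) == math.ceil(m / per_frame)
    return frames
```

With m = 9 channels and n = 3 lines, the first frame clock cycle carries channels 0-5 and the second carries channels 6-8, so each audio sample spans two frame clock cycles.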
The audio data transmission device 60 according to the embodiment of the present invention may be divided into functional modules or functional units according to the above method, for example, each functional module or functional unit may be divided according to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module or a functional unit. The division of the modules or units in the embodiments of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of using an integrated unit, Fig. 23 shows a schematic diagram of a possible configuration of the audio data transmission apparatus according to the above embodiment. The audio data transmission apparatus 70 includes: a processing module 701, a communication module 702 and a storage module 703. The processing module 701 is configured to control and manage the actions of the audio data transmission apparatus 70; for example, the processing module 701 is configured to support the audio data transmission apparatus 70 in executing the processes S301-S304 in Fig. 6 or Fig. 9. The communication module 702 is used to support communication between the audio data transmission apparatus 70 and other entities. The storage module 703 is used to store the program code and data of the audio data transmission apparatus.
The processing module 701 may be a processor or a controller, such as a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. A processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a DSP and a microprocessor, or the like. The communication module 702 may be a transceiver, a transceiver circuit or a communication interface, etc. The storage module 703 may be a memory.
When the processing module 701 is the processor shown in fig. 24, the communication module 702 is the transceiver shown in fig. 24, and the storage module 703 is the memory shown in fig. 24, the audio data transmission device according to the embodiment of the present invention may be the following audio data transmission device 80.
Referring to fig. 24, the audio data transmission apparatus 80 includes: a processor 801, a transceiver 802, a memory 803, and a bus 804.
The processor 801, the transceiver 802, and the memory 803 are connected to each other through a bus 804; the bus 804 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The processor 801 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control the execution of programs in accordance with the present invention.
The memory 803 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited to these. The memory may be self-contained and coupled to the processor via the bus, or may be integrated with the processor.
The memory 803 is used for storing the application program code for implementing the present invention, and its execution is controlled by the processor 801. The transceiver 802 is used for receiving content input by an external device, and the processor 801 is used for executing the application program code stored in the memory 803, thereby implementing the audio data transmission methods provided in the first and second embodiments of the present invention.
Example five:
Fig. 25 is a schematic diagram of a possible structure of an audio data transmission apparatus 90 according to an embodiment of the present invention, used for performing the audio data transmission method according to the third embodiment of the present invention. The apparatus includes a receiving unit 901 and a conversion unit 902.
A receiving unit 901, configured to receive transmission data sent over n I2S channels.
A conversion unit 902, configured to convert the transmission data sent over the n I2S channels into audio data of m channels; wherein m is greater than n.
Optionally, the receiving unit 901 is specifically configured to receive n channels of encoded data sent over the n I2S channels within a preset frame clock period.
The conversion unit 902 is specifically configured to decode the n channels of encoded data to generate m pieces of channel sub-data; each piece of channel sub-data includes the channel data of one of the m channels within one audio sampling period.
Optionally, the conversion unit 902 is specifically configured to sequentially obtain n pieces of channel sub-data from the high-order data of the n channels of encoded data, and to obtain the channel sub-data of the remaining channels from the residual data of the n channels of encoded data.
Optionally, the n pieces of channel sub-data are specifically the channel sub-data of the n most important channels among the m channels.
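The decoding performed by the conversion unit can be sketched as the mirror of the sender's packing. The sketch below assumes (as in claim 4's nine-channel example) that six 16-bit priority samples for Ch0, Ch2, Ch3, Ch5, Ch6 and Ch8 sit in the high 16 bits of six 24-bit slots, with Ch1, Ch4 and Ch7 split into 8-bit pieces across the low bits; the bit widths are assumptions, not fixed by the embodiment:

```python
SAMPLE_BITS = 16   # bits per channel sample (assumption)
SLOT_BITS = 24     # bits per I2S channel slot (assumption)
PRIORITY = (0, 2, 3, 5, 6, 8)   # channels carried in the high bits
REMAINING = (1, 4, 7)           # channels split across the low bits

def decode(slots):
    """Recover 9 channel samples from 6 received I2S slots."""
    assert len(slots) == 6
    low_bits = SLOT_BITS - SAMPLE_BITS
    samples = [0] * 9
    fill = []
    for ch, slot in zip(PRIORITY, slots):
        samples[ch] = slot >> low_bits          # high bits: priority sample
        fill.append(slot & ((1 << low_bits) - 1))
    for k, ch in enumerate(REMAINING):          # reassemble split samples
        samples[ch] = (fill[2 * k] << 8) | fill[2 * k + 1]
    return samples
```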
Optionally, the receiving unit 901 is specifically configured to receive transmission data sent over the n I2S channels during multiple frame clock cycles.
The conversion unit 902 is specifically configured to generate the m pieces of channel sub-data from the transmission data of the multiple frame clock cycles; each piece of channel sub-data includes the channel data of one of the m channels within one audio sampling period.
Optionally, the conversion unit 902 is further configured to, when the n I2S channels have independent clock signals, store the data received from the n I2S channels in a buffer during the audio sampling period, and decode the buffered encoded data after the audio sampling period ends to generate the m pieces of channel sub-data.
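A minimal sketch of that buffering behavior for independently clocked lines follows; the class and its interface are hypothetical (not from the embodiment) and merely illustrate holding per-line slots until every line has delivered its data for the sampling period, after which the buffered slots are released for decoding:

```python
class LineBuffer:
    """Buffer slots arriving on n independently clocked I2S lines and
    release them for decoding only once the audio sampling period is
    complete (every line has delivered all of its slots)."""

    def __init__(self, n_lines, slots_per_line=2):
        self.expected = n_lines * slots_per_line
        self.buf = {}

    def push(self, line, slot_index, value):
        # Lines may deliver at different times; order of arrival is free.
        self.buf[(line, slot_index)] = value

    def period_complete(self):
        return len(self.buf) == self.expected

    def drain(self):
        # Return slots in (line, slot) order, then clear for next period.
        slots = [v for _, v in sorted(self.buf.items())]
        self.buf.clear()
        return slots
```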
The audio data transmission device 90 according to the embodiment of the present invention may be divided into functional modules or functional units according to the method examples described above, for example, each functional module or functional unit may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in a form of hardware, or may be implemented in a form of a software functional module or a functional unit. The division of the modules or units in the embodiments of the present invention is schematic, and is only a logic function division, and there may be another division manner in actual implementation.
In the case of using an integrated unit, fig. 26 shows a schematic view of a possible configuration of the audio data transmission apparatus according to the above-described embodiment. The audio data transmission apparatus 100 includes: a processing module 1001, a communication module 1002 and a storage module 1003. The processing module 1001 is used for controlling and managing the actions of the audio data transmission apparatus 100, for example, the processing module 1001 is used for supporting the audio data transmission apparatus 100 to execute the processes S501-S503 in fig. 18 or fig. 19. The communication module 1002 is used to support communication between the audio data transmission apparatus 100 and other entities. The storage module 1003 is used to store program codes and data of the audio data transmission apparatus.
The processing module 1001 may be a processor or a controller, such as a Central Processing Unit (CPU), a general-purpose processor, a Digital Signal Processor (DSP), an application-specific integrated circuit (ASIC), a Field Programmable Gate Array (FPGA), or other programmable logic device, a transistor logic device, a hardware component, or any combination thereof. Which may implement or perform the various illustrative logical blocks, modules, and circuits described in connection with the disclosure. A processor may also be a combination of computing functions, e.g., comprising one or more microprocessors, a DSP and a microprocessor, or the like. The communication module 1002 may be a transceiver, a transceiver circuit or a communication interface, etc. The storage module 1003 may be a memory.
When the processing module 1001 is the processor shown in fig. 27, the communication module 1002 is the transceiver shown in fig. 27, and the storage module 1003 is the memory shown in fig. 27, the audio data transmission device according to the embodiment of the present invention may be the audio data transmission device 110 as follows.
Referring to fig. 27, the audio data transmission apparatus 110 includes: a processor 1101, a transceiver 1102, a memory 1103, and a bus 1104.
The processor 1101, the transceiver 1102 and the memory 1103 are connected to each other through the bus 1104; the bus 1104 may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The bus may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown, but this does not mean that there is only one bus or one type of bus.
The processor 1101 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control the execution of programs in accordance with the present invention.
The memory 1103 may be a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact disc, laser disc, digital versatile disc, Blu-ray disc, etc.), a magnetic disk storage medium or other magnetic storage device, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited to these. The memory may be self-contained and coupled to the processor via the bus, or may be integrated with the processor.
The memory 1103 is used for storing the application program code for implementing the present invention, and its execution is controlled by the processor 1101. The transceiver 1102 is configured to receive content input by an external device, and the processor 1101 is configured to execute the application program code stored in the memory 1103, thereby implementing the audio data transmission method provided in the third embodiment of the present invention.
Example six:
an embodiment of the present invention further provides a television, including the audio data transmission device provided in the fourth embodiment and/or the audio data transmission device provided in the fifth embodiment.
It should be understood that, in various embodiments of the present invention, the sequence numbers of the above-mentioned processes do not mean the execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the technical solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In the above embodiments, the implementation may be realized wholly or partially by software, hardware, firmware, or any combination thereof. When a software program is used, the implementation may take the form, in whole or in part, of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in accordance with the embodiments of the present invention are produced in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another via wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that can be accessed by a computer, or a data storage device such as a server or data center integrating one or more available media. The available medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk (SSD)), among others.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (8)

1. An audio data transmission method, comprising:
acquiring audio data of m channels;
transmitting the audio data of the m channels to a data receiving end by using n channels of I2S channels; wherein m is more than n;
the sending, by using n I2S channels, the audio data of the m channels to a data receiving end specifically includes:
coding the m sound channel subdata to generate n paths of coded data; each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period;
in a preset frame clock period, the n paths of I2S channels are utilized to send the n paths of coded data to a data receiving end through the n paths of I2S channels;
the method comprises the steps that channel subdata of six sound channels Ch0, Ch2, Ch3, Ch5, Ch6 and Ch8 in the same audio sampling period is respectively placed at the high bits of six I2S channels included in three I2S channels, and channel subdata of the remaining sound channels Ch1, Ch4 and Ch7 is split and filled to the remaining bits of the six I2S channels;
the encoding the m channel sub-data to generate n channels of encoded data specifically includes:
and respectively taking the n channel subdata in the m channel subdata as high-order data of the n-path coded data, and supplementing the channel subdata of the rest channels in the m channel subdata behind the n channel subdata to generate the n-path coded data.
2. The audio data transmission method according to claim 1, wherein before said transmitting the n encoded data to a data receiving end using the n I2S channels, the method further comprises:
increasing the frequency of the serial clock of the n paths of I2S channels;
or, before the n paths of encoded data are transmitted to a data receiving end by using the n paths of I2S channels, the method further includes:
and switching the sampling mode of the serial data of the n paths of I2S channels from a serial clock single-edge acquisition mode to a serial clock double-edge acquisition mode.
3. The audio data transmission method according to claim 1, wherein the sending the audio data of the m channels to a data receiving end by using n I2S channels specifically includes:
sending the m sound channel subdata to a data receiving end by using a plurality of frame clock cycles of n paths of I2S channels; each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period.
4. An audio data transmission method, characterized in that,
receiving transmission data sent by n paths of I2S channels;
converting the transmission data sent by the n paths of I2S channels into audio data of m paths of sound channels; wherein m is more than n;
the receiving of the transmission data sent by the n paths of I2S channels specifically includes:
receiving n paths of encoded data sent by n paths of I2S channels in a preset frame clock period;
the converting the transmission data sent by the n paths of I2S channels into audio data of m paths of channels specifically includes:
decoding the n paths of coded data to generate m sound channel subdata; wherein, each channel subdata in the m channel subdata comprises channel data of one channel in the m channels in an audio sampling period;
in a preset frame clock period, the n paths of I2S channels are utilized to send the n paths of coded data to a data receiving end through the n paths of I2S channels;
the method comprises the steps that channel subdata of six channels Ch0, Ch2, Ch3, Ch5, Ch6 and Ch8 in the same audio sampling period is respectively placed at the high bits of six I2S channels included in three I2S channels, and channel subdata of the remaining channels Ch1, Ch4 and Ch7 is split and filled to the remaining bits of the six I2S channels;
if the n I2S channels have independent clock signals, decoding the n encoded data to generate m channel sub-data, specifically including:
and storing the data received from the n paths of I2S channels in a cache in the audio sampling period, and decoding the encoded data in the cache after the audio sampling period is finished to generate m channel subdata.
5. The audio data transmission method according to claim 4, wherein the decoding n channels of encoded data to generate m channel sub-data specifically includes:
sequentially acquiring n sound channel subdata from the high-order data of the n paths of coded data;
and obtaining the channel subdata except the n channel subdata in the m channel subdata from the residual data of the n paths of coded data.
6. The audio data transmission method according to claim 4, wherein the receiving transmission data sent by n I2S channels specifically includes:
receiving transmission data of a plurality of frame clock periods sent by the n paths of I2S channels;
the converting the transmission data sent by the n channels of I2S channels into audio data of m channels specifically includes:
generating m sound channel subdata by utilizing the transmission data of the plurality of frame clock periods; wherein, each channel subdata in the m channel subdata respectively includes channel data of one channel in the m channels in an audio sampling period.
7. An audio data transmission device is characterized by comprising a processor, a memory, a bus and a communication interface; wherein the memory is used for storing computer-executable instructions, the processor is connected with the memory through the bus, and when the audio data transmission device runs, the processor executes the computer-executable instructions to cause the audio data transmission device to execute the audio data transmission method according to any one of claims 1 to 3.
8. An audio data transmission device is characterized by comprising a processor, a memory, a bus and a communication interface; wherein the memory is used for storing computer-executable instructions, the processor is connected with the memory through the bus, and when the audio data transmission device runs, the processor executes the computer-executable instructions to cause the audio data transmission device to execute the audio data transmission method according to any one of claims 4 to 6.
CN201910615836.5A 2019-07-09 2019-07-09 Audio data transmission method and device Active CN112218019B (en)

Priority Applications (6)

Application Number Priority Date Filing Date Title
CN201910615836.5A CN112218019B (en) 2019-07-09 2019-07-09 Audio data transmission method and device
PCT/CN2020/070887 WO2021004045A1 (en) 2019-07-09 2020-01-08 Method for transmitting audio data of multichannel platform, apparatus thereof, and display device
PCT/CN2020/070891 WO2021004047A1 (en) 2019-07-09 2020-01-08 Display device and audio playing method
PCT/CN2020/070890 WO2021004046A1 (en) 2019-07-09 2020-01-08 Audio processing method and apparatus, and display device
PCT/CN2020/070902 WO2021004048A1 (en) 2019-07-09 2020-01-08 Display device and audio data transmission method
PCT/CN2020/070929 WO2021004049A1 (en) 2019-07-09 2020-01-08 Display device, and audio data transmission method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910615836.5A CN112218019B (en) 2019-07-09 2019-07-09 Audio data transmission method and device

Publications (2)

Publication Number Publication Date
CN112218019A CN112218019A (en) 2021-01-12
CN112218019B true CN112218019B (en) 2023-03-14

Family

ID=74048223

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910615836.5A Active CN112218019B (en) 2019-07-09 2019-07-09 Audio data transmission method and device

Country Status (1)

Country Link
CN (1) CN112218019B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113014998B (en) * 2021-02-03 2023-08-04 深圳创维-Rgb电子有限公司 Audio output method, device, television and computer readable storage medium
CN114697401A (en) * 2022-03-14 2022-07-01 广州广哈通信股份有限公司 Audio data transmission method
CN115278458B (en) * 2022-07-25 2023-03-24 邓剑辉 Multi-channel digital audio processing system based on PCIE interface

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079265A (en) * 2007-07-11 2007-11-28 北京中星微电子有限公司 Voice signal processing system
JP2012518939A (en) * 2009-02-23 2012-08-16 コア ロジック,インコーポレイテッド Audio data transmission method and apparatus
WO2015126956A1 (en) * 2014-02-21 2015-08-27 Summit Semiconductor, Llc Synchronization of audio channel timing
CN106911987A (en) * 2017-02-21 2017-06-30 珠海全志科技股份有限公司 Main control end, equipment end, the method and system of transmission multichannel audb data
CN109660933A (en) * 2019-01-30 2019-04-19 北京视通科技有限公司 A kind of device of simultaneous transmission multi-channel analog audio


Also Published As

Publication number Publication date
CN112218019A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN112218019B (en) Audio data transmission method and device
US6349285B1 (en) Audio bass management methods and circuits and systems using the same
US6665409B1 (en) Methods for surround sound simulation and circuits and systems using the same
US20050204077A1 (en) Method and apparatus for CD with independent audio functionality
EP1643487B1 (en) Audio decoding apparatus
US20060152398A1 (en) Digital interface for analog audio mixers
CN101178921A (en) Method and system for a flexible multiplexer and mixer
CN109660933A (en) A kind of device of simultaneous transmission multi-channel analog audio
CN112218020B (en) Audio data transmission method and device for multi-channel platform
US11025406B2 (en) Audio return channel clock switching
US6804655B2 (en) Systems and methods for transmitting bursty-asnychronous data over a synchronous link
CN102917141A (en) Test method, test device and test system for evaluating voice quality
WO2021004049A1 (en) Display device, and audio data transmission method and device
CN112216310B (en) Audio processing method and device and multi-channel system
US11514921B2 (en) Audio return channel data loopback
US7000138B1 (en) Circuits and methods for power management in a processor-based system and systems using the same
US6946982B1 (en) Multi-standard audio player
JP4621368B2 (en) Controller and method for controlling interface with data link
JPH1153841A (en) Sound signal processing device and sound signal processing method
US10433060B2 (en) Audio hub and a system having one or more audio hubs
CN102326346A (en) Method and apparatus for transmitting audio data
CN112216290A (en) Audio data transmission method and device and playing equipment
CN208739395U (en) Audio board and digital sound console
CN102881302A (en) Method and system for processing audio signals in a central audio hub
CN101202876A (en) Method for implementing synchronization of audio and picture by using audio frequency and video frequency composite channel in DVR

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant