CN114697817A - Audio data processing system and electronic device


Info

Publication number
CN114697817A
Authority
CN
China
Prior art keywords
audio data
audio
frame
data
clock signal
Prior art date
Legal status
Granted
Application number
CN202011624764.XA
Other languages
Chinese (zh)
Other versions
CN114697817B (en)
Inventor
赵学文
师鹏飞
Current Assignee
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd
Priority to CN202011624764.XA
Publication of CN114697817A
Application granted
Publication of CN114697817B
Active legal status
Anticipated expiration

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00 - Stereophonic arrangements
    • H04R5/04 - Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 - Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/16 - Sound input; Sound output
    • G06F3/165 - Management of the audio stream, e.g. setting of volume, audio stream path
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Communication Control (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

The application provides an audio data processing system and an electronic device. First, the control processing device acquires first audio data. The control processing device then encapsulates the first audio data in a preset standard manner to obtain second audio data consisting of a plurality of frames of equal length, in a format suitable for transmission on the bus that is used. The control processing device then sends the second audio data to the bus frame by frame, and the interface conversion chip receives the second audio data from the bus frame by frame. Finally, the interface conversion chip modulates the second audio data according to the type of the digital audio transmission channel to obtain third audio data that can be transmitted on the digital audio transmission channel. Audio data can thus be transferred over a single bus.

Description

Audio data processing system and electronic device
Technical Field
The application belongs to the technical field of digital audio, and particularly relates to an audio data processing system and electronic equipment.
Background
The collection, processing, and transmission of sound data are important components of multimedia technology. Numerous digital audio systems have entered the consumer market, such as digital audio tape and digital sound processors. For devices and manufacturers, a standardized transmission structure improves the adaptability of the system. For example, two-channel audio data can be transmitted over one bus by switching left- and right-channel data under the control of a clock signal according to a transmission standard, and multi-channel audio data can be transmitted over multiple buses, with each bus carrying two channels of audio data. At present, however, there is no well-established general method for transmitting two-channel and multi-channel audio data over a single bus.
Disclosure of Invention
In view of this, a first aspect of the present application provides an audio data processing system comprising a control processing device and an interface conversion device connected by a bus for data transmission. First, the control processing device acquires first audio data and encapsulates it in a preset standard manner to obtain second audio data, which consists of a plurality of frames of equal length and has a format suitable for transmission on the bus. The control processing device then sends the frames of the second audio data to the bus one by one according to the clock signals followed by the bus, that is, according to the period of the frame clock signal and the number of changes of the bit clock signal. The frequency of the frame clock signal is lower than that of the bit clock signal, and the number of changes of the bit clock signal within half a period of the frame clock signal is no less than twice the bit length of one frame of the second audio data, which ensures that the bus can carry one frame of the second audio data in each half period of the frame clock signal. Next, the interface conversion device receives the frames of the second audio data from the bus one by one according to the period of the frame clock signal and the number of changes of the bit clock signal, thereby receiving the second audio data. Finally, the interface conversion device modulates the second audio data according to the type of the digital audio transmission channel to obtain third audio data in a form that can be transmitted on the digital audio transmission channel.
The system provided in the first aspect of the present application enables the control processing device to send data to the interface conversion device over a single bus, because the control processing device adaptively encapsulates the first audio data so that the resulting second audio data has a format suitable for transmission over the bus. The bus may be, for example, an inter-IC sound (I2S) bus.
Based on the system provided in the first aspect of the present application, the process by which the control processing device transmits the second audio data according to the frame clock signal and the bit clock signal is as follows:
First, in the first half of one frame clock period, the control processing device starts to transmit one frame of the second audio data, referred to as the first frame, after it has read a preset number of changes of the bit clock signal. Then, in the second half of the same frame clock period, the control processing device starts to transmit the next frame of the second audio data after the first frame, referred to as the second frame, after it has again read the preset number of changes of the bit clock signal. By repeating this process, the control processing device sends the second audio data to the bus frame by frame.
Based on the process of sending the second audio data by the control processing device, a process of receiving the second audio data by the interface conversion device according to the frame clock signal and the bit clock signal is described as follows:
First, in the first half of one frame clock period, the interface conversion device starts to receive the first frame of the second audio data after it has read a preset number of changes of the bit clock signal. Then, in the second half of the frame clock period, it starts to receive the second frame after it has again read the preset number of changes of the bit clock signal. By repeating this process, the interface conversion device receives the second audio data from the bus frame by frame.
In the above transmission and reception processes, the preset number of changes may be two. That is, after the bit clock signal has changed twice in each half frame clock period, the first bit of one frame starts to be transmitted or received; each subsequent bit is then transmitted or received after every further two changes of the bit clock signal, until the frame is completed within that half frame clock period. Repeating this process completes the transmission or reception of all frames.
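The timing constraint described above can be checked with a short sketch. This is illustrative Python under the stated assumption that one bit-clock cycle produces two level changes; the function name and the example figures (a 64-bit frame, a 48 kHz frame clock) are not taken from the embodiment.

def clocks_can_carry_frame(frame_clock_hz: float, bit_clock_hz: float, frame_bits: int) -> bool:
    # One bit-clock cycle gives two level changes, so the number of changes in
    # half a frame-clock period is bit_clock_hz / frame_clock_hz.
    changes_per_half_period = bit_clock_hz / frame_clock_hz
    # One frame of frame_bits bits needs at least 2 * frame_bits changes.
    return changes_per_half_period >= 2 * frame_bits

# A 64-bit frame per half period needs the bit clock to be at least 128 times the frame clock.
assert clocks_can_carry_frame(frame_clock_hz=48_000, bit_clock_hz=48_000 * 128, frame_bits=64)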
Based on the system provided in the first aspect of the present application, one manner of encapsulating the first audio data according to a preset standard is described below and is referred to as the first preset manner:
The control processing device encapsulates the first audio data according to the IEC 60958 standard to obtain the second audio data. The format of the second audio data includes:
One frame of the second audio data comprises two subframes of equal length, referred to as subframe A and subframe B, with subframe A preceding subframe B; both subframes contain header codes, but the header codes of subframe A and subframe B are different; subframe A contains a 16-bit sample of the first audio data, and subframe B also contains a 16-bit sample of the first audio data.
Based on the above method of encapsulating according to the IEC 60958 standard, when the first audio data is two-channel audio data that has not been encoded and compressed, that is, the first audio data includes first-channel audio data and second-channel audio data, the format of the encapsulated second audio data further includes:
The sample of the first audio data contained in subframe A is a sample of the first-channel audio data; the sample contained in subframe B is a sample of the second-channel audio data; and subframe A and subframe B of the same frame contain samples of the two channels taken in the same time period, that is, samples at the same position in the first audio data.
Therefore, when the first audio data is two-channel audio data that has not been encoded and compressed, the control processing device encapsulates it according to IEC 60958, and the second audio data obtained after encapsulation has a data format suitable for transmission over the bus.
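As an illustration of the first preset manner, the following Python sketch pairs 16-bit samples of the two channels taken at the same time position, so that the first element of each pair would go into subframe A and the second into subframe B; the sample values are arbitrary placeholders.

left = [0x1234, 0x1235]            # 16-bit samples of the first-channel audio data
right = [0x8001, 0x8002]           # 16-bit samples of the second-channel audio data, same time positions
frames = list(zip(left, right))    # frames[i] = (sample for subframe A, sample for subframe B)
assert frames[0] == (0x1234, 0x8001)   # both samples of one frame belong to the same time period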
Based on the system provided in the first aspect of the present application, another manner of encapsulating the first audio data according to a preset standard is described below and is referred to as the second preset manner:
The control processing device first encapsulates the first audio data according to the IEC 61937 standard to obtain intermediate-stage data, and then encapsulates the intermediate-stage data according to the IEC 60958 standard to obtain the second audio data.
The format of the intermediate-stage data includes: the intermediate-stage data comprises a header field and a data field, the data field being the first audio data or a segment of the first audio data; the length of the intermediate-stage data should be an integer multiple of 16 bits.
The format of the second audio data includes: one frame of the second audio data comprises two subframes of equal length, referred to as subframe A and subframe B, with subframe A preceding subframe B; both subframes contain header codes, but the header codes of subframe A and subframe B are different; subframe A contains a 16-bit sample of the intermediate-stage data, and subframe B also contains a 16-bit sample of the intermediate-stage data.
Based on the above manner of first encapsulating according to the IEC 61937 standard and then according to the IEC 60958 standard, when the first audio data is multi-channel audio data that has been encoded and compressed by a multi-channel audio coding algorithm, for example data in the ".ac3", ".dts", or ".mpeg" format, the format of the encapsulated second audio data further includes: the sample of intermediate-stage data contained in subframe A and the sample contained in subframe B of the same frame are two adjacent samples of the intermediate-stage data.
When the first audio data is multi-channel audio data encoded and compressed by a multi-channel audio coding algorithm, the control processing device encapsulates it according to the IEC 61937 and IEC 60958 standards in turn, and the second audio data obtained after encapsulation has a data format suitable for transmission over an I2S bus; moreover, the intermediate-stage data obtained by encapsulation according to the IEC 61937 standard has a format suitable for further encapsulation according to the IEC 60958 standard.
With either the first or the second preset manner, the second audio data obtained by the control processing device is data that is suitable for transmission over the bus and that comprises a plurality of frames of equal length, and the standard used for encapsulation is one that the device at the other end of the digital audio transmission channel can parse. The interface conversion device therefore only needs to apply a simple modulation to the data received over the bus before sending it to the device at the other end of the digital audio transmission channel. This reduces the computing capability required of the interface conversion device, so a cheaper chip can be used in the interface conversion device, lowering the cost of producing the system.
Based on the system provided in the first aspect of the present application, the digital audio transmission channel may be a digital audio transmission line; the interface conversion device comprises an interface conversion chip and a digital audio interface into which the digital audio transmission line can be plugged.
In a possible form of the system, the interface conversion chip may be an FPGA.
In another possible form of the system, the digital audio transmission line is a Sony/Philips digital interface (S/PDIF) transmission line, and the modulation applied by the interface conversion device to the received second audio data may be biphase mark code (BMC) modulation, so that the modulated data is suitable for transmission over the S/PDIF transmission line.
In another possible form of the system, the control processing device may be a CPU, and the system is a chip module formed by the CPU and the interface conversion chip.
Based on the system provided by the first aspect of the present application, the system further includes an audio device, and the third audio data is transmitted between the interface conversion device and the audio device through a digital audio transmission channel.
In a second aspect, the present application provides an electronic device, which includes various embodiments of the first aspect as described above.
Based on the electronic device provided in the second aspect of the present application, the electronic device may be a large screen device or a PC device, such as an intelligent large screen, an intelligent television, a notebook computer, a desktop computer, an all-in-one machine, and the like.
Drawings
FIG. 1 is a schematic diagram of an I2S bus transmission signal according to an embodiment of the present disclosure;
fig. 2 is a schematic diagram of an interface conversion apparatus according to an embodiment of the present application;
fig. 3A is a schematic diagram of an application scenario provided in an embodiment of the present application;
fig. 3B is a schematic diagram of an internal structure of an application scenario according to an embodiment of the present application;
fig. 3C is a schematic diagram of another application scenario provided in an embodiment of the present application;
fig. 3D is a schematic diagram of an internal structure of another application scenario provided in an embodiment of the present application;
fig. 4 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
fig. 5 is a schematic diagram illustrating data transmission from the CPU to the speaker via the interface conversion chip according to an embodiment of the present application;
fig. 6 is a message diagram of an IEC standard according to which the encapsulation provided by an embodiment of the present application can be implemented;
fig. 7 is a message diagram of another IEC standard according to which the encapsulation provided by an embodiment of the present application can be implemented;
fig. 8 is a functional block diagram of an interface conversion apparatus according to an embodiment of the present application;
fig. 9 is a flow chart of audio data transmission according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
To facilitate understanding of the embodiment of the present invention, a brief description of an inter-IC sound (I2S) bus involved in the present invention will be given.
The inter-IC sound bus (hereinafter referred to as the I2S bus) is a bus standard established by Philips for audio data transmission between digital audio devices. It is widely used in various multimedia systems and is generally used for transmitting two-channel audio data.
Fig. 1 is a schematic diagram of an I2S bus for transmitting signals, and as shown in fig. 1, an I2S bus transmits at least three signals: a frame clock signal (LRCK), a bit clock signal (BCLK), and Serial Data (SDATA).
The frame clock signal indicates which channel is currently being transmitted. For example, the left channel may be transmitted when the frame clock signal is low (LRCK = 0) and the right channel when it is high (LRCK = 1), or the reverse. Half of one frame clock period carries the left channel and the other half carries the right channel.
The bit clock signal, also called the serial clock signal (SCLK), provides the clock for transmitting the digital audio data: each bit of the digital audio corresponds to one bit clock pulse. Typically, the bit clock frequency is 2 × sampling rate × number of bits per sample.
The serial data line carries the transmitted audio data. The word length that can be transmitted in half a frame clock period is typically 16, 24, or 32 bits, or another length, and does not exceed the number of bit clock changes in half a frame clock period. The most significant bit (MSB) of the data is transmitted first, typically beginning at the second bit clock pulse after the frame clock signal changes. The position of the least significant bit (LSB) depends on the word length, and the word lengths of the receiving and transmitting ends may differ: if the receiving end can handle fewer significant bits than the transmitting end sends, it may discard the extra low bits of the data frame; if it can handle more, it may fill the remaining bits itself, for example with zeros. This mechanism makes it easy to interconnect digital audio devices without causing data misalignment. The serial data may also comprise two data lines, serial data input (SDIN) and serial data output (SDOUT), responsible for transmission in the two directions, so that devices at both ends of the I2S bus can exchange data bidirectionally.
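The word-length handling described in this paragraph can be sketched as follows. This is an illustrative Python fragment, not part of the I2S specification; it only shows the MSB-first ordering and the truncation or zero-padding of the low bits, and omits the exact start position relative to the frame clock edge.

def align_word_to_slot(word: int, word_bits: int, slot_bits: int) -> list:
    # Bit sequence actually clocked out in one half frame-clock period, MSB first.
    bits = [(word >> (word_bits - 1 - i)) & 1 for i in range(word_bits)]
    if word_bits > slot_bits:
        return bits[:slot_bits]                   # receiver handles fewer bits: low bits are dropped
    return bits + [0] * (slot_bits - word_bits)   # receiver handles more bits: pad with zeros

print(align_word_to_slot(0xABCD, 16, 24))         # a 16-bit sample in a 24-bit slot, low bits zero-filled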
In addition, to keep the system better synchronized, a master clock signal (MCLK), also called the system clock (SysCLK), is sometimes transmitted as well to provide the processor's master clock; it may be 256 or 384 times the sampling rate.
The I2S bus can realize the transmission of the two-channel audio data by switching left and right channel data under the control of a clock signal.
The data transmission method of the I2S bus and how the I2S realizes the transmission of the binaural audio data are described above. Next, in order to facilitate understanding of the embodiments of the present solution, a brief description of the technical problem solved by the present solution is provided herein.
With the arrival of the era of the Internet of Everything, more and more smart household appliances are being installed in people's homes. Smart large screens and smart televisions are gradually replacing single-function televisions and have become the key electronic devices with which people enjoy games, video entertainment, and audio-visual feasts at home.
On the chip of such large-screen products, various digital audio interfaces are usually provided, and audio devices such as sound boxes and home theaters can be connected with the large-screen chip through corresponding digital audio transmission lines. When a user wants to get an immersive audio experience like a cinema at home, the user can configure a home cinema with multi-channel audio decoding playback capability.
For example, a 5.1-channel home theater typically includes a main unit, a center speaker, a subwoofer, a front-left speaker, a front-right speaker, a surround-left speaker, and a surround-right speaker, and can play six channels of audio data. The user sets the audio output of the large screen to the corresponding 5.1-channel mode and connects the large screen to the host of the home theater via a digital audio transmission line, so that the host receives the audio output from the large screen. The host then distributes the audio to each speaker in a wired and/or wireless manner, so that multi-channel audio is played. Common digital audio interfaces include the Sony/Philips digital interface (S/PDIF) and the high-definition multimedia interface audio return channel (HDMI-ARC).
In the above process, after receiving the user's setting of multi-channel audio output, the large screen outputs multi-channel audio data in a multi-channel audio compression file format, such as Dolby Digital (AC-3), digital theater systems (DTS), or Moving Picture Experts Group (MPEG) formats. The large screen then transmits the multi-channel audio data to the host of the home theater over a digital audio transmission line connected to a digital audio interface on the large-screen chip. The host of the home theater has the corresponding decoding capability: it restores the six channels through decoding and converts them into pulse code modulation (PCM) streams. The host then sends the PCM streams of the six channels to the six speakers in a wired and/or wireless manner. Each speaker performs digital-to-analog conversion and power amplification on the received PCM stream and plays the audio of its channel through its loudspeaker, thereby realizing multi-channel audio playback.
Beyond cooperating with audio devices to provide an outstanding audio experience, the range of applications in smart large-screen products is also becoming richer and richer. Nowadays, in addition to watching television, users can play online video on a smart large screen and play electronic games with rich graphics on it, which places new demands on the computing capability of the chip inside the smart large screen. At present, smart large screens on the market generally use a low-performance chip dedicated to televisions. As more and more applications, of increasing complexity, run on the smart large screen, the low-performance television-dedicated chip can no longer support them well, and users experience stuttering of the large-screen operating system, which degrades the user experience.
A feasible solution to the above problem is to replace the original television-dedicated chip with a chip designed for mobile phones. The computing capability of a mobile-phone chip is generally superior to that of a television-dedicated chip, so it can run various applications smoothly and provide a better user experience.
However, the mobile-phone chip was originally designed for mobile phones, which are not professional audio devices: the audio devices in a mobile phone are typically only a speaker and a microphone. A mobile phone does not need to connect to various professional audio devices through digital audio transmission lines in order to transmit multi-channel audio.
Consequently, when the mobile-phone chip was designed, it only had to meet the audio transmission requirements between the chip, the speaker, and the microphone, so the audio interfaces it provides are extremely scarce compared with a television-dedicated chip. For example, the audio interfaces on a mobile-phone chip usually include only two or three I2S interfaces, one of which must be assigned to the microphone and another of which may need to be reserved. Therefore, a mobile-phone chip usually has only one I2S interface available as the audio interface for the speaker (i.e. for audio output).
Therefore, in order to replace the television-dedicated chip in a smart large screen with a mobile-phone chip, the problems caused by the lack of audio interfaces on the mobile-phone chip must be solved. That is, the mobile-phone chip does not have the S/PDIF, HDMI-ARC, and similar interfaces usually found on a television-dedicated chip and cannot be connected directly to audio devices through a digital audio transmission line, and the only audio interface it offers for audio output is a single I2S interface. The problem of transmitting data between the mobile-phone chip and the audio device through this one I2S interface must therefore be solved.
As can be seen from the description of the I2S bus above, the I2S bus protocol transmits two-channel audio data by switching between left- and right-channel data under the control of the frame clock signal. If multi-channel audio is to be transmitted over I2S, one possible approach is to use multiple I2S interfaces: for example, six-channel audio can be transmitted using three I2S interfaces and eight-channel audio using four, with each I2S interface carrying two channels of audio data. However, this approach still cannot solve the problem of multi-channel audio transmission when only one I2S interface is available.
To solve the above problems, the present solution designs an interface conversion device that implements the interface conversion function. As shown in fig. 2, the interface conversion device may include an interface conversion chip and a digital audio interface. The interface conversion chip is connected to the mobile-phone chip through an I2S bus, and a digital audio transmission line can be plugged into the digital audio interface so that the device can be connected to audio equipment through that line. With corresponding software and hardware configuration, the mobile-phone chip and the interface conversion chip can transmit multi-channel audio over the I2S bus. The interface conversion device and the mobile-phone chip, taken as a whole, are therefore capable of multi-channel audio transmission with external audio devices. The configuration of the mobile-phone chip and the interface conversion chip is described in detail in the following embodiments and is not repeated here.
In view of the above, the present solution provides a method for enabling transmission of multi-channel audio using a single I2S bus interface. Next, embodiments of the present solution will be described in turn with reference to the drawings.
Fig. 3A-3B are schematic diagrams of application scenarios of an embodiment provided by the present solution.
As shown in fig. 3A, the smart large screen 301 and the sound box 302 are connected by a digital audio transmission line 303, and audio data is transmitted from the smart large screen 301 to the sound box 302 through the digital audio transmission line 303, so that the sound box 302 plays audio.
Fig. 3B, corresponding to fig. 3A, further illustrates the internal structure of the smart large screen involved in this scenario. As shown in fig. 3B, the smart large screen may include a Central Processing Unit (CPU) and an interface conversion device, and the interface conversion device has an internal architecture as shown in fig. 2. The CPU is used as an operation and control core of the intelligent large screen and executes various functions such as information processing, program operation and the like. The CPU typically includes various interfaces to various buses so that the CPU can communicate with other modules or devices. For example, in this embodiment, the CPU includes an I2S interface, and is connected to the interface conversion chip in the interface conversion device through an I2S bus, and the CPU can send data to the interface conversion chip through the I2S bus. The digital audio interface in the interface conversion device can be inserted into the digital audio transmission line. The interface conversion chip can process the data into a form which can be transmitted by a digital audio transmission line, and then the data is sent to the sound box through the digital audio transmission line through the digital audio interface.
Optionally, the interface conversion chip may further have an interface conversion function in the opposite direction, that is, the interface conversion chip receives audio data transmitted from the digital audio transmission line, converts the audio data into a form that can be transmitted by an I2S bus, and transmits the converted data to the CPU through an I2S bus. Optionally, the interface conversion chip may further have the interface conversion function in the two directions at the same time, so as to support bidirectional data conversion. The scheme does not limit the data transmission direction which can be supported by the interface conversion chip, and the data transmission direction can be unidirectional or bidirectional and does not exceed the range of the scheme.
In the scene as shown in fig. 3A and fig. 3B, the interface conversion device is installed inside the intelligent large screen, and the user can directly insert the digital audio transmission line into the digital audio interface on the intelligent large screen without additionally providing an external audio adapter, so that the interface conversion function of audio transmission is realized in a user-unaware manner, and the user experience is further improved.
In the scenarios shown in fig. 3A and 3B, the CPU and the interface conversion device may also jointly form a module, and the module as a whole has the capability of performing multi-channel audio transmission with an external audio device.
Optionally, in another application scenario provided by the present solution, as shown in fig. 3C and fig. 3D, the smart large screen 301 is connected to a docking station 304, and the docking station 304 and the sound box 302 are connected through the digital audio transmission line 303. The audio data is converted by the docking station 304 into a form that can be carried by the digital audio transmission line 303 and is then transmitted to the sound box 302 over that line, so that the sound box 302 plays the audio. The docking station in this scenario is usually fitted with an interface conversion device as shown in fig. 2 to perform the interface conversion function; as before, the conversion may be in the opposite direction or bidirectional without departing from the scope of this solution.
In the scenarios shown in fig. 3C and fig. 3D, the interface conversion function of the smart large screen is limited, and at this time, the connection with more types of transmission lines can be realized by providing an external docking station, so as to expand the interface types of the smart large screen.
In an application scenario of the present solution, optionally, the smart large screen may also be other types of electronic devices including a CPU, including but not limited to a television, a smart television, a personal computer, a notebook computer, an all-in-one machine, a multimedia player, a mobile phone, a tablet, and a smart watch.
The electronic device may have a structure as shown in fig. 4. Electronic device 400 may include processor 402, display screen 4011, camera 4012, keys 4013, audio module 4014, sensor module 4015, communication module 4016, storage module 4017, power management module 4018, and the like. It is to be understood that the illustrated structure of the embodiment of the invention is not to be construed as a specific limitation to the electronic device 400. In other embodiments of the present application, electronic device 400 may include more or fewer components than shown, or some components may be combined, some components may be split, or a different arrangement of components. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
Alternatively, the CPU may also be other chip devices with information processing and program running functions, such as a system on chip (SoC), without departing from the scope of the present disclosure.
Alternatively, the interface conversion chip may be a chip in the form of a customized chip, a general-purpose chip, a Field Programmable Gate Array (FPGA), or the like, and the present solution does not limit the type of chip used herein. Optionally, when the FPGA is used as the interface conversion chip, on one hand, the overhead of designing a customized chip can be saved, and on the other hand, the FPGA is generally lower in price than various general chips on the market, so that the cost of the interface conversion chip is reduced, and the cost of the whole scheme is further reduced.
Optionally, the type of digital audio interface includes, but is not limited to, an S/PDIF, HDMI-ARC, USB interface, and the type of digital audio transmission line includes, but is not limited to, an S/PDIF transmission line, an HDMI-ARC transmission line, a USB transmission line.
Alternatively, the interface conversion device and the speaker, or the docking station and the speaker, may transmit data in a wireless form, including but not limited to bluetooth or Wi-Fi, in addition to a wired form, such as a digital audio transmission line. At this time, the interface conversion device or the docking station further includes a corresponding bluetooth module, a Wi-Fi module, and the like, so that the function of transmitting data to the speaker and receiving data from the speaker through a wireless data transmission channel is provided. Similarly, the sound box also includes corresponding bluetooth and Wi-Fi modules, so as to transmit data to and receive data from the interface conversion device or the docking station through a wireless data transmission channel.
Optionally, the docking station in fig. 3C and 3D may also be another device with the above interface conversion function, including but not limited to an adapter, a converter, a bridge, or a repeater.
Alternatively, the speakers may also be other types of audio devices, including but not limited to home theater, home theater host, bluetooth speakers, smart speakers, microphones, voice assistants, headphones.
It should be clear that although the scenes depicted in fig. 3A-3D include one speaker, the application scope of the present invention is not limited to the scene of one speaker, but can also include scenes of two or more speakers, so as to achieve better stereo and surround type multi-channel audio playing experience. For example, the sound boxes in the drawings may be home theaters capable of providing a good audio experience, in which case, the digital audio transmission line is usually connected to a host in the home theater, and the host is connected to a plurality of sound boxes located near the smart large screen and/or at each corner of the room in a wired and/or wireless manner.
The method provided by the scheme can be applied to the scenes and various optional scenes without exceeding the scope of the scheme.
Based on the application scenario of the present solution described in the above embodiments, an implementation of the present solution is described in detail below with reference to the accompanying drawings.
Fig. 5 schematically shows the solution proposed in the present application when the digital audio transmission line is an S/PDIF transmission line. As shown in fig. 5, the CPU first acquires first audio data 501, which may be multi-channel audio data encoded and compressed by a multi-channel audio coding algorithm. The multi-channel audio coding algorithm may be AC-3, DTS, MPEG, or the like, and accordingly the first audio data 501 may be an audio file in the ".ac3", ".dts", or ".mpeg" format. The multi-channel audio coding algorithm compresses the multiple channels into one data stream, and the compressed stream does not distinguish which channel a byte belongs to. The audio consumer device, such as a sound box, receives the data stream, decodes (i.e. decompresses) it, restores the audio data of the multiple channels from the stream, and plays it.
Alternatively, the first audio data 501 may be audio data of a period of time next to the current playing time of the music playing software in the smart large screen, where the period of time may be 5 milliseconds, 10 milliseconds, and the like. The CPU may retrieve the piece of audio data from the operating system application layer.
The CPU then encapsulates the first audio data 501 in a preset standard manner to obtain second audio data 502. The preset standard manner may be an encapsulation scheme based on relevant standards of the International Electrotechnical Commission (IEC), such as IEC 61937 and IEC 60958. The specific encapsulation process using IEC 61937 and IEC 60958 is described in detail in the following embodiments and is not repeated here. It should be clear that this solution does not limit which standard the encapsulation follows, as long as the encapsulated data is suitable for transmission over the I2S bus and the audio consumer device can decapsulate it according to that standard. The encapsulation may be achieved by adding extra data fields to the first audio data 501, so the length of the second audio data 502 is usually greater than that of the first audio data 501. Depending on the chosen standard, the additional data fields may contain descriptive information about the first audio data 501, such as its file format.
The second audio data 502 may comprise a plurality of frames, where one frame can be transmitted in half a frame clock period of the I2S bus, so that the frames constituting the second audio data 502 are sent to the I2S interface frame by frame and transmitted to the interface conversion chip over the I2S bus.
The process of sending a frame onto the I2S bus can be understood with reference to fig. 1 and the description of the I2S bus above. As shown in fig. 1, the I2S bus may transmit the left channel when the frame clock signal is low and the right channel when it is high, or vice versa. When the I2S bus is used to transmit a frame, the first bit of the frame may be sent after the second change of the bit clock signal within half a frame clock period, and each subsequent bit of the frame is sent after every further two changes of the bit clock signal, so that the last bit of the frame is sent within the same half frame clock period. By repeating this process, the CPU feeds the frames into the I2S bus in sequence. It should be clear that this solution is not limited to starting the transmission of the first bit of the frame after the second change of the bit clock signal; a person skilled in the art may make a custom configuration that conforms to the I2S protocol.
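The sending rule above can be written as a small scheduling sketch. The Python below is illustrative only: it assumes each frame is given as a 64-bit integer sent starting after the second bit-clock change of the half period, with one bit per two further changes; the bit ordering inside an IEC 60958 subframe (LSB-first audio field) is not modelled here.

def schedule_half_periods(frames, frame_bits=64, start_after_changes=2):
    # For each half frame-clock period, list (bit-clock change index, bit value) pairs.
    schedule = []
    for frame in frames:
        events, change = [], start_after_changes   # first bit goes out after the second change
        for i in range(frame_bits):
            bit = (frame >> (frame_bits - 1 - i)) & 1
            events.append((change, bit))
            change += 2                             # next bit after every further two changes
        schedule.append(events)
    return schedule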
The interface conversion chip receives a plurality of frames transmitted via the I2S bus frame by frame via the I2S interface, so that the interface conversion chip receives the second audio data 503.
The process of receiving a frame corresponds to the process of sending a frame described above. That is, the interface conversion chip starts to receive the first bit of the frame after the second change of the bit clock signal within half a frame clock period, then receives the subsequent bits in turn as the bit clock signal changes, and finishes receiving the last bit of the frame within that half frame clock period. By repeating this process, the interface conversion chip receives the frames in turn from the I2S bus. Likewise, this solution does not restrict after how many bit clock changes reception of the first bit starts, as long as it matches the rule followed by the CPU when sending the frame.
The interface conversion chip performs biphase mark code (BMC) modulation on the second audio data 503 to obtain third audio data 504. The third audio data 504 is audio data that can be carried by the S/PDIF transmission line. As shown in fig. 5, the digital audio transmission line used in this embodiment is an S/PDIF transmission line, which contains a serial data line but no clock signal line. BMC-coded data can be transmitted over a serial data line without relying on a clock signal, which is why the interface conversion chip applies BMC modulation to the second audio data 503 in this embodiment. It should be clear that this solution does not restrict how the interface conversion chip processes the second audio data 503, as long as the resulting third audio data 504 can be carried by the digital audio transmission line in use.
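For illustration, a minimal biphase mark code encoder is sketched below in Python. It follows the generic BMC rule (a level transition at every bit-cell boundary, plus a mid-cell transition for a '1') and is not the interface conversion chip's actual implementation; S/PDIF channel-coding details such as preambles are omitted.

def bmc_encode(bits, initial_level=0):
    level, half_cells = initial_level, []
    for bit in bits:
        level ^= 1                 # transition at the start of every bit cell
        half_cells.append(level)
        if bit:
            level ^= 1             # an extra mid-cell transition encodes a '1'
        half_cells.append(level)
    return half_cells

print(bmc_encode([1, 0, 1, 1]))    # -> [1, 0, 1, 1, 0, 1, 0, 1]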
The interface conversion chip sends the third audio data 504 out through the S/PDIF interface over the S/PDIF transmission line. As noted above, the S/PDIF transmission line contains a serial data line and no clock signal line, so feeding the serial data onto the S/PDIF transmission line is straightforward; a person skilled in the art can implement it from the description above, and it is not detailed here.
The audio consumer device (in this embodiment, a loudspeaker) receives the third audio data 504 transmitted through the S/PDIF transmission line through the S/PDIF interface.
Further, the sound box sequentially performs demodulation, decapsulation, decompression, digital-to-analog conversion, power amplification, and the like on the third audio data 504, converts the third audio data into an electrical signal, and drives a speaker to play audio. As shown in fig. 5, the above processes may be implemented by the functional modules in the sound box. The demodulation may be a BMC demodulation so that the speaker acquires the second audio data 503. The decapsulation may be according to a standard manner according to which the CPU performs the decapsulation, so that the sound box obtains the first audio data. The decompression may be to perform corresponding multi-channel audio decoding according to the format of the first audio data, so that the sound box obtains the audio data of each channel in the multi-channel audio. The audio data of each sound channel can be audio data in the form of a PCM code stream, and the PCM code stream forms an electrical signal after digital-to-analog conversion and power amplification so as to drive a loudspeaker for playing the sound channel audio to play the audio.
Alternatively, the audio consumer device may be a home theater, in which case the host in the home theater is connected to the S/PDIF interface via the S/PDIF transmission line. As described above, the host completes the above-mentioned demodulation, decapsulation, decompression, and other processing, and obtains the audio data in the form of the PCM stream for each channel. And then the host sends each PCM code stream to a plurality of sound boxes in the home theater in a wired and/or wireless mode, so that the surrounding type multi-channel audio playing is realized.
In general, the functions of the audio consumer device shown in fig. 5 (a sound box in this embodiment) are already present in commercially available devices capable of playing and/or recording multi-channel audio; they are described here only for illustration. It should be clear that the method designed in this solution can be implemented without any additional software or hardware configuration on the audio consumer device.
Next, a specific process of an embodiment of the present invention for encapsulating the first audio data 501 according to IEC61937 and IEC60958 standards is described in detail with reference to fig. 6 and 7.
The IEC60958 standard is described with reference to FIG. 6. IEC60958 is an interface standard for transferring digital audio, and IEC60958 transfers a clock signal and a data signal simultaneously by one line. IEC60958 is commonly used to transmit binaural audio data, and the message format is shown in fig. 6.
According to the IEC60958 standard, one block (block)601 may be composed of 192 frames (frames) 602, and each frame 602 may be composed of two sub-frames (sub-frames) 603. Each frame 602 may contain a set of samples (samples) for two channels, e.g., a set of samples for channel a, channel B. A sample may refer to a small segment of the entire segment of audio data. Channel a in this set of samples may be contained in sub-frame a and channel B may be contained in sub-frame B.
The data length of the subframe 603 may be 32 bits, and the subframe 603 may include a header (preamble)604, auxiliary data (auxiliary)605, audio data 606, and a 4-bit information and check code 607. Wherein:
The header 604 may occupy bits 0 to 3 at the start of the subframe 603 to mark the beginning of a subframe. The header 604 may take three patterns, indicating respectively that the subframe 603 carries a channel A sample, a channel B sample, or is the starting subframe of a block (for channel A).
The auxiliary data 605 may occupy bits 4 to 7 of the subframe 603. This field was intended to carry user-defined information, but it is now commonly used to carry the extra sample bits when a sample exceeds 20 bits. For example, when a 24-bit sample is transmitted, these 4 bits may store its lowest 4 bits.
The audio data 606 may occupy bits 8 to 27 of the subframe 603 and stores the actual sample data, with a length of 20 bits. The audio data 606 may be transmitted LSB first. When the sample length is less than 20 bits, the unused low bits may be set to zero. For example, to transmit a 16-bit sample, the sample can occupy bits 12 to 27 (LSB at bit 12) and bits 8 to 11 are zero-padded.
The information and check code 607 may be located in bits 28 to 31 of the subframe 603, and may include:
(1) Validity bit (V). It may be located at bit 28 of the subframe 603 and indicates whether the data in the subframe 603 is reliable. If set to 0, the data in the subframe 603 is correct and may be accepted; if set to 1, the receiving end should ignore the subframe 603. For example, when a CD player reads CD data and a sample cannot be read, the validity bit of the subframe 603 representing that sample is set to 1;
(2) User bit (U). It may be located at bit 29 of the subframe 603 and is user-defined. Each group of samples carries 1 bit, so after 192 groups of samples a 192-bit message is formed; the two channels can each have their own group of 192 bits of user information;
(3) Channel status bit (C). It may be located at bit 30 of the subframe 603 and indicates the channel status structure in use: for example, a value of 0 may indicate that the channel status information follows the general home (consumer) structure, and a value of 1 that it follows the professional structure;
(4) Parity bit (P). It may be located at bit 31 of the subframe 603 and is used to detect an odd number of bit errors; a simple even parity check can be used.
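Putting the fields above together, the following Python sketch packs one subframe as a list of 32 bits. The preamble pattern is a placeholder, the auxiliary field is left at zero, and computing even parity over bits 4 to 30 is an assumption of this sketch rather than a statement of the standard.

def pack_subframe(sample16: int, preamble: list, valid=0, user=0, channel_status=0) -> list:
    bits = [0] * 32
    bits[0:4] = preamble                 # header code marking subframe A, subframe B, or block start
    # bits 4 to 11 stay zero: auxiliary data unused, low audio bits zero-padded
    for i in range(16):
        bits[12 + i] = (sample16 >> i) & 1   # LSB of the 16-bit sample at bit 12, MSB at bit 27
    bits[28], bits[29], bits[30] = valid, user, channel_status
    bits[31] = sum(bits[4:31]) % 2           # even parity over bits 4..30 (assumed coverage)
    return bits

subframe_a = pack_subframe(0xABCD, preamble=[1, 0, 0, 0])   # placeholder preamble pattern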
The IEC 61937 standard is introduced next with reference to fig. 7. Compared with IEC 60958, which is typically used to carry only two channels of audio data, IEC 61937 can carry more complex audio data, for example compressed multi-channel audio data in the ".ac3", ".dts", or ".mpeg" format; its message format is shown in fig. 7.
According to the IEC61937 standard, the data (data)701 may include a header code (preamble)703 and payload data (data payload) 704. The data 701, encapsulated according to the IEC61937 specification, may be transmitted in a channel by adding padding (stuffing) 702. Padding 702 may be used to align data 701 with a clock signal.
Where the payload data 704 may be the compressed multi-channel audio data or a small piece of data in the compressed multi-channel audio data.
The header 703 may be 64 bits long and consist of Pa, Pb, Pc, and Pd, each 16 bits long. Pa and Pb are used for synchronization, and their values may be fixed fields defined by the specification. Pc may contain 7 bits indicating the data type, 1 bit indicating data validity, 5 bits of information unrelated to the data type, 3 bits indicating the sequence number of the payload data 704, and so on. Pd may record the bit length of the payload data 704.
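A hedged sketch of assembling such a header follows. The sync values 0xF872 and 0x4E1F and the simplified Pc layout are assumptions of this illustration, not quoted from the embodiment; Pd records the payload length in bits.

import struct

def build_burst_preamble(data_type: int, payload_bits: int) -> bytes:
    pa, pb = 0xF872, 0x4E1F            # sync words Pa and Pb (assumed values)
    pc = data_type & 0x7F              # simplified Pc: only the 7-bit data-type field is filled in
    pd = payload_bits & 0xFFFF         # Pd: bit length of the payload data
    return struct.pack(">4H", pa, pb, pc, pd)

header = build_burst_preamble(data_type=1, payload_bits=6080)   # e.g. a compressed-audio burst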
Based on the above descriptions of the IEC60958 standard and the IEC61937 standard, the method for encapsulation in an embodiment provided by the present solution is described in detail below.
The method provided by the embodiment comprises the following steps:
In the first step, the compressed multi-channel audio data, or a segment of it (which may correspond to the first audio data 501 in fig. 5 or the payload data 704 in fig. 7), is encapsulated according to the IEC 61937 standard to obtain intermediate encapsulated audio data (which may correspond to the data 701 in fig. 7).
In the second step, the intermediate encapsulated audio data is divided into 2N samples of a fixed bit length L (for example 16 bits; these samples may correspond to bits 12 to 27 of the subframes 603 in fig. 6), which are then encapsulated one by one according to the IEC 60958 standard to obtain N frames (which may correspond to the second audio data 502 in fig. 5 or the block 601 in fig. 6), completing the encapsulation.
Optionally, N may be 192. If N is 192 and the fixed bit length L is 16 bits, the length of the payload data 704 may be calculated as 16 bits × 192 × 2 - 16 bits × 4 = 6080 bits. That is, when the CPU obtains a segment of audio data from the application layer of the operating system, it may cut 6080 bits from that segment for encapsulation, so that the splitting described above leaves no remaining or vacant bits, which makes the encapsulation process easier to carry out in the CPU.
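The sizing above can be reproduced with a short calculation, together with the splitting of the burst into 16-bit samples; the variable and function names here are illustrative only.

N, L = 192, 16                                # 192 frames per block, 16-bit samples
samples_per_block = 2 * N                     # two subframes per frame
header_samples = 4                            # Pa, Pb, Pc, Pd occupy four 16-bit samples
payload_bits = L * (samples_per_block - header_samples)
assert payload_bits == 6080

def split_into_samples(burst: bytes, sample_bytes: int = 2) -> list:
    # Cut the IEC 61937-style burst into 16-bit samples for IEC 60958 framing.
    return [int.from_bytes(burst[i:i + sample_bytes], "big")
            for i in range(0, len(burst), sample_bytes)]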
After the encapsulation is completed, N frames are obtained, and as described in the foregoing embodiment corresponding to fig. 5, the CPU may further send the frames to the interface conversion chip through the I2S bus based on the clock signal of the I2S bus. Furthermore, the interface conversion chip processes the received data and converts the data into data which can be transmitted by a digital audio transmission line; when the digital audio transmission line used is an S/PDIF transmission line, the processing performed by the interface conversion chip may be BMC modulation.
With the method provided by the present solution, the encapsulation process is completed in the CPU, and the data obtained after encapsulation is directly suitable for transmission over an I2S bus, so that the interface conversion chip only needs to perform relatively simple processing to obtain data that can be transmitted over a digital audio transmission line, and the interface conversion device can thus complete the interface conversion function. The method therefore reduces the requirement on the computing capability of the interface conversion chip, so that a lower-cost chip (such as an FPGA) can be used as the interface conversion chip, which further reduces the cost of the overall solution.
The method in the embodiments provided by the present solution may be implemented by writing a computer program or a set of computer instructions having the corresponding functions, and those skilled in the art may develop such a computer program or set of computer instructions according to the description provided in this application.
In the embodiment shown in fig. 5, the system to which the present solution is applied may be a system including an intelligent large screen and a sound box, as shown in fig. 3A. In this case, the interface conversion device and the CPU are installed inside the intelligent large screen. The interface conversion chip and the CPU may also together form a chip module, so that the intelligent large screen as a whole has the capability of performing multi-channel audio transmission with an external audio device. In the embodiment shown in fig. 5, the system to which the present solution is applied may also be a system including an intelligent large screen, a docking station, and a sound box, as shown in fig. 3C. In this case, the docking station is provided with the interface conversion chip or the interface conversion device and provides the interface conversion function for multi-channel audio transmission between the intelligent large screen and the sound box. It should be clear that the system to which the present solution is applied may also be a combination of the optional scenarios described above, which does not go beyond the scope of the present solution.
The above embodiments describe an implementation of the present solution by taking multi-channel audio, that is, audio with three or more channels, as an example. It should be clear that the audio data that can be transmitted by the present solution is not limited to multi-channel audio data; mono or two-channel audio data may also be transmitted.
In one embodiment provided by the present solution, the first audio data 501 may be two-channel audio data. In this case, the first audio data 501 may be original audio data that has not undergone channel compression coding, that is, the two channels of audio can be directly distinguished in the first audio data 501. As can be seen from the above description of the IEC60958 standard, that standard is generally used for transmitting two-channel audio data. Therefore, the CPU may encapsulate the two-channel audio data, which has not been compression-coded, directly according to the IEC60958 standard to obtain a plurality of frames. The CPU then sends the frames obtained through the encapsulation process to the I2S bus one by one for transmission; the subsequent processes are as described above and are not repeated here.
Specifically, in the encapsulation process of this embodiment, each time a left-channel sample and a right-channel sample are obtained from the two-channel audio data that has not been compression-coded, the left-channel sample may be transmitted through subframe A, and the right-channel sample may be transmitted through subframe B. That is, when the audio data is two-channel audio data, the CPU may omit the encapsulation according to the IEC61937 standard and directly encapsulate the two-channel audio data according to the IEC60958 standard. Accordingly, the audio consumer device can also omit the corresponding decapsulation according to the IEC61937 standard and only needs to decapsulate the received data according to the IEC60958 standard, corresponding to the encapsulation process in the CPU. In this way, the encapsulation in the CPU and the decapsulation in the audio consumer device are each simplified from two steps to one, which makes audio processing easier for both the CPU and the audio consumer device.
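A simplified view of this two-channel case is sketched below: each pair of 16-bit left/right samples is written directly into the sample bit field of a subframe A and a subframe B, with no IEC61937 stage in between. The 32-bit container, the placeholder preamble codes and the function name are assumptions for illustration; the sketch does not reproduce the complete IEC60958 subframe with its auxiliary, validity, user, channel-status and parity bits.

```c
#include <stdint.h>
#include <stdio.h>

/* Hypothetical 32-bit subframe container: the low 4 bits stand in for the
 * preamble code, bits 12..27 carry the 16-bit audio sample, and the other
 * bits are simply left at zero in this simplified sketch.                 */
enum { PRE_A = 0x1, PRE_B = 0x2 };  /* placeholder preamble codes */

static uint32_t pack_subframe(uint8_t preamble, int16_t sample)
{
    return (uint32_t)preamble | ((uint32_t)(uint16_t)sample << 12);
}

int main(void)
{
    /* A few left/right PCM sample pairs standing in for the two-channel
     * first audio data 501.                                               */
    int16_t left[]  = { 100, -200, 300 };
    int16_t right[] = { -50,  150, -250 };

    for (int i = 0; i < 3; i++) {
        uint32_t sub_a = pack_subframe(PRE_A, left[i]);   /* left  -> subframe A */
        uint32_t sub_b = pack_subframe(PRE_B, right[i]);  /* right -> subframe B */
        /* sub_a and sub_b together form one frame of the second audio data. */
        printf("frame %d: A=%08X B=%08X\n", i, (unsigned)sub_a, (unsigned)sub_b);
    }
    return 0;
}
```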
Similarly, when the first audio data 501 is mono audio data, the above-described encapsulation method for two-channel audio data may be followed: the encapsulation according to IEC61937 is omitted and only the encapsulation according to IEC60958 is performed.
Optionally, when the first audio data 501 is mono or two-channel audio data, it may also be encapsulated according to the method described above for multi-channel audio, that is, the first audio data is first encapsulated according to the IEC61937 standard and the result is then encapsulated according to the IEC60958 standard to obtain the second audio data.
It should be clear that the above embodiments describe possible encapsulation manners of the present solution for mono, two-channel, and multi-channel audio; these are only examples of possible methods and are not intended to be limiting. For mono, two-channel, and multi-channel audio, any encapsulation manner remains within the scope of the present solution as long as the second audio data 502 obtained after encapsulation is suitable for transmission over the I2S bus and the audio consumer device can perform the corresponding decapsulation according to the encapsulation procedure.
It should also be clear that the above encapsulation method based on the IEC60958 and IEC61937 standards is only one possible method provided by the present solution, not the only one. Those skilled in the art may implement the encapsulation process using other standards or in a custom manner based on the overall scheme provided by the present solution; as long as the second audio data 502 obtained after encapsulation is suitable for transmission over the I2S bus and the audio consumer device can perform the corresponding decapsulation according to the encapsulation procedure, this does not go beyond the scope of the present solution.
In an embodiment provided by the present disclosure, an interface conversion device is provided, as shown in fig. 8. Each function of the interface conversion device may be implemented by a corresponding functional module. The interface conversion device may include an I2S bus interface module, a data modulation module, and a data transmission channel interface module. Wherein:
The I2S bus interface module is used for receiving data from the I2S bus. Specifically, the I2S bus interface module can receive data from the serial data line of the I2S bus frame by frame according to the changes of the frame clock signal and the bit clock signal of the I2S bus. The frame-by-frame receiving process is as described in the previous embodiments and is not repeated here.
The data modulation module is used for modulating the data received from the I2S bus to obtain the data which can be transmitted by the digital transmission channel.
The data transmission channel interface module is used for sending the data that can be transmitted over the data transmission channel into the data transmission channel for transmission. The data transmission channel may be a wired and/or wireless transmission channel; wired data transmission channels may include, but are not limited to, an S/PDIF transmission line and an HDMI-ARC transmission line, and wireless data transmission channels may include, but are not limited to, Bluetooth and Wi-Fi.
The functions of the functional modules in fig. 8 may be implemented by a computer program or a set of computer instructions with corresponding functions, which can be written by those skilled in the art in conjunction with the description of the foregoing embodiments.
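As one illustration of how the frame-by-frame reception of the I2S bus interface module could be modeled in such a program, the following sketch counts bit-clock level changes within each half period of the frame clock and shifts one data bit in per bit-clock period; the subframe width and the simulated data line are assumptions, and a real implementation would of course sample the bus signals instead of a software stand-in.

```c
#include <stdint.h>
#include <stdio.h>

#define BITS_PER_SUBFRAME 32  /* assumed subframe width on the I2S bus */

/* Toy model of frame-by-frame reception: the frame clock selects subframe A
 * or B for each half period, and one data bit is shifted in per bit-clock
 * period, i.e. per two bit-clock level changes.                            */
int main(void)
{
    uint32_t subframe_a = 0, subframe_b = 0;

    for (int half = 0; half < 2; half++) {      /* two half periods = one frame */
        int edges_seen = 0;                     /* bit-clock level changes seen */
        uint32_t shift = 0;
        while (edges_seen < 2 * BITS_PER_SUBFRAME) {
            edges_seen += 2;                     /* one full bit-clock period   */
            int data_bit = (edges_seen / 2) & 1; /* stand-in for the data line  */
            shift = (shift << 1) | (uint32_t)data_bit;
        }
        if (half == 0) subframe_a = shift;      /* first half period  -> A */
        else           subframe_b = shift;      /* second half period -> B */
    }
    printf("received one frame: A=%08X B=%08X\n",
           (unsigned)subframe_a, (unsigned)subframe_b);
    return 0;
}
```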
The interface conversion device in fig. 8 may have the architecture shown in fig. 2. Optionally, when the interface conversion chip in the interface conversion device is an FPGA, the design and production effort for a customized interface conversion chip can be reduced, and the expense of a high-priced chip with higher computing performance can be avoided, so that the present solution can provide a low-cost interface conversion device.
In an embodiment provided by the present disclosure, an electronic device is provided. The electronic device has the capability of transmitting mono, two-channel, and multi-channel audio data with an external audio device, or of transmitting such audio data to an external audio device. The electronic device includes the above CPU and interface conversion device, and the CPU and the interface conversion chip in the interface conversion device may together form a chip module. Such electronic devices include, but are not limited to, intelligent large screens, smart televisions, desktop computers, notebook computers, all-in-one machines, tablets, mobile phones, and the like.
In an embodiment provided by the present disclosure, a system is provided, which includes the above electronic device.
In one embodiment provided by the present disclosure, a method as shown in fig. 9 is provided, which includes steps 901 to 906, each of which is shown in fig. 9 and will not be described repeatedly herein. Optionally, the method of this embodiment may further include step 907 (not shown in fig. 9): the audio device receives third audio data through the digital audio transmission channel.
In an embodiment provided by the present solution, a system is provided, which may comprise an apparatus as in any of the embodiments described above or which may perform a method as in any of the embodiments described above.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.
Finally, it should be noted that: the above description is only an embodiment of the present application, but the scope of the present application is not limited thereto, and any changes or substitutions within the technical scope of the present disclosure should be covered by the scope of the present application. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (15)

1. An audio data processing system, characterized in that the system comprises a control processing device and an interface conversion device, wherein the control processing device and the interface conversion device are connected through a bus;
the control processing device acquires first audio data, and encapsulates the first audio data according to a preset standard mode to obtain second audio data comprising a plurality of frames with the same bit length;
the control processing device sends the frame in the second audio data through the bus according to a preset number of changes of a second clock signal, based on the period of a first clock signal; the frequency of the first clock signal is less than the frequency of the second clock signal; the number of changes of the second clock signal within a half period of the first clock signal is not less than twice the bit length of the frame;
the interface conversion device receives the frame in the second audio data through the bus according to the preset number of changes of the second clock signal, based on the period of the first clock signal;
and the interface conversion device modulates the second audio data according to the type of a digital audio transmission channel to obtain third audio data for transmission on the digital audio transmission channel.
2. The audio data processing system of claim 1, comprising:
the control processing device reads the number of level changes of the second clock signal within a first half period of the first clock signal, determines that the read number of changes reaches the preset number, and sends a first frame through the bus;
the control processing device reads the number of level changes of the second clock signal within a second half period of the first clock signal, determines that the read number of changes reaches the preset number, and sends a second frame through the bus;
the first half period and the second half period together constitute one period of the first clock signal;
the first frame and the second frame are any two consecutive frames in the second audio data.
3. The audio data processing system of claim 2, comprising:
the interface conversion device reads the number of level changes of the second clock signal within the first half period of the first clock signal, determines that the read number of changes reaches the preset number, and receives the first frame through the bus;
and the interface conversion device reads the number of level changes of the second clock signal within the second half period of the first clock signal, determines that the read number of changes reaches the preset number, and receives the second frame through the bus.
4. The audio data processing system of claim 3, wherein the preset number is two.
5. The audio data processing system according to claim 1, wherein the encapsulating according to the preset standard mode to obtain the second audio data including a plurality of frames with the same bit length comprises: encapsulating the first audio data according to a first preset standard mode to obtain the second audio data, wherein the format of the second audio data comprises:
the frame in the second audio data comprises a first type of subframe and a second type of subframe;
the first type of subframe and the second type of subframe both comprise a header code, and the header code of the first type of subframe is different from the header code of the second type of subframe;
the first type of subframe and the second type of subframe each contain samples of the first audio data, the samples having a length of 16 bits.
6. The audio data processing system according to claim 1, wherein the encapsulating according to the preset standard mode to obtain the second audio data including a plurality of frames with the same bit length comprises: encapsulating the first audio data according to a second preset standard mode to obtain intermediate-stage data; and encapsulating the intermediate-stage data according to a first preset standard mode to obtain the second audio data;
the format of the intermediate-stage data includes:
the intermediate-stage data comprises a header field; the bit length of the intermediate-stage data is an integral multiple of 16;
the format of the second audio data includes:
the frame in the second audio data comprises a first type of subframe and a second type of subframe;
the first type of subframe and the second type of subframe both comprise a header code, and the header code of the first type of subframe is different from the header code of the second type of subframe;
the first type of subframe and the second type of subframe each contain samples of the intermediate-stage data, the samples having a length of 16 bits.
7. The audio data processing system of claim 5, wherein:
the first audio data is two-channel audio data, and the two-channel audio data comprises first channel audio data and second channel audio data;
the format in which the second audio data is encapsulated includes:
the first type of subframe contains samples of the first channel audio data;
the second type of subframe contains samples of the second channel audio data;
the frame includes samples of the first channel audio data and the second channel audio data at the same bit positions.
8. The audio data processing system of claim 6, wherein:
the first audio data is multi-channel audio data that has been encoded and compressed by a multi-channel audio encoding algorithm, and the multi-channel audio data comprises audio data of three or more channels;
the format in which the second audio data is encapsulated includes:
the first type of subframe in the frame comprises a first sample of the intermediate-stage data, the second type of subframe comprises a second sample of the intermediate-stage data, and the first sample and the second sample are two adjacent samples in the intermediate-stage data.
9. The audio data processing system of claim 1, wherein the digital audio transmission channel is a digital audio transmission line; the interface conversion device comprises an interface conversion chip and a digital audio interface, wherein the digital audio interface is used for being inserted into the digital audio transmission line.
10. The audio data processing system of claim 9, wherein the interface conversion chip is an FPGA.
11. The audio data processing system of claim 9, wherein the digital audio transmission line is an S/PDIF transmission line;
the interface conversion device performs BMC modulation on the second audio data to obtain third audio data that can be transmitted over the S/PDIF transmission line.
12. The audio data processing system of claim 9, wherein the control processing device is a CPU; the system is a chip module formed by the CPU and the interface conversion chip.
13. The audio data processing system of claim 1, wherein:
the system further comprises an audio device;
and the interface conversion device and the audio device transmit the third audio data through the digital audio transmission channel.
14. An electronic device comprising any one of the systems of claims 1-13.
15. The electronic device of claim 14, wherein the electronic device is a large-screen device or a PC device.
CN202011624764.XA 2020-12-30 2020-12-30 Audio data processing system and electronic device Active CN114697817B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011624764.XA CN114697817B (en) 2020-12-30 2020-12-30 Audio data processing system and electronic device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011624764.XA CN114697817B (en) 2020-12-30 2020-12-30 Audio data processing system and electronic device

Publications (2)

Publication Number Publication Date
CN114697817A true CN114697817A (en) 2022-07-01
CN114697817B CN114697817B (en) 2023-06-02

Family

ID=82133669

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011624764.XA Active CN114697817B (en) 2020-12-30 2020-12-30 Audio data processing system and electronic device

Country Status (1)

Country Link
CN (1) CN114697817B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116721678A (en) * 2022-09-29 2023-09-08 荣耀终端有限公司 Audio data monitoring method, electronic equipment and medium


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5815583A (en) * 1996-06-28 1998-09-29 Intel Corporation Audio serial digital interconnect
US5999220A (en) * 1997-04-07 1999-12-07 Washino; Kinya Multi-format audio/video production system with frame-rate conversion
KR20060008152A (en) * 2004-07-23 2006-01-26 주식회사 한네트디지털 Xtdm transmitting and receiving module, and router using the same
US20100158043A1 (en) * 2008-11-28 2010-06-24 Bodo Martin J Method and apparatus for reformatting and retiming digital telecommunications data for reliable retransmission via USB
CN102736999A (en) * 2011-03-28 2012-10-17 雅马哈株式会社 Audio data inputting apparatus and audio data outputting apparatus
US20170031862A1 (en) * 2015-07-31 2017-02-02 Seloco, Inc. Dual-bus semiconductor chip processor architecture
CN106911987A (en) * 2017-02-21 2017-06-30 珠海全志科技股份有限公司 Main control end, equipment end, the method and system of transmission multichannel audb data
CN106936847A (en) * 2017-04-11 2017-07-07 深圳市米尔声学科技发展有限公司 The processing method and processor of voice data
CN112136176A (en) * 2018-05-23 2020-12-25 索尼公司 Transmission device, transmission method, reception device, and reception method
CN208707932U (en) * 2018-08-24 2019-04-05 北京猎户星空科技有限公司 A kind of audio interface conversion circuit, audio collecting system and electronic equipment
CN110837487A (en) * 2019-09-24 2020-02-25 福建星网智慧科技股份有限公司 System and method for collecting and playing multichannel audio based on I2S bus
CN110838298A (en) * 2019-11-15 2020-02-25 闻泰科技(无锡)有限公司 Method, device and equipment for processing multi-channel audio data and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
林嘉等 (Lin Jia et al.): "Audio data transmission in an FPGA based on the I2S interface", 《电气技术》 *
王小稳 (Wang Xiaowen): "Hardware design of a multi-channel audio acquisition front end", 《科技创新与应用》 *
程炼等 (Cheng Lian et al.): "A method for implementing audio and video signal processing with a DSP", 《应用科技》 *


Also Published As

Publication number Publication date
CN114697817B (en) 2023-06-02

Similar Documents

Publication Publication Date Title
US9448959B2 (en) Two-wire communication protocol engine
CN100421412C (en) Method and system for allocating medium data at different network
KR100367000B1 (en) PC-based codec device for multichannel audio/speech and data with multimedia engine and input/output modules for multimedia data processing
WO2020182020A1 (en) Audio signal playback method and display device
CN104834623A (en) Audio playing method and audio playing device
CN107277691B (en) Multi-channel audio playing method and system based on cloud and audio gateway device
CN111034225B (en) Audio signal processing method and apparatus using ambisonic signal
US11025406B2 (en) Audio return channel clock switching
CN103237259A (en) Audio-channel processing device and audio-channel processing method for video
TW202215863A (en) Audio signal rendering method, apparatus, device and computer readable storage medium
CN114697817B (en) Audio data processing system and electronic device
CN107431859A (en) The radio broadcasting of the voice data of encapsulation with control data
US8514929B2 (en) Combined audio/video/USB device
JP4352409B2 (en) Multimedia coded data separation and transmission device
CN107925656A (en) Sending device and the method for controlling it
US11514921B2 (en) Audio return channel data loopback
CN106210843A (en) The play handling method of TV and device
WO2024001447A1 (en) Audio processing method, chip, apparatus, device, and computer-readable storage medium
CN211509211U (en) Audio control device and audio playing system
CN111405356A (en) Audio control device, audio playing system and method
WO2017043378A1 (en) Transmission device, transmission method, reception device, and reception method
CA2408802C (en) Generating separate analog audio programs from a digital link
WO2023051367A1 (en) Decoding method and apparatus, and device, storage medium and computer program product
US11544032B2 (en) Audio connection and transmission device
KR101046584B1 (en) Audio data processing device and control method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant