WO2021004045A1 - Multi-channel platform audio data transmission method and device, and display device - Google Patents

Multi-channel platform audio data transmission method and device, and display device

Info

Publication number
WO2021004045A1
WO2021004045A1 (PCT/CN2020/070887, CN2020070887W)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
data
channels
channel
audio data
Prior art date
Application number
PCT/CN2020/070887
Other languages
English (en)
French (fr)
Inventor
黄飞
李见
Original Assignee
海信视像科技股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from CN201910614701.7A external-priority patent/CN112216310B/zh
Priority claimed from CN201910613254.3A external-priority patent/CN112216290A/zh
Priority claimed from CN201910615836.5A external-priority patent/CN112218019B/zh
Priority claimed from CN201910616404.6A external-priority patent/CN112218016B/zh
Priority claimed from CN201910659488.1A external-priority patent/CN112218020B/zh
Priority claimed from CN201910710346.3A external-priority patent/CN112218210B/zh
Application filed by 海信视像科技股份有限公司
Publication of WO2021004045A1 publication Critical patent/WO2021004045A1/zh


Classifications

    • G - PHYSICS
    • G10 - MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L - SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 - Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/008 - Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 - Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 - Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 - Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 - Details of television systems
    • H04N 5/44 - Receiver circuitry for the reception of television signals according to analogue transmission standards
    • H04N 5/60 - Receiver circuitry for the reception of television signals according to analogue transmission standards for the sound signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 - Television systems
    • H04N 7/025 - Systems for the transmission of digital non-picture data, e.g. of text during the active part of a television frame
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04R - LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00 - Circuits for transducers, loudspeakers or microphones

Definitions

  • This application relates to the field of multimedia technology, in particular to a method and device for transmitting audio data on a multi-channel platform, and a display device.
  • in the related art, the core end of the TV device supports only three I2S audio channels, and the audio data is transmitted to the power amplifier device through these three I2S audio channels; according to the audio processing method of the related art, one channel of audio data is transmitted in each of the left and right channels of every I2S audio channel. That is to say, in the related art one I2S audio channel on the TV device can transmit two channels of audio data, so the core end of the TV device can transmit at most six channels of audio data, which cannot meet the requirements of a multi-channel audio surround system environment.
  • the purpose of this application is to provide a method and device for realizing multi-channel platform audio data transmission on a TV chip, and a display device.
  • This application provides a multi-channel platform audio data transmission method, including:
  • All audio coded data is received, and the received audio coded data is decoded and restored according to the combination arrangement information to obtain audio data of multiple channels.
  • the placing of audio data of three or more channels into one audio channel to generate audio coded data includes:
  • the processing of audio data of multiple channels further includes:
  • the decoding and restoring of the received audio data according to the combination arrangement information, to obtain audio data of multiple channels, includes:
  • audio data of a predetermined sampling bit width and/or sampling rate is intercepted from a predetermined position in the audio coded data, and attribute information of the corresponding channel is added to the intercepted audio data.
  • before outputting all audio coded data and combination arrangement information through more than two audio channels includes:
  • the method further includes:
  • This application also provides a multi-channel platform audio data transmission device, including:
  • a main chip and an audio coprocessor chip, the main chip includes:
  • a decoder configured to receive mixed audio data, analyze and identify the mixed audio data, to obtain audio data of multiple channels;
  • a re-encoder, which is connected to the decoder and is used to process the audio data of multiple channels and, according to a predetermined combination arrangement, place the audio data of more than two channels into one audio channel to generate audio coded data;
  • audio channels, the number of which is two or more, each audio channel being connected to the re-encoder for outputting all audio coded data and combination arrangement information;
  • the audio coprocessor chip is connected to all audio channels and is used to receive the audio coded data in all audio channels and to decode and restore the received audio coded data according to the combination arrangement information, to obtain audio data of multiple channels.
  • the device further includes:
  • a power amplifier which is connected to the audio coprocessor chip, and is used for playing and outputting audio data of multiple channels.
  • the re-encoder includes:
  • a transmission audio adjustment module, which is used to change the sampling bit width and/or sampling rate of one piece of transmission audio data in the audio channel and, according to the changed sampling bit width and/or sampling rate, place audio data of a corresponding number of channels into that piece of transmission audio data;
  • An audio data splitting module which is used to split audio data of a predetermined channel.
  • the audio coprocessor chip includes:
  • a buffer module, which is used to buffer all received audio data.
  • This application also provides a display device, including:
  • the display screen is configured to present image data
  • the speaker is configured to reproduce sound data
  • a decoder configured to receive mixed audio data, analyze and identify the mixed audio data, to obtain audio data of multiple channels;
  • a re-encoder, which is connected to the decoder and is used to process the audio data of multiple channels and, according to a predetermined combination arrangement, place the audio data of more than two channels into one audio channel to generate audio coded data;
  • each of the audio channels is connected to the re-encoder, and is used to output all audio coded data and combination arrangement information;
  • the audio processor is configured to receive the audio coded data in all audio channels, decode and restore the received audio coded data according to the combination arrangement information to obtain audio data of multiple channels, and output it to the speaker.
  • the re-encoder includes:
  • a transmission audio adjustment module, which is used to change the sampling bit width and/or sampling rate of one piece of transmission audio data in the audio channel and, according to the changed sampling bit width and/or sampling rate, place audio data of a corresponding number of channels into that piece of transmission audio data;
  • An audio data splitting module which is used to split audio data of a predetermined channel.
  • the audio processor further includes a buffer module, and the buffer module is used to buffer all received audio data.
  • This application organizes the received audio data of multiple channels and places the audio data of three or more channels into one I2S audio channel, so that the core end of the TV device can support the transmission of more than eight channels of audio data, satisfying the demand for multi-channel audio playback.
  • Figure 1 is a schematic diagram of the distribution of speakers for television equipment
  • FIG. 2 is a schematic diagram of a working flow of an embodiment of a method for transmitting audio data on a multi-channel platform according to the present application;
  • FIG. 3 is a schematic diagram of a working flow of an embodiment of a method for transmitting audio data on a multi-channel platform according to the present application
  • FIG. 4 is a schematic diagram of the working principle of data collection in an embodiment of the audio data transmission method of the multi-channel platform of the application;
  • FIG. 5 is a schematic diagram of the working principle of data restoration in an embodiment of the audio data transmission method of the multi-channel platform of the application;
  • FIG. 6 is a schematic diagram of the working principle of data collection in an embodiment of the audio data transmission method of the multi-channel platform of the application;
  • FIG. 7 is a schematic diagram of a working flow of an embodiment of a method for transmitting audio data on a multi-channel platform according to this application;
  • FIG. 8 is a schematic diagram of the working principle of data collection in an embodiment of the audio data transmission method of the multi-channel platform of the application.
  • FIG. 9 is a schematic diagram of the working principle of data restoration in an embodiment of a method for transmitting audio data on a multi-channel platform according to this application.
  • FIG. 10 is a schematic diagram of the logical structure of an embodiment of an audio data transmission device for a multi-channel platform according to the present application.
  • FIG. 11 is a schematic diagram of a display device provided in Embodiment 1 of the present application.
  • FIG. 12 is a block diagram of the hardware configuration of the display device provided in Embodiment 1 of the present application.
  • the terms "first" and "second" are only used for descriptive purposes and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Therefore, the features defined with "first" and "second" may explicitly or implicitly include one or more of those features. In the description of this application, "multiple" means two or more, unless otherwise specifically defined.
  • the term "connection" should be interpreted broadly unless otherwise clearly specified and limited:
  • it can be a fixed connection, a detachable connection, or an integral connection;
  • it can be a mechanical connection or an electrical connection;
  • it can be a direct connection, or an indirect connection through an intermediate medium, and it can be an internal communication between two elements or an interaction between two elements.
  • a multi-channel surround sound system can be provided in the television device; that is, speakers for a left channel L, a right channel R, a left surround SL, a right surround SR, a left sky TOPL, a right sky TOPR, a center Center, and a subwoofer (not shown) constitute the 5.1.2 eight-channel surround playback system of the related art; in practical applications, the number of external speakers can also be increased appropriately according to the number of channels of the audio data, to form a multi-channel surround playback system.
  • the following takes a television device as an example to describe in detail a multi-channel platform audio data transmission method of the present application. Please refer to FIG. 2.
  • the method includes:
  • the mixed audio data includes at least the audio data of these eight channels: left channel L, right channel R, left surround SL, right surround SR, left sky TOPL, right sky TOPR, center Center, and subwoofer Woofer; here, the decoder first decodes the mixed audio data and then identifies the decoded audio data to obtain the audio data of the eight channels.
  • the audio data of each channel is processed with dynamic range control (DRC) and cloud-based optimized storage and, according to a predetermined combination arrangement, the audio data of more than two channels is placed into one audio channel to generate audio coded data.
  • the audio data of the three channels left L, right R, and left surround SL is integrated into one audio channel I2S0 to obtain one piece of audio coded data, and the audio data of the three channels right surround SR, left sky TOPL, and right sky TOPR is integrated into another audio channel I2S1 to obtain another piece of audio coded data.
  • I2S signals include: MCLK, BCLK, SDATA, WS
  • the serial clock BCLK corresponds to each bit of the digital audio data: there is one BCLK pulse per bit, and the BCLK frequency = 2 × sampling frequency × sampling bit width = 2 × 48 kHz × 16 bit = 1.536 MHz.
  • the frame clock WS is used to switch the data of the left and right channels.
  • WS is "1" means that the data of the left channel is being transmitted, and "0" means that the data of the right channel is being transmitted.
  • the frequency of WS is equal to the sampling frequency.
  • Serial data SDATA is the audio data expressed in two's complement.
  • the MCLK master clock is the system clock; its purpose is to enable better synchronization between systems, and its frequency is 256 or 384 times the sampling frequency, e.g. 48 kHz × 256 = 12.288 MHz.
  • the highest bit of the data always appears at the WS change, that is, the second BCLK pulse after the start of a frame.
  • This synchronization mechanism makes the interconnection of digital audio equipment more convenient, and will not cause data misalignment.
  • the word (channel) select line WS indicates the channel being transmitted.
  • WS = 0 indicates that the data of the left channel is being transmitted, and WS = 1 indicates that the data of the right channel is being transmitted.
  • WS can change on the rising or falling edge of the serial clock, and the WS signal does not need to be symmetrical.
  • WS changes on the rising edge of the clock signal.
  • WS always changes one clock cycle before the highest bit transmission, so that the slave device can get the time synchronized with the serial data being transmitted, and the receiving end can store the current command and clear space for the next command.
  • if the audio data of the left channel L is placed in the first 16 bits of the left channel of I2S0, then intercepting the first 16 bits of the left channel of I2S0 yields the audio data of the left channel L.
  • MCLK is the main clock signal
  • BCLK is the clock signal
  • WS is the channel selection
  • Data is the channel data. Therefore, based on different chip architectures, the form of I2S transmission is also different.
  • two receiving solutions are listed:
  • the receiving end adopts the form of receiving and dismantling.
  • Different chips have different I2S output channels.
  • the two I2S outputs on the core end share one MCLK, BCLK, WS, that is, a single CLK acquisition.
  • the single CLK is sent from the core end to the audio coprocessor chip, and the audio coprocessor chip uses it to collect the multi-channel audio data that the core end sends out in different coding arrangements at different sampling rates; since the audio data is sent to the receiving end at the same time, the audio coprocessor chip disassembles the audio data while receiving it, according to the multi-channel data coding sequence provided by the core end.
  • however, if the three I2S lanes of the core chip share one CLK, then during data transmission, once the shared CLK is in error, the data received over all three I2S lanes will be wrong and abnormal sound may occur.
  • the sources of CLK error are: software delay, peripheral signal interference, hardware circuit, etc.
  • for this reason, another receiving method is provided in this embodiment, namely receiving method 2:
  • the main chip of the TV device can only output 3 channels of I2S data, that is, at most 6 channels of audio data transmission can be realized, and in this application, at least 8 channels of audio data output must be realized. Therefore, in this solution, 8 channels of audio data are re-encoded to obtain 6 channels of audio coded data.
  • the 6-channel audio coded data is transmitted to an audio coprocessor chip, which can decode and restore it into 8-channel audio data according to the coding rules of the re-encoder, and transmit it to the power amplifier, So as to achieve the effect of multi-channel output.
  • the mixed audio data includes at least the audio data of these eight channels: left channel L, right channel R, left surround SL, right surround SR, left sky TOPL, right sky TOPR, center Center, and subwoofer Woofer; here, the decoder first decodes the mixed audio data and then identifies the decoded audio data to obtain the audio data of the eight channels.
  • cache1: L (16 bit)
  • cache2: R (16 bit)
  • cache3: LS (16 bit)
  • cache4: RS (16 bit)
  • cache5: FTL (16 bit)
  • cache6: FTR (16 bit)
  • cache7: SWH (high 8 bits)
  • cache8: SWL (low 8 bits)
  • cache9: CH (high 8 bits)
  • cache10: CL (low 8 bits).
  • specifically, the original 16bit@48kHz audio coded data output on one I2S lane is changed to 24bit@48kHz audio coded data; in this embodiment the sampling frequency is kept unchanged and the sampling bit width is changed from the original 16 bit to 24 bit.
  • the left and right channels in I2S0 have a bit width of 24 bits.
  • the generated audio coded data is buffered, and it is determined whether audio coded data has been generated in all audio channels that need to transmit; if so, step S205 is performed and all audio coded data and the combination arrangement information are output through the audio channels; if not, wait until all audio channels that need to transmit have generated audio coded data.
  • each group of I2S has a separate MCLK, WS, BCK, and Data, it is more important to ensure that the MCLK of the 3-channel I2S is synchronized.
  • a wait-to-send working mode is adopted, that is, waiting until the data of all three I2S lanes has been processed and then sending it at the same time; when a new sound is initiated, the first bit of the three initialization tones is collected and the first valid signal is used as the MCLK of the wait tone.
  • the 3 CLKs are shared, and the 3 CLKs are used to fetch and send data separately, which greatly reduces the data error caused by an abnormal CLK.
  • each audio channel can transmit three audio data, so in this embodiment, up to 9 channels of audio data can be transmitted;
  • the audio data to be transmitted is 8 channels, as shown in Figure 4, one channel of audio data can be placed in each of the left and right channels of an audio channel, and the split audio data is not added.
  • the main chip transmits all the 24bit data in the 3 I2S audio channels to the audio coprocessor chip.
  • the audio coprocessor chip does not receive and disassemble, but it caches the 24bit data in the buffer;
  • the core end of the main chip provides the audio co-processor chip with an encoding arrangement that converts 8-channel audio data into 6-channel encoded data;
  • audio data of the predetermined sampling bit width is intercepted from the predetermined positions in the audio coded data, and the attribute information of the corresponding channel is added to the intercepted audio data.
  • the first valid data is a valid MCLK signal.
  • 8bit data is collected, and the data combination is performed again to obtain audio data such as center sound and heavy bass.
  • the restored original multi-channel 5.1.2 data is sent to the power amplifier through 4-channel I2S to achieve the purpose of multi-channel sound effects.
  • the sampling bit width can be adjusted from 16bit to 32bit without adjusting the sampling frequency. Therefore, 4 channels of audio signals can be transmitted in one I2S audio channel.
  • encoding in this way ensures that two I2S lanes can support up to eight channels of audio data, and all of the multi-channel audio data is transmitted to the audio coprocessor; the core end informs the audio coprocessor of the encoding method, and the audio coprocessor restores the audio data input over the two I2S lanes and, through its own multiple I2S lanes, passes it to the corresponding power amplifier chips, outputting sound with the 5.1.2 effect.
  • only 2 I2S audio channels need to be used, which has good scalability.
  • a working mode that adjusts the sampling frequency without adjusting the sampling bit width is used to describe a multi-channel platform audio data transmission method of this application in detail; that is, it keeps the sampling bit width at 16 bits and changes the original sampling frequency from 48 kHz to 96 kHz.
  • the mixed audio data includes the audio data of these eight channels: left channel L, right channel R, left surround SL, right surround SR, left sky TOPL, right sky TOPR, center Center, and subwoofer Woofer; here, the decoder first decodes the mixed audio data and then identifies the decoded audio data to obtain the audio data of the eight channels.
  • since WS can collect up to 32 bits of left/right channel data, there are two ways to change the sampling frequency: one is to change the BCLK frequency and use single-edge acquisition; the other is to keep the BCLK frequency unchanged and use double-edge acquisition.
  • with this approach, 32 bits of data are collected for the L channel and 32 bits for the R channel, at an acquisition rate of 96 kHz.
  • the generated audio coded data is buffered, and it is determined whether audio coded data has been generated in all audio channels that need to transmit; if so, step S504 is performed and all audio coded data and the combination arrangement information are output through the audio channels; if not, wait until all audio channels that need to transmit have generated audio coded data.
  • each group of I2S has a separate MCLK, WS, BCK, and Data, it is more important to ensure that the MCLK of the 3-channel I2S is synchronized.
  • a wait-to-send working mode is adopted, that is, waiting until the data of all three I2S lanes has been processed and then sending it at the same time; when a new sound is initiated, the first bit of the three initialization tones is collected and the first valid signal is used as the MCLK of the wait tone.
  • the 3 CLKs are shared, and the 3 CLKs are used to fetch and send data separately, which greatly reduces the data error caused by an abnormal CLK.
  • each audio channel can carry four channels of audio data, so in this embodiment audio data of up to 12 channels can be transmitted;
  • the core (movement) end transmits the data of all three I2S lanes to the audio coprocessor chip; the audio coprocessor chip does not disassemble the data while receiving, but caches the data in the buffer;
  • the core end provides this encoding arrangement and double-edge sampling to the audio coprocessor chip.
  • audio data with a predetermined sampling bit width is intercepted from a predetermined position in the audio coded data, and attribute information of the corresponding channel is added to the intercepted audio data.
  • the audio coprocessor chip uses the first valid data of the three I2S lanes as the reference MCLK, with the sampling frequency set to 48 kHz; it pushes the data of the three I2S lanes onto the stack at the same time, with the sampling frequency set to 48 kHz and double-edge sampling, and samples the 16-bit data of ch1, ch2, ..., ch12 respectively; according to the original data distribution format, the twelve channels of 16-bit data are sent over six I2S lanes, the audio coprocessor chip automatically generates the wait tone and synchronizes the CLKs of the six I2S lanes, and after the CLKs are synchronized the audio coprocessor chip sends the data to the power amplifier over the six I2S lanes at the same time.
  • since WS can collect up to 32 bits of left/right channel data, there are two ways to change the sampling frequency: one is to change the BCLK frequency and use single-edge acquisition; the other is to keep the BCLK frequency unchanged and use double-edge acquisition.
  • with this approach, 24 bits of data are collected for the L channel and 24 bits for the R channel, at an acquisition rate of 96 kHz; encoding in this way ensures that two I2S lanes can support up to eight channels of audio data, and all of the multi-channel audio data is transmitted to the audio coprocessor; the core end informs the audio coprocessor of the encoding method, and the audio coprocessor restores the input audio data and, through its own multiple I2S lanes, passes it to the corresponding power amplifier chips, outputting sound with the 5.1.2 effect.
  • this application is set in a television device, which is a multi-channel platform audio data transmission device, which includes:
  • a main chip 810 and an audio coprocessor chip 820, the main chip 810 includes:
  • the decoder 811 is used to receive mixed audio data, and parse and identify the mixed audio data to obtain audio data of each channel;
  • the mixed audio data includes: left channel L , Right Channel R, Left Surround SL, Right Surround SR, Left Sky TOPL, Right Sky TOPR, Center Center, Subwoofer Woofer and other eight channels of audio data.
  • the decoder first decodes the mixed audio data, and then respectively recognizes the decoded audio data to obtain eight channels of audio data respectively;
  • the re-encoder 812, which is connected to the decoder 811, is used to process the audio data of each channel and, according to a predetermined combination arrangement, place the audio data of three or more channels into one audio channel to generate audio coded data.
  • for example, it can place the audio data of three or more channels into one audio channel to generate audio coded data: the audio data of the three channels left L, right R, and left surround SL is integrated into one audio channel I2S0 to obtain one piece of audio coded data, and the audio data of the three channels right surround SR, left sky TOPL, and right sky TOPR is integrated into another audio channel I2S1 to obtain another piece of audio coded data;
  • audio channels 813, the number of which is two or more, each audio channel 813 being connected to the re-encoder 812 for outputting all audio coded data and combination arrangement information;
  • the audio coprocessor chip 820 is connected to all audio channels 813 and is used to receive the audio coded data in all audio channels 813 and to decode and restore the received audio coded data according to the combination arrangement information, obtaining the audio data of each channel; since the combination arrangement information records the position of each channel's audio data in the audio channel and in the audio coded data, the audio data of the corresponding channel can be obtained by intercepting the different parts of the audio coded data in each audio channel according to the combination arrangement information. For example, if the audio data of the left channel L is placed in the first 16 bits of the left channel of I2S0, then intercepting the first 16 bits of the left channel of I2S0 yields the audio data of the left channel L.
  • the device further includes:
  • the power amplifier 830 is connected to the audio co-processor chip 820, and is used to play and output the audio data of each channel.
  • the re-encoder 812 includes:
  • the transmission audio adjustment module is used to change the sampling bit width and/or sampling rate of one piece of transmission audio data in the audio channel 813 and, according to the changed sampling bit width and/or sampling rate, place audio data of a corresponding number of channels into that piece of transmission audio data;
  • An audio data splitting module which is used to split audio data of a predetermined channel.
  • the audio coprocessor chip 820 includes:
  • the buffer module is used to buffer all received audio data.
  • FIG. 11 is a schematic diagram of a display device provided in Embodiment 1 of the present application.
  • the present application also provides a display device, which at least includes: a display screen 91 configured to present image data; and a speaker 92 configured to reproduce sound data .
  • the display device may further include: a backlight assembly 94 located below the display screen 91.
  • a backlight assembly 94 located below the display screen 91.
  • the backlight assembly may include an LED light bar or a light panel that automatically emits light.
  • the display device may further include: a back plate 95.
  • the back plate 95 is stamped to form some convex structures, and components such as speakers 92 are fixed on the convex structures by screws or hooks.
  • the display device may further include: a rear case 98, which is covered on the back of the display screen 91 to hide the backlight assembly 94, the speaker 92 and other display device components, which has a beautiful effect.
  • the display device may further include: a main board 96 and a power supply board 97, which can be arranged as two boards independently, or they can be combined on one board.
  • the display device further includes a remote control 93.
  • FIG. 12 is a block diagram of the hardware configuration of the display device provided in Embodiment 1 of the present application.
  • the display device 200 may include a tuner and demodulator 220, a communicator 230, a detector 240, an external device interface 250, a controller 210, a memory 290, a user input interface, a video processor 260-1, and audio Processor 260-2, display screen 280, audio input interface 272, power supply.
  • the tuner and demodulator 220, which receives broadcast television signals by wired or wireless means, can perform modulation and demodulation processing such as amplification, mixing, and resonance, and is used to demodulate, from multiple wireless or cable broadcast television signals, the audio and video signals carried in the frequency of the television channel selected by the user, as well as additional information (such as EPG data signals).
  • the tuner and demodulator 220 can be selected by the user and controlled by the controller 210 to respond to the TV channel frequency selected by the user and the TV signal carried by the frequency.
  • the tuner and demodulator 220 can receive signals in many ways according to the different broadcasting systems of TV signals, such as terrestrial broadcasting, cable broadcasting, satellite broadcasting, or Internet broadcasting; according to the different modulation types, it can use digital or analog modulation; and according to the different types of received TV signals, it can demodulate analog and digital signals.
  • the tuner demodulator 220 may also be in an external device, such as an external set-top box.
  • the set-top box outputs TV audio and video signals through modulation and demodulation, and inputs them to the display device 200 through the input/output interface 250.
  • the communicator 230 is a component for communicating with external devices or external servers according to various communication protocol types.
  • the communicator 230 may include a WIFI module 231, a Bluetooth communication protocol module 232, a wired Ethernet communication protocol module 233 and other network communication protocol modules or near field communication protocol modules.
  • the display device 200 may establish a control signal and a data signal connection with an external control device or content providing device through the communicator 230.
  • the communicator may receive the control signal of the remote controller 100 according to the control of the controller.
  • the detector 240 is a component of the display device 200 for collecting signals from the external environment or interacting with the outside.
  • the detector 240 may include a light receiver 242, a sensor used to collect the intensity of ambient light, so that display parameters can be adapted to the ambient light; it may also include an image collector 241, such as a camera, which can be used to collect external environment scenes and to collect user attributes or gestures for interacting with the user, so that display parameters can be adaptively changed and user gestures can be recognized to implement interaction with the user.
  • the detector 240 may further include a temperature sensor.
  • the display device 200 may adaptively adjust the display color temperature of the image.
  • for example, when the ambient temperature is relatively high, the color temperature of the display device 200 can be adjusted to be cooler; when the temperature is relatively low, the color temperature of the display device 200 can be adjusted to be warmer.
  • the detector 240 may also include a sound collector, such as a microphone, which may be used to receive the user's voice, including the voice signal of the user's control instruction for controlling the display device 200, or to collect environmental sound for Recognizing the environmental scene type, the display device 200 can adapt to the environmental noise.
  • the external device interface 250 provides a component for the controller 210 to control data transmission between the display device 200 and other external devices.
  • the external device interface can be connected to external devices such as set-top boxes, game devices, and notebook computers in a wired/wireless manner, and can receive data from the external devices such as video signals (such as moving images), audio signals (such as music), and additional information (such as EPG).
  • the external device interface 250 may include any one or more of: a high-definition multimedia interface (HDMI) terminal 251, a composite video blanking synchronization (CVBS) terminal 252, an analog or digital component terminal 253, a universal serial bus (USB) terminal 254, and a red, green, and blue (RGB) terminal (not shown in the figure).
  • the controller 210 controls the work of the display device 200 and responds to user operations by running various software control programs (such as an operating system and various application programs) stored on the memory 290.
  • the controller 210 includes a random access memory RAM 213, a read only memory ROM 214, a graphics processor 216, a CPU processor 212, a communication interface 218, and a communication bus.
  • RAM213 and ROM214, graphics processor 216, CPU processor 212, and communication interface 218 are connected by a bus.
  • the graphics processor 216 is used to generate various graphics objects, such as icons, operation menus, and graphics displayed in response to user input instructions. It includes an arithmetic unit, which performs operations by receiving the various interactive commands input by the user and displays various objects according to their display attributes, and a renderer, which generates the various objects obtained by the arithmetic unit and displays the rendering result on the display screen 280.
  • the CPU processor 212 is configured to execute operating system and application program instructions stored in the memory 290. And according to receiving various interactive instructions input from the outside, to execute various applications, data and content, so as to finally display and play various audio and video content.
  • the CPU processor 212 may include multiple processors.
  • the multiple processors may include one main processor and multiple or one sub-processors.
  • the main processor is used to perform some operations of the display device 200 in the pre-power-on mode, and/or to display images in the normal mode.
  • the communication interface may include the first interface 218-1 to the nth interface 218-n. These interfaces may be network interfaces connected to external devices via a network.
  • the controller 210 may control the overall operation of the display device 200. For example, in response to receiving a user command for selecting a UI object to be displayed on the display screen 280, the controller 210 may perform an operation related to the object selected by the user command.
  • the object may be any one of the selectable objects, such as a hyperlink or an icon.
  • Operations related to the selected object for example: display operations connected to hyperlink pages, documents, images, etc., or perform operations corresponding to the icon.
  • the user command for selecting the UI object may be a command input through various input devices (e.g., mouse, keyboard, touch pad, etc.) connected to the display device 200 or a voice command corresponding to the voice spoken by the user.
  • the memory 290 includes storing various software modules for driving and controlling the display device 200.
  • various software modules stored in the memory 290 include: a basic module, a detection module, a communication module, a display control module, a browser module, and various service modules.
  • the basic module is the underlying software module used for signal communication between various hardware in the display device 200 and sending processing and control signals to the upper module.
  • the detection module is a management module used to collect various information from various sensors or user input interfaces, and perform digital-to-analog conversion and analysis management.
  • the voice recognition module includes a voice analysis module and a voice command database module.
  • the display control module is a module for controlling the display screen 280 to display image content, and can be used to play information such as multimedia image content and UI interfaces.
  • the communication module is a module used for control and data communication with external devices.
  • the browser module is a module used to perform data communication between browsing servers.
  • the service module is a module used to provide various services and various applications.
  • the memory 290 is also used to store and receive external data and user data, images of various items in various user interfaces, and visual effect diagrams of focus objects.
  • the user input interface is used to send a user's input signal to the controller 210, or to transmit a signal output from the controller to the user.
  • the control device (such as a mobile terminal or remote control) can send input signals input by the user, such as a power switch signal, a channel selection signal, and a volume adjustment signal, to the user input interface, and the user input interface then forwards the input signals to the controller.
  • the control device may receive output signals such as audio, video, or data output from the user input interface processed by the controller, and display the received output signal or output the received output signal as audio or vibration.
  • the user may input user commands on the graphical user interface (GUI) displayed on the display screen 280, and the user input interface receives the user input commands through the graphical user interface (GUI).
  • the user can input a user command by inputting a specific sound or gesture, and the user input interface recognizes the sound or gesture through the sensor to receive the user input command.
  • the video processor 260-1 is used to receive video signals, and perform video data processing such as decompression, decoding, scaling, noise reduction, frame rate conversion, resolution conversion, and image synthesis according to the standard codec protocol of the input signal.
  • the video signal directly displayed or played on the display screen 280.
  • the video processor 260-1 includes a demultiplexing module, a video decoding module, an image synthesis module, a frame rate conversion module, a display formatting module, and the like.
  • the demultiplexing module is used to demultiplex the input audio and video data stream. For example, if MPEG-2 is input, the demultiplexing module will demultiplex into a video signal and an audio signal.
  • the video decoding module is used to process the demultiplexed video signal, including decoding and scaling.
  • An image synthesis module such as an image synthesizer, is used to superimpose and mix the GUI signal generated by the graphics generator with the zoomed video image according to user input or itself to generate an image signal for display.
  • a frame rate conversion module is used to convert the frame rate of the input video, for example converting 24 Hz, 25 Hz, 30 Hz, or 60 Hz input video to a frame rate of 60 Hz, 120 Hz, or 240 Hz, where the input frame rate can be related to the source video stream and the output frame rate can be related to the refresh rate of the display, typically by means such as frame insertion.
  • the display formatting module is used to change the signal output by the frame rate conversion module into a signal that conforms to the format of the display, for example converting the format of the signal output by the frame rate conversion module to output RGB data signals.
  • the display screen 280 is used to receive image signals input from the video processor 260-1, to display video content and images, and a menu control interface.
  • the display screen 280 includes a display screen component for presenting a picture and a driving component for driving image display.
  • the displayed video content can be from the video in the broadcast signal received by the tuner and demodulator 220, or from the video content input by the communicator or the external device interface.
  • the display screen 280 simultaneously displays a user manipulation interface UI generated in the display device 200 and used to control the display device 200.
  • a driving component for driving the display is also included.
  • the display screen 280 is a projection display, it may also include a projection device and a projection screen.
  • the audio processor 260-2 is used to receive audio signals and, according to the standard codec protocol of the input signal, perform decompression and decoding as well as audio data processing such as noise reduction, digital-to-analog conversion, and amplification, to obtain an audio signal that can be played in the speaker 272.
  • the audio output interface 270 is used to receive the audio signal output by the audio processor 260-2 under the control of the controller 210.
  • the audio output interface may include a speaker 272, or may output to an external audio output terminal 274 of a sound generator of an external device, such as an external audio terminal or a headphone output terminal.
  • the video processor 260-1 may include one or more chips.
  • the audio processor 260-2 may also include one or more chips.
  • the video processor 260-1 and the audio processor 260-2 may be separate chips, or they may be integrated with the controller 210 in one or more chips.
  • the power supply is used to provide power supply support for the display device 200 with power input from an external power supply under the control of the controller 210.
  • the power supply may include a built-in power supply circuit installed inside the display device 200, or may be a power supply installed outside the display device 200, such as a power interface that provides an external power supply in the display device 200.
  • An embodiment of the present application also provides a display device, including:
  • the display screen is configured to present image data
  • the speaker is configured to reproduce sound data
  • a decoder configured to receive mixed audio data, analyze and identify the mixed audio data, to obtain audio data of multiple channels;
  • a re-encoder, which is connected to the decoder and is used to process the audio data of multiple channels and, according to a predetermined combination arrangement, place the audio data of more than two channels into one audio channel to generate audio coded data;
  • each of the audio channels is connected to the re-encoder, and is used to output all audio coded data and combination arrangement information;
  • the audio processor is configured to receive the audio coded data in all audio channels, decode and restore the received audio coded data according to the combination arrangement information to obtain audio data of multiple channels, and output it to the speaker.
  • the re-encoder includes:
  • a transmission audio adjustment module, which is used to change the sampling bit width and/or sampling rate of one piece of transmission audio data in the audio channel and, according to the changed sampling bit width and/or sampling rate, place audio data of a corresponding number of channels into that piece of transmission audio data;
  • An audio data splitting module which is used to split audio data of a predetermined channel.
  • the audio processor further includes a buffer module, and the buffer module is used to buffer all received audio data.
  • An embodiment of the present application also provides a display device, including:
  • the display screen is configured to present image data
  • the speaker is configured to reproduce sound data
  • a controller configured to receive mixed audio data, and parse and identify the mixed audio data to obtain audio data of multiple channels;
  • All audio coded data is received, and the received audio coded data is decoded and restored according to the combination arrangement information to obtain audio data of multiple channels.
  • the audio data of multiple channels after being decoded and restored is input into the speaker for sound reproduction.
  • the controller is further configured to change the sampling bit width and/or sampling rate of a piece of audio coded data in the audio channel, and in accordance with the changed sampling bit width and/or sampling rate, Audio data of a suitable number of channels is placed in the audio coded data.
  • the controller is further configured to split audio data of a predetermined channel and renumber the audio data of all channels.
  • the controller is further configured to intercept, according to the combination arrangement information, audio data of a predetermined sampling bit width and/or sampling rate from a predetermined position in the audio coded data, and to add the attribute information of the corresponding channel to the intercepted audio data.
  • the controller is further configured to buffer the generated audio coded data and determine whether audio coded data has been generated in all audio channels that need to transmit; if so, it starts to output all audio coded data and the combination arrangement information through the audio channels; if not, it waits until all audio channels that need to transmit have generated audio coded data.
  • the controller is further configured to buffer all received audio data.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Stereophonic System (AREA)

Abstract

This application relates to a multi-channel platform audio data transmission method and device, and a display device, including: receiving mixed audio data and parsing and identifying it to obtain audio data of multiple channels; placing the audio data of three or more channels into one audio channel according to a predetermined combination arrangement to generate audio coded data; outputting it through two or more audio channels; and receiving all of the audio coded data and decoding and restoring it to obtain the audio data of the multiple channels.

Description

Multi-channel platform audio data transmission method and device, and display device
This patent application claims priority to the following Chinese patent applications: No. 201910614701.7 filed on July 9, 2019; No. 201910613160.6 filed on July 9, 2019; No. 201910613254.3 filed on July 9, 2019; No. 201910615836.5 filed on July 9, 2019; No. 201910616404.6 filed on July 9, 2019; No. 201910710346.3 filed on August 2, 2019; No. 2019106185413 filed on July 9, 2019; and No. 201910659488.1 filed on July 22, 2019, the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of multimedia technology, and in particular to a multi-channel platform audio data transmission method and device, and a display device.
Background
With economic and social development, people's expectations for audio playback quality keep rising. To pursue better sound, multi-channel surround playback has become a common way of playing audio, so most audio source files in the related art contain audio data of multiple channels.
However, in the related art the core (movement) end of a television device supports only three I2S audio channels, over which the audio data is transmitted to the power amplifier device. According to the audio processing method of the related art, one channel of audio data is carried in each of the left and right channels of every I2S audio channel; that is, one I2S audio channel of the television device can carry two channels of audio data, so the core end of the television device can transmit at most six channels of audio data, which cannot meet the requirements of a multi-channel surround sound environment.
Summary
To overcome the above defect, the purpose of this application is to provide a method and device for transmitting multi-channel platform audio data on a television chip, and a display device.
This application provides a multi-channel platform audio data transmission method, including:
receiving mixed audio data, and parsing and identifying the mixed audio data to obtain audio data of multiple channels;
processing the audio data of the multiple channels, and placing audio data of more than two channels into one audio channel according to a predetermined combination arrangement, to generate audio coded data;
outputting all of the audio coded data and the combination arrangement information through two or more audio channels;
receiving all of the audio coded data, and decoding and restoring the received audio coded data according to the combination arrangement information, to obtain the audio data of the multiple channels.
In some embodiments, placing the audio data of three or more channels into one audio channel to generate audio coded data includes:
changing the sampling bit width and/or sampling rate of one piece of audio coded data in the audio channel, and placing audio data of a corresponding number of channels into that piece of audio coded data according to the changed sampling bit width and/or sampling rate.
In some embodiments, processing the audio data of the multiple channels further includes:
splitting the audio data of a predetermined channel, and renumbering the audio data of all channels.
In some embodiments, decoding and restoring the received audio data according to the combination arrangement information to obtain audio data of the multiple channels includes:
intercepting, according to the combination arrangement information, audio data of a predetermined sampling bit width and/or sampling rate from a predetermined position in the audio coded data, and adding attribute information of the corresponding channel to the intercepted audio data.
In some embodiments, before outputting all of the audio coded data and the combination arrangement information through two or more audio channels, the method includes:
buffering the generated audio coded data, and determining whether audio coded data has been generated for all audio channels that need to transmit; if so, starting to output all of the audio coded data and the combination arrangement information through the audio channels; if not, waiting until audio coded data has been generated for all audio channels that need to transmit.
In some embodiments, after receiving the audio data in all audio channels, the method further includes:
buffering all of the received audio data.
This application also provides a multi-channel platform audio data transmission device, including:
a main chip and an audio coprocessor chip, the main chip including:
a decoder, configured to receive mixed audio data, and parse and identify the mixed audio data to obtain audio data of multiple channels;
a re-encoder, connected to the decoder and configured to process the audio data of the multiple channels and place audio data of more than two channels into one audio channel according to a predetermined combination arrangement, to generate audio coded data;
audio channels, the number of which is two or more, each audio channel being connected to the re-encoder and configured to output all of the audio coded data and the combination arrangement information;
the audio coprocessor chip, connected to all of the audio channels and configured to receive the audio coded data in all audio channels, and to decode and restore the received audio coded data according to the combination arrangement information, to obtain the audio data of the multiple channels.
In some embodiments, the device further includes:
a power amplifier, connected to the audio coprocessor chip and configured to play and output the audio data of the multiple channels.
In some embodiments, the re-encoder includes:
a transmission audio adjustment module, configured to change the sampling bit width and/or sampling rate of one piece of transmission audio data in the audio channel, and to place audio data of a corresponding number of channels into that piece of transmission audio data according to the changed sampling bit width and/or sampling rate;
an audio data splitting module, configured to split the audio data of a predetermined channel.
In some embodiments, the audio coprocessor chip includes:
a buffer module, configured to buffer all of the received audio data.
This application also provides a display device, including:
a display screen, configured to present image data;
a speaker, configured to reproduce sound data;
a decoder, configured to receive mixed audio data, and parse and identify the mixed audio data to obtain audio data of multiple channels;
a re-encoder, connected to the decoder and configured to process the audio data of the multiple channels and place audio data of more than two channels into one audio channel according to a predetermined combination arrangement, to generate audio coded data;
multiple audio channels, each connected to the re-encoder and configured to output all of the audio coded data and the combination arrangement information;
an audio processor, configured to receive the audio coded data in all audio channels, decode and restore the received audio coded data according to the combination arrangement information to obtain the audio data of the multiple channels, and output it to the speaker.
In some embodiments, the re-encoder includes:
a transmission audio adjustment module, configured to change the sampling bit width and/or sampling rate of one piece of transmission audio data in the audio channel, and to place audio data of a corresponding number of channels into that piece of transmission audio data according to the changed sampling bit width and/or sampling rate;
an audio data splitting module, configured to split the audio data of a predetermined channel.
In some embodiments, the audio processor further includes a buffer module, configured to buffer all of the received audio data.
This application organizes the received multi-channel audio data and places the audio data of three or more channels into one I2S audio channel, so that the core end of the television device can support the transmission of more than eight channels of audio data, which well satisfies the demand for multi-channel audio playback.
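The capacity gain claimed above can be checked with a small calculation. The following sketch is illustrative only (the helper name channel_capacity and the program are not part of this application); it assumes each I2S lane carries two slots (left/right) of slot_bits payload bits per frame and counts how many 16-bit channel samples fit on the available lanes for the packing modes described in the embodiments below.

```c
#include <stdio.h>

/* Hypothetical helper: total payload bits per frame over all lanes,
 * divided by the size of one 16-bit channel sample. */
static int channel_capacity(int lanes, int slot_bits)
{
    return (lanes * 2 * slot_bits) / 16;   /* two slots (L/R) per lane */
}

int main(void)
{
    printf("16bit@48kHz, 3 lanes: %d channels\n", channel_capacity(3, 16)); /* 6  */
    printf("24bit@48kHz, 3 lanes: %d channels\n", channel_capacity(3, 24)); /* 9 (incl. split channel) */
    printf("32bit@48kHz, 2 lanes: %d channels\n", channel_capacity(2, 32)); /* 8  */
    printf("16bit@96kHz, 3 lanes: %d channels\n", channel_capacity(3, 32)); /* 12 */
    return 0;
}
```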
Brief Description of the Drawings
For ease of explanation, this application is described in detail below with reference to the following preferred embodiments and accompanying drawings.
Figure 1 is a schematic diagram of the speaker layout of a television device;
Figure 2 is a schematic workflow diagram of an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 3 is a schematic workflow diagram of an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 4 is a schematic diagram of the data collection principle in an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 5 is a schematic diagram of the data restoration principle in an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 6 is a schematic diagram of the data collection principle in an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 7 is a schematic workflow diagram of an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 8 is a schematic diagram of the data collection principle in an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 9 is a schematic diagram of the data restoration principle in an embodiment of the multi-channel platform audio data transmission method of this application;
Figure 10 is a schematic diagram of the logical structure of an embodiment of the multi-channel platform audio data transmission device of this application;
Figure 11 is a schematic diagram of the display device provided in Embodiment 1 of this application;
Figure 12 is a block diagram of the hardware configuration of the display device provided in Embodiment 1 of this application.
Detailed Description
To make the purpose, technical solution, and advantages of this application clearer, this application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only intended to explain this application and not to limit it.
In the description of this application, it should be understood that terms indicating orientation or positional relationships, such as "center", "longitudinal", "lateral", "length", "width", "thickness", "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", "clockwise", and "counterclockwise", are based on the orientations or positional relationships shown in the drawings, are used only to facilitate and simplify the description of this application, and do not indicate or imply that the referred device or element must have a specific orientation or be constructed and operated in a specific orientation; they therefore cannot be understood as limiting this application. In addition, the terms "first" and "second" are used for descriptive purposes only and cannot be understood as indicating or implying relative importance or implicitly indicating the number of the indicated technical features. Thus, features defined with "first" and "second" may explicitly or implicitly include one or more of those features. In the description of this application, "multiple" means two or more, unless otherwise specifically defined.
In the description of this application, it should be noted that, unless otherwise clearly specified and limited, the terms "installed", "connected", and "connection" should be interpreted broadly; for example, a connection may be a fixed connection, a detachable connection, or an integral connection; it may be a mechanical connection or an electrical connection; it may be a direct connection or an indirect connection through an intermediate medium; and it may be an internal communication between two elements or an interaction between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in this application can be understood according to the specific situation.
Referring to Figure 1, in this application a multi-channel surround sound system can be provided in the television device; that is, speakers for a left channel L, a right channel R, a left surround SL, a right surround SR, a left sky (top) channel TOPL, a right sky channel TOPR, a center channel Center, and a subwoofer (not shown) constitute the 5.1.2 eight-channel surround playback system of the related art. In practical applications, the number of external speakers can also be increased appropriately according to the number of channels of the audio data, to form a multi-channel surround playback system.
A multi-channel platform audio data transmission method of this application is described in detail below, taking a television device as an example. Referring to Figure 2, the method includes:
S101. Identify the audio data of each channel.
Receive the mixed audio data sent by an external sound source, and parse and identify the mixed audio data to obtain the audio data of each channel. The mixed audio data contains at least the audio data of these eight channels: left channel L, right channel R, left surround SL, right surround SR, left sky TOPL, right sky TOPR, center Center, and subwoofer Woofer. Here, the decoder first decodes the mixed audio data and then identifies the decoded audio data to obtain the audio data of the eight channels.
S102. Combine the audio data to generate audio coded data.
Apply processing such as dynamic range control (DRC) and cloud-based optimized storage to the audio data of each channel, and place the audio data of more than two channels into one audio channel according to a predetermined combination arrangement, to generate audio coded data.
For example, the audio data of the three channels left L, right R, and left surround SL is integrated into one audio channel I2S0 to obtain one piece of audio coded data, and the audio data of the three channels right surround SR, left sky TOPL, and right sky TOPR is integrated into another audio channel I2S1 to obtain another piece of audio coded data.
Alternatively, the left channel L, the right channel R, and part of the left surround SL audio data are integrated into one audio channel I2S0 to obtain one piece of audio coded data, and the other part of the left surround SL audio data is integrated into another audio channel and transmitted through that channel.
S103. Transmit the multi-channel audio data over two or more audio channels.
Output all of the audio coded data and the combination arrangement information through two or more audio channels. The relevant principles of I2S audio channel transmission are as follows:
First, the I2S signals include MCLK, BCLK, SDATA, and WS.
(1) The serial clock BCLK corresponds to each bit of the digital audio data: there is one BCLK pulse per bit.
BCLK frequency = 2 × sampling frequency × sampling bit width = 2 × 48 kHz × 16 bit = 1.536 MHz.
(2) The frame clock WS switches between the data of the left and right channels. WS = "1" indicates that left-channel data is being transmitted, and WS = "0" indicates that right-channel data is being transmitted. The frequency of WS equals the sampling frequency.
(3) The serial data SDATA is the audio data expressed in two's complement.
(4) The master clock MCLK is the system clock; its purpose is to allow better synchronization between systems. The MCLK frequency is 256 or 384 times the sampling frequency, e.g. 48 kHz × 256 = 12.288 MHz.
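As a quick cross-check of the clock figures above, the following minimal sketch (not part of this application; the function names are ours) recomputes BCLK and MCLK for the configurations used in this description.

```c
#include <stdio.h>

/* BCLK = 2 channels per frame x sampling frequency x bits per channel */
static double bclk_hz(double fs_hz, int bits_per_channel)
{
    return 2.0 * fs_hz * bits_per_channel;
}

/* MCLK is commonly 256x or 384x the sampling frequency */
static double mclk_hz(double fs_hz, int multiple)
{
    return fs_hz * multiple;
}

int main(void)
{
    printf("16bit@48kHz : BCLK = %.3f MHz\n", bclk_hz(48000, 16) / 1e6); /* 1.536  */
    printf("24bit@48kHz : BCLK = %.3f MHz\n", bclk_hz(48000, 24) / 1e6); /* 2.304  */
    printf("16bit@96kHz : BCLK = %.3f MHz\n", bclk_hz(96000, 16) / 1e6); /* 3.072  */
    printf("MCLK (256 x 48 kHz) = %.3f MHz\n", mclk_hz(48000, 256) / 1e6); /* 12.288 */
    return 0;
}
```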
No matter how many valid data bits an I2S-format signal has, the most significant bit of the data always appears at the WS transition, i.e. at the second BCLK pulse after the start of a frame. This allows the effective word lengths of the receiver and the transmitter to differ: if the receiver can handle fewer valid bits than the transmitter sends, it can discard the extra low-order bits of the data frame; if it can handle more, it can pad the remaining bits itself. This synchronization mechanism makes the interconnection of digital audio equipment more convenient and does not cause data misalignment.
With the development of the technology, several different data formats have appeared under the unified I2S interface. Depending on the position of SDATA relative to WS and BCLK, they are divided into left-justified, I2S, and right-justified formats. To ensure correct transmission of the digital audio signal, the transmitter and the receiver should use the same data format and length; of course, for the I2S format the data lengths may differ.
The word (channel) select line WS indicates the channel being transmitted.
WS = 0 indicates that left-channel data is being transmitted.
WS = 1 indicates that right-channel data is being transmitted.
WS may change on the rising or falling edge of the serial clock, and the WS signal does not have to be symmetrical. On the slave side, WS changes on the rising edge of the clock signal. WS always changes one clock period before the most significant bit is transmitted, so that the slave device can synchronize with the serial data being transmitted, and the receiver can store the current command and clear space for the next command.
S104. Obtain the audio data of each channel according to the combination arrangement information.
Receive all of the audio coded data, and decode and restore the received audio coded data according to the combination arrangement information to obtain the audio data of each channel. Since the combination arrangement information records the position of each channel's audio data within the audio channel and the audio coded data, the audio data of the corresponding channel can be obtained by intercepting the different parts of the audio coded data in each audio channel according to the combination arrangement information. For example, if the audio data of the left channel L is placed in the first 16 bits of the left channel of I2S0, intercepting the first 16 bits of the left channel of I2S0 yields the audio data of the left channel L.
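The interception by position described in S104 can be illustrated with a small helper (the structure and the names below are ours, not part of this application) that reads one channel's sample out of a buffered I2S word, given the offset and width recorded in the combination arrangement information.

```c
#include <stdint.h>

/* One entry of the combination arrangement information: where a channel's
 * sample sits inside a given lane/slot (illustrative structure, ours). */
typedef struct {
    int lane;        /* which I2S lane (e.g. 0 for I2S0)                 */
    int slot;        /* 0 = left channel of the lane, 1 = right channel  */
    int bit_offset;  /* offset from the MSB end of the slot, in bits     */
    int bit_width;   /* how many bits belong to this channel (8 or 16)   */
} arrangement_t;

/* Intercept a channel's bits from a buffered slot value according to the
 * arrangement entry; slot_bits is the slot's total width (16/24/32). */
static uint32_t intercept_channel(uint32_t slot_value, int slot_bits,
                                  const arrangement_t *a)
{
    int shift = slot_bits - a->bit_offset - a->bit_width;
    return (slot_value >> shift) & ((1u << a->bit_width) - 1u);
}
```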
Because the internal I2S architecture differs between chip solutions, note that the I2S protocol has four groups of signals, MCLK, BCLK, WS, and Data, where MCLK is the master clock signal, BCLK is the bit clock signal, WS is the channel select, and Data is the channel data. The form of I2S transmission therefore differs depending on the chip architecture. In some embodiments, two receiving solutions are listed:
Receiving method 1:
When the chip's I2S outputs share a single CLK, the receiving end disassembles the data while receiving it.
Different chips have different I2S output channels. When the two I2S outputs of the core end share one MCLK, BCLK, and WS, this is single-CLK acquisition. The single CLK is sent from the core end to the audio coprocessor chip, and the audio coprocessor chip uses it to collect the multi-channel audio data that the core end sends out in different coding arrangements, at different sampling rates. Because the audio data is sent to the receiving end simultaneously, the audio coprocessor chip disassembles the audio data while receiving it, according to the multi-channel data coding sequence provided by the core end.
However, if the three I2S lanes of the core chip share one CLK, then during data transmission, once the shared CLK is in error, the data received over all three I2S lanes will be wrong and abnormal sound may occur. Sources of CLK error include software delay, interference from peripheral signals, the hardware circuit, and so on.
For this reason, this embodiment provides another receiving method, namely receiving method 2:
First wait until the data of all three I2S lanes has been fully processed, and then send them at the same time; then initialize the received sound data, i.e. collect the first bit of the three lanes of initialization sound data and use the first valid signal as the MCLK of the wait tone. Because the three CLKs are shared, the three CLKs are used to send the data separately, which greatly reduces data errors caused by an abnormality on any single CLK.
In the related art, the main chip of a television device can output at most three I2S data lanes, i.e. at most six channels of audio data can be transmitted, whereas this application needs to output at least eight channels of audio data. Therefore, in this solution the eight channels of audio data are re-encoded into six channels of audio coded data.
Correspondingly, the six channels of audio coded data are transmitted to an audio coprocessor chip, which can decode and restore them into eight channels of audio data according to the coding rules of the re-encoder and transmit them to the power amplifier, thereby achieving the effect of multi-channel output.
下面以一个调整采样位宽,不调整采样频率的工作方式的实施例对本申请的一种多声道平台音频数据传输方法进行具体描述,即其保持采样频率为48KHZ,并将原来的采样位宽从16bit调整为24bit的工作方式,请参阅图3至图5,其包括:
S201.识别得到每一个声道的音频数据
接收外部声源所发送的混合音频数据,并对所述混合音频数据进行解析与识别,得到每一个声道的音频数据;所述混合音频数据中至少包含有:左声道L,右声道R,左环绕SL,右环绕SR,左天空TOPL,右天空TOPR,中置Center,重低音Woofer;这八个声道的音频数据;在此处,解码器先将混合音频数据进行解码,然后对解码后的音频数据分别进行识别,分别得到八个声道的音频数据。
S202.对音频数据进行重新编号
将预定的声道的音频数据进行拆分,并将所有声道的音频数据进行重新编号;
其具体为:将解码后的8声道的音频数据进行重新编号,分别为cache1:L(16bit),cache2:R(16bit),cache3:LS(16bit),cache4:RS(16bit),cache5:FTL(16bit),cache6:FTR(16bit),cache7:SWH(高8bit),cache8:SWL(低8bit),cache9:CH(高8bit),cache10:CL(低8bit)。
将这些分组数据存入缓冲器中,再通过驱动调用底层缓冲器中的数据到堆栈中,进行如图4所示的数据排列,并在BCLK时钟的驱动下,通过I2S的Data线传输出去;其中,为了不影响音效体验效果,一般将次要声道(如中置音、重低音)的音频数据进行拆分,这样做的优点在于:当CLK受到干扰或异常时,所采集的主声道以及环绕音、天空音的数据不至于丢失或出错。
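下面用一段仅作示意的C语言片段说明上述“将次要声道样本拆分为高8bit与低8bit两个分组”的做法(函数与变量名为本文为说明而假设):

    #include <stdint.h>
    #include <stdio.h>

    /* 将一个16bit样本拆分为高8bit与低8bit,分别对应上文的SWH/SWL或CH/CL分组 */
    static void split_16bit(uint16_t sample, uint8_t *high8, uint8_t *low8)
    {
        *high8 = (uint8_t)(sample >> 8);
        *low8  = (uint8_t)(sample & 0xFFu);
    }

    int main(void)
    {
        uint16_t woofer = 0xA55A;   /* 假设的一个重低音样本 */
        uint8_t  swh, swl;

        split_16bit(woofer, &swh, &swl);
        printf("cache7:SWH=0x%02X cache8:SWL=0x%02X\n",
               (unsigned)swh, (unsigned)swl);   /* SWH=0xA5 SWL=0x5A */
        return 0;
    }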
S203.调整采样位宽生成音频编码数据
改变所述音频通道中一条音频编码数据的采样位宽,并按照改变后的采样位宽,在一条音频编码数据中置入相适应数量声道的音频数据;
其具体为:把原有一路I2S输出的16bit@48KHz音频编码数据,改为24bit@48KHz音频编码数据。在本实施例中,其保持采样频率不变,更改采样位宽,由原来的16bit变更为24bit。
以一路I2S 2ch为例,WS(LRCK)=0时采集左声道数据,WS(LRCK)=1时采集右声道数据,由于一路I2S音频通道最多可支持32bit的位宽,故其WS最多可以采集32bit左右声道数据。
在本实施例中,其将16bit的Ch0声道的音频数据,放置于I2S0中的左声道,将16bit的Ch2声道的音频数据,放置于I2S0中的右声道;并将Ch1声道的音频数据拆解为2个8bit的部分,再分别放置于I2S0中的左、右声道中;故I2S0中的左、右声道均为24bit的位宽。
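按照本实施例的排列方式,一路I2S的左、右两个24bit槽位可按如下仅作示意的方式组装(函数与变量名为本文假设;其中“Ch1高8bit放入左槽位、低8bit放入右槽位”也只是本文为说明而假设的一种拆分顺序):

    #include <stdint.h>
    #include <stdio.h>

    /* 左槽位 = Ch0的16bit + Ch1的高8bit;右槽位 = Ch2的16bit + Ch1的低8bit */
    static uint32_t pack_left_24bit(uint16_t ch0, uint16_t ch1)
    {
        return ((uint32_t)ch0 << 8) | (uint32_t)(ch1 >> 8);
    }

    static uint32_t pack_right_24bit(uint16_t ch2, uint16_t ch1)
    {
        return ((uint32_t)ch2 << 8) | (uint32_t)(ch1 & 0xFFu);
    }

    int main(void)
    {
        uint16_t ch0 = 0x1111, ch1 = 0xABCD, ch2 = 0x2222;   /* 假设的样本值 */

        printf("I2S0左槽位 = 0x%06lX\n",
               (unsigned long)pack_left_24bit(ch0, ch1));    /* 0x1111AB */
        printf("I2S0右槽位 = 0x%06lX\n",
               (unsigned long)pack_right_24bit(ch2, ch1));   /* 0x2222CD */
        return 0;
    }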
S204.判断是否均生成有音频编码数据
对所生成的音频编码数据进行缓存,判断所有的需要进行传输的音频通道中是否均生成有音频编码数据,若均生成有音频编码数据,则进行步骤S205,通过音频通道,对所有音频编码数据和组合排列信息进行输出;若未生成有音频编码数据,则等待所有的需要进行传输的音频通道均生成有音频编码数据。
输出的3路I2S在芯片内部架构上是每一组I2S有单独的MCLK、WS、BCLK、Data,但更为重要的是要保证3路I2S的MCLK同步。
在本实施例中采用等待发送的工作方式,就是等待3路I2S数据全部处理好后,同时发送;初始化发送声音时,采集3路初始化音的第一位,以第一位有效信号当作等待音的MCLK。3路CLK共用同一基准,再用3路CLK分别去发送数据,这样就很大程度上降低了因某一路CLK异常导致其数据出错的风险。
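上述“等待发送”的判断逻辑可以用如下仅作示意的C语言草图表达(数组与函数名为本文为说明而假设):

    #include <stdbool.h>
    #include <stdio.h>

    #define NUM_I2S 3

    /* 只有当3路I2S的音频编码数据全部准备就绪时才同时发送,否则继续等待 */
    static bool all_ready(const bool ready[NUM_I2S])
    {
        for (int i = 0; i < NUM_I2S; i++) {
            if (!ready[i])
                return false;
        }
        return true;
    }

    int main(void)
    {
        bool ready[NUM_I2S] = { true, true, true };  /* 假设三路编码数据均已处理完成 */

        if (all_ready(ready))
            printf("3路I2S数据均已就绪,同时发送\n");
        else
            printf("等待所有需要传输的音频通道均生成音频编码数据\n");
        return 0;
    }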
S205.利用三路音频通道传输多声道的音频数据
通过三路音频通道,对所有音频编码数据和组合排列信息进行输出;每一路音频通道可传输三路音频数据,故在本实施例中,最多可对9个声道的音频数据进行传输;当需要传输的音频数据为8个声道时,可按图4所示,在一路音频通道的左右声道中各置入一个声道的音频数据,不加入被拆分的音频数据。
S206.接收所有音频编码数据并进行缓存
接收所有音频编码数据,对所有接收到的音频数据进行缓存。其具体为:主芯片将3路I2S音频通道中的24bit数据全部传输给音频协处理器芯片,音频协处理器芯片不进行边收边拆,而是将24bit数据缓存在缓存器中;
主芯片机芯端将8声道的音频数据变为6声道的编码数据的编码排列方式提供给音频协处理器芯片;
S207.根据组合排列信息得到每一个声道的音频数据
按照所述组合排列信息,从音频编码数据中的预定位置上截取预定采样位宽的音频数据,并为所截取到的音频数据添加上为对应声道的属性信息。待三组数据缓存后,由于音频协处理器芯片端内部本身存在时钟信号,分别采集3路数据的第一位作为有效MCLK信号,再采集24bit数据的前16位有效数据,并将这些16位有效数据进行组合,得出主声道、环绕音、上出音的音频数据。
再采集3路24bit数据的第一位有效数据作为有效的MCLK信号,并采集24bit数据的后8bit数据,重新进行数据组合,得出中置音、重低音等音频数据。
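与前文发送端示例的组装方式相对应,接收端的复原可以用如下仅作示意的C语言片段表达(变量与函数名为本文假设,槽位数值与前文发送端示例对应):

    #include <stdint.h>
    #include <stdio.h>

    static uint16_t front16(uint32_t slot24) { return (uint16_t)(slot24 >> 8); }   /* 前16bit */
    static uint8_t  tail8(uint32_t slot24)   { return (uint8_t)(slot24 & 0xFFu); } /* 后8bit  */

    int main(void)
    {
        uint32_t left_slot  = 0x1111ABu;   /* 假设收到的I2S0左槽位24bit数据 */
        uint32_t right_slot = 0x2222CDu;   /* 假设收到的I2S0右槽位24bit数据 */

        uint16_t ch0 = front16(left_slot);   /* 主声道等16bit数据:0x1111 */
        uint16_t ch2 = front16(right_slot);  /* 0x2222 */
        /* 两个槽位的后8bit重新组合,复原被拆分的声道(如中置音或重低音):0xABCD */
        uint16_t ch1 = (uint16_t)(((unsigned)tail8(left_slot) << 8) | tail8(right_slot));

        printf("Ch0=0x%04X Ch1=0x%04X Ch2=0x%04X\n",
               (unsigned)ch0, (unsigned)ch1, (unsigned)ch2);
        return 0;
    }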
将还原出的原有多声道5.1.2的数据通过4路I2S输送给功放端,实现多声道音效效果的目的。
请参看图6,在另一个实施例中,还可以在不调整采样频率的情况下,将采样位宽从16bit调整为32bit,故一路I2S音频通道中可传输4个声道的音频信号。此方式编码可以保证2路I2S最多支持8个声道的音频数据:将多路音频数据全部传给音频协处理器,机芯端将编码方式告知音频协处理器,音频协处理器将2路I2S输入的音频数据进行复原,通过其本身的多路I2S,分别传给对应的功放芯片,输出5.1.2音效效果的声音。在本实施例中,只需要利用到2路I2S音频通道,其具有良好的扩展性。
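对于将采样位宽调整为32bit的方式,一个槽位可并排放入两个16bit声道样本,下面给出一段仅作示意的C语言片段(函数、变量名以及声道与槽位的对应关系均为本文为说明而假设):

    #include <stdint.h>
    #include <stdio.h>

    /* 一个32bit槽位装入两个16bit声道样本:高16bit放第一个声道,低16bit放第二个声道 */
    static uint32_t pack_two_channels(uint16_t first, uint16_t second)
    {
        return ((uint32_t)first << 16) | second;
    }

    int main(void)
    {
        /* 假设:I2S0左槽位装L与SL,右槽位装R与SR,则一路I2S即可承载4个声道 */
        uint32_t i2s0_left  = pack_two_channels(0x1111, 0x3333);
        uint32_t i2s0_right = pack_two_channels(0x2222, 0x4444);

        printf("I2S0左槽位=0x%08lX 右槽位=0x%08lX\n",
               (unsigned long)i2s0_left, (unsigned long)i2s0_right);
        return 0;
    }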
下面以一个调整采样频率,不调整采样位宽的工作方式的实施例对本申请的一种多声道平台音频数据传输方法进行具体描述,即其保持采样位宽为16bit,并将原来的采样频率从48KHz调整为96KHz的工作方式,请参阅图7至图9,其包括:
S501.识别得到每一个声道的音频数据
接收外部声源所发送的混合音频数据,并对所述混合音频数据进行解析与识别,得到每一个声道的音频数据;所述混合音频数据中包含有:左声道L,右声道R,左环绕SL,右环绕SR,左天空TOPL,右天空TOPR,中置Center,重低音Woofer;这八个声道的音频数据;在此处,解码器先将混合音频数据进行解码,然后对解码后的音频数据分别进行识别,分别得到八个声道的音频数据。
S502.调整采样频率生成音频编码数据
改变所述音频通道中一条音频编码数据的采样频率,并按照改变后的采样频率,在一条音频编码数据中置入相适应数量声道的音频数据;其具体为:采样位宽不变,采样频率提升为96KHz,其对应的BCLK频率提升为:2*96KHz*16bit=3.072MHz,每个BCLK上升沿采集的数据就变成32bit。
以一路I2S的2声道音频数据为例,WS(LRCK)=0时采集左声道数据,WS(LRCK)=1时采集右声道数据,其WS最多可以采集32bit的左右声道数据。改变采样频率有两种方式:一种为更改BCLK的频率进行单沿采集;另一种为BCLK频率不变,采用双沿采集。此种方式下,L通道采集32bit数据,R通道采集32bit数据,采集速率为96KHz。
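保持16bit位宽、将采样频率提升到96KHz后,每个48KHz原始采样周期内一路I2S可依次送出两帧数据,即4个16bit槽位,从而承载4个声道。下面是一段仅作示意的C语言片段(声道与发送顺序的安排为本文假设):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 某一个48KHz采样点上4个声道的16bit样本(数值为假设) */
        uint16_t ch[4] = { 0x1111, 0x2222, 0x3333, 0x4444 };

        /* 以96KHz帧率发送:第1帧的左/右槽位装ch1/ch2,第2帧的左/右槽位装ch3/ch4 */
        uint16_t wire[4];
        wire[0] = ch[0];
        wire[1] = ch[1];
        wire[2] = ch[2];
        wire[3] = ch[3];

        unsigned long bclk = 2UL * 96000UL * 16UL;  /* BCLK提升为3.072MHz */
        printf("BCLK=%lu Hz, 一个48KHz周期内依次发送: 0x%04X 0x%04X 0x%04X 0x%04X\n",
               bclk, (unsigned)wire[0], (unsigned)wire[1],
               (unsigned)wire[2], (unsigned)wire[3]);
        return 0;
    }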
S503.判断是否均生成有音频编码数据
对所生成的音频编码数据进行缓存,判断所有的需要进行传输的音频通道中是否均生成有音频编码数据,若均生成有音频编码数据,则进行步骤S504,通过音频通道,对所有音频编码数据和组合排列信息进行输出;若未生成有音频编码数据,则等待所有的需要进行传输的音频通道均生成有音频编码数据。
输出的3路I2S在芯片内部架构上是每一组I2S有单独的MCLK、WS、BCLK、Data,但更为重要的是要保证3路I2S的MCLK同步。
在本实施例中采用等待发送的工作方式,就是等待3路I2S数据全部处理好后,同时发送;初始化发送声音时,采集3路初始化音的第一位,以第一位有效信号当作等待音的MCLK。3路CLK共用同一基准,再用3路CLK分别去发送数据,这样就很大程度上降低了因某一路CLK异常导致其数据出错的风险。
S504.利用三路音频通道传输多声道的音频数据
通过三路音频通道,对所有音频编码数据和组合排列信息进行输出;每一路音频通道可传输三路音频数据,故在本实施例中,最多可对12个声道的音频数据进行传输。
S506.接收所有音频编码数据并进行缓存
接收所有音频编码数据,对所有接收到的音频数据进行缓存。其保持位宽不变,改变采样频率:此种方式传输的依旧为16bit数据,在机芯端对多路信道进行压缩采样,由16bit@48KHz变为16bit@96KHz信号。此种方式就可以将4ch数据通过1路I2S传输出去,3路I2S输出最大可支持12个声道16bit@48KHz数据的输出。
具体为:机芯端将3路I2S数据全部传输给音频协处理器芯片,音频协处理器芯片不进行边收边拆,而是将数据缓存在缓存器中;
机芯端将此编码排列方式以及双沿采样方式提供给音频协处理器芯片。
S507.根据组合排列信息得到每一个声道的音频数据
按照所述组合排列信息,从音频编码数据中的预定位置上截取预定采样位宽的音频数据,并为所截取到的音频数据添加上为对应声道的属性信息。
其具体为:音频协处理器芯片将3路I2S的第一位有效数据作为基准MCLK;将3路I2S数据同时放入堆栈中,采样频率设为48KHz,采用双沿采样方式,并分别采样出ch1、ch2……ch12的16bit数据;将此12声道的16bit数据按照原有数据分配格式分别分配给6路I2S通道;音频协处理器芯片内部自发等待音,调整6通道I2S的CLK同步;CLK同步后,由音频协处理器芯片通过6路I2S同时发送给功放端。
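复原出12个声道后,向6路输出I2S的分配可以用如下仅作示意的C语言片段表达(每路输出I2S承载2个声道,具体映射方式为本文为说明而假设):

    #include <stdio.h>

    int main(void)
    {
        for (int ch = 0; ch < 12; ch++) {
            int i2s  = ch / 2;   /* 输出I2S编号:0~5 */
            int slot = ch % 2;   /* 0=左槽位,1=右槽位 */
            printf("ch%-2d -> 输出I2S%d %s槽位\n",
                   ch + 1, i2s, slot == 0 ? "左" : "右");
        }
        return 0;
    }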
在本实施例中,若传输8声道的数据,实际上可以采用两路I2S传输,96KHz的采样率即可实现;若需传输更多声道的数据,进一步提高采样率即可。
在另一个实施例中,可同时对采样位宽和采样频率分别进行调整,使得采样位宽从原来的16bit调整为24bit,采样频率从原来的48KHz调整为96KHz,其对应的BCLK频率提升为:2*96KHz*24bit=4.608MHz,每个BCLK上升沿采集的数据就变成32bit。
以一路I2S的2声道为例,WS(LRCK)=0时采集左声道数据,WS(LRCK)=1时采集右声道数据,其WS最多可以采集32bit的左右声道数据。改变采样频率有两种方式:一种为更改BCLK的频率进行单沿采集;另一种为BCLK频率不变,采用双沿采集。此种方式下,L通道采集24bit数据,R通道采集24bit数据,采集速率为96KHz。此方式编码可以保证2路I2S最多支持8声道的音频数据:将多路音频数据全部传给音频协处理器,机芯端将编码方式告知音频协处理器,音频协处理器将2路I2S输入的音频数据进行复原,通过其本身的多路I2S,分别传给对应的功放芯片,输出5.1.2音效效果的声音。
请参看图10,本申请还提供一种设置于电视设备中的多声道平台音频数据传输装置,其包括:
主芯片810和音频协处理器芯片820,所述主芯片810包括:
解码器811,所述解码器811用于接收混合音频数据,并对所述混合音频数据进行解析与识别,得到每一个声道的音频数据;所述混合音频数据中包含有:左声道L,右声道R,左环绕SL,右环绕SR,左天空TOPL,右天空TOPR,中置Center,重低音Woofer等八个声道的音频数据。在本实施例中,解码器先将混合音频数据进行解码,然后对解码后的音频数据分别进行识别,分别得到八个声道的音频数据;
重编码器812,所述重编码器812与所述解码器811相连接,用于将每一个声道的音频数据进行处理,并按预定的组合排列方式,将三个以上声道的音频数据置入一路音频通道中,生成音频编码数据。
其具体可以为:将三个以上声道的音频数据置入一路音频通道中,生成音频编码数据;如将左声道L,右声道R,左环绕SL,三个声道的音频数据整合到一路音频通道I2S0中,得到一路音频编码数据,将右环绕SR,左天空TOPL,右天空TOPR三个声道的音频数据整合到另一路音频通道I2S1中,得到另一路音频编码数据;
音频通道813,所述音频通道813的数量为两路以上,且每路所述音频通道813均与所述重编码器812相连接,用于对所有音频编码数据和组合排列信息进行输出;
所述音频协处理器芯片820与所有音频通道813相连接,用于接收所有音频通道813中的音频编码数据,并按所述组合排列信息,将所接收到的音频编码数据进行解码并复原,得到每一个声道的音频数据;由于组合排列信息记录有每个声道的音频数据在音频通道和音频编码数据中的位置,故根据组合排列信息在各音频通道中对音频编码数据的不同部分进行截取,即可得到对应声道的音频数据,如:左声道L的音频数据被放置于I2S0的左声道前端的16bit中,则将I2S0的左声道前端的16bit进行截取,即可得到左声道L的音频数据。
在本实施例中,所述装置还包括:
功放器830,所述功放器830与所述音频协处理器芯片820相连接,用于将每一个声道的音频数据进行播放输出。
在本实施例中,所述重编码器812包括:
传输音频调整模块,所述传输音频调整模块用于改变所述音频通道813中一条传输音频数据的采样位宽和/或采样速率,并按照改变后的采样位宽和/或采样速率,在一条传输音频数据中置入相适应数量声道的音频数据;
音频数据拆分模块,所述音频数据拆分模块用于将预定的声道的音频数据进行拆分。
在本实施例中,所述音频协处理器芯片820包括:
缓存模块,所述缓存模块用于对所有接收到的音频数据进行缓存。
图11是本申请实施例一提供的显示设备示意图,参考图11,本申请还提供一种显示设备,至少包括:显示屏91,被配置为呈现图像数据;扬声器92,被配置为再现声音数据。
在一些实施示例中,显示设备还可以包括:背光组件94,背光组件94位于显示屏91下方,通常是一些光学组件,用于供应充足的亮度与分布均匀的光源,使显示屏91能正常显示影像。在一些相关技术中,背光组件可以包括LED灯条,还可以包括自动发光的灯板。
在一些实施例中,显示设备还可以包括:背板95,通常背板95上面冲压形成一些凸包结构,扬声器92等器件,通过螺钉或者挂钩固定在凸包上。
在一些实施例中,显示设备还可以包括:后壳98,其盖设在显示屏91的背面上,以隐藏背光组件94,扬声器92等显示装置的零部件,起到美观的效果。
在一些实施例中,显示设备还可以包括:主板96和电源板97,它们可以独立设置成两块板子,也可以合并在一块板子上。
在一些实施例中,显示设备还包括遥控器93。
图12是本申请实施例一提供的显示设备的硬件配置框图。如图12所示,显示设备200中可以包括调谐解调器220、通信器230、检测器240、外部装置接口250、控制器210、存储器290、用户输入接口、视频处理器260-1、音频处理器260-2、显示屏280、音频输出接口270、供电电源。
调谐解调器220,通过有线或无线方式接收广播电视信号,可以进行放大、混频和谐振等调制解调处理,用于从多个无线或有线广播电视信号中解调出用户所选择电视频道的频率中所携带的音视频信号,以及附加信息(例如EPG数据信号)。
调谐解调器220,可根据用户选择,以及由控制器210控制,响应用户选择的电视频道频率以及该频率所携带的电视信号。
调谐解调器220,根据电视信号广播制式不同,可以接收信号的途径有很多种,诸如:地面广播、有线广播、卫星广播或互联网广播等;以及根据调制类型不同,可以是数字调制方式,也可以是模拟调制方式;以及根据接收电视信号种类不同,可以解调模拟信号和数字信号。
在其他一些示例性实施例中,调谐解调器220也可设于外置设备中,如外置机顶盒等。这样,机顶盒经过调制解调后输出电视音视频信号,经过外部装置接口250输入至显示设备200中。
通信器230是用于根据各种通信协议类型与外部设备或外部服务器进行通信的组件。例如:通信器230可以包括WIFI模块231,蓝牙通信协议模块232,有线以太网通信协议模块233等其他网络通信协议模块或近场通信协议模块。
显示设备200可以通过通信器230与外部控制设备或内容提供设备之间建立控制信号和数据信号的连接。例如,通信器可根据控制器的控制接收遥控器100的控制信号。
检测器240,是显示设备200用于采集外部环境或与外部交互的信号的组件。检测器240可以包括光接收器242,用于采集环境光线强度的传感器,可以通过采集环境光来自适应显示参数变化等;还可以包括图像采集器241,如相机、摄像头等,可以用于采集外部环境场景,以及用于采集用户的属性或与用户交互手势,可以自适应变化显示参数,也可以识别用户手势,以实现与用户之间互动的功能。
在其他一些示例性实施例中,检测器240,还可包括温度传感器,如通过感测环境温度,显示设备200可自适应调整图像的显示色温。
在一些实施例中,当处于温度偏高的环境时,可将显示设备200显示图像的色温调整为偏冷色调;当处于温度偏低的环境时,可以将显示设备200显示图像的色温调整为偏暖色调。
在其他一些示例性实施例中,检测器240还可包括声音采集器,如麦克风,可以用于接收用户的声音,包括用户控制显示设备200的控制指令的语音信号,或采集环境声音,用于识别环境场景类型,显示设备200可以自适应环境噪声。
外部装置接口250,提供控制器210控制显示设备200与外部其他设备间数据传输的组件。外部装置接口可按照有线/无线方式与诸如机顶盒、游戏装置、笔记本电脑等的外部设备连接,可接收外部设备的诸如视频信号(例如运动图像)、音频信号(例如音乐)、附加信息(例如EPG)等数据。
其中,外部装置接口250可以包括:高清多媒体接口(HDMI)端子251、复合视频消隐同步(CVBS)端子252、模拟或数字分量端子253、通用串行总线(USB)端子254、红绿蓝(RGB)端子(图中未示出)等任一个或多个。
控制器210,通过运行存储在存储器290上的各种软件控制程序(如操作系统和各种应用程序),来控制显示设备200的工作和响应用户的操作。
如图12所示,控制器210包括随机存取存储器RAM213、只读存储器ROM214、图形处理器216、CPU处理器212、通信接口218、以及通信总线。其中,RAM213和ROM214以及图形处理器216、CPU处理器212、通信接口218通过总线相连接。
ROM214,用于存储各种系统启动的指令。如在收到开机信号时,显示设备200电源开始启动,CPU处理器212运行ROM214中的系统启动指令,将存储在存储器290中的操作系统拷贝至RAM213中,以开始运行启动操作系统。当操作系统启动完成后,CPU处理器212再将存储器290中的各种应用程序拷贝至RAM213中,然后开始运行启动各种应用程序。
图形处理器216,用于产生各种图形对象,如:图标、操作菜单、以及用户输入指令的显示图形等。其包括运算器,通过接收用户输入的各种交互指令进行运算,根据显示属性显示各种对象;以及包括渲染器,用于对基于运算器得到的各种对象进行渲染,渲染结果显示在显示屏280上。
CPU处理器212,用于执行存储在存储器290中操作系统和应用程序指令。以及根据接收外部输入的各种交互指令,来执行各种应用程序、数据和内容,以便最终显示和播放各种音视频内容。
在一些示例性实施例中,CPU处理器212,可以包括多个处理器。多个处理器可包括一个主处理器以及多个或一个子处理器。主处理器,用于在预加电模式中执行显示设备200一些操作,和/或在正常模式下显示画面的操作。多个或一个子处理器,用于执行在待机模式等状态下的一种操作。
通信接口,可包括第一接口218-1到第n接口218-n。这些接口可以是经由网络被连接到外部设备的网络接口。
控制器210可以控制显示设备200的整体操作。例如:响应于接收到用于选择在显示屏280上显示UI对象的用户命令,控制器210便可以执行与由用户命令选择的对象有关的操作。
其中,所述对象可以是可选对象中的任何一个,例如超链接或图标。与所选择的对象有关的操作,例如:显示连接到超链接的页面、文档、图像等的操作,或者执行与图标相对应的程序的操作。用于选择UI对象的用户命令,可以是通过连接到显示设备200的各种输入装置(例如,鼠标、键盘、触摸板等)输入的命令,或者与由用户说出的语音相对应的语音命令。
存储器290,包括存储用于驱动和控制显示设备200的各种软件模块。如:存储器290中存储的各种软件模块,包括:基础模块、检测模块、通信模块、显示控制模块、浏览器模块、和各种服务模块等。
其中,基础模块是用于显示设备200中各个硬件之间信号通信、并向上层模块发送处理和控制信号的底层软件模块。检测模块是用于从各种传感器或用户输入接口中收集各种信息,并进行数模转换以及分析管理的管理模块。
例如:语音识别模块中包括语音解析模块和语音指令数据库模块。显示控制模块是用于控制显示屏280进行显示图像内容的模块,可以用于播放多媒体图像内容和UI界面等信息。通信模块,是用于与外部设备之间进行控制和数据通信的模块。浏览器模块,是用于执行浏览服务器之间数据通信的模块。服务模块,是用于提供各种服务以及各类应用程序在内的模块。
同时,存储器290还用于存储接收外部数据和用户数据、各种用户界面中各个项目的图像以及焦点对象的视觉效果图等。
用户输入接口,用于将用户的输入信号发送给控制器210,或者,将从控制器输出的信号传送给用户。在一些实施例中,控制装置(例如移动终端或遥控器)可将用户输入的诸如电源开关信号、频道选择信号、音量调节信号等输入信号发送至用户输入接口,再由用户输入接口转送至控制器;或者,控制装置可接收经控制器处理从用户输入接口输出的音频、视频或数据等输出信号,并且显示接收的输出信号或将接收的输出信号输出为音频或振动形式。
在一些实施例中,用户可在显示屏280上显示的图形用户界面(GUI)中输入用户命令,则用户输入接口通过图形用户界面(GUI)接收用户输入的命令。或者,用户可通过输入特定的声音或手势来输入用户命令,则用户输入接口通过传感器识别出声音或手势,来接收用户输入的命令。
视频处理器260-1,用于接收视频信号,根据输入信号的标准编解码协议,进行解压缩、解码、缩放、降噪、帧率转换、分辨率转换、图像合成等视频数据处理,可得到直接在显示屏280上显示或播放的视频信号。
在一些实施例中,视频处理器260-1,包括解复用模块、视频解码模块、图像合成模块、帧率转换模块、显示格式化模块等。
其中,解复用模块,用于对输入音视频数据流进行解复用处理,如输入MPEG-2,则解复用模块进行解复用成视频信号和音频信号等。
视频解码模块,用于对解复用后的视频信号进行处理,包括解码和缩放处理等。
图像合成模块,如图像合成器,其用于将图形生成器根据用户输入或自身生成的GUI信号,与缩放处理后视频图像进行叠加混合处理,以生成可供显示的图像信号。
帧率转换模块,用于对输入视频的帧率进行转换,如将输入的24Hz、25Hz、30Hz、60Hz视频的帧率转换为60Hz、120Hz或240Hz的帧率,其中,输入帧率可以与源视频流有关,输出帧率可以与显示屏的更新率有关。对于通常格式的输入,可采用如插帧等方式实现帧率转换。
显示格式化模块,用于将帧率转换模块输出的信号,改变为符合诸如显示器显示格式的信号,如将帧率转换模块输出的信号进行格式转换以输出RGB数据信号。
显示屏280,用于接收源自视频处理器260-1输入的图像信号,进行显示视频内容和图像以及菜单操控界面。显示屏280包括用于呈现画面的显示屏组件以及驱动图像显示的驱动组件。显示视频内容,可以来自调谐解调器220接收的广播信号中的视频,也可以来自通信器或外部设备接口输入的视频内容。显示屏280,同时显示显示设备200中产生且用于控制显示设备200的用户操控界面UI。
以及,根据显示屏280类型不同,还包括用于驱动显示的驱动组件。或者,倘若显示屏280为一种投影显示器,还可以包括一种投影装置和投影屏幕。
音频处理器260-2,用于接收音频信号,根据输入信号的标准编解码协议,进行解压缩和解码,以及降噪、数模转换、和放大处理等音频数据处理,得到可以在扬声器272中播放的音频信号。
音频输出接口270,用于在控制器210的控制下接收音频处理器260-2输出的音频信号。音频输出接口可包括扬声器272,或输出至外接设备的发声装置的外接音响输出端子274,如:外接音响端子或耳机输出端子等。
在其他一些示例性实施例中,视频处理器260-1可以由一个或多个芯片组成。音频处理器260-2,也可以由一个或多个芯片组成。
以及,在其他一些示例性实施例中,视频处理器260-1和音频处理器260-2,可以为单独的芯片,也可以与控制器210一起集成在一个或多个芯片中。
供电电源,用于在控制器210的控制下,将外部电源输入的电力为显示设备200提供供电支持。供电电源可以包括安装在显示设备200内部的内置电源电路,也可以是安装在显示设备200外部的电源,如在显示设备200中提供外接电源的电源接口。
本申请实施例还提供一种显示设备,包括:
显示屏,被配置为呈现图像数据;
扬声器,被配置为再现声音数据;
解码器,被配置为接收混合音频数据,并对所述混合音频数据进行解析与识别,得到多个声道的音频数据;
重编码器,所述重编码器与所述解码器相连接,用于将多个声道的音频数据进行处理,并按预定的组合排列方式,将超过两个声道的音频数据置入一路音频通道中,生成音频编码数据;
多路音频通道,每路所述音频通道均与所述重编码器相连接,用于对所有音频编码数据和组合排列信息进行输出;
音频处理器,被配置为接收所有音频通道中的音频编码数据,并按所述组合排列信息,将所接收到的音频编码数据进行解码并复原,得到多个声道的音频数据,并输出到所述扬声器。
在一些实施例中,所述重编码器包括:
传输音频调整模块,所述传输音频调整模块用于改变所述音频通道中一条传输音频数据的采样位宽和/或采样速率,并按照改变后的采样位宽和/或采样速率,在一条传输音频数据中置入相适应数量声道的音频数据;
音频数据拆分模块,所述音频数据拆分模块用于将预定的声道的音频数据进行拆分。
在一些实施例中,所述音频处理器还包括缓存模块,所述缓存模块用于对所有接收到的音频数据进行缓存。
具体的解码、重编码等实现步骤参考上述实施例的介绍,这里不再赘述。
本申请实施例还提供一种显示设备,包括:
显示屏,被配置为呈现图像数据;
扬声器,被配置为再现声音数据;
控制器,被配置为接收混合音频数据,并对所述混合音频数据进行解析与识别,得到多个声道的音频数据;
将多个声道的音频数据进行处理,并按预定的组合排列方式,将超过两个声道的音频数据置入一路音频通道中,生成音频编码数据;
通过两路以上的音频通道,对所有音频编码数据和组合排列信息进行输出;
接收所有音频编码数据,并按所述组合排列信息,将所接收到的音频编码数据进行解码并复原,得到多个声道的音频数据。
在一些实施例中,经过解码复原后的多个声道的音频数据输入到扬声器中,进行声音的再现。
在一些实施例中,所述控制器还被配置为改变所述音频通道中一条音频编码数据的采样位宽和/或采样速率,并按照改变后的采样位宽和/或采样速率,在一条音频编码数据中置入相适应数量声道的音频数据。
在一些实施例中,所述控制器还被配置为将预定的声道的音频数据进行拆分,并将所有声道的音频数据进行重新编号。
在一些实施例中,所述控制器还被配置为按照所述组合排列信息,从音频编码数据中的预定位置上截取预定采样位宽和/或采样速率的音频数据,并为所截取到的音频数据添加上为对应声道的属性信息。
在一些实施例中,所述控制器还被配置为对所生成的音频编码数据进行缓存,判断所有的需要进行传输的音频通道中是否均生成有音频编码数据,若均生成有音频编码数据,则开始通过音频通道,对所有音频编码数据和组合排列信息进行输出;若未生成有音频编码数据,则等待所有的需要进行传输的音频通道均生成有音频编码数据。
在一些实施例中,所述控制器还被配置为对所有接收到的音频数据进行缓存。
具体的实现过程参见上述实施例的介绍,这里不再展开说明。
在本说明书的描述中,参考术语“一个实施方式”、“一些实施方式”、“示意性实施方式”、“示例”、“具体示例”、或“一些示例”等的描述意指结合实施方式或示例描述的具体特征、结构、材料或者特点包含于本申请的至少一个实施方式或示例中。在本说明书中,对上述术语的示意性表述不一定指的是相同的实施方式或示例。而且,描述的具体特征、结构、材料或者特点可以在任何的一个或多个实施方式或示例中以合适的方式结合。
以上所述仅为本申请的较佳实施例而已,并不用以限制本申请,凡在本申请的精神和原则之内所作的任何修改、等同替换和改进等,均应包含在本申请的保护范围之内。

Claims (16)

  1. 一种多声道平台音频数据传输方法,其特征在于,包括:
    接收混合音频数据,并对所述混合音频数据进行解析与识别,得到多个声道的音频数据;
    将多个声道的音频数据进行处理,并按预定的组合排列方式,将超过两个声道的音频数据置入一路音频通道中,生成音频编码数据;
    通过两路以上的音频通道,对所有音频编码数据和组合排列信息进行输出;
    接收所有音频编码数据,并按所述组合排列信息,将所接收到的音频编码数据进行解码并复原,得到多个声道的音频数据。
  2. 根据权利要求1所述的多声道平台音频数据传输方法,其特征在于,所述将三个以上声道的音频数据置入一路音频通道中,生成音频编码数据包括:
    改变所述音频通道中一条音频编码数据的采样位宽和/或采样速率,并按照改变后的采样位宽和/或采样速率,在一条音频编码数据中置入相适应数量声道的音频数据。
  3. 根据权利要求2所述的多声道平台音频数据传输方法,其特征在于,所述将多个声道的音频数据进行处理还包括:
    将预定的声道的音频数据进行拆分,并将所有声道的音频数据进行重新编号。
  4. 根据权利要求3所述的多声道平台音频数据传输方法,其特征在于,所述按所述组合排列信息,将所接收到的音频数据进行解码并复原,得到多个声道的音频数据包括:
    按照所述组合排列信息,从音频编码数据中的预定位置上截取预定采样位宽和/或采样速率的音频数据,并为所截取到的音频数据添加上为对应声道的属性信息。
  5. 根据权利要求4所述的多声道平台音频数据传输方法,其特征在于,所述通过两路以上的音频通道,对所有音频编码数据和组合排列信息进行输出之前包括:
    对所生成的音频编码数据进行缓存,判断所有的需要进行传输的音频通道中是否均生成有音频编码数据,若均生成有音频编码数据,则开始通过音频通道,对所有音频编码数据和组合排列信息进行输出;若未生成有音频编码数据,则等待所有的需要进行传输的音频通道均生成有音频编码数据。
  6. 根据权利要求5所述的多声道平台音频数据传输方法,其特征在于,所述接收所有音频通道中的音频数据之后还包括:
    对所有接收到的音频数据进行缓存。
  7. 一种多声道平台音频数据传输装置,其特征在于,包括:
    主芯片和音频协处理器芯片,所述主芯片包括:
    解码器,所述解码器用于接收混合音频数据,并对所述混合音频数据进行解析与识别,得到多个声道的音频数据;
    重编码器,所述重编码器与所述解码器相连接,用于将多个声道的音频数据进行处理,并按预定的组合排列方式,将超过两个声道的音频数据置入一路音频通道中,生成音频编码数据;
    音频通道,所述音频通道的数量为两路以上,且每路所述音频通道均与所述重编码器相连接,用于对所有音频编码数据和组合排列信息进行输出;
    所述音频协处理器芯片与所有音频通道相连接,用于接收所有音频通道中的音频编码数据,并按所述组合排列信息,将所接收到的音频编码数据进行解码并复原,得到多个声道的音频数据。
  8. 一种显示设备,其特征在于,包括:
    显示屏,被配置为呈现图像数据;
    扬声器,被配置为再现声音数据;
    解码器,被配置为接收混合音频数据,并对所述混合音频数据进行解析与识别,得到多个声道的音频数据;
    重编码器,所述重编码器与所述解码器相连接,用于将多个声道的音频数据进行处理,并按预定的组合排列方式,将超过两个声道的音频数据置入一路音频通道中,生成音频编码数据;
    多路音频通道,每路所述音频通道均与所述重编码器相连接,用于对所有音频编码数据和组合排列信息进行输出;
    音频处理器,被配置为接收所有音频通道中的音频编码数据,并按所述组合排列信息,将所接收到的音频编码数据进行解码并复原,得到多个声道的音频数据,并输出到所述扬声器。
  9. 根据权利要求8所述的显示设备,其特征在于,所述重编码器包括:
    传输音频调整模块,所述传输音频调整模块用于改变所述音频通道中一条传输音频数据的采样位宽和/或采样速率,并按照改变后的采样位宽和/或采样速率,在一条传输音频数据中置入相适应数量声道的音频数据;
    音频数据拆分模块,所述音频数据拆分模块用于将预定的声道的音频数据进行拆分。
  10. 根据权利要求9所述的显示设备,其特征在于,所述音频处理器还包括缓存模块,所述缓存模块用于对所有接收到的音频数据进行缓存。
  11. 一种显示设备,包括:
    显示屏,被配置为呈现图像数据;
    扬声器,被配置为再现声音数据;
    控制器,被配置为接收混合音频数据,并对所述混合音频数据进行解析与识别,得到多个声道的音频数据;
    将多个声道的音频数据进行处理,并按预定的组合排列方式,将超过两个声道的音频数据置入一路音频通道中,生成音频编码数据;
    通过两路以上的音频通道,对所有音频编码数据和组合排列信息进行输出;
    接收所有音频编码数据,并按所述组合排列信息,将所接收到的音频编码数据进行解码并复原,得到多个声道的音频数据。
  12. 根据权利要求11所述的显示设备,其特征在于,所述控制器还被配置为改变所述音频通道中一条音频编码数据的采样位宽和/或采样速率,并按照改变后的采样位宽和/或采样速率,在一条音频编码数据中置入相适应数量声道的音频数据。
  13. 根据权利要求12所述的显示设备,其特征在于,所述控制器还被配置为将预定的声道的音频数据进行拆分,并将所有声道的音频数据进行重新编号。
  14. 根据权利要求13所述的显示设备,其特征在于,所述控制器还被配置为按照所述组合排列信息,从音频编码数据中的预定位置上截取预定采样位宽和/或采样速率的音频数据,并为所截取到的音频数据添加上为对应声道的属性信息。
  15. 根据权利要求14所述的显示设备,其特征在于,所述控制器还被配置为对所生成的音频编码数据进行缓存,判断所有的需要进行传输的音频通道中是否均生成有音频编码数据,若均生成有音频编码数据,则开始通过音频通道,对所有音频编码数据和组合排列信息进行输出;若未生成有音频编码数据,则等待所有的需要进行传输的音频通道均生成有音频编码数据。
  16. 根据权利要求15所述的显示设备,其特征在于,所述控制器还被配置为对所有接收到的音频数据进行缓存。
PCT/CN2020/070887 2019-07-09 2020-01-08 一种多声道平台音频数据传输方法及其装置、显示设备 WO2021004045A1 (zh)

Applications Claiming Priority (16)

Application Number Priority Date Filing Date Title
CN201910616404.6 2019-07-09
CN201910614701.7A CN112216310B (zh) 2019-07-09 2019-07-09 音频处理方法与装置、以及多声道系统
CN201910614701.7 2019-07-09
CN201910613254.3A CN112216290A (zh) 2019-07-09 2019-07-09 音频数据的传输方法、装置及播放设备
CN201910618541 2019-07-09
CN201910613254.3 2019-07-09
CN201910615836.5A CN112218019B (zh) 2019-07-09 2019-07-09 一种音频数据传输方法及装置
CN201910613160.6 2019-07-09
CN201910613160 2019-07-09
CN201910616404.6A CN112218016B (zh) 2019-07-09 2019-07-09 显示装置
CN201910618541.3 2019-07-09
CN201910615836.5 2019-07-09
CN201910659488.1A CN112218020B (zh) 2019-07-09 2019-07-22 一种多声道平台音频数据传输方法及其装置
CN201910659488.1 2019-07-22
CN201910710346.3 2019-08-02
CN201910710346.3A CN112218210B (zh) 2019-07-09 2019-08-02 显示装置、音频播放方法及装置

Publications (1)

Publication Number Publication Date
WO2021004045A1 true WO2021004045A1 (zh) 2021-01-14

Family

ID=74114387

Family Applications (5)

Application Number Title Priority Date Filing Date
PCT/CN2020/070929 WO2021004049A1 (zh) 2019-07-09 2020-01-08 显示装置、音频数据传输方法及装置
PCT/CN2020/070887 WO2021004045A1 (zh) 2019-07-09 2020-01-08 一种多声道平台音频数据传输方法及其装置、显示设备
PCT/CN2020/070890 WO2021004046A1 (zh) 2019-07-09 2020-01-08 音频处理方法与装置、以及显示设备
PCT/CN2020/070891 WO2021004047A1 (zh) 2019-07-09 2020-01-08 显示装置、音频播放方法
PCT/CN2020/070902 WO2021004048A1 (zh) 2019-07-09 2020-01-08 显示装置、音频数据的传输方法

Family Applications Before (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/070929 WO2021004049A1 (zh) 2019-07-09 2020-01-08 显示装置、音频数据传输方法及装置

Family Applications After (3)

Application Number Title Priority Date Filing Date
PCT/CN2020/070890 WO2021004046A1 (zh) 2019-07-09 2020-01-08 音频处理方法与装置、以及显示设备
PCT/CN2020/070891 WO2021004047A1 (zh) 2019-07-09 2020-01-08 显示装置、音频播放方法
PCT/CN2020/070902 WO2021004048A1 (zh) 2019-07-09 2020-01-08 显示装置、音频数据的传输方法

Country Status (1)

Country Link
WO (5) WO2021004049A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113301494B (zh) * 2021-05-25 2022-09-02 亿咖通(湖北)技术有限公司 一种音频播放系统、控制设备及功放设备

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1926610A (zh) * 2004-03-12 2007-03-07 诺基亚公司 基于编码的多声道音频信号合成单声道音频信号
US20090313028A1 (en) * 2008-06-13 2009-12-17 Mikko Tapio Tammi Method, apparatus and computer program product for providing improved audio processing
CN102572588A (zh) * 2011-12-14 2012-07-11 中兴通讯股份有限公司 一种实现机顶盒混音的方法及装置
CN108076306A (zh) * 2017-12-29 2018-05-25 中兴通讯股份有限公司 会议实现方法、装置、设备和系统、计算机可读存储介质

Family Cites Families (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006005701A (ja) * 2004-06-18 2006-01-05 D & M Holdings Inc マルチチャンネルオーディオシステム
CN101079265B (zh) * 2007-07-11 2011-06-08 无锡中星微电子有限公司 一种语音信号处理系统
CN201063781Y (zh) * 2007-07-24 2008-05-21 青岛海信电器股份有限公司 音视频编码和解码装置
US8199941B2 (en) * 2008-06-23 2012-06-12 Summit Semiconductor Llc Method of identifying speakers in a home theater system
US20100057471A1 (en) * 2008-08-26 2010-03-04 Hongwei Kong Method and system for processing audio signals via separate input and output processing paths
CN103237259A (zh) * 2013-03-29 2013-08-07 天脉聚源(北京)传媒科技有限公司 一种视频声道处理装置及方法
JP6013646B2 (ja) * 2013-04-05 2016-10-25 ドルビー・インターナショナル・アーベー オーディオ処理システム
CN104468915A (zh) * 2013-09-24 2015-03-25 卓望数码技术(深圳)有限公司 一种移动终端的测试装置和测试系统
CN103714847B (zh) * 2013-12-31 2016-05-04 中山大学花都产业科技研究院 一种基于dsp的多通道数字音频处理器
CN103945310B (zh) * 2014-04-29 2017-01-11 华为终端有限公司 一种传输方法、移动终端、多声道耳机及音频播放系统
CN105992040A (zh) * 2015-02-15 2016-10-05 深圳市民展科技开发有限公司 多声道音频数据发送方法、音频数据同步播放方法及装置
CN105578347A (zh) * 2015-12-25 2016-05-11 数源科技股份有限公司 一体化汽车电子产品的音频系统
CN107135301A (zh) * 2016-02-29 2017-09-05 宇龙计算机通信科技(深圳)有限公司 一种音频数据处理方法及装置
CN105959438A (zh) * 2016-07-06 2016-09-21 惠州Tcl移动通信有限公司 一种音频多通路输出扬声器的处理方法、系统及手机
CN206117991U (zh) * 2016-08-10 2017-04-19 深圳市米尔声学科技发展有限公司 音频处理设备
CN106911987B (zh) * 2017-02-21 2019-11-05 珠海全志科技股份有限公司 主控端、设备端、传输多声道音频数据的方法和系统
CN206879039U (zh) * 2017-07-05 2018-01-12 珠海市杰理科技股份有限公司 支持usb音频的无线麦克风及智能终端卡拉ok系统
CN109996167B (zh) * 2017-12-31 2020-09-11 华为技术有限公司 一种多终端协同播放音频文件的方法及终端
CN108520763B (zh) * 2018-04-13 2021-07-16 广州醇美电子有限公司 一种数据存储方法、装置、设备和存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1926610A (zh) * 2004-03-12 2007-03-07 诺基亚公司 基于编码的多声道音频信号合成单声道音频信号
US20090313028A1 (en) * 2008-06-13 2009-12-17 Mikko Tapio Tammi Method, apparatus and computer program product for providing improved audio processing
CN102572588A (zh) * 2011-12-14 2012-07-11 中兴通讯股份有限公司 一种实现机顶盒混音的方法及装置
CN108076306A (zh) * 2017-12-29 2018-05-25 中兴通讯股份有限公司 会议实现方法、装置、设备和系统、计算机可读存储介质

Also Published As

Publication number Publication date
WO2021004046A1 (zh) 2021-01-14
WO2021004048A1 (zh) 2021-01-14
WO2021004047A1 (zh) 2021-01-14
WO2021004049A1 (zh) 2021-01-14

Similar Documents

Publication Publication Date Title
US11895360B2 (en) Method for waking up audio device, and display apparatus
CN111757171A (zh) 一种显示设备及音频播放方法
WO2021169141A1 (zh) 在显示设备中显示音轨语言的方法及显示设备
WO2018043977A1 (en) Image display apparatus and operating method thereof
WO2021109354A1 (zh) 媒体流数据播放方法及设备
CN111601134B (zh) 一种显示设备中时间显示方法及显示设备
WO2020098504A1 (zh) 一种视频切换的控制方法及显示设备
CN111405221B (zh) 显示设备及录制文件列表的显示方法
CN101383932B (zh) 信息处理系统及装置和方法、遥控器
US11991231B2 (en) Method for playing streaming media file and display apparatus
CN111654743B (zh) 音频播放方法及显示设备
WO2021189708A1 (zh) 一种显示设备开启屏幕保护的方法及显示设备
WO2021004045A1 (zh) 一种多声道平台音频数据传输方法及其装置、显示设备
CN101325676A (zh) 一种音视频解码装置
WO2021227232A1 (zh) 一种语言选项和国家选项的显示方法及显示设备
KR20150059483A (ko) 영상표시장치 및 영상표시장치의 구동방법, 음향출력장치 및 음향출력장치의 구동방법
CN113497906B (zh) 一种音量调节方法、装置及终端
CN111343498B (zh) 一种静音控制方法、装置及智能电视
CN113542829A (zh) 分屏显示方法、显示终端及可读存储介质
US20110162032A1 (en) Television system, television, and set top box
CN112702549B (zh) 一种声音输出方法和显示设备
CN113115105B (zh) 一种显示设备及配置wisa扬声器的提示方法
US20220188069A1 (en) Content-based voice output method and display apparatus
JP2008193258A (ja) マルチ画面表示装置及びマルチ画面表示制御方法
CN116915350A (zh) 显示设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20836347

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 20836347

Country of ref document: EP

Kind code of ref document: A1