CN110085241B - Data encoding method, data encoding device, computer storage medium and data encoding equipment - Google Patents

Data encoding method, data encoding device, computer storage medium and data encoding equipment

Info

Publication number
CN110085241B
CN110085241B (application CN201910350133.4A)
Authority
CN
China
Prior art keywords
data
path
audio input
encoded
input data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910350133.4A
Other languages
Chinese (zh)
Other versions
CN110085241A (en)
Inventor
刘磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN201910350133.4A priority Critical patent/CN110085241B/en
Publication of CN110085241A publication Critical patent/CN110085241A/en
Application granted granted Critical
Publication of CN110085241B publication Critical patent/CN110085241B/en

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L — SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Compression, Expansion, Code Conversion, And Decoders (AREA)

Abstract

Embodiments of the disclosure disclose a data encoding method, a data encoding device, a computer storage medium and data encoding equipment. The data encoding method comprises the following steps: receiving at least one path of audio input data transmitted from at least one data channel; determining a frame clock signal based on the sampling rate of the at least one path of audio input data; encoding each path of audio input data to obtain encoded audio data, and establishing a correspondence between each path of encoded audio data and the periods of the frame clock signal; combining the encoded audio data into a set of encoded data based on the correspondence; and sending the set of encoded data to a processor. The embodiments of the disclosure enable one processor to process multiple channels of audio data, expand the number of audio channels the processor can handle, and save hardware resources. In addition, the embodiments can be applied to various processors, depend little on processor hardware upgrades or replacement, and help reduce development workload.

Description

Data encoding method, data encoding device, computer storage medium and data encoding equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a data encoding method and apparatus, a computer storage medium, and a data encoding device.
Background
In common products such as mobile phones, tablet computers and smart speakers, there are usually only 1 or 2 audio input devices (e.g., Microphone (MIC) interfaces), and a CPU (Central Processing Unit) needs only 1 channel of I2S (Inter-IC Sound, an integrated-circuit audio bus) input to complete MIC signal acquisition and transmission. With advances in technology and rising user-experience requirements, some scenarios need to support multiple MICs and multiple speakers to implement functions such as voice noise reduction, echo cancellation and sound source localization. To support the acquisition and transmission of multiple voice channels, the CPU is typically required to support a TDM (Time Division Multiplexing) mode or multiple I2S inputs.
Disclosure of Invention
The embodiment of the disclosure provides a data encoding method and device, a computer storage medium and data encoding equipment.
According to an aspect of an embodiment of the present disclosure, there is provided a data encoding method including: receiving at least one path of audio input data transmitted from at least one data channel; determining a frame clock signal based on the sampling rate of the at least one path of audio input data; encoding each path of audio input data to obtain encoded audio data, and establishing a correspondence between each path of encoded audio data and the periods of the frame clock signal; combining the encoded audio data into a set of encoded data based on the correspondence; and sending the set of encoded data to a processor.
According to another aspect of the embodiments of the present disclosure, there is provided a data encoding apparatus including: a receiving module for receiving at least one path of audio input data transmitted from at least one data channel; a determining module for determining a frame clock signal based on the sampling rate of the at least one path of audio input data; an encoding module for encoding each path of audio input data to obtain encoded audio data and establishing a correspondence between each path of encoded audio data and the periods of the frame clock signal; a combining module for combining the encoded audio data into a set of encoded data based on the correspondence; and a sending module for sending the set of encoded data to the processor.
According to another aspect of the embodiments of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the above-described data encoding method.
According to another aspect of the embodiments of the present disclosure, there is provided data encoding equipment including: a data processing unit; and a memory for storing instructions executable by the data processing unit; wherein the data processing unit is configured to execute the above data encoding method.
Based on the data encoding method, device, computer storage medium and data encoding equipment provided by the above embodiments of the present disclosure, at least one path of audio input data is received, a frame clock signal is determined based on its sampling rate, each path of audio input data is encoded to obtain encoded audio data, a correspondence is established between each path of encoded audio data and the periods of the frame clock signal, and finally the encoded audio data are combined into a set of encoded data based on the correspondence and sent to a processor. A set of encoded data is thus obtained from the at least one path of audio input data. Because this set of encoded data can be transmitted to the processor through one data channel, one processor can process multiple paths of audio data, the number of audio channels the processor handles is expanded, and transmitting audio data over multiple transmission channels is avoided, saving hardware resources. In addition, the embodiment can be applied to various processors, depends little on processor hardware upgrades or replacement, and helps reduce development workload and save development time.
The technical solution of the present disclosure is further described in detail by the accompanying drawings and examples.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
Fig. 1 is a system diagram to which the present disclosure is applicable.
Fig. 2 is a flowchart illustrating a data encoding method according to an exemplary embodiment of the disclosure.
FIG. 3 is an exemplary diagram of a set of encoded data being sent to a processor via a single channel based on a frame clock signal.
Fig. 4 is a schematic diagram of an application scenario of the data encoding method of the embodiment of the present disclosure.
Fig. 5 is a flowchart illustrating a data encoding method according to another exemplary embodiment of the present disclosure.
Fig. 6 is an exemplary diagram of two adjacent paths of encoded audio data being spaced apart by a data spacing bit and the encoded audio data being combined according to a channel number.
Fig. 7 is a schematic structural diagram of a data encoding apparatus according to an exemplary embodiment of the present disclosure.
Fig. 8 is a schematic structural diagram of a data encoding apparatus according to another exemplary embodiment of the present disclosure.
Fig. 9 is a block diagram of a data encoding apparatus according to an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning, nor is the necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure is only one kind of association relationship describing an associated object, and means that three kinds of relationships may exist, for example, a and/or B may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" in the present disclosure generally indicates that the former and latter associated objects are in an "or" relationship.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to terminal devices, computer systems, servers, etc. that may operate with numerous other general purpose or special purpose computing system environments or configurations. Examples of well known terminal devices, computing systems, environments, and/or configurations that may be suitable for use with terminal devices, computer systems, servers, and the like include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set-top boxes, programmable consumer electronics, networked personal computers, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above, and the like.
The data encoding devices of the terminal devices, computer systems, servers, etc., may be described in the general context of computer system-executable instructions, such as program modules, being executed by computer systems. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
In common products such as mobile phones, tablet computers and smart speakers, there are usually only 1 or 2 MIC interfaces, and the CPU needs only 1 path of I2S input to complete MIC signal acquisition and transmission. However, with advances in technology and rising user-experience requirements, scenarios supporting multiple MICs and multiple speakers are needed to implement functions such as voice noise reduction, echo cancellation and sound source localization. To support the acquisition and transmission of multiple paths of voice, a CPU supporting only 1 path of I2S input cannot meet the requirement; the CPU must support a TDM mode or multiple paths of I2S input, which leads to platform replacement and increases development workload, development time and development cost.
Exemplary System
Fig. 1 illustrates an exemplary system architecture 100 of a data encoding method or data encoding apparatus to which embodiments of the present disclosure may be applied.
As shown in Fig. 1, the system architecture 100 may include a data encoding device 101, a processor 102, and at least one ADC (analog-to-digital converter) 103. The ADC 103 may collect multiple channels (CH1-CH8 as shown in the figure) of audio input data (e.g., audio input data collected from MICs, speakers, etc.) and transmit the audio input data to the data encoding device 101. The data encoding device 101 generates a set of encoded data by encoding the received multiple paths of audio input data and transmits the set of encoded data to the processor 102.
The data encoding device 101 may be any of various devices for processing digital signals, including but not limited to a DSP (Digital Signal Processor), a microcontroller (single-chip microcomputer), an FPGA (Field-Programmable Gate Array), and the like.
The data encoding device 101, the processor 102, and the ADC 103 may be disposed in various electronic devices, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, or an in-vehicle terminal.
It should be noted that the data encoding method provided by the embodiment of the present disclosure is generally executed by the data encoding apparatus 101, and accordingly, the data encoding apparatus 101 may be provided in an electronic apparatus.
It should be understood that the number of data encoding devices 101, processors 102 and ADCs 103 in fig. 1 is merely illustrative. There may be any number of data encoding devices, processors, and ADCs as desired for an implementation.
Exemplary method
Fig. 2 is a flowchart illustrating a data encoding method according to an exemplary embodiment of the disclosure. This embodiment can be applied to the data encoding device 101 shown in Fig. 1. As shown in Fig. 2, the method includes the following steps:
step 201, at least one path of audio input data transmitted from at least one data channel is received.
In this embodiment, an execution subject of the data encoding method (e.g., the data encoding device shown in Fig. 1) may receive at least one path of audio input data transmitted from at least one data channel, each data channel carrying one path of audio input data. The audio input data may be audio data transmitted by various devices. For example, the at least one path of audio input data may include, but is not limited to, at least one of: audio data collected by a sound collection device (e.g., a microphone), and audio data output by a sound output device (e.g., a speaker).
Step 202, determining a frame clock signal based on the sampling rate of at least one path of audio input data.
In this embodiment, the data encoding apparatus 101 shown in fig. 1 may determine the frame clock signal based on the sampling rate of at least one audio input data.
As an example, if the sampling rate of each path of audio input data is s, there are n paths of audio input data, and m paths of audio input data are transmitted per cycle of the frame clock signal, the frequency of the frame clock signal is determined to be s × (n/m). For example, if s = 16 kHz, n = 16, and m = 2, the frequency of the frame clock signal is 16 kHz × (16/2) = 128 kHz.
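The frequency rule above can be sketched as follows. This is a minimal illustration of the formula f = s × (n/m), not code from the patent; the function name is hypothetical.

```python
def frame_clock_frequency(sample_rate_hz: float, num_channels: int,
                          channels_per_cycle: int) -> float:
    """Frame clock frequency f = s * (n / m):
    s  - sampling rate of each path of audio input data,
    n  - number of paths of audio input data,
    m  - paths transmitted per frame-clock cycle."""
    return sample_rate_hz * (num_channels / channels_per_cycle)

# Example from the text: s = 16 kHz, n = 16 paths, m = 2 paths per cycle
freq = frame_clock_frequency(16_000, 16, 2)
print(freq)  # 128000.0, i.e. 128 kHz
```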
Step 203, encoding each path of audio input data in at least one path of audio input data to obtain encoded audio data, and establishing a corresponding relationship between each path of encoded audio data and a period of a frame clock signal.
In this embodiment, the data encoding apparatus 101 may encode each path of audio input data in at least one path of audio input data to obtain encoded audio data, and establish a corresponding relationship between each path of encoded audio data and a period of a frame clock signal.
As an example, the correspondence may be characterized by the following correspondence table:
Channel sequence number of audio input data    Period of the frame clock signal
CH1                                            T1_H
CH2                                            T1_L
CH3                                            T2_H
CH4                                            T2_L
…                                              …
CH15                                           T8_H
CH16                                           T8_L
CH1                                            T9_H
Here H denotes the high-level half of a period and L denotes the low-level half. As can be seen from the correspondence table, if there are 16 channels of encoded audio data and each period of the frame clock signal can carry two channels of encoded audio data, the data encoding device 101 may send the 16 channels of encoded audio data in order of channel sequence number, transmitting all 16 channels once every 8 periods.
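The table above can be generated programmatically. The sketch below assumes, as in the table, that odd-numbered channels map to the high-level half and even-numbered channels to the low-level half of successive periods; the function name is illustrative, not from the patent.

```python
def channel_period_map(num_channels: int) -> dict:
    """Map channel CHk (1-based) to its frame-clock slot "Tc_H" or "Tc_L",
    with two channels carried per frame-clock period."""
    mapping = {}
    for k in range(1, num_channels + 1):
        cycle = (k - 1) // 2 + 1                  # period index T1, T2, ...
        half = 'H' if (k - 1) % 2 == 0 else 'L'   # high- or low-level half
        mapping[f"CH{k}"] = f"T{cycle}_{half}"
    return mapping

m = channel_period_map(16)
print(m["CH1"], m["CH2"], m["CH15"], m["CH16"])  # T1_H T1_L T8_H T8_L
```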
In an embodiment, the data encoding apparatus 101 may encode each audio input data of the at least one audio input data based on an I2S protocol. The above-mentioned correspondence between the encoded audio data and the period of the frame clock signal may be used to characterize the order in which the encoded audio data is transmitted and the number of channels of the encoded audio data transmitted in each period.
And step 204, combining the encoded audio data into a set of encoded data based on the corresponding relationship.
In this embodiment, the data encoding device 101 may combine the encoded audio data into a set of encoded data based on the correspondence. Since the correspondence represents both the order in which the encoded audio data are transmitted and the number of channels transmitted in each period, the time interval between paths of encoded audio data during transmission can be determined from the period of the frame clock signal. A certain number of extra data bits are then supplemented, according to this time interval, on the basis of the encoded audio data, connecting the paths of encoded audio data into a set of encoded data. For how the paths are connected in detail, refer to the description of the embodiment shown in Fig. 4 below, which is not repeated here.
Step 205, a set of encoded data is sent to a processor.
In this embodiment, the data encoding apparatus 101 may transmit a set of encoded data to a processor (such as the processor shown in fig. 1). Typically, the set of encoded data may be sent to the processor via a channel.
In some alternative implementations, the data encoding device 101 may send the set of encoded data to the processor over a single channel based on the frame clock signal, where the single channel is the channel used to transmit the set of encoded data. As an example, one path of encoded audio data may be transmitted during each period of the frame clock signal, so that the paths of encoded audio data included in the set are sent to the processor in sequence over the single channel.
In some alternative implementations, the data encoding apparatus 101 may send a set of encoded data to the processor through a single channel as follows:
based on the corresponding relation between each path of encoded audio data and the period of the frame clock signal, one path of encoded audio data included in the group of encoded data is sent to the processor through the one-way channel during the high level period of each period of the frame clock signal, and the other path of encoded audio data included in the group of encoded data is sent to the processor through the one-way channel during each low level period of the frame clock signal.
Here, since each period of the frame clock signal includes a high-level half and a low-level half, two paths of encoded audio data can be transmitted per period. As shown in Fig. 3, CH1 is the channel sequence number of one path of audio input data; the encoded audio data corresponding to CH1 is sent to the processor during the first high-level half of the frame clock signal, and the encoded audio data corresponding to CH2 during the first low-level half. By analogy, the n paths of encoded audio data are sent to the processor in sequence.
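The transmission schedule described above (two paths per period, one per level half, wrapping back to CH1 after the last channel) can be sketched as follows. This is an illustrative model of the ordering only, with hypothetical names; the patent does not specify an implementation.

```python
def transmission_order(num_channels: int, cycles: int):
    """Return (cycle, level, channel) triples for sending two channels per
    frame-clock period: one during the high level ('H'), one during the
    low level ('L'). Channels wrap around after the last one."""
    order = []
    ch = 0
    for t in range(1, cycles + 1):
        order.append((t, 'H', ch % num_channels + 1)); ch += 1
        order.append((t, 'L', ch % num_channels + 1)); ch += 1
    return order

# 4 channels over 3 periods: CH1/CH2, CH3/CH4, then wrap to CH1/CH2 again
print(transmission_order(4, 3))
```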
In an embodiment, the processor 102 shown in fig. 1 may decode the set of encoded data after receiving the set of encoded data according to the same encoding mode as that adopted by the data encoding apparatus 101 to encode the audio input data, and may further perform various subsequent processing, such as speech noise reduction, wake-up, echo cancellation, sound source localization, and the like, by using the decoded audio data.
With continued reference to Fig. 4, Fig. 4 is a schematic diagram of an application scenario of the data encoding method of this embodiment. In the application scenario of Fig. 4, the DSP 401 receives from the ADC 402 audio input data CH1-CH6, collected from four microphones and two speakers, each path at a sampling rate of 16 kHz. The DSP 401 then determines a frame clock signal based on the sampling rate of the audio input data: with two paths of audio input data transmitted in each period of the frame clock signal, the frequency of the frame clock signal is 16 kHz × (6/2) = 48 kHz. Next, the DSP 401 encodes each path of audio input data according to the I2S protocol to obtain encoded audio data, and establishes the correspondence between each path of encoded audio data and the periods of the frame clock signal (i.e., one path of audio input data is transmitted during the high-level half and one during the low-level half of each period). The encoded audio data are combined into a set of encoded data based on the correspondence. For example, the time interval between paths of encoded audio data during transmission may be determined from the period of the frame clock signal, and a certain number of extra data bits supplemented accordingly (e.g., 4031 in Fig. 4 denotes the extra data bits appended after a path of encoded audio data to space two adjacent paths), connecting the paths of encoded audio data into one set of encoded data 403. Finally, the encoded audio data comprised by the set of encoded data 403 are sent to the processor 404 in sequence.
It should be noted that the data encoding apparatus 101 shown in fig. 1 may be implemented by the DSP 401.
The method provided by the above embodiment of the present disclosure receives at least one path of audio input data, determines a frame clock signal based on its sampling rate, encodes each path of audio input data to obtain encoded audio data, establishes a correspondence between each path of encoded audio data and the periods of the frame clock signal, and finally combines the encoded audio data into a set of encoded data based on the correspondence and sends it to a processor. A set of encoded data is thus obtained from at least one path of audio input data. Because the set of encoded data can be transmitted to the processor through one data channel, one processor can process multiple paths of audio data, the number of audio channels the processor handles is expanded, and hardware resources are saved since multiple transmission channels are not needed. In addition, the embodiment can be applied to various processors, depends little on processor hardware upgrades or replacement, and helps reduce development workload and save development time.
With further reference to fig. 5, a flow diagram of yet another embodiment of a data encoding method is shown. As shown in fig. 5, on the basis of the embodiment shown in fig. 2, step 203 may include the following steps:
step 2031, determine the channel number of each audio input data in at least one audio input data.
In this embodiment, an execution subject of the data encoding method (for example, the data encoding device 101 shown in Fig. 1 or the DSP 401 shown in Fig. 4) may determine the channel sequence number of each path of the at least one path of audio input data. As an example, assuming there are 6 paths of audio input data in total, the channel sequence numbers may be 1, 2, 3, 4, 5, 6.
And 2032, based on the preset coding mode, coding each channel of audio input data by using the channel number of each channel of audio input data to obtain coded audio data.
In this embodiment, the data encoding device 101 or the DSP 401 may encode each path of audio input data with its channel sequence number based on a preset encoding mode (e.g., the I2S encoding mode) to obtain encoded audio data. As an example, after a path of audio input data is encoded according to the preset encoding mode, its channel sequence number may be converted into a binary number and added to the encoded audio data. For example, if encoding a certain path of audio input data yields a 16-bit binary number and the corresponding channel sequence number is converted into a 4-bit binary number, the 4-bit binary number may be appended after or before the 16-bit binary number to obtain a 20-bit binary number; this 20-bit binary number is the data included in the encoded audio data. It should be understood that the encoded audio data may include other data in addition to the 20-bit binary number.
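The 20-bit word construction described above can be sketched at the bit level as follows. This assumes the 4-bit channel sequence number is appended after (in the low bits of) the 16-bit sample, which is one of the two placements the text permits; the function name is illustrative.

```python
def encode_with_channel(sample_16bit: int, channel: int) -> int:
    """Combine a 16-bit encoded sample and a 4-bit channel sequence number
    into a 20-bit word, with the channel number in the low 4 bits."""
    assert 0 <= sample_16bit < (1 << 16), "sample must fit in 16 bits"
    assert 0 <= channel < (1 << 4), "channel number must fit in 4 bits"
    return (sample_16bit << 4) | channel

word = encode_with_channel(0xABCD, 5)
print(f"{word:05X}")  # ABCD5: 16 sample bits followed by channel number 5
```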
In some optional implementations, the data encoding device 101 or the DSP401 may encode each of the at least one channel of audio input data according to the following steps to obtain encoded audio data:
firstly, a data interval bit is set in each audio input data in at least one audio input data. The data interval bit may include at least one binary bit, and the value on the data interval bit may be a preset value (e.g., 0).
And then, based on a preset coding mode, coding each path of audio input data provided with data interval bits to obtain coded audio data. The data interval bit is used for carrying out interval between two paths of coded audio data.
By setting the data interval bit, two paths of adjacent encoded audio data can be accurately distinguished when the encoded audio data is decoded by the processor, so that the possibility of errors in decoding is reduced.
In some alternative implementations, the data encoding device 101 or the DSP 401 may combine the encoded audio data into a set of encoded data in order of the channel sequence number of each path of audio input data. Specifically, the paths of encoded audio data may be connected head to tail in order of channel sequence number (that is, the lowest bit of the previous path is connected, directly or indirectly, to the highest bit of the next path) to obtain a set of encoded data. Because the paths of encoded audio data are combined into a set of encoded data in order, the generated set of encoded data is more regular, which helps improve the efficiency of decoding it.
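The head-to-tail combination with interval bits can be sketched as follows. This is a simplified bit-level model under stated assumptions (fixed word width, zero-valued interval bits after every word, input already sorted by channel sequence number); the names and parameters are illustrative, not from the patent.

```python
def combine_encoded(words, word_bits=20, gap_bits=4):
    """Concatenate encoded words head to tail in channel-sequence order,
    appending zero-valued data interval bits after each word.
    Returns the combined bit pattern as an int and its total bit length."""
    combined = 0
    total_bits = 0
    for w in words:                  # words assumed sorted by channel number
        combined = (combined << word_bits) | w
        combined <<= gap_bits        # data interval bits (all zeros)
        total_bits += word_bits + gap_bits
    return combined, total_bits

# Tiny example: two 4-bit words separated by 2 interval bits
bits, n = combine_encoded([0b0001, 0b0010], word_bits=4, gap_bits=2)
print(f"{bits:0{n}b}")  # 0001 00 0010 00 -> 000100001000
```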
As an example, Fig. 6 shows two adjacent paths of encoded audio data separated by data interval bits, with the encoded audio data combined in order of channel sequence number. In the figure, D1 and D2 are the valid data bits of the first and second paths of encoded audio data, S0 and S1 are the channel sequence numbers of the first and second paths, and W1 and W2 are data interval bits.
In the method provided by the above embodiment of the present disclosure, the channel sequence number of each path of audio input data is determined, and each path is encoded with its channel sequence number based on the preset encoding mode to obtain encoded audio data. When the processor decodes a set of encoded data, it can therefore determine the channel sequence number corresponding to each path of encoded audio data included in the set, decode each path separately, and process each path of decoded audio data differently, improving the accuracy of audio data processing.
Exemplary devices
Fig. 7 is a schematic structural diagram of a data encoding apparatus according to an exemplary embodiment of the present disclosure. This embodiment can be applied to a data encoding device. As shown in fig. 7, the data encoding apparatus includes: a receiving module 701, configured to receive at least one path of audio input data transmitted from at least one data channel; a determining module 702, configured to determine a frame clock signal based on a sampling rate of the at least one path of audio input data; an encoding module 703, configured to encode each path of audio input data in the at least one path of audio input data to obtain encoded audio data, and to establish a correspondence between each path of encoded audio data and the period of the frame clock signal; a combining module 704, configured to combine the encoded audio data into a set of encoded data based on the correspondence; and a sending module 705, configured to send the set of encoded data to the processor.
In this embodiment, the receiving module 701 may receive at least one path of audio input data transmitted from at least one data channel. Each data channel is used for receiving one path of audio input data. The audio input data may be audio data transmitted by various devices. For example, the at least one path of audio input data may include, but is not limited to, at least one of: audio data collected by a sound collection device (e.g., a microphone) and audio data output by a sound output device (e.g., a speaker).
In this embodiment, the determining module 702 may determine the frame clock signal based on the sampling rate of the at least one path of audio input data.
As an example, if the sampling rate of each path in the at least one path of audio input data is s, there are n paths of audio input data, and m paths of audio input data are transmitted per period of the frame clock signal, then the frequency of the frame clock signal is determined to be s × (n/m). For example, if s is 16 kHz, n is 16, and m is 2, the frequency of the frame clock signal is 16 kHz × (16/2) = 128 kHz.
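The frequency calculation above can be sketched directly; the function and parameter names are illustrative choices, while the formula s × (n/m) is the one stated in the text.

```python
def frame_clock_frequency(sample_rate_hz: float, num_paths: int, paths_per_period: int) -> float:
    """Frame clock frequency = s * (n / m), where s is the per-path
    sampling rate, n is the number of audio input paths, and m is the
    number of paths transmitted per frame clock period."""
    return sample_rate_hz * (num_paths / paths_per_period)

# Worked example from the text: s = 16 kHz, n = 16, m = 2
print(frame_clock_frequency(16_000, 16, 2))  # 128000.0, i.e. 128 kHz
```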
In this embodiment, the encoding module 703 may encode each path of audio input data in at least one path of audio input data to obtain encoded audio data, and establish a corresponding relationship between each path of encoded audio data and a period of a frame clock signal.
In this embodiment, the encoding module 703 may generally encode each path of audio input data in the at least one path of audio input data based on the I2S protocol. The above correspondence between the encoded audio data and the period of the frame clock signal may be used to characterize the order in which the encoded audio data are transmitted and the number of paths of encoded audio data transmitted in each period.
In this embodiment, the combining module 704 may combine the encoded audio data into a set of encoded data based on the correspondence. Because the correspondence characterizes the order in which the encoded audio data are transmitted and the number of paths transmitted in each period, the time interval at which each path of encoded audio data is transmitted can be determined from the period of the frame clock signal. A certain number of extra data bits can then be appended to each path of encoded audio data according to that time interval, connecting the paths into a set of encoded data.
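The padding step can be sketched as follows. The fixed slot width and the use of zero bits as the "extra data bits" are assumptions for illustration; the patent only specifies that padding is derived from the frame clock period.

```python
def pack_into_frame(encoded_paths, slot_bits):
    """Pad each encoded path up to a fixed slot width derived from the
    frame clock period, then join the slots into one set of encoded
    data. slot_bits is illustrative: it stands for the number of bit
    clocks available to one path within its share of the frame period."""
    slots = []
    for bits in encoded_paths:
        assert len(bits) <= slot_bits, "path exceeds its transmission slot"
        slots.append(bits.ljust(slot_bits, "0"))  # extra data bits as zero padding
    return "".join(slots)

# Two short paths padded into 4-bit slots and concatenated
print(pack_into_frame(["101", "11"], 4))
```

Because every slot has the same width, the receiver can locate each path at a fixed bit offset without scanning for delimiters.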
In this embodiment, the sending module 705 may send the set of encoded data to a processor (such as the processor shown in fig. 1). Typically, the set of encoded data may be sent to the processor through a single channel.
Referring to fig. 8, fig. 8 is a schematic structural diagram of a data encoding apparatus according to another exemplary embodiment of the present disclosure.
In some optional implementations, the sending module 705 may be further configured to: based on the frame clock signal, a set of encoded data is sent to the processor over a one-way channel.
In some optional implementations, the sending module 705 may be further configured to: based on the correspondence between each path of encoded audio data and the period of the frame clock signal, send one path of encoded audio data included in the set of encoded data to the processor through the one-way channel during the high level of each period of the frame clock signal, and send another path of encoded audio data included in the set to the processor through the one-way channel during each low level of the frame clock signal.
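The high-level/low-level scheduling above can be sketched as a mapping from frame clock periods to path pairs. The function and field names are illustrative; only the "one path per half-period, two paths per period" rule comes from the text.

```python
def schedule_transmission(encoded_paths, frame_periods):
    """Assign each frame clock period the pair of encoded paths sent in
    it: one path during the high level half, the next during the low
    level half. encoded_paths is the ordered set of encoded data."""
    schedule = []
    for period in range(frame_periods):
        schedule.append({
            "period": period,
            "high": encoded_paths[2 * period],      # sent while frame clock is high
            "low": encoded_paths[2 * period + 1],   # sent while frame clock is low
        })
    return schedule

# Four paths transmitted over two frame clock periods
print(schedule_transmission(["p0", "p1", "p2", "p3"], 2))
```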
In some optional implementations, the encoding module 703 may include: a determining unit 7031, configured to determine a channel number of each channel of audio input data in at least one channel of audio input data; the first encoding unit 7032 is configured to encode each channel of audio input data by using a channel number of each channel of audio input data based on a preset encoding mode, so as to obtain encoded audio data.
In some optional implementations, the encoding module 703 may further include: a setting unit 7033, configured to set a data interval bit in each path of the at least one path of audio input data; and a second encoding unit 7034, configured to encode each path of audio input data provided with the data interval bit based on the preset encoding mode to obtain encoded audio data, where the data interval bit is used to separate two paths of encoded audio data.
In some optional implementations, the combining module 704 may be further configured to: and combining the encoded audio data into a group of encoded data according to the sequence of the channel serial number of each path of audio input data.
The apparatus provided in the foregoing embodiment of the present disclosure receives at least one path of audio input data, determines a frame clock signal based on the sampling rate of the at least one path of audio input data, encodes each path of audio input data to obtain encoded audio data, establishes a correspondence between each path of encoded audio data and the period of the frame clock signal, and finally combines the encoded audio data into a set of encoded data based on the correspondence and sends it to a processor. A set of encoded data is thus obtained from at least one path of audio input data. Because the set of encoded data can be transmitted to the processor through a single data channel, one processor can process multiple paths of audio data, which expands the number of audio channels the processor can handle; and because multiple transmission channels are not needed, hardware resources are saved. In addition, this embodiment can be applied to various processors with little dependence on hardware upgrades of the processor, which helps reduce development workload and save development time.
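Tying the modules together, the receive–determine–encode–combine flow summarized above might be sketched end to end as follows. All names, the 4-bit channel-number field, and the string representation of bits are assumptions for illustration, not the patent's concrete format.

```python
def encode_group(audio_paths, sample_rate_hz, paths_per_period=2):
    """Sketch of the full flow: determine the frame clock (determining
    module), append each path's channel sequence number (encoding
    module), and combine the results into one set of encoded data
    (combining module) for transmission over a single channel."""
    n = len(audio_paths)
    frame_clock_hz = sample_rate_hz * n / paths_per_period
    encoded = [
        data + format(channel_no, "04b")  # append channel number to each path
        for channel_no, data in enumerate(audio_paths)
    ]
    group = "".join(encoded)              # one set of encoded data
    return frame_clock_hz, group          # the sending module would emit 'group'

clock, group = encode_group(["1010", "0110"], 16_000)
print(clock, group)
```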
Next, a data encoding device according to an embodiment of the present disclosure is described with reference to fig. 9. The data encoding apparatus may communicate with a processor as shown in fig. 1 to transmit encoded data to the processor.
Fig. 9 illustrates a block diagram of a data encoding apparatus according to an embodiment of the present disclosure. As shown in fig. 9, the data encoding apparatus 900 includes one or more data processing units 901 and a memory 902.
The data processing unit 901 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the data encoding device 900 to perform desired functions.
The memory 902 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), a hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the data processing unit 901 to implement the data encoding methods of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, and a noise component may also be stored in the computer-readable storage medium.
In one example, the data encoding apparatus 900 may further include: an input interface 903 and an output interface 904, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the data encoding device is a DSP, the input interface 903 may be an interface of the I2S protocol for receiving audio data collected by a microphone or the like.
The output interface 904 may output data to the outside, such as encoded data obtained by encoding audio data with the data encoding apparatus. For example, the output interface may be an interface of the I2S protocol for sending encoded data to a processor as shown in fig. 1.
Of course, for the sake of simplicity, only some of the components related to the present disclosure in the data encoding apparatus 900 are shown in fig. 9, and components such as a bus and the like are omitted. In addition, the data encoding device 900 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the data encoding method according to various embodiments of the present disclosure described in the "exemplary methods" section of this specification above.
The computer program product may include program code for carrying out operations of embodiments of the present disclosure, written in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a data encoding method according to various embodiments of the present disclosure described in the "exemplary methods" section above of this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
In the present specification, the embodiments are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same or similar parts in the embodiments are referred to each other. For the system embodiment, since it basically corresponds to the method embodiment, the description is relatively simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown in the block diagrams. These devices, apparatuses, and systems may be connected, arranged, or configured in any manner, as will be appreciated by those skilled in the art. Words such as "including," "comprising," and "having" are open-ended words that mean "including, but not limited to," and are used interchangeably therewith. The word "or" as used herein means, and is used interchangeably with, the term "and/or," unless the context clearly dictates otherwise. The words "such as" as used herein mean, and are used interchangeably with, the phrase "such as but not limited to".
The methods and apparatus of the present disclosure may be implemented in a number of ways. For example, the methods and apparatus of the present disclosure may be implemented by software, hardware, firmware, or any combination of software, hardware, and firmware. The above-described order for the steps of the method is for illustration only, and the steps of the method of the present disclosure are not limited to the order specifically described above unless specifically stated otherwise. Further, in some embodiments, the present disclosure may also be embodied as programs recorded in a recording medium, the programs including machine-readable instructions for implementing the methods according to the present disclosure. Thus, the present disclosure also covers a recording medium storing a program for executing the method according to the present disclosure.
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (9)

1. A method of data encoding comprising:
receiving at least one path of audio input data transmitted from at least one data channel;
determining a frame clock signal based on the sampling rate of the at least one path of audio input data;
encoding each path of audio input data in the at least one path of audio input data to obtain encoded audio data, and establishing a corresponding relation between each path of encoded audio data and the period of the frame clock signal;
based on the corresponding relation, the encoded audio data are combined into a group of encoded data;
sending the set of encoded data to a processor;
the encoding each channel of audio input data in the at least one channel of audio input data to obtain encoded audio data includes:
determining a channel serial number of each path of audio input data in the at least one path of audio input data;
based on a preset coding mode, coding each path of audio input data by using the channel serial number of each path of audio input data to obtain coded audio data;
the encoding of each channel of audio input data by using the channel serial number of each channel of audio input data based on the preset encoding mode to obtain encoded audio data includes:
and respectively converting the channel serial number of each path of audio input data into binary numbers, and respectively adding the converted binary numbers into the corresponding encoded audio data.
2. The method of claim 1, wherein said transmitting the set of encoded data to a processor comprises:
based on the frame clock signal, the set of encoded data is sent to a processor over a single channel.
3. The method of claim 2, wherein said sending the set of encoded data to a processor over a single channel based on the frame clock signal comprises:
and respectively sending one path of encoded audio data included in the group of encoded data to a processor through a single-path channel during the high level period of each period of the frame clock signal, and respectively sending the other path of encoded audio data included in the group of encoded data to the processor through the single-path channel during each low level period of the frame clock signal based on the corresponding relation between each path of encoded audio data and the period of the frame clock signal.
4. The method according to claim 1, wherein said encoding each of the at least one channel of audio input data to obtain encoded audio data further comprises:
setting a data interval bit in each path of audio input data in the at least one path of audio input data;
and based on the preset coding mode, coding each path of audio input data provided with the data interval bit to obtain coded audio data, wherein the data interval bit is used to separate two paths of coded audio data.
5. The method of claim 1, wherein said combining the encoded audio data into a set of encoded data comprises:
and combining the coded audio data into a group of coded data according to the sequence of the channel serial number of each path of audio input data.
6. A data encoding apparatus comprising:
the receiving module is used for receiving at least one path of audio input data transmitted from at least one data channel;
the determining module is used for determining a frame clock signal based on the sampling rate of the at least one path of audio input data;
the encoding module is used for encoding each path of audio input data in the at least one path of audio input data to obtain encoded audio data and establishing a corresponding relation between each path of encoded audio data and the period of the frame clock signal;
the combination module is used for combining the coded audio data into a group of coded data based on the corresponding relation;
a sending module for sending the set of encoded data to a processor;
the encoding module includes:
the determining unit is used for determining the channel serial number of each path of audio input data in the at least one path of audio input data;
the first coding unit is used for coding each path of audio input data by using the channel serial number of each path of audio input data based on a preset coding mode to obtain coded audio data;
the first encoding unit is further to:
and respectively converting the channel serial number of each path of audio input data into binary numbers, and respectively adding the converted binary numbers into the corresponding encoded audio data.
7. The apparatus of claim 6, wherein the encoding module further comprises:
the setting unit is used for setting a data interval bit in each path of audio input data in the at least one path of audio input data;
and the second coding unit is used for coding each path of audio input data provided with the data interval bit based on the preset coding mode to obtain coded audio data, wherein the data interval bit is used for separating the two paths of coded audio data.
8. A computer-readable storage medium, storing a computer program for executing the data encoding method of any one of claims 1 to 5.
9. A data encoding apparatus, the data encoding apparatus comprising:
a data processing unit;
a memory for storing the processor-executable instructions;
the data processing unit for performing the data encoding method of any of the preceding claims 1-5.
CN201910350133.4A 2019-04-28 2019-04-28 Data encoding method, data encoding device, computer storage medium and data encoding equipment Active CN110085241B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910350133.4A CN110085241B (en) 2019-04-28 2019-04-28 Data encoding method, data encoding device, computer storage medium and data encoding equipment


Publications (2)

Publication Number Publication Date
CN110085241A CN110085241A (en) 2019-08-02
CN110085241B true CN110085241B (en) 2021-10-08

Family

ID=67417372

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910350133.4A Active CN110085241B (en) 2019-04-28 2019-04-28 Data encoding method, data encoding device, computer storage medium and data encoding equipment

Country Status (1)

Country Link
CN (1) CN110085241B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110838298A (en) * 2019-11-15 2020-02-25 闻泰科技(无锡)有限公司 Method, device and equipment for processing multi-channel audio data and storage medium
CN113518300B (en) * 2021-06-15 2023-12-22 翱捷科技(深圳)有限公司 I2S-based automatic configuration method and system for parameters of audio acquisition chip
CN113965853B (en) * 2021-10-19 2024-01-05 深圳市广和通无线股份有限公司 Module device, audio processing method and related device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1795635A (en) * 2003-04-17 2006-06-28 株式会社理光 Signal transmitting apparatus, power supplying system, and serial communication apparatus
CN105261365A (en) * 2015-09-15 2016-01-20 北京云知声信息技术有限公司 Audio output method and device
CN105389155A (en) * 2015-11-18 2016-03-09 苏州思必驰信息科技有限公司 Method and system for receiving TDM audio data by using SPI interface
CN106782562A (en) * 2016-12-20 2017-05-31 Tcl通力电子(惠州)有限公司 Audio-frequency processing method, apparatus and system
CN106788844A (en) * 2016-12-16 2017-05-31 深圳市声菲特科技技术有限公司 A kind of MCVF multichannel voice frequency synchronous transfer circuit
CN207690497U (en) * 2018-01-10 2018-08-03 成都天奥信息科技有限公司 A kind of MCVF multichannel voice frequency coding/decoding system applied to the radio station VOIP gateway
CN109660933A (en) * 2019-01-30 2019-04-19 北京视通科技有限公司 A kind of device of simultaneous transmission multi-channel analog audio

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3988528A (en) * 1972-09-04 1976-10-26 Nippon Hoso Kyokai Signal transmission system for transmitting a plurality of information signals through a plurality of transmission channels
FR2349243A1 (en) * 1976-04-23 1977-11-18 Telecommunications Sa DIGITAL TIME DIVISION TRANSMISSION SYSTEM
JP2861518B2 (en) * 1991-09-03 1999-02-24 日本電気株式会社 Adaptive multiplexing method
US8464120B2 (en) * 2006-10-18 2013-06-11 Panasonic Corporation Method and system for data transmission in a multiple input multiple output (MIMO) system including unbalanced lifting of a parity check matrix prior to encoding input data streams
US9015399B2 (en) * 2007-08-20 2015-04-21 Convey Computer Multiple data channel memory module architecture
CN104144331B (en) * 2014-08-18 2018-01-16 中国航空无线电电子研究所 The device of multiway images/video data encoder transmission is realized using single SDI passages


Also Published As

Publication number Publication date
CN110085241A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN110085241B (en) Data encoding method, data encoding device, computer storage medium and data encoding equipment
US10469967B2 (en) Utilizing digital microphones for low power keyword detection and noise suppression
JP3186472U (en) Audio decoder using program information metadata
JP7053687B2 (en) Last mile equalization
CN108665895B (en) Method, device and system for processing information
CN104054125B (en) Devices for redundant frame coding and decoding
US20150170665A1 (en) Attribute-based audio channel arbitration
US20170103769A1 (en) Methods, apparatuses for forming audio signal payload and audio signal payload
US8959021B2 (en) Single interface for local and remote speech synthesis
US11244686B2 (en) Method and apparatus for processing speech
US9245529B2 (en) Adaptive encoding of a digital signal with one or more missing values
US8391513B2 (en) Stream synthesizing device, decoding unit and method
US10909332B2 (en) Signal processing terminal and method
CN112687286A (en) Method and device for adjusting noise reduction model of audio equipment
CN109524004B (en) Method for realizing parallel transmission of multi-channel audio and data, external voice interaction device and system
US8868419B2 (en) Generalizing text content summary from speech content
US20180374493A1 (en) System, control method, and control terminal
US20170116998A1 (en) Backward-compatible communication system components
CN110838298A (en) Method, device and equipment for processing multi-channel audio data and storage medium
CN109697987A (en) A kind of the far field voice interaction device and implementation method of circumscribed
US20200335111A1 (en) Audio stream dependency information
CN113096651A (en) Voice signal processing method and device, readable storage medium and electronic equipment
US20210185102A1 (en) Server in multipoint communication system, and operating method thereof
WO2024104460A1 (en) Audio encoding method, audio decoding method, audio encoding apparatus, audio decoding apparatus, device, and storage medium
CN115631758B (en) Audio signal processing method, apparatus, device and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant