CN116600234A - Audio stream processing method and device, electronic equipment, storage medium and vehicle - Google Patents
- Publication number
- CN116600234A (application CN202310453541.9A)
- Authority
- CN
- China
- Prior art keywords
- data
- audio
- stream
- digital audio
- output
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/12—Circuits for transducers, loudspeakers or microphones for distributing signals to two or more loudspeakers
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02D—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
- Y02D10/00—Energy efficient computing, e.g. low power processors, power management or thermal management
Abstract
The application discloses an audio stream processing method and apparatus, an electronic device, a storage medium, and a vehicle. The method comprises: an input end obtains a digital audio data stream; the stream is split according to its corresponding audio channels and stored in an in-stack frame queue; an addressing pointer is set for each audio channel according to the stream's stack storage address; and the frame queue data are read through the addressing pointers and output at the output end on the corresponding audio channels. With this scheme, the 32-bit bandwidth of the audio communication bus is used to store audio data streams in the high 16 bits and the low 16 bits respectively, so that one frame of data carries the data of two audio channels, expanding the data-carrying density of the audio channels. By pointer-addressing the high 16-bit and low 16-bit data, two independent data streams are formed for output, extending what was originally one audio channel of output into two.
Description
Technical Field
The present application relates to the field of audio, and in particular to an audio stream processing method and apparatus, an electronic device, a storage medium, and a vehicle.
Background
In vehicle audio systems, an audio controller is required to drive the speakers. Early products often employed integrated audio controllers with few output channels, while mid- to high-end products typically use standalone audio controllers with an independent DSP and multi-channel output, generally more than 12 output channels. With the rapid development of intelligent cockpits, the required number of audio output channels keeps growing. This creates a contradiction between consuming existing hardware inventory in the short term and adapting to rapid product iteration: bulk purchasing of the original hardware saves cost, but rapid iteration keeps increasing the channel count, so parts cannot be used up according to the planned inventory schedule.
Therefore, a solution is needed for expanding the audio channels of an audio controller, so that existing audio controllers can, to a certain extent, meet the growing channel requirements brought by rapid product iteration.
Disclosure of Invention
The present application is directed to an audio stream processing method and apparatus, an electronic device, a storage medium, and a vehicle that solve at least one of the above problems.
The application provides the following scheme:
according to an aspect of the present application, there is provided an audio stream processing method including:
the input end obtains a digital audio data stream;
splitting the digital audio data stream according to the audio channel corresponding to the digital audio data stream and storing the digital audio data stream in a frame queue in a stack;
setting an addressing pointer of a corresponding audio channel according to a stack storage address of the digital audio data stream;
and reading frame queue data according to the addressing pointer and outputting the frame queue data at an output end according to the corresponding audio channel.
Further, the digital audio data stream has a 32bit bandwidth:
the audio channel outputs 16bit data;
and allocating the high 16 bits and the low 16 bits of the digital audio data stream to carry the 16-bit data of two audio channels.
Further, the splitting the digital audio data stream and storing in-stack frame queues includes:
splitting the data of the digital audio data stream frame by frame, and storing the frame in a stack by taking 16 bits as a unit.
Further, reading the data in the storage queue according to the addressing pointer and outputting it at the output end on the corresponding audio channel includes:
outputting two paths of 16bit data streams;
wherein each 16bit data stream corresponds to an output terminal.
Further, the two paths of 16bit data streams include:
two paths of 16bit data streams are respectively output at two output ends in the same frame.
Further, the method further comprises the following steps:
the audio channel outputs 8bit data;
distributing four 8bit bandwidths of the digital audio data stream from high order to low order for bearing data of four audio channels;
splitting the data of the digital audio data stream frame by frame, and storing the frame in a stack by taking 8 bits as a unit;
obtaining four paths of 8bit data streams according to the addressing pointer;
wherein each 8bit data stream corresponds to an output end;
four paths of 8bit data streams are respectively output at four output ends in the same frame.
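The 8-bit variant above can be sketched as follows. This is an illustrative sketch only: the function names and the exact byte-to-channel assignment (high-order byte to the first channel) are assumptions, not taken from the patent.

```python
# Hypothetical sketch: one 32-bit frame carries four 8-bit channel samples,
# allocated from the high-order byte down to the low-order byte.

def split_frame_8bit(word):
    """Split one 32-bit frame into four 8-bit units, high byte first."""
    return [(word >> shift) & 0xFF for shift in (24, 16, 8, 0)]

def four_streams(words):
    """Four addressing pointers, one per channel: channel i reads every
    fourth queue entry starting at offset i."""
    queue = [b for w in words for b in split_frame_8bit(w)]
    return [queue[i::4] for i in range(4)]

s = four_streams([0x0A0B0C0D, 0x1A1B1C1D])
# s[0] == [0x0A, 0x1A] ... s[3] == [0x0D, 0x1D]
```

Because all four bytes arrive in the same frame, the four output streams stay frame-aligned without any extra synchronization step.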
According to a second aspect of the present application, there is provided an audio stream processing apparatus including:
the data input module is used for acquiring a digital audio data stream at an input end;
the data processing module is used for splitting the digital audio data stream according to the audio channel corresponding to the digital audio data stream and storing the digital audio data stream in an in-stack frame queue;
the data storage module is used for setting an addressing pointer of a corresponding audio channel according to the stack storage address of the digital audio data stream;
and the data reading module is used for reading the frame queue data according to the addressing pointer and outputting the frame queue data at an output end according to the corresponding audio channel.
According to a third aspect of the present application, there is provided an electronic device including: a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
the memory has stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the audio stream processing method.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium storing a computer program executable by an electronic device, which, when run on the electronic device, causes the electronic device to perform the steps of the audio stream processing method.
According to a fifth aspect of the present application, there is provided a vehicle including:
electronic equipment for implementing the steps of the audio stream processing method;
a processor that runs a program and, when the program runs, performs the steps of the audio stream processing method on data output from the electronic device;
a storage medium storing a program that, when executed, performs the steps of the audio stream processing method on data output from an electronic device.
Through the above scheme, the following beneficial technical effects are obtained:
the application uses the 32-bit bandwidth of the audio communication bus to store audio data streams in the high 16 bits and the low 16 bits respectively, so that one frame of data carries the data of two audio channels, expanding the data-carrying density of the audio channels;
by pointer-addressing the high 16-bit and low 16-bit data, two independent data streams are formed for output, extending what was originally one audio channel of output into two.
Drawings
Fig. 1 is a flow chart of a method of processing an audio stream according to one or more embodiments of the present application.
Fig. 2 is a block diagram of an audio stream processing apparatus according to one or more embodiments of the present application.
Fig. 3 is a schematic diagram of an audio stream processing system architecture according to an embodiment of the present application.
Fig. 4 is a schematic diagram of an audio stream splitting storage method according to an embodiment of the present application.
Fig. 5 is a schematic diagram of a method for extracting data expansion channels from an audio stream according to an embodiment of the present application.
Fig. 6 is a schematic diagram of a channel expansion method according to an embodiment of the present application.
Fig. 7 is a block diagram of an electronic device according to one or more embodiments of the present application.
Detailed Description
The following description of the embodiments of the present application is made clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the application are shown. All other embodiments obtained by those skilled in the art based on the embodiments of the application without inventive effort fall within the scope of the application.
Fig. 1 is a flow chart of a method of processing an audio stream according to one or more embodiments of the present application.
The audio stream processing method as shown in fig. 1 includes:
Step S1, an input end acquires a digital audio data stream;
Step S2, splitting the digital audio data stream according to the audio channel corresponding to the digital audio data stream and storing it in an in-stack frame queue;
Step S3, setting an addressing pointer of the corresponding audio channel according to the stack storage address of the digital audio data stream;
Step S4, reading the frame queue data according to the addressing pointer and outputting it at the output end on the corresponding audio channel.
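As a minimal sketch of steps S1 through S4, the flow might look like the following. All names are illustrative, and the assignment of channel A to the high 16 bits is an assumption for the example; the patent itself only fixes the high/low split, not the naming.

```python
# Hypothetical sketch of the four steps: acquire, split, address, output.
# Assumes each 32-bit word carries two 16-bit channel samples
# (channel A in the high 16 bits, channel B in the low 16 bits).

def acquire(words):
    """Step S1: the input end obtains the digital audio data stream."""
    return list(words)

def split_to_queue(words):
    """Step S2: split each frame and store it in an in-stack frame queue,
    one 16-bit unit per queue slot (high half first, then low half)."""
    queue = []
    for w in words:
        queue.append((w >> 16) & 0xFFFF)  # high 16 bits -> channel A
        queue.append(w & 0xFFFF)          # low 16 bits  -> channel B
    return queue

def set_pointers():
    """Step S3: one addressing pointer per audio channel.
    Channel A samples sit at even queue addresses, channel B at odd ones."""
    return {"A": 0, "B": 1}

def read_out(queue, pointers, step=2):
    """Step S4: read the frame queue through each pointer to form two
    independent output streams."""
    return {ch: queue[start::step] for ch, start in pointers.items()}

words = acquire([0x00010002, 0x00030004, 0x00050006])
queue = split_to_queue(words)
streams = read_out(queue, set_pointers())
# streams["A"] == [1, 3, 5]; streams["B"] == [2, 4, 6]
```

The design choice to worry about in a real implementation is the step size of each pointer: it must equal the number of channels packed per frame, so that each pointer sees only its own channel's samples.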
Through the above scheme, the following beneficial technical effects are obtained:
the application uses the 32-bit bandwidth of the audio communication bus to store audio data streams in the high 16 bits and the low 16 bits respectively, so that one frame of data carries the data of two audio channels, expanding the data-carrying density of the audio channels;
by pointer-addressing the high 16-bit and low 16-bit data, two independent data streams are formed for output, extending what was originally one audio channel of output into two.
Specifically, an algorithm processing system or chip with a 32-bit bandwidth is generally chosen to process 16-bit sound source data. This reserves bandwidth redundancy for data processing: a 32-bit algorithm processing system is compatible with 4-bit, 8-bit, 12-bit, 16-bit, 18-bit, and 32-bit sound source data, and the original product or system can still process and output the original data stream normally under the original system clock.
However, with the rapid development of the electric vehicle field, products iterate faster. Bulk purchasing brings economies of scale, but under rapid product iteration this contradiction becomes prominent. For example, an electric vehicle that aims for a better audio output effect needs audio fusion processing across more channels, yet the audio processing algorithms and hardware configuration of chips already in stock were not designed for channel-expanding iterations, since the rapid growth in channel count and the direction of product iteration could not be foreseen.
Generally, human perception is somewhat less demanding of sound than of video; for example, existing 16-bit data can already express one sound type well, and multiple sounds must be mixed to blend a distinctive style. Therefore the channels must be output synchronously, and a mixing technique is used.
Existing audio stream data typically occupies bandwidth within an audio processing system starting from the low-order bits: compatible 4-bit, 8-bit, 12-bit, 16-bit, and 18-bit data are all aligned starting from bit "0", and the high-order bits are filled with invalid data, in memory and elsewhere.
Two pieces of 16-bit data can be carried on the digital sound source data stream acquired at the input end of the audio processing system, occupying the high 16 bits and the low 16 bits respectively, rather than only the low 16 bits;
the audio processing system splits one frame of data into two 16-bit values, which are pushed onto the stack in sequence. The addressing mode is set according to the storage queue of the data; for example, two pointers respectively fetch the two groups of data, and audio data streams are output on two ports. Because the two data streams come from the same data input, they can be output on the two ports within the same frame and combined into a mixed sound signal, with no need to place the data of the two sound sources into separate internal registers and add an extra synchronization step.
In this way, the system's bandwidth resources carry more sound source data on the input, and a data-shifting method then places the high 16-bit data into the same register arrangement as the low 16-bit data. Different address pointers are set for the two groups of data, which are fetched separately to form two data streams output at two output ends, expanding a second sound channel compared with the original method of forming only one channel at one output end.
When data are input, the two sound sources are allocated the high 16 bits and the low 16 bits respectively; the audio processing system need not consider the ordering of the data sources, and the two sound channels do not interfere with each other.
Of course, the input end uses a serial bus, which avoids the problem of insufficient hardware ports, but the output end must allocate spare ports to the expanded channel outputs. If hardware ports are insufficient, a suitable port expansion strategy, such as an external latch, can make up the difference.
In memory, since a 32-bit data width is already set, the input data after splitting is actually still stored in the lower 16 bits of the memory. This does not hinder the arrangement of the storage addresses, the method of carrying two sound source signals on the input data, or the output into two channels: the lower 16 bits of the register are output as valid data, and the two address pointers each perform the task of generating an audio stream.
In this embodiment, the digital audio data stream has a 32bit bandwidth:
the audio channel outputs 16bit data;
high 16 bits and low 16 bits of the digital audio data stream are allocated for carrying 16bit data of two channels of audio channels.
In particular, depending on the audio processing system selected, the digital audio data stream may also have a 64-bit or wider bandwidth; likewise, if the channel data is 8-bit, other channel bandwidths are not excluded. Based on the sound source input, the bandwidth resources of the audio processing system can be allocated in the same way.
Taking the currently common 32-bit digital audio data stream and 16-bit sound sources as an example, the high 16 bits and the low 16 bits are allocated to carry the 16-bit data of two audio channels.
In this embodiment, splitting the digital audio data stream and in-stack frame queue storage includes:
splitting the data of the digital audio data stream frame by frame, and storing the frame in the stack in a unit of 16 bits.
Specifically, the high 16 bits and the low 16 bits of the digital audio data stream are both arranged in a queue for storage, and every two adjacent storage addresses hold one complete group of high 16-bit and low 16-bit data.
In this embodiment, reading data in the storage queue according to the address pointer, and outputting at the output end according to the corresponding audio channel includes:
outputting two paths of 16bit data streams;
wherein each 16bit data stream corresponds to an output terminal.
Specifically, in the original system the full 32-bit data bandwidth served only one preset output end. Now each 32-bit datum contains the data of two channels, so the output ends corresponding to the two must be distinguished: for example, the first sound source corresponds to the high 16-bit data and the first channel output port, the second sound source corresponds to the low 16-bit data and the second channel output port, and the two channel ports do not intersect.
In this embodiment, the two 16bit data streams include:
two paths of 16bit data streams are respectively output at two output ends in the same frame.
Specifically, the outputs of the two channels are independent. Each frame of the 32-bit data stream carries two 16-bit values that enter the audio processing system together, unlike the traditional mode, so the two channels can be output synchronously, aligned with the data frames at input.
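A small illustration of this frame-level synchronization, under the same assumed high/low packing as before (names are hypothetical):

```python
# Hypothetical illustration: both channel samples travel in one 32-bit frame,
# so unpacking them yields a naturally frame-aligned pair per tick -- no
# separate synchronization step between two source registers is required.

def unpack_frame(word):
    """Return the (channel_1, channel_2) pair carried by one frame."""
    return (word >> 16) & 0xFFFF, word & 0xFFFF

frames = [0x11112222, 0x33334444]
pairs = [unpack_frame(w) for w in frames]
# Each tuple was produced from a single input frame, so the two output
# ports can emit their halves in the same output frame.
```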
In this embodiment, further comprising:
the audio channel outputs 8bit data;
distributing four 8bit bandwidths of the digital audio data stream from high order to low order for bearing data of four audio channels;
splitting the data of the digital audio data stream frame by frame, and storing the frame in the stack by taking 8 bits as a unit;
four paths of 8bit data streams are obtained according to the addressing pointers;
wherein each 8bit data stream corresponds to an output end;
four paths of 8bit data streams are respectively output at four output ends in the same frame.
Specifically, based on the existing 32-bit audio processing system, one frame of data can carry two 16-bit sound source data, or four 8-bit sound source data. Similarly, a 64-bit audio processing system can carry four 16-bit sound source data. Through this scheme, future scenarios of insufficient channels can be handled to a certain extent, expanding more channels in a short time.
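The combinations listed above (32/16 giving 2 channels, 32/8 giving 4, 64/16 giving 4) all follow one rule: a W-bit bus word carries W // w samples of width w. A hedged generalization, with illustrative names:

```python
# Hypothetical generalization of the scheme: a W-bit word carries
# W // w channels of w-bit samples, packed highest-order field first.

def pack(samples, w, W):
    """Pack exactly W // w samples of width w into one W-bit word."""
    assert len(samples) == W // w
    word = 0
    for s in samples:
        word = (word << w) | (s & ((1 << w) - 1))
    return word

def unpack(word, w, W):
    """Recover the samples, highest-order field first."""
    mask = (1 << w) - 1
    n = W // w
    return [(word >> (w * (n - 1 - i))) & mask for i in range(n)]

# 64-bit system carrying four 16-bit sound sources, as the text suggests:
assert unpack(pack([1, 2, 3, 4], 16, 64), 16, 64) == [1, 2, 3, 4]
# 32-bit system carrying four 8-bit sound sources:
assert unpack(pack([5, 6, 7, 8], 8, 32), 8, 32) == [5, 6, 7, 8]
```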
Fig. 2 is a block diagram of an audio stream processing apparatus according to one or more embodiments of the present application.
The audio stream processing apparatus as shown in fig. 2 includes: the device comprises a data input module, a data processing module, a data storage module and a data reading module;
the data input module is used for acquiring a digital audio data stream at an input end;
the data processing module is used for splitting the digital audio data stream according to the audio channel corresponding to the digital audio data stream and storing the digital audio data stream in a frame queue in a stack;
the data storage module is used for setting an addressing pointer of a corresponding audio channel according to a stack storage address of the digital audio data stream;
and the data reading module is used for reading the frame queue data according to the addressing pointer and outputting the frame queue data at the output end according to the corresponding audio channel.
It should be noted that although the system discloses only the data input module, the data processing module, the data storage module, and the data reading module, the present application is not limited to these basic functional modules. Rather, one skilled in the art may add one or more functional modules to them to form countless embodiments or technical solutions; that is, the system is open rather than closed, and the scope of the claims should not be considered limited to the disclosed basic functional modules merely because this embodiment discloses only them.
Fig. 3 is a schematic diagram of an audio stream processing system architecture according to an embodiment of the present application.
As shown in fig. 3, a storage space is opened up inside the audio stream processing system (audio processing system) to split the input sound source data, after which addresses are allocated for queue storage. New data streams are then formed through different pointer addressing and output to the two output ends to form channels. For example, the storage space for data splitting can be set outside the DSP system, or opened up inside the DSP system (excluding the DSP computing core).
The system performs data stacking and transmission allocation for multichannel audio streams. It multiplexes the digital channels of the DSP chip through double address pointer indexing and an intermediate buffer register, and by appropriately adding output ends connected to analog amplifier chips, it expands the output channels of the audio controller (audio processing system). Taking a DSP with four I2S digital signal outputs as an example: in the traditional design, each I2S line transmits two audio signals, which are fed as inputs to different power amplifier chips and output after DAC conversion; under that software architecture the audio controller outputs at most 8 channels. I2S digital audio signals typically use a data stack and a transport protocol. On the input side, one I2S line transmits two channels of audio signal data (sound source data), comprising the high-order and low-order data of the audio data. On the output side, an address pointer p points to the data address currently to be read and reads data in a defined addressing order, which the developer can customize.
Fig. 4 is a schematic diagram of an audio stream splitting storage method according to an embodiment of the present application.
As shown in fig. 4, first, the high 16-bit data and the low 16-bit data of the input audio stream (e.g., a sound source data stream) are stored in the same data stack. In the input data, the high-order data is converted from the low-order data by some coding mode; for example, if the low-order data is "01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10", the high-order data is "02 04 06 08 0A 0C 0E 10 12 14 16 18 1A 1C 1E 20", and the relationship between them can be expressed as "high-order data = low-order data << 1". In this data-stacking mode, more data can be stored simultaneously with the data bandwidth unchanged.
Next, an intermediate buffer register is introduced to store the input audio data, while the high-order data is decoded into actual data and stored in the intermediate buffer register; for example, with low-order data "01 02 03 04 05 06 07 08 09 0A 0B 0C 0D 0E 0F 10" and high-order data "02 04 06 08 0A 0C 0E 10 12 14 16 18 1A 1C 1E 20", decoding the high-order data back into the low-order data can be expressed as "low-order data = high-order data >> 1".
The storage unit of the intermediate buffer register itself may be 32 bits wide, but this does not prevent processing the data in a 16-bit mode of operation.
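The shift-based coding in the example above can be sketched directly. Note that, as written, the "<< 1" coding is only losslessly reversible while the top bit of each value is clear; the patent presents the shift as just one possible coding mode, and the function names here are illustrative.

```python
# Hypothetical sketch of the shift coding from the example:
# high-order data = low-order data << 1, restored in the intermediate
# buffer register by the inverse: low-order data = high-order data >> 1.
# (Lossless only while the top bit of each value is clear.)

low_data = [0x01, 0x02, 0x03, 0x04]

def encode(low):
    """Derive the high-order data from the low-order data (one coding mode)."""
    return [(b << 1) & 0xFF for b in low]

def decode(high):
    """Restore the actual data when moving it into the buffer register."""
    return [b >> 1 for b in high]

high_data = encode(low_data)   # [0x02, 0x04, 0x06, 0x08]
restored = decode(high_data)   # back to [0x01, 0x02, 0x03, 0x04]
```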
Fig. 5 is a schematic diagram of a method for extracting data expansion channels from an audio stream according to an embodiment of the present application.
As shown in fig. 5, when reading data, since the data of several channels must be read from the same register and distributed to the outputs, a double address pointer index is used: one pointer tracks which channel's data is currently being read, pointing to the start address of each channel's data block, with a range that spans between channels; the other pointer tracks which sample within the channel is currently being read, pointing to the address of the data inside each channel, with a range that spans within a channel.
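A toy sketch of the double address pointer index described here (the data values and the dict-based "register" are purely illustrative assumptions):

```python
# Hypothetical sketch of the double address pointer index: one pointer
# selects which channel's data block is being read (its value jumps
# between channel start addresses), while the second pointer walks the
# samples inside that channel.

register = {
    0: [10, 11, 12],   # channel 0 data block (start address 0)
    1: [20, 21, 22],   # channel 1 data block (start address 1)
}

def read(channel_ptr, sample_ptr):
    """channel_ptr spans *between* channels; sample_ptr spans *within* one."""
    return register[channel_ptr][sample_ptr]

# Interleaved readout for output distribution across the two ports:
out = [read(ch, i) for i in range(3) for ch in (0, 1)]
# out == [10, 20, 11, 21, 12, 22]
```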
As shown in fig. 6, in another specific embodiment, taking 32-bit audio data as an example, this embodiment does not replace the original 32-bit audio data with 16-bit low-rate audio data; instead, stacking and coding/decoding modes are used to store more data in less memory space. The shift operation in this embodiment is only one possible coding/decoding mode, and the method is not limited thereto.
In the traditional scheme, one input corresponds to one output: one I2S line drives a single-channel speaker, and driving speakers on more channels requires more DSP chip resources.
The scheme of this embodiment drives speakers on more channels while keeping the hardware DSP resources unchanged. When the audio data bandwidth is 32 bits (with 8-bit, 16-bit, or 24-bit sound sources), data transmission conventionally fills the word starting only from the low-order bits.
The data-stacking mode provided in this embodiment compresses the storage space of the data, using a 16-bit space to store what was 32-bit data. Denote the high-order information of the original data as X, the low-order information as Y, the converted data as Z, and the data encoding matrix as H (which may be customized by the developer and is not limited here). Then [X, Y] is converted to [Z] for transmission and storage, [Z] = [X, Y]·H, which reduces the number of channels and the memory required during transmission. During replay, [Z] is restored to [X, Y] to guarantee playback of the original data: [X, Y] = [Z]·H⁻¹.
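The matrix coding can be sketched with a concrete invertible H. The 2×2 matrix below is only an illustration; the patent leaves H developer-defined, and the sample values are invented for the example.

```python
# Hypothetical sketch of the matrix coding: [Z] = [X, Y] * H for
# transmission/storage, and [X, Y] = [Z] * H^-1 on replay.
# H here is an arbitrary invertible example, not taken from the patent.

H = [[2, 1],
     [1, 1]]          # det = 1, so H is invertible over the integers
H_inv = [[1, -1],
         [-1, 2]]     # exact inverse of H

def matvec(v, m):
    """Row vector v times a 2x2 matrix m."""
    return [v[0] * m[0][0] + v[1] * m[1][0],
            v[0] * m[0][1] + v[1] * m[1][1]]

xy = [7, 3]           # [X, Y]: high-order and low-order information
z = matvec(xy, H)     # encoded [Z] for transmission and storage
assert matvec(z, H_inv) == xy   # replay restores the original data exactly
```

Choosing H with determinant ±1 keeps the round trip exact in integer arithmetic, which matters if the coded values must fit back into fixed-width audio words.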
Taking 32-bit audio data as an example, with the approach of this embodiment the data transmission changes as follows: even with only one I2S line, the output can grow from the original single channel to two channels, or from 2 channels to 4.
The I2S signal output from the DSP is stored and converted through a buffer and finally delivered to an AMP (amplifier) for output. When reading data, since the data of several channels must be read from the same register and distributed to the outputs, the double address pointer index is used: one pointer tracks which channel's data is currently being read, pointing to each channel's data start address, with a range spanning between channels; the other pointer tracks the sample currently being read within the channel, pointing to the address of the data inside each channel, with a range spanning within a channel.
In this scheme, the audio signal is encoded and decoded through a data transformation: the data is compressed before storage at the input DSP end, and the audio data is restored after output from the DSP end and passed to the AMP end for amplification. Most in-car sound systems today use more than 12 speakers and therefore need more output channels. When the DSP's output channels are insufficient, the traditional scheme generally connects speakers in parallel, which makes frequency division and tuning very difficult; this scheme instead uses double-address-pointer (or multi-pointer) indexing and an intermediate buffer register on top of the original DSP.
Fig. 7 is a block diagram of an electronic device according to one or more embodiments of the present application.
As shown in fig. 7, the present application provides an electronic device including: a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
the memory stores a computer program which, when executed by the processor, causes the processor to perform the steps of a method of processing an audio stream.
The application also provides a computer readable storage medium storing a computer program executable by an electronic device, which when run on the electronic device causes the electronic device to perform the steps of a method of audio stream processing.
The present application also provides a vehicle including:
the electronic equipment is used for realizing the steps of the audio stream processing method;
a processor that runs a program, wherein when the program runs, the steps of the audio stream processing method are performed on data output from the electronic device;
a storage medium storing a program that, when executed, performs steps of an audio stream processing method on data output from an electronic device.
The communication bus mentioned above for the electronic device may be a Peripheral Component Interconnect (PCI) bus or an Extended Industry Standard Architecture (EISA) bus, etc. The communication bus may be classified as an address bus, a data bus, a control bus, or the like. For ease of illustration, only one thick line is shown in the figures, but this does not mean that there is only one bus or only one type of bus.
The electronic device includes a hardware layer, an operating system layer running on top of the hardware layer, and an application layer running on top of the operating system. The hardware layer includes hardware such as a central processing unit (CPU, Central Processing Unit), a memory management unit (MMU, Memory Management Unit), and a memory. The operating system may be any one or more computer operating systems that implement electronic device control via processes, such as a Linux operating system, a Unix operating system, an Android operating system, an iOS operating system, or a Windows operating system. In addition, in the embodiment of the present application, the electronic device may be a handheld device such as a smart phone or a tablet computer, or an electronic device such as a desktop computer or a portable computer, which is not particularly limited in the embodiment of the present application.
The execution body controlled by the electronic device in the embodiment of the application can be the electronic device or a functional module in the electronic device, which can call a program and execute the program. The electronic device may obtain firmware corresponding to the storage medium, where the firmware corresponding to the storage medium is provided by the vendor, and the firmware corresponding to different storage media may be the same or different, which is not limited herein. After the electronic device obtains the firmware corresponding to the storage medium, the firmware corresponding to the storage medium can be written into the storage medium, specifically, the firmware corresponding to the storage medium is burned into the storage medium. The process of burning the firmware into the storage medium may be implemented by using the prior art, and will not be described in detail in the embodiment of the present application.
The electronic device may further obtain a reset command corresponding to the storage medium, where the reset command corresponding to the storage medium is provided by the vendor, and the reset commands corresponding to different storage media may be the same or different, which is not limited herein.
At this time, the storage medium of the electronic device is a storage medium in which the corresponding firmware is written, and the electronic device may respond to a reset command corresponding to the storage medium in which the corresponding firmware is written, so that the electronic device resets the storage medium in which the corresponding firmware is written according to the reset command corresponding to the storage medium. The process of resetting the storage medium according to the reset command may be implemented in the prior art, and will not be described in detail in the embodiments of the present application.
For convenience of description, the above devices are described as being functionally divided into various units and modules. Of course, the functions of the units, modules may be implemented in one or more pieces of software and/or hardware when implementing the application.
It will be understood by those skilled in the art that all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs, unless defined otherwise. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
For the purposes of simplicity of explanation, the methodologies are shown and described as a series of acts; it is to be understood and appreciated by those of ordinary skill in the art that the methodologies are not limited by the order of the acts, as some acts may occur in other orders or concurrently. Further, those skilled in the art will appreciate that the embodiments described in the specification are presently preferred embodiments, and that the acts involved are not necessarily required by the embodiments of the application.
From the above description of embodiments, it will be apparent to those skilled in the art that the present application may be implemented in software plus a necessary general hardware platform. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a storage medium, such as a ROM/RAM, a magnetic disk, an optical disk, etc., including several instructions for causing a computer device (which may be a personal computer, a server or a network device, etc.) to perform the method according to the embodiments or some parts of the embodiments of the present application.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present application, and not for limiting the same; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the application.
Claims (10)
1. An audio stream processing method, characterized in that the audio stream processing method comprises:
the input end obtains a digital audio data stream;
splitting the digital audio data stream according to the audio channel corresponding to the digital audio data stream and storing the digital audio data stream in a frame queue in a stack;
setting an addressing pointer of a corresponding audio channel according to a stack storage address of the digital audio data stream;
and reading frame queue data according to the addressing pointer and outputting the frame queue data at an output end according to the corresponding audio channel.
2. The audio stream processing method according to claim 1, wherein the digital audio data stream has a 32bit bandwidth:
the audio channel outputs 16bit data;
and distributing high 16bit and low 16bit of the digital audio data stream for bearing 16bit data of two paths of audio channels.
3. The audio stream processing method according to claim 2, wherein splitting the digital audio data stream and storing in-stack frame queues comprises:
splitting the data of the digital audio data stream frame by frame, and storing the frame in a stack by taking 16 bits as a unit.
4. The audio stream processing method according to claim 3, wherein said reading the frame queue data according to the addressing pointer and outputting at the output end according to the corresponding audio channel comprises:
outputting two paths of 16bit data streams;
wherein each 16bit data stream corresponds to an output terminal.
5. The audio stream processing method according to claim 4, wherein the two 16bit data streams include:
two paths of 16bit data streams are respectively output at two output ends in the same frame.
6. The audio stream processing method according to claim 5, further comprising:
the audio channel outputs 8bit data;
distributing four 8bit bandwidths of the digital audio data stream from high order to low order for bearing data of four audio channels;
splitting the data of the digital audio data stream frame by frame, and storing the frame in a stack by taking 8 bits as a unit;
obtaining four paths of 8bit data streams according to the addressing pointer;
wherein each 8bit data stream corresponds to an output end;
four paths of 8bit data streams are respectively output at four output ends in the same frame.
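The four-channel 8-bit allocation of claim 6 can be sketched analogously (an illustrative example, not part of the claims): the four 8-bit bandwidths of each 32-bit frame are assigned from high order to low order to four audio channels.

```python
def split_to_four_channels(stream_32bit):
    """Allocate the four 8-bit bandwidths of each 32-bit frame,
    from high order to low order, to four audio channels."""
    channels = [[] for _ in range(4)]
    for word in stream_32bit:
        for ch in range(4):
            shift = 8 * (3 - ch)             # channel 0 gets the highest byte
            channels[ch].append((word >> shift) & 0xFF)
    return channels
```

For the frame `0xAABBCCDD`, the four channels receive `0xAA`, `0xBB`, `0xCC` and `0xDD` respectively.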
7. An audio stream processing apparatus, characterized in that the audio stream processing apparatus comprises:
the data input module is used for acquiring a digital audio data stream at an input end;
the data processing module is used for splitting the digital audio data stream according to the audio channel corresponding to the digital audio data stream and storing the digital audio data stream in an in-stack frame queue;
the data storage module is used for setting an addressing pointer of a corresponding audio channel according to the stack storage address of the digital audio data stream;
and the data reading module is used for reading the frame queue data according to the addressing pointer and outputting the frame queue data at an output end according to the corresponding audio channel.
8. An electronic device, comprising: the device comprises a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory are communicated with each other through the communication bus;
the memory has stored therein a computer program which, when executed by the processor, causes the processor to perform the steps of the audio stream processing method of any of claims 1 to 6.
9. A computer readable storage medium, characterized in that a computer program executable by an electronic device is stored, which, when run on the electronic device, causes the electronic device to perform the steps of the audio stream processing method according to any one of claims 1 to 6.
10. A vehicle, characterized by comprising:
electronic device for implementing the steps of the audio stream processing method of any one of claims 1 to 6;
a processor that runs a program, wherein when the program runs, the steps of the audio stream processing method according to any one of claims 1 to 6 are performed on data output from the electronic device;
a storage medium storing a program that, when executed, performs the steps of the audio stream processing method according to any one of claims 1 to 6 on data output from an electronic device.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310453541.9A CN116600234A (en) | 2023-04-25 | 2023-04-25 | Audio stream processing method and device, electronic equipment, storage medium and vehicle |
Publications (1)
Publication Number | Publication Date |
---|---|
CN116600234A true CN116600234A (en) | 2023-08-15 |
Family
ID=87607037
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310453541.9A Pending CN116600234A (en) | 2023-04-25 | 2023-04-25 | Audio stream processing method and device, electronic equipment, storage medium and vehicle |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN116600234A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||