CN111782177A - Rtos-based audio stream output method - Google Patents

Rtos-based audio stream output method

Info

Publication number
CN111782177A
CN111782177A
Authority
CN
China
Prior art keywords
layer
audio
rtos
voice data
audio stream
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010663697.6A
Other languages
Chinese (zh)
Other versions
CN111782177B (en)
Inventor
李重
王利平
权良民
程龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Anhui Xinzhi Technology Co ltd
Original Assignee
Anhui Xinzhi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Anhui Xinzhi Technology Co ltd filed Critical Anhui Xinzhi Technology Co ltd
Priority to CN202010663697.6A priority Critical patent/CN111782177B/en
Publication of CN111782177A publication Critical patent/CN111782177A/en
Application granted granted Critical
Publication of CN111782177B publication Critical patent/CN111782177B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output

Abstract

The invention discloses an rtos-based audio stream output method. The application layer calls the function interfaces encapsulated by the I/O Device Framework layer and passes continuous voice data to the resample layer below it for resampling; the Audio Framework layer reads the resampled data and sends it to the BSP Audio Driver layer, which finally calls the HW Driver to send the data out through the underlying hardware interface. The invention improves the real-time performance and stability of audio stream output on a real-time operating system and reduces system coupling; through the multi-layer design, each layer is handled by an independent task, so that application-layer data can be output to the underlying hardware interface stably and in real time. The method not only simplifies the data processing flow for application development, but also makes driver development more modular, improves the portability of the whole audio driver, and facilitates extension and maintenance of the service layer.

Description

Rtos-based audio stream output method
Technical Field
The invention relates to the technical field of audio stream processing, in particular to an audio stream output method based on rtos.
Background
An rtos differs from linux, android and windows systems: it has a small footprint and simple functionality, and is commonly used in fields such as the Internet of Things and industrial control. Because of its small software and hardware scale, limited computing power and narrow application range, the support it offers for new technologies, new architectures and new applications is very limited. At present, audio stream processing on an rtos basically has application-layer code operate the underlying hardware directly, without any layering; developers must understand both the upper-layer software and the underlying hardware mechanisms, development is difficult, the code is not portable, and having application developers operate the underlying hardware directly poses a great risk to the system. This is unlike linux, which has the alsa framework to structure output control of the entire audio stream.
In order to overcome the above drawbacks, the following technical solution is provided.
Disclosure of Invention
The object of the invention is to provide an rtos-based audio stream output method that takes the linux audio framework as a reference and introduces it into the field of embedded real-time operating systems, so that rtos application developers can directly call an upper-layer interface without paying any attention to the underlying audio processing logic, and voice data can be output stably and in real time.
The technical problems to be solved by the invention are as follows:
How to provide, by an effective method, stable audio stream output under an rtos. At present, audio stream output under an rtos basically has the application layer send the audio stream directly to the underlying hardware without any layering; developers must understand both the upper-layer software and the underlying hardware mechanisms, development is difficult, the code is not portable, and having application developers operate the underlying hardware directly poses a great risk to the system.
The object of the invention can be achieved by the following technical solution:
an rtos-based audio stream output method, comprising the steps of:
Step one: the application layer passes continuous voice data to the resample layer below it by calling the function interfaces encapsulated by the I/O Device Framework layer, such as open, start and write;
Step two: after receiving the application-layer data, the resample layer decides whether to perform resampling according to the configuration parameters, and then writes the data into the allocated buffer 0;
Step three: the Audio Framework layer reads the voice data from buffer 0, writes it into buffer 1, checks whether the ping-pong buffer is empty, and moves the data in buffer 1 into the ping-pong buffer;
Step four: the BSP Audio Driver layer reads voice data from the ping-pong buffer, writes it into the HW Driver layer, triggers the Audio Framework layer to move the data in buffer 1 into the ping-pong buffer, and at the same time obtains and configures parameter information such as volume and audio path;
Step five: the HW Driver layer writes the final voice data into the DMA and sends it out through the underlying audio interface, completing the output of the whole audio stream.
Further, the audio stream refers to a pcm-format data stream, and rtos refers to a real-time operating system.
Further, the I/O Device Framework layer mainly implements an I/O device management interface so that the voice stream can be written into an audio device. Some mainstream rtos implementations already provide a well-encapsulated I/O Device Framework interface, such as open, start and write, which can be used directly.
Further, the resample layer mainly implements resampling of the voice stream and needs to register a pcm device to be opened by the I/O Device Framework layer. This layer has no platform dependency.
Further, the Audio Framework layer mainly constructs the whole audio driver framework and connects the resample layer with the BSP Audio Driver layer. This layer has no platform dependency.
Further, the BSP Audio Driver layer mainly transmits the voice stream to the HW Driver layer; this layer needs to register a sound card device, which is bound to the pcm device of step two. This layer is platform dependent, and initialization of the system clock and the sound card is implemented here.
Further, the HW Driver layer is the bottom layer; it mainly implements the audio controller driver and is platform dependent.
The invention has the beneficial effects that:
the invention is applied to how to stably and effectively output the audio stream of an embedded real-time operating system in real time, and is introduced into the field of the embedded real-time operating system according to the existing framework of a linux system. The method simplifies the data processing flow of application development, reduces the system coupling, simultaneously enables the drive development to be more modularized, improves the transportability of the whole audio drive, is favorable for the convenience of service layer expansion and maintenance, and reduces the project development period.
Drawings
In order to facilitate understanding for those skilled in the art, the present invention will be further described with reference to the accompanying drawings;
FIG. 1 is a block diagram of the multi-layer software output flow of the present invention.
Detailed Description
The following provides a detailed description of the preferred embodiments of the present invention with reference to the accompanying drawings:
As shown in FIG. 1, an rtos-based audio stream output method includes the following steps:
Step one: the application layer passes continuous voice data to the resample layer below it by calling the function interfaces encapsulated by the I/O Device Framework layer, such as open, start and write;
Here the audio stream refers to a pcm-format data stream, and rtos refers to a real-time operating system;
Furthermore, the I/O Device Framework layer mainly implements an I/O device management interface so that the voice stream can be written into the registered audio device; some mainstream rtos implementations already provide a well-encapsulated I/O Device Framework interface, such as open, start and write, which can be adapted and reused; note that if the rtos uses a priority-based preemptive kernel, write must be blocking, otherwise other tasks will never get the CPU while the application loops writing the audio stream; a minimal application-side sketch of this call pattern is given below;
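The following C sketch illustrates the application-layer write loop of step one. The patent does not define concrete interface names, so audio_dev_open, audio_dev_start, audio_dev_write and audio_dev_close, as well as the device name "sound0" and the chunk size, are hypothetical placeholders; only the open/start/blocking-write call pattern comes from the text above.

/* Minimal sketch of the application-layer write loop described in step one.
 * The device-framework calls are hypothetical placeholders for whatever
 * I/O Device Framework interface the target rtos provides. */
#include <stddef.h>
#include <stdint.h>

typedef struct audio_dev audio_dev_t;            /* opaque device handle, assumed */

audio_dev_t *audio_dev_open(const char *name);   /* assumed framework calls */
int  audio_dev_start(audio_dev_t *dev);
int  audio_dev_write(audio_dev_t *dev, const void *buf, size_t len); /* blocking */
void audio_dev_close(audio_dev_t *dev);

/* Application task: push continuous pcm voice data down to the resample layer. */
void app_audio_task(const int16_t *pcm, size_t total_bytes)
{
    audio_dev_t *dev = audio_dev_open("sound0"); /* registered pcm device, name assumed */
    if (dev == NULL)
        return;

    audio_dev_start(dev);

    size_t off = 0;
    const size_t chunk = 1024;                   /* write in fixed-size chunks */
    while (off < total_bytes) {
        size_t n = (total_bytes - off < chunk) ? (total_bytes - off) : chunk;
        /* Blocking write: on a priority-based preemptive kernel this yields the
         * CPU to other tasks whenever the downstream buffers are full. */
        if (audio_dev_write(dev, (const uint8_t *)pcm + off, n) < 0)
            break;
        off += n;
    }

    audio_dev_close(dev);
}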
Step two: after receiving the application-layer data, the resample layer decides whether to perform resampling according to the configuration parameters, and then writes the data into the allocated buffer 0;
The resample layer mainly implements resampling of the voice stream; a pcm device needs to be registered for the I/O Device Framework layer to open, and the size of buffer 0 needs to be configured dynamically according to the transmission rate of the actual audio interface; this layer has no platform dependency and is therefore easy to port between platforms;
Furthermore, resampling of the voice stream can take the linux alsa framework as a reference, and simple resampling algorithms that are easy to debug can be found on the internet; one such algorithm is sketched below;
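The patent does not prescribe a particular resampling algorithm; the following is a minimal linear-interpolation sketch of the kind of simple algorithm the text alludes to, for mono 16-bit pcm. The function name, fixed-point scheme and parameter layout are illustrative assumptions, not the claimed method.

/* Minimal linear-interpolation resampler sketch (mono, 16-bit pcm). Only an
 * illustration of a "simple resampling algorithm", not the claimed method. */
#include <stddef.h>
#include <stdint.h>

/* Convert in_len samples at in_rate into out at out_rate.
 * Returns the number of output samples written (caller sizes `out` accordingly). */
size_t resample_linear_s16(const int16_t *in, size_t in_len, uint32_t in_rate,
                           int16_t *out, uint32_t out_rate)
{
    if (in_len < 2 || in_rate == 0 || out_rate == 0)
        return 0;

    size_t out_len = (size_t)((uint64_t)in_len * out_rate / in_rate);
    for (size_t i = 0; i < out_len; i++) {
        /* Position of output sample i in the input stream, 16.16 fixed point. */
        uint64_t pos  = (uint64_t)i * in_rate * 65536u / out_rate;
        size_t   idx  = (size_t)(pos >> 16);
        uint32_t frac = (uint32_t)(pos & 0xFFFFu);
        if (idx >= in_len - 1) {
            idx  = in_len - 2;
            frac = 0xFFFFu;
        }

        int32_t a = in[idx], b = in[idx + 1];
        /* Interpolate between the two neighbouring input samples. */
        out[i] = (int16_t)(a + (int32_t)(((int64_t)(b - a) * (int32_t)frac) >> 16));
    }
    return out_len;
}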
Step three: the Audio Framework layer reads the voice data from buffer 0, writes it into buffer 1, checks whether the ping-pong buffer is empty, and moves the data in buffer 1 into the ping-pong buffer;
The Audio Framework layer mainly constructs the whole audio driver framework and connects the resample layer with the BSP Audio Driver layer; the size of buffer 1 must match buffer 0, and the size of the ping-pong buffer is tied to the size of one hardware DMA transfer; this layer has no platform dependency and is therefore easy to port between platforms; a sketch of the buffer 1 to ping-pong hand-off is given below;
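A minimal sketch of the hand-off from buffer 1 into the ping-pong buffer, as performed by the Audio Framework layer. The slot size and the global names pp_data and pp_full are assumptions for illustration; the patent only fixes the data flow buffer 0 -> buffer 1 -> ping-pong buffer.

/* Sketch of the buffer 1 -> ping-pong hand-off of step three. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define PP_SLOT_SIZE 512u                 /* one half = one DMA transfer, size assumed */

uint8_t       pp_data[2][PP_SLOT_SIZE];   /* the two halves: "ping" and "pong" */
volatile bool pp_full[2];                 /* half holds data ready for the BSP layer */
static int    pp_write_idx;               /* next half the framework fills */

/* Move one DMA-transfer-sized block from buffer 1 into an empty ping-pong half.
 * Returns true if a half was filled, false if both halves are still full. */
bool audio_fw_feed_pingpong(const uint8_t *buf1, size_t len)
{
    int idx = pp_write_idx;
    if (pp_full[idx] || len < PP_SLOT_SIZE)
        return false;                     /* downstream has not consumed it yet */

    memcpy(pp_data[idx], buf1, PP_SLOT_SIZE);
    pp_full[idx] = true;                  /* publish the half to the BSP Audio Driver task */
    pp_write_idx = idx ^ 1;               /* alternate ping and pong */
    return true;
}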
Step four: the BSP Audio Driver layer reads voice data from the ping-pong buffer, writes it into the HW Driver layer, triggers the Audio Framework layer to move the data in buffer 1 into the ping-pong buffer, and at the same time obtains and configures parameter information such as volume and audio path;
The BSP Audio Driver layer mainly transmits the voice stream to the HW Driver layer; a sound card device needs to be registered in this layer and is bound to the pcm device of step two; this layer is platform dependent, and configuration of parameters such as volume and audio path, as well as initialization of the system clock and the sound card, are implemented here;
Furthermore, each time data is written into the HW Driver layer, the Audio Framework layer must be triggered to move the data in buffer 1 into the ping-pong buffer, which guarantees that at any moment at least one half of the ping-pong buffer holds data;
Furthermore, as for configuration of parameters such as volume and audio path: when the system is equipped with an externally controllable DAC device, that DAC can be used to adjust the volume and to select different audio paths for voice output; a sketch of this consume-and-refill cycle is given below;
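A sketch of the BSP Audio Driver task for step four, continuing the ping-pong globals from the previous sketch. The hw_driver_write, dac_set_volume, dac_select_path and audio_fw_request_refill calls are hypothetical placeholders for the platform-dependent HW Driver and codec interfaces; only the consume, write, trigger-refill cycle and the volume/audio-path configuration come from the text.

/* Sketch of the BSP Audio Driver task of step four. */
#include <stdbool.h>
#include <stdint.h>

#define PP_SLOT_SIZE 512u                       /* one DMA transfer, size assumed */

extern uint8_t       pp_data[2][PP_SLOT_SIZE];  /* ping-pong halves (Audio Framework layer) */
extern volatile bool pp_full[2];                /* half holds data ready to send */

/* Hypothetical platform-dependent calls. */
int  hw_driver_write(const uint8_t *buf, uint32_t len);   /* queues one DMA block */
void dac_set_volume(uint8_t vol);                          /* external DAC, if present */
void dac_select_path(int path);
void audio_fw_request_refill(void);   /* ask the framework to move buffer 1 -> ping-pong */

void bsp_audio_task(void)
{
    int idx = 0;

    /* Parameter configuration: volume and audio path via the external DAC. */
    dac_set_volume(80);
    dac_select_path(0);

    for (;;) {
        if (pp_full[idx]) {
            /* Write one DMA-sized block down to the HW Driver layer... */
            hw_driver_write(pp_data[idx], PP_SLOT_SIZE);
            /* ...then release this half and trigger the Audio Framework layer to
             * refill it, so that one half always holds data. */
            pp_full[idx] = false;
            audio_fw_request_refill();
            idx ^= 1;                           /* alternate between ping and pong */
        }
        /* A real task would block on a semaphore or DMA-complete event here
         * rather than busy-poll. */
    }
}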
Step five: the HW Driver layer writes the final voice data into the DMA and sends it out through the underlying audio interface, completing the output of the whole audio stream;
The HW Driver layer is the bottom layer and mainly implements the audio controller driver; this layer is platform dependent, and drivers differ considerably between controllers; a sketch of the DMA hand-off is given below;
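A sketch of the HW Driver side of step five: queuing one block of voice data for DMA to the audio peripheral. The dma_* and i2s_* primitives and the notification hook are hypothetical; as the text notes, the real code is controller-specific.

/* Sketch of step five in the HW Driver layer: push one block out via DMA. */
#include <stdint.h>

/* Hypothetical controller/DMA primitives. */
void dma_set_source(uintptr_t addr, uint32_t len);   /* memory -> peripheral transfer */
void dma_start(void);
void i2s_tx_enable(void);

/* Hook called from the DMA transfer-complete interrupt; in a full design it
 * would release the event the BSP Audio Driver task blocks on (see the
 * previous sketch), so the next ping-pong half gets sent and refilled. */
extern void audio_dma_complete_notify(void);

/* Queue one block for output on the underlying audio interface. */
void hw_driver_write_dma(const uint8_t *buf, uint32_t len)
{
    dma_set_source((uintptr_t)buf, len);  /* point the DMA at this ping-pong half */
    i2s_tx_enable();                      /* make sure the audio interface is running */
    dma_start();                          /* data is clocked out without CPU copies */
}

/* Called from the DMA transfer-complete interrupt. */
void dma_irq_handler(void)
{
    audio_dma_complete_notify();          /* hand control back up the stack */
}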
the working principle is as follows: in the field of real-time operating systems, due to the limitations of embedded software and hardware environments, the real-time operating systems have very limited support for audio function modules, and it is difficult to find an alsa framework similar to linux to manage the whole audio, and only based on the current rtos environment, a set of audio framework suitable for rtos is developed to manage the stable output of the whole audio stream. Based on rtos multitasking, a multi-layer driving software architecture is correspondingly designed to manage the whole audio stream. Each layer uses independent task processing to improve the real-time performance of the system, so that rtos application developers can directly call an upper layer interface to stably output voice data in real time without paying attention to the bottom layer audio processing logic. The method simplifies the data processing flow of application development, reduces the system coupling, simultaneously enables the drive development to be more modularized, improves the transportability of the whole audio drive, is favorable for the convenience of service layer expansion and maintenance, and reduces the project development period.
The foregoing is merely exemplary and illustrative of the present invention and various modifications, additions and substitutions may be made by those skilled in the art to the specific embodiments described without departing from the scope of the invention as defined in the following claims.

Claims (7)

1. An rtos-based audio stream output method, comprising the steps of:
step one: the application layer transmits continuous voice data to the resample layer below it by calling the function interfaces encapsulated by the I/O Device Framework layer, the function interfaces comprising open, start and write;
step two: after receiving the voice data from the application layer, the resample layer carries out resampling according to the configuration parameters and then writes the voice data into the allocated buffer 0;
step three: the Audio Framework layer reads the voice data from buffer 0, writes it into buffer 1, checks whether the ping-pong buffer is empty, and moves the voice data in buffer 1 into the ping-pong buffer;
step four: the BSP Audio Driver layer reads voice data from the ping-pong buffer, writes it into the HW Driver layer, triggers the Audio Framework layer to move the voice data in buffer 1 into the ping-pong buffer, and at the same time obtains and configures the volume and audio path parameter information;
step five: the HW Driver layer writes the final voice data into the DMA and sends it out through the underlying audio interface to complete the output of the whole audio stream.
2. The rtos-based audio stream output method according to claim 1, wherein the audio stream is a pcm format data stream; rtos is a real-time operating system.
3. The rtos-based audio stream output method according to claim 1, wherein the I/O Device Framework layer is used to implement an I/O device management interface and to write the voice stream into an audio device.
4. The rtos-based audio stream output method according to claim 1, wherein the resample layer is used to resample the audio stream and at the same time registers a pcm device to be opened by the I/O Device Framework layer.
5. The rtos-based Audio stream output method according to claim 1, wherein the Audio Framework layer is used to construct the whole audio driver framework and to connect the resample layer and the BSP Audio Driver layer.
6. The rtos-based Audio stream output method according to claim 1, wherein the BSP Audio Driver layer is used to transmit the audio stream to the HW Driver layer; the BSP Audio Driver layer registers a sound card device, which is bound to the pcm device, and the BSP Audio Driver layer is further used to initialize the system clock and the sound card.
7. The rtos-based audio stream output method according to claim 1, wherein the HW Driver layer is the bottom layer, is used to implement the audio controller driver, and is platform dependent.
CN202010663697.6A 2020-07-10 2020-07-10 Rtos-based audio stream output method Active CN111782177B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010663697.6A CN111782177B (en) 2020-07-10 2020-07-10 Rtos-based audio stream output method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010663697.6A CN111782177B (en) 2020-07-10 2020-07-10 Rtos-based audio stream output method

Publications (2)

Publication Number Publication Date
CN111782177A true CN111782177A (en) 2020-10-16
CN111782177B CN111782177B (en) 2023-10-03

Family

ID=72768286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010663697.6A Active CN111782177B (en) 2020-07-10 2020-07-10 Rtos-based audio stream output method

Country Status (1)

Country Link
CN (1) CN111782177B (en)

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050100023A1 (en) * 2003-11-07 2005-05-12 Buckwalter Paul B. Isochronous audio network software interface
US20060074637A1 (en) * 2004-10-01 2006-04-06 Microsoft Corporation Low latency real-time audio streaming
EP2323292A1 (en) * 2009-11-12 2011-05-18 SiTel Semiconductor B.V. Resampler with automatic detecting and adjusting the resampling ratio
CN106293659A (en) * 2015-05-21 2017-01-04 阿里巴巴集团控股有限公司 A kind of audio frequency real-time processing method, device and intelligent terminal
US20170286048A1 (en) * 2016-03-29 2017-10-05 Shoumeng Yan Technologies for framework-level audio device virtualization
CN106528040A (en) * 2016-11-02 2017-03-22 福建星网视易信息系统有限公司 Method and apparatus for improving audio quality of android device
US20190018243A1 (en) * 2017-07-14 2019-01-17 Realwear, Incorporated Multi-process access to a single-process resource
CN107992282A (en) * 2017-11-29 2018-05-04 珠海市魅族科技有限公司 Audio data processing method and device, computer installation and readable storage devices

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
陈熹 (Chen Xi) et al.: "Design of an Embedded Linux Audio Driver for Wi-Fi Audio Applications", 电子设计工程 (Electronic Design Engineering), pages 95-100 *

Also Published As

Publication number Publication date
CN111782177B (en) 2023-10-03

Similar Documents

Publication Publication Date Title
Schmidt et al. A high-performance end system architecture for real-time CORBA
CA2284277C (en) Software implementation of modem on computer
US7080386B2 (en) Architecture with digital signal processor plug-ins for general purpose processor media frameworks
US6209041B1 (en) Method and computer program product for reducing inter-buffer data transfers between separate processing components
CN101268445B (en) Method and device for providing real-time threading service for application program of multi-core environment
US20020091826A1 (en) Method and apparatus for interprocessor communication and peripheral sharing
CN111666242B (en) Multi-channel communication system based on FT platform LPC bus
KR20080040104A (en) Application component communication apparatus of software communication architecture(sca)-based system, and method thereof
CN103019823B (en) Realize the message queue method that VxWorks communicates with Qt
KR20060053246A (en) Message-passing processor
CN111782177A (en) Rtos-based audio stream output method
CN102104508A (en) M module low-level (LL) driver layer realization method for M module-based local area network (LAN)-based extensions for instrumentation (LXI) equipment
Armand Give a process to your drivers
CN102567071A (en) Virtual serial port system and communication method for same
KR100605067B1 (en) Application controlled data flow between processing tasks
Pava et al. Real time platform middleware for transparent prototyping of haptic applications
Picioroaga Scalable and Efficient Middleware for Real-time Embedded Systems. A Uniform Open Service Oriented Microkernel Based Architecture
Fischer et al. Towards interprocess communication and interface synthesis for a heterogeneous real-time rapid prototyping environment
RU2511611C2 (en) Data flow network
Mendoza et al. Developing CAN based networks on RT-Linux
Scott et al. Hardware/software runtime environment for dynamically reconfigurable systems
Yin et al. Implementing high performance remote method invocation in cca
Subramonian et al. Design and implementation of norb
Acher et al. The TUM PCI/SCI Adapter
Yao Implementing an application programming interface for distributed adaptive computing systems

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant