CN116665625A - Audio signal processing method, device, electronic equipment and storage medium - Google Patents

Audio signal processing method, device, electronic equipment and storage medium

Info

Publication number
CN116665625A
CN116665625A · CN202310936012.4A
Authority
CN
China
Prior art keywords
audio signal
audio
signal
layer
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310936012.4A
Other languages
Chinese (zh)
Inventor
蒲小飞
徐开庭
万为侗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Seres Technology Co Ltd
Original Assignee
Chengdu Seres Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Seres Technology Co Ltd filed Critical Chengdu Seres Technology Co Ltd
Priority claimed from CN202310936012.4A
Publication of CN116665625A

Classifications

    • G — PHYSICS
    • G10 — MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H — ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H1/00 — Details of electrophonic musical instruments
    • G10H1/36 — Accompaniment arrangements
    • G10H1/361 — Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems
    • G10H1/366 — Recording/reproducing of accompaniment for use with an external source, e.g. karaoke systems with means for modifying or correcting the external signal, e.g. pitch correction, reverberation, changing a singer's voice
    • G10H1/0033 — Recording/reproducing or transmission of music for electrophonic musical instruments
    • G10H1/0041 — Recording/reproducing or transmission of music for electrophonic musical instruments in coded form
    • G10H1/0083 — Recording/reproducing or transmission of music for electrophonic musical instruments using wireless transmission, e.g. radio, light, infrared
    • G10H1/40 — Rhythm
    • G10H1/46 — Volume control
    • G10H2210/00 — Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H2210/005 — Musical accompaniment, i.e. complete instrumental rhythm synthesis added to a performed melody, e.g. as output by drum machines

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Details Of Audible-Bandwidth Transducers (AREA)
  • Stereophonic System (AREA)

Abstract

The application relates to the field of audio technology, and provides an audio signal processing method, an audio signal processing device, an electronic device, and a storage medium. The method comprises the following steps: receiving an audio signal through a wireless driving module in the kernel layer of the processing unit; copying the audio signal to obtain a first audio signal and a second audio signal that are identical; transmitting the first audio signal to an audio driving module, the audio driving module being located in the kernel layer of the processing unit and being configured to output received audio signals; transmitting the second audio signal to the software layer of the processing unit; the software layer acquiring an associated audio signal of the audio signal and copying it to obtain a first associated audio signal and a second associated audio signal that are identical; the software layer transmitting the first associated audio signal to the audio driving module; and the software layer processing the second audio signal and/or the second associated audio signal to obtain and output an audio processing signal. The method can reduce the playback delay of the audio signal.

Description

Audio signal processing method, device, electronic equipment and storage medium
Technical Field
The present application relates to the field of audio technologies, and in particular, to an audio signal processing method, an audio signal processing device, an electronic device, and a storage medium.
Background
As vehicles become increasingly intelligent, some current vehicle models are equipped with an in-vehicle karaoke system. The system can receive the user's singing voice through a wireless microphone, obtain the corresponding accompaniment from a karaoke application on the on-board unit or from a terminal application connected to the on-board unit, synchronize and score the singing voice and the accompaniment, output the accompanied singing voice through a power amplifier, and display information such as the score on the display interface of the karaoke system.
However, audio processing in existing in-vehicle karaoke systems is generally implemented on the Android system: the singing voice received by an interface driving unit in the system kernel layer must be sent up to the software layer, processed together with the accompaniment, and then sent back down to the audio driving module in the kernel layer, which forwards it to the power amplifier for playback. The information processing chain is therefore long, the delay is large, and the user experience suffers.
Disclosure of Invention
In view of the above, the embodiments of the present application provide an audio signal processing method, an apparatus, an electronic device, and a storage medium, so as to solve the prior-art problem that the audio signal transmission delay of an audio processing unit implemented on the Android system is relatively large.
In a first aspect of an embodiment of the present application, there is provided an audio signal processing method, which is executed by a processing unit, where the processing unit includes at least a kernel layer and a software layer, and includes:
receiving an audio signal through a wireless driving module of a kernel layer of the processing unit;
copying the audio signal to obtain a first audio signal and a second audio signal that are identical;
transmitting the first audio signal to an audio driving module, wherein the audio driving module is positioned at the kernel layer of the processing unit and is used for outputting the received audio signal;
transmitting the second audio signal to a software layer of the processing unit;
the software layer acquiring an associated audio signal of the audio signal, and copying the associated audio signal to obtain a first associated audio signal and a second associated audio signal that are identical;
the software layer transmits the first associated audio signal to the audio driving module;
and the software layer processes the second audio signal and/or the second associated audio signal to obtain and output an audio processing signal.
A second aspect of an embodiment of the present application provides an audio signal processing apparatus, including:
the receiving module is configured to receive the audio signal through the wireless driving module of the kernel layer of the processing unit;
the first copying module is configured to copy the audio signal to obtain a first audio signal and a second audio signal that are identical;
the first transmission module is configured to transmit a first audio signal to the audio driving module, and the audio driving module is positioned at the kernel layer of the processing unit and is used for outputting the received audio signal;
the first transmission module is further configured to transmit the second audio signal to a software layer of the processing unit;
the second copying module is configured to acquire, at the software layer, an associated audio signal of the audio signal, and to copy the associated audio signal to obtain a first associated audio signal and a second associated audio signal that are identical;
a second transmission module configured to transmit the first associated audio signal to the audio driver module at the software layer;
and the processing module is configured to process the second audio signal and/or the second associated audio signal by the software layer to obtain and output an audio processing signal.
In a third aspect of the embodiments of the present application, there is provided an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the above method when executing the computer program.
In a fourth aspect of the embodiments of the present application, there is provided a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
Compared with the prior art, the embodiments of the present application have the following beneficial effects: the audio signal received by the wireless driving module in the kernel layer of the audio processing unit is copied; one of the two identical copies is transmitted directly to the audio driving module in the kernel layer for output, while the other is transmitted to the software layer for subsequent processing. Meanwhile, after the software layer acquires the associated audio signal of the audio signal, it likewise copies it into two identical associated audio signals: one is transmitted directly to the audio driving module for output, and the other is mixed with the audio signal received by the software layer for subsequent processing. On the basis of preserving the integrity of functions such as scoring the audio signal, this can reduce the playback delay of the audio signal and improve the user experience.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application.
Fig. 2 is a schematic diagram of an existing in-vehicle karaoke system.
Fig. 3 is a flowchart of an audio signal processing method according to an embodiment of the present application.
Fig. 4 is a flowchart of a method for copying an audio signal according to an embodiment of the present application.
Fig. 5 is a flowchart of a method for transmitting a first audio signal to an audio driving module according to an embodiment of the application.
Fig. 6 is a flowchart of a method for transmitting a second audio signal to a software layer of a processing unit according to an embodiment of the present application.
Fig. 7 is a schematic diagram of an in-vehicle karaoke system using the audio signal processing provided by an embodiment of the present application.
Fig. 8 is a schematic diagram of an implementation of data transmission between drives according to an embodiment of the present application.
Fig. 9 is a schematic diagram of an audio signal processing apparatus according to an embodiment of the present application.
Fig. 10 is a schematic diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth such as the particular system architecture, techniques, etc., in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
An audio signal processing method and apparatus according to embodiments of the present application will be described in detail with reference to the accompanying drawings.
Fig. 1 is a schematic view of an application scenario according to an embodiment of the present application. The application scenario may include a vehicle 1, an on-board unit 2, a microphone 3, and a power amplifier 4.
The vehicle 1 may be a vehicle equipped with a voice control function, where the voice control function may be implemented by the on-board unit 2. The on-board unit 2 may be hardware or software. When the on-board unit 2 is hardware, it may be any of a variety of electronic devices that have computing capability and support communication with in-vehicle sensors, in-vehicle wireless devices, other vehicles, roadside units, and servers, including, but not limited to, in-vehicle infotainment (IVI) systems, on-board units (OBU), electronic control units (ECU), and the like. When the on-board unit 2 is software, it may be installed in an electronic device as described above, and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module; the embodiments of the present application are not limited in this respect. Further, various applications may be installed on the on-board unit 2, such as data processing applications, instant messaging tools, social platform software, search applications, shopping applications, and the like.
A microphone 3 may also be included in the vehicle 1 for capturing speech signals, such as the singing voice of a user singing karaoke. The microphone 3 may be connected to the on-board unit 2, for example wirelessly via a universal serial bus (USB) receiver. Further, the vehicle 1 may include a power amplifier 4, which may also be connected to the on-board unit 2 so as to receive the audio signal output by the on-board unit, amplify it, and play it.
The user can sing karaoke in the vehicle 1 using the microphone 3. After receiving the user's singing voice, the on-board unit 2 retrieves the corresponding song accompaniment, and after processing such as synchronization and scoring of the singing voice and the accompaniment, transmits the audio signal to the power amplifier 4 for playback and the image signal to a display interface, such as the IVI, for display.
It should be noted that the specific types, numbers and combinations of the vehicle 1, the on-board unit 2, the microphone 3 and the power amplifier 4 may be adjusted according to the actual requirements of the application scenario, which is not limited in the embodiment of the present application.
As mentioned above, audio processing in existing in-vehicle karaoke systems is usually implemented on the Android system: the singing voice received by the interface driving unit in the system kernel layer must be sent to the software layer, processed together with the accompaniment, and then sent back to the audio driving module in the kernel layer, which forwards it to the power amplifier for playback. The information processing chain is therefore long, the delay is large, and the user experience suffers.
Fig. 2 is a schematic diagram of an existing in-vehicle karaoke system. As shown in fig. 2, the karaoke application is the application with which the user interacts. The software development kit (SDK)/application is a lower-level application beneath the karaoke application and handles algorithm logic such as karaoke scoring, spectrum output, and beat synchronization. The karaoke application and the SDK/application are located in the application layer of the system. The audio parsing component (AudioTrack) is a native Android component that can parse original audio data into pulse code modulation (PCM) data and add it to a play queue. The audio recording component (AudioRecord) is also a native Android component, used to record audio and generate PCM data. AudioTrack and AudioRecord are located in the Framework layer of the system. The USB driver is the driver module connected to the USB interface. The audio (Audio) driver is the driver module connected to the audio output. Volume adjustment may be implemented using a digital-to-analog converter (DAC). The power amplifier amplifies and plays the received audio signal.
After the user produces voice, the voice audio data is collected by the wireless microphone device, and the microphone's built-in algorithm is invoked for tuning. The USB wireless receiver connected to the on-board unit then receives the tuned voice audio data, and the received audio data is transmitted through the USB driver, located at the bottom (i.e., kernel) layer of the Android system, to the hardware abstraction layer (HAL). Meanwhile, the karaoke application can decode the accompaniment audio data through AudioTrack and transmit it to the HAL layer.
On the one hand, the HAL layer mixes the received voice audio data with the accompaniment audio data and outputs the mixed data to the Audio driver, which in turn outputs it to the volume adjustment module and then to the power amplifier for playback. On the other hand, the HAL layer may also send the mixed voice and accompaniment data to the SDK/application, where it undergoes spectrum analysis, beat analysis, and scoring; the analysis and scoring results are then sent to the user interface (UI) of the karaoke application for display.
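The mixing step performed at the HAL layer can be illustrated with a minimal sketch, assuming interleaved 16-bit PCM samples. The function name `mix_pcm16` and the saturating-add behavior are illustrative assumptions, not details taken from the patent:

```c
#include <stdint.h>
#include <stddef.h>

/* Mix two 16-bit PCM buffers sample-by-sample, saturating at the
 * int16 range so loud passages clip instead of wrapping around. */
void mix_pcm16(const int16_t *voice, const int16_t *accomp,
               int16_t *out, size_t n)
{
    for (size_t i = 0; i < n; i++) {
        int32_t s = (int32_t)voice[i] + (int32_t)accomp[i];
        if (s > INT16_MAX) s = INT16_MAX;
        if (s < INT16_MIN) s = INT16_MIN;
        out[i] = (int16_t)s;
    }
}
```

A real HAL mixer would additionally handle channel layout, resampling, and per-stream gain; this sketch shows only the core sum-with-saturation.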
When the karaoke signal is processed in this way, the delay from the microphone receiving the voice signal to the power amplifier output is about 50 milliseconds (ms). For sound signal delay, 1-30 ms is generally considered ideal: within this range the delay is almost imperceptible to the user. 31-50 ms is good: the user can experience playback normally without obvious delay effects. 51-100 ms is common: the user can feel a significant delay. Therefore, at an audio signal delay of about 50 ms the user can still experience playback normally, but the experience is not optimal and there is room for improvement.
In view of this, an embodiment of the present application provides an audio signal processing method. The audio signal received by the wireless driving module in the kernel layer of the audio processing unit is copied; one of the two identical copies is transmitted directly to the audio driving module in the kernel layer for output, while the other is transmitted to the software layer for subsequent processing. Meanwhile, after the software layer acquires the associated audio signal of the audio signal, it likewise copies it into two identical associated audio signals: one is transmitted directly to the audio driving module for output, and the other is mixed with the audio signal received by the software layer for subsequent processing. In this way, the playback delay of the audio signal can be reduced and the user experience improved, while preserving the integrity of functions such as scoring the audio signal.
Fig. 3 is a flowchart of an audio signal processing method according to an embodiment of the present application. The audio signal processing method of fig. 3 may be performed by the on-board unit 2 of fig. 1, where the on-board unit includes a processing unit, and the processing unit includes at least a kernel layer and a software layer. As shown in fig. 3, the method comprises the steps of:
in step S301, an audio signal is received by a wireless driving module of a kernel layer of a processing unit.
In step S302, the audio signal is copied to obtain a first audio signal and a second audio signal which are identical.
In step S303, the first audio signal is transmitted to the audio driving module.
The audio driving module is located in the kernel layer of the processing unit and is used for outputting the received audio signals.
In step S304, the second audio signal is transmitted to the software layer of the processing unit.
In step S305, the software layer acquires the associated audio signal of the audio signal, and copies the associated audio signal to obtain the first associated audio signal and the second associated audio signal which are identical.
In step S306, the software layer transmits the first associated audio signal to the audio driver module;
in step S307, the software layer processes the second audio signal and/or the second associated audio signal to obtain and output an audio processing signal.
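The dual-path routing of steps S301 to S307 can be sketched as a user-space simulation. The function names (`kernel_receive_voice`, `software_receive_accomp`), the fixed frame size, and the buffer layout are illustrative assumptions; the patent's actual implementation runs inside kernel drivers:

```c
#include <string.h>
#include <stdint.h>
#include <stddef.h>

#define FRAME 256  /* illustrative frame size in samples */

/* Destination buffers standing in for the two delivery targets:
 * index 0 carries the voice path, index 1 the accompaniment path. */
static int16_t to_audio_driver[2][FRAME];
static int16_t to_software_layer[2][FRAME];

/* S301-S304: the kernel-layer wireless driver receives a voice frame,
 * duplicates it, sends one copy straight to the audio driver
 * (low-latency path) and one copy up to the software layer. */
void kernel_receive_voice(const int16_t *frame)
{
    memcpy(to_audio_driver[0], frame, sizeof(to_audio_driver[0]));     /* S303 */
    memcpy(to_software_layer[0], frame, sizeof(to_software_layer[0])); /* S304 */
}

/* S305-S306: the software layer duplicates the accompaniment frame the
 * same way; the copy kept locally feeds the processing of S307. */
void software_receive_accomp(const int16_t *frame)
{
    memcpy(to_audio_driver[1], frame, sizeof(to_audio_driver[1]));     /* S306 */
    memcpy(to_software_layer[1], frame, sizeof(to_software_layer[1])); /* S307 input */
}
```

The point of the structure is that `to_audio_driver` is filled without waiting for any software-layer processing, which is where the delay reduction comes from.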
In the embodiment of the present application, the method may be executed by the on-board unit and used to process the karaoke audio signals of passengers in the vehicle. Specifically, the on-board unit can mix the passengers' karaoke voice signals with the song accompaniment and output the mix to the power amplifier module for playback. Meanwhile, the on-board unit can score the karaoke voice signals, generate spectrum animations, and the like, and display the scoring results and generated spectrum animations in the karaoke application connected to the on-board unit. Further, the system framework of the on-board unit can be implemented on the Android system and comprises at least a kernel layer and a software layer. Still further, the software layer may include a HAL layer, a Framework layer, and an application layer.
In the embodiment of the present application, the wireless driving module in the kernel layer of the processing unit receives the audio signal. The audio signal may be obtained by the microphone preprocessing the karaoke voice signal of a passenger in the vehicle. The microphone may be connected wirelessly to the on-board unit where the processing unit is located, for example through a wireless USB interface or through Bluetooth. A wireless connection requires signals to be received through a wireless driving module, which in the embodiment of the present application may be located in the kernel layer of the processing unit.
In the embodiment of the present application, after receiving the audio signal, the wireless driving module can copy it to obtain a first audio signal and a second audio signal that are identical. The copy is made directly in the kernel layer.
In the embodiment of the present application, after the wireless driving module copies the audio signal into the first audio signal and the second audio signal, it can transmit the first audio signal directly to the audio driving module, which is also located in the kernel layer of the processing unit. The audio driving module is configured to output the received audio signal; for example, it may output the received first audio signal to the DAC module for volume adjustment, and then output the volume-adjusted first audio signal to the power amplifier for playback.
In the embodiment of the present application, the wireless driving module can also transmit the second audio signal to the software layer of the processing unit. Meanwhile, the software layer can acquire the associated audio signal of the audio signal and copy it to obtain a first associated audio signal and a second associated audio signal that are identical. The first associated audio signal can be transmitted directly from the software layer to the audio driving module, which passes it through the DAC module to the power amplifier, where it is played synchronously with the first audio signal.
Specifically, when the user selects a song to sing in the karaoke application, the software layer may obtain the song accompaniment from the karaoke application as the associated audio signal of the audio signal, and copy the associated audio signal. When the karaoke session starts, the software layer transmits the copied first associated audio signal to the audio driving module so that it is played through the power amplifier. Meanwhile, the microphone collects the user's karaoke voice signal, which is converted into the first audio signal in the manner described above and transmitted to the audio driving module to be played through the power amplifier. Because the voice signal is transmitted directly from the wireless driving module in the kernel layer of the processing unit to the audio driving module and then output, with no other operations in between, the signal delay is small: the voice played by the power amplifier matches the accompaniment well, and no additional synchronization operation is required. Here, the audio signal comprises a human voice signal and the associated audio signal comprises an accompaniment signal; the processing unit comprises an in-vehicle karaoke system processing unit.
In the embodiment of the present application, the software layer can also process the second audio signal and/or the second associated audio signal to obtain and output an audio processing signal. Specifically, in a karaoke application the user needs not only to hear the singing voice but also to see information such as a spectrum animation of the sung vocal signal and a score for the singing performance. Thus, the second audio signal and/or the second associated audio signal may be processed in the software layer, and the processing results then presented in the UI of the karaoke application.
According to the technical solution provided by the embodiment of the present application, the audio signal received by the wireless driving module in the kernel layer of the audio processing unit is copied; one of the two identical copies is transmitted directly to the audio driving module in the kernel layer for output, while the other is transmitted to the software layer for subsequent processing. Meanwhile, after the software layer acquires the associated audio signal of the audio signal, it likewise copies it into two identical associated audio signals: one is transmitted directly to the audio driving module for output, and the other is mixed with the audio signal received by the software layer for subsequent processing. On the basis of preserving the integrity of functions such as scoring the audio signal, this reduces the playback delay of the audio signal and improves the user experience.
Fig. 4 is a flowchart of a method for copying an audio signal according to an embodiment of the present application. As shown in fig. 4, the method comprises the steps of:
in step S401, in the memory of the kernel layer, a mapping function mmap is called to map to obtain a copy buffer.
In step S402, the audio signal is written into the copy buffer based on the input/output module ioControl, so as to obtain a second audio signal identical to the audio signal, where the audio signal is the first audio signal.
In the embodiment of the present application, the audio signal received by the wireless driving module can be copied directly in the kernel layer. Specifically, the mapping function mmap can be called in the kernel-layer memory to map a copy buffer. The audio signal is then written into the copy buffer through the input/output module ioControl, yielding a second audio signal identical to the audio signal, the original being the first audio signal. That is, the memory can be operated on directly in the kernel layer: a buffer is mapped with the mmap function, and an IO write is then performed through ioControl so that the audio data is written into the buffer, achieving the copy.
Fig. 5 is a flowchart of a method for transmitting a first audio signal to an audio driving module according to an embodiment of the application. As shown in fig. 5, the method comprises the steps of:
In step S501, a kernel layer driver function winobj is called, and a device connection name of the audio driver module is determined based on the winobj function.
In step S502, a create file function CreateFile is called to create a file, and the operation access mode of the file object is set to asynchronous opening.
In step S503, data corresponding to the first audio signal is written into a file.
In step S504, after the initialization event, a callback function between the wireless driving module and the audio driving module is created, a write request is sent by calling a write file function WriteFile using the callback function, and a file is sent to the audio driving module using the write request.
In the embodiment of the application, the wireless driving module can write data in the form of a stream directly into the buffer created by the audio driving module by calling the API of the Android kernel file function ZwWriteFile.
In the embodiment of the application, the kernel layer driving function winobj can be called first, and the device connection name of the audio driving module is determined based on the winobj function. Then, the create file function CreateFile may be called, and the operation access mode DesiredAccess of the file object may be set to asynchronous opening, i.e. set without the SYNCHRONIZE flag. Next, the data corresponding to the first audio signal may be written to the created file. Finally, after an event is initialized, a callback function between the wireless driving module and the audio driving module is created, the write file function WriteFile is called with the callback function to send a write request, and the file is sent to the audio driving module by the write request.
That is, data may be transferred between the wireless driving module and the audio driving module in an asynchronous call. In particular, the device connection name of the audio driving module, such as DriverB, may be determined using winobj. Then, the file is opened asynchronously using CreateFile, with a DesiredAccess that does not carry the SYNCHRONIZE flag. Next, an event is initialized and a callback function is set, WriteFile is called to send an IRP_MJ_WRITE request, and based on the IRP_MJ_WRITE request the data that needs to be written into the audio driving module is obtained from the wireless driving module.
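The asynchronous driver-to-driver hand-off can be sketched generically. Everything below is hypothetical scaffolding: plain Python classes stand in for the two kernel drivers, and a simple callback models the completion event, since the application only names the Windows-style calls (CreateFile, WriteFile, IRP_MJ_WRITE) without further code:

```python
class AudioDriver:                      # stands in for Driver B
    def __init__(self):
        self.buffer = bytearray()       # the stream buffer Driver B created

    def handle_write_request(self, data, on_complete):
        self.buffer.extend(data)        # service the IRP_MJ_WRITE request
        on_complete(len(data))          # fire the completion callback

class WirelessDriver:                   # stands in for Driver A
    def __init__(self, target):
        self.target = target
        self.acked = 0                  # bytes confirmed by the callback

    def write_file(self, data):
        # Asynchronous in spirit: the caller does not wait on the result;
        # the completion callback records the outcome instead.
        self.target.handle_write_request(data, self._on_complete)

    def _on_complete(self, nbytes):
        self.acked += nbytes

audio = AudioDriver()
usb = WirelessDriver(audio)
usb.write_file(b"\x01\x02\x03\x04")     # first audio signal, as a stream
assert bytes(audio.buffer) == b"\x01\x02\x03\x04"
assert usb.acked == 4                   # callback confirmed the write
```

The point of the design is that Driver A never blocks on Driver B: completion is reported through the callback, mirroring the event-plus-callback flow the embodiment describes.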
In embodiments of the present application, the copying and the transmission of the audio signal may be performed asynchronously. That is, for the audio signal received by the wireless driving module, i.e. the first audio signal, the operation of copying it to obtain the second audio signal and the operation of transmitting it to the audio driving module can be performed independently of each other, so as to further reduce the signal transmission delay and improve the user experience.
Fig. 6 is a flowchart of a method for transmitting a second audio signal to a software layer of a processing unit according to an embodiment of the present application. As shown in fig. 6, the method includes the steps of:
In step S601, a Fourier transform is performed on the second audio signal to obtain a frequency signal of the second audio signal.
In step S602, the frequency signal of the second audio signal is transmitted to the software layer of the processing unit.
In the embodiment of the present application, the second audio signal may first be subjected to a Fourier transform, for example a fast Fourier transform (Fast Fourier Transform, FFT), at the kernel layer of the processing unit to obtain the frequency signal of the second audio signal. The frequency signal of the second audio signal is then transmitted to the software layer of the processing unit for processing.
According to the technical scheme provided by the embodiment of the application, the second audio signal is first frequency-transformed by the FFT algorithm integrated directly in the kernel layer and then transmitted to the software layer for processing. Compared with calling a frequency-transform algorithm at the software layer to process the audio signal, this is faster and has lower spatial complexity of algorithm deployment, so the display speed of the audio processing signal on the interface can be improved, further improving the user experience.
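The frequency-transform step can be illustrated with a naive discrete Fourier transform; the kernel integration would use an optimized FFT, so this pure-Python version is only a sketch of what "frequency signal" means here:

```python
import cmath
import math

def dft_magnitudes(samples):
    """Naive O(n^2) DFT returning the magnitude of each frequency bin."""
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]

# A pure tone sitting exactly in bin 3 of a 16-sample frame.
n = 16
tone = [math.sin(2 * math.pi * 3 * t / n) for t in range(n)]
mags = dft_magnitudes(tone)
peak = max(range(n // 2), key=lambda k: mags[k])
assert peak == 3                      # the spectrum peaks at the tone's bin
assert abs(mags[3] - n / 2) < 1e-9    # magnitude n/2 for a unit sine
```

The list of bin magnitudes is the "frequency signal" that the kernel layer hands to the software layer, from which the frequency animation is drawn.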
In an embodiment of the present application, processing the second audio signal and/or the second associated audio signal may include: generating a frequency animation signal based on the frequency signal of the second audio signal; or performing Fourier transform on the second associated audio signal to obtain a frequency signal of the second associated audio signal, and performing synchronous comparison processing on the frequency signal of the second audio signal and the frequency signal of the second associated audio signal; or performing Fourier transform on the second associated audio signal to obtain a frequency signal of the second associated audio signal, and performing scoring processing on the frequency signal of the second audio signal and the frequency signal of the second associated audio signal.
That is, the frequency signal of the second audio signal may be processed directly to generate a frequency animation signal corresponding to the user's voice, for display in the UI of the karaoke application. In addition, the second associated audio signal may be frequency-transformed, after which the frequency signal of the second audio signal and the frequency signal of the second associated audio signal undergo synchronization, scoring and other processing, and the processing result is displayed in the UI of the karaoke application.
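The application does not fix a scoring formula, so the following is a purely hypothetical example of scoring two magnitude spectra, using cosine similarity scaled to 0-100:

```python
import math

def score(voice_spec, accomp_spec):
    """Hypothetical scoring: cosine similarity of the voice spectrum and
    the accompaniment spectrum, scaled to 0-100. The patent names the
    scoring step but does not disclose its formula."""
    dot = sum(v * a for v, a in zip(voice_spec, accomp_spec))
    nv = math.sqrt(sum(v * v for v in voice_spec))
    na = math.sqrt(sum(a * a for a in accomp_spec))
    if nv == 0 or na == 0:
        return 0.0
    return 100.0 * dot / (nv * na)

assert abs(score([1, 2, 3], [1, 2, 3]) - 100.0) < 1e-6  # identical spectra: full marks
assert score([1, 0, 0], [0, 1, 0]) == 0.0               # orthogonal spectra: zero
```

Any spectral-distance measure could be substituted here; the structural point is that both inputs are frequency signals produced by the transform step above.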
The synchronization comparison processing of the frequency signal of the second audio signal and the frequency signal of the second associated audio signal may be performed as follows: acquiring a timestamp of the second audio signal; acquiring a timestamp of the second associated audio signal; and performing the synchronization comparison processing on the frequency signal of the second audio signal and the frequency signal of the second associated audio signal that have the same timestamp. That is, a timestamp of the current time may be recorded when the voice is collected. At the same time, the accompaniment also carries a timestamp when it is played. At the software layer, the synchronization comparison can be performed according to the timestamp at which the accompaniment is played, and the synchronization comparison result can be displayed directly or provided to the subsequent scoring processing as reference data.
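A minimal sketch of the timestamp-based pairing, assuming each frame arrives as a (timestamp, data) tuple (an assumption; the application does not specify the frame format):

```python
def pair_by_timestamp(voice_frames, accomp_frames):
    """Pair voice and accompaniment frames that share a timestamp.
    Frames are (timestamp, data) tuples; unmatched frames are dropped."""
    accomp = dict(accomp_frames)
    return [(ts, v, accomp[ts]) for ts, v in voice_frames if ts in accomp]

voice = [(0, "v0"), (10, "v1"), (20, "v2")]     # collected with capture timestamps
accomp = [(10, "a1"), (20, "a2"), (30, "a3")]   # stamped at playback time
pairs = pair_by_timestamp(voice, accomp)
assert pairs == [(10, "v1", "a1"), (20, "v2", "a2")]
```

Each matched pair is what the synchronization comparison operates on; the pairs can then feed the scoring step as reference data.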
In the embodiment of the application, the software layer can comprise a hardware abstraction (HAL) layer, a Framework layer and an application layer. The HAL layer receives the second audio signal from the kernel layer and receives the second associated audio signal from the application layer via the Framework layer. The Framework layer decodes and/or records the received audio signals. The application layer sends the associated audio signal to the Framework layer, and performs frequency animation signal generation, synchronization comparison processing and/or scoring processing on the received audio signals.
Any combination of the above optional solutions may be adopted to form an optional embodiment of the present application, which is not described herein.
Fig. 7 is a schematic diagram of a vehicle-mounted karaoke system using audio signal processing provided by an embodiment of the present application. As shown in fig. 7, the system includes a karaoke application, an SDK/application, AudioTrack, AudioRecord, a USB driver, an Audio driver, a microphone, a USB wireless receiving device, a volume adjustment module, and a power amplifier module.
After the user sings, the voice audio data can be collected by the wireless microphone device, and the microphone's built-in algorithm is called for tuning. A USB wireless receiver connected to the vehicle-mounted unit then receives the optimized voice audio data, and the USB driver at the Android bottom layer splits the data from the receiver into two transmission links: one link transmits the original voice audio data directly to the Audio driver of the vehicle-mounted unit, and the other transforms the voice audio data with an FFT algorithm and sends its spectrum to the Android HAL layer.
Meanwhile, the karaoke application decodes the accompaniment audio data through AudioTrack and then transmits it to the Android HAL layer.
After the system processes the audio signals, various signals can be output, including the original voice audio signal, the voice spectrum data, the karaoke application accompaniment, and the like. The original voice audio signal can be output as follows: after the Audio driver obtains the digital signal of the original voice audio from the USB driver, it transmits the signal directly to a volume adjustment module built around a DAC for digital-to-analog conversion, and the resulting analog signal is finally output to the power amplifier to produce sound. The voice spectrum data can be output as follows: after receiving the FFT data of the voice, the Android HAL layer passes it directly to the SDK/application, which displays the frequency animation effect of the voice on the UI interface of the karaoke application. The karaoke application accompaniment can be output as follows: after receiving the accompaniment data, the Android HAL layer outputs it in two paths. One path is transmitted to the Audio driver and sent to the DAC for conversion, and the accompaniment sound is emitted by the power amplifier; the other path is recorded through AudioRecord, and the recorded audio data is transmitted to the SDK/application, which synchronizes it with the voice spectrum, calculates a score, and displays the scoring result on the UI interface of the karaoke application.
The voice audio data needs to be split into two parts: the USB driver copies the voice audio data in the kernel, the copied data is used to calculate the spectrum with the FFT, and the original audio data is sent directly into the Audio driver. The two pieces of data are processed asynchronously, because the services and logic have no dependency on each other, which reduces CPU waiting time.
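The dependency-free, asynchronous handling of the two copies can be sketched with two threads; the function names and string payloads below are placeholders for the FFT link and the playback link:

```python
import threading

results = {}

def fft_path(data):
    # Stands in for the spectrum-calculation link (FFT -> HAL layer).
    results["spectrum"] = f"spectrum({data})"

def playback_path(data):
    # Stands in for the direct link into the Audio driver.
    results["playback"] = data

data = "pcm-frame"
t1 = threading.Thread(target=fft_path, args=(data,))
t2 = threading.Thread(target=playback_path, args=(data,))
t1.start(); t2.start()   # the two links run concurrently, no ordering dependency
t1.join(); t2.join()
assert results == {"spectrum": "spectrum(pcm-frame)", "playback": "pcm-frame"}
```

Because neither path waits on the other, the playback path is never delayed by the spectrum computation, which is the source of the latency saving the embodiment claims.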
In the embodiment of the application, the second audio signal needs to be transmitted directly by the USB driver to the Audio driver in the kernel, that is, the USB driver needs to communicate with the Audio driver. One driver calling another is essentially communication between two kernel drivers, so the USB driver can write the original voice digital signal data in the form of a stream into the buffer created by the Audio driver through the ZwWriteFile method of the Android kernel file API. As shown in fig. 8, the direct transmission of the second audio signal by the USB driver to the Audio driver in the kernel may be achieved by the USB driver, as Driver A, sending the IRP_MJ_WRITE request to the Audio driver, as Driver B.
By adopting the technical scheme of the embodiment of the application, the time consumed by the link and processing from the voice input to the power amplifier can be shortened, and the delay perceived by the human ear from voice to power-amplifier output can be reduced to within 30 ms, reaching an extremely fast standard and improving the user experience. Furthermore, the logic operation of mixing is reduced, shortening the playing delay of the accompaniment by about 10 ms to 20 ms and noticeably improving the smoothness of user operation. Furthermore, the FFT algorithm processed in the USB driver is an algorithm integrated directly in the kernel; its processing is faster, with lower time and space complexity than an upper-layer software algorithm, so the smoothness of the interface display is noticeably improved.
The following are apparatus embodiments of the present application, which may be used to perform the method embodiments of the present application. For details not disclosed in the apparatus embodiments of the present application, please refer to the method embodiments of the present application.
Fig. 9 is a schematic diagram of an audio signal processing apparatus according to an embodiment of the present application. As shown in fig. 9, the audio signal processing apparatus includes:
a receiving module 901 configured to receive an audio signal through a wireless driving module of a kernel layer of the processing unit;
a first copying module 902 configured to copy the audio signal to obtain a first audio signal and a second audio signal which are identical;
the first transmission module 903 is configured to transmit the first audio signal to the audio driving module, where the audio driving module is located in a kernel layer of the processing unit and is configured to output the received audio signal;
the first transmission module is further configured to transmit the second audio signal to a software layer of the processing unit;
a second copying module 904, configured to obtain an associated audio signal of the audio signal at the software layer, and copy the associated audio signal to obtain a first associated audio signal and a second associated audio signal which are identical;
a second transmission module 905 configured to transmit the first associated audio signal to the audio driver module at the software layer;
A processing module 906 configured to process the second audio signal and/or the second associated audio signal by the software layer to obtain and output an audio processing signal.
According to the technical scheme provided by the embodiment of the application, the audio signal received by the wireless driving module is copied at the kernel layer of the processing unit: one of the two identical copies is transmitted directly to the audio driving module at the kernel layer for output, while the other is transmitted to the software layer for subsequent processing. Meanwhile, after the software layer acquires the associated audio signal of the audio signal, the associated audio signal is likewise copied into two identical paths: one is transmitted directly to the audio driving module for output, and the other is used for subsequent processing together with the audio signal received by the software layer. In this way, the playing delay of the audio signal can be reduced while the integrity of functions such as scoring the audio signal is guaranteed, thereby improving the user experience.
In the embodiment of the application, the audio signal is duplicated to obtain the first audio signal and the second audio signal which are identical, and the method comprises the following steps: calling a mapping function mmap to map in a memory of a kernel layer to obtain a copy buffer area; and writing the audio signal into the copy buffer area based on the input/output module ioControl to obtain a second audio signal which is identical to the audio signal, wherein the audio signal is the first audio signal.
In an embodiment of the present application, transmitting a first audio signal to an audio driving module includes: calling a kernel layer driving function winobj, and determining the equipment connection name of the audio driving module based on the winobj function; calling a creating file function createFile to create a file, and setting the operation access mode of a file object to be asynchronous opening; writing data corresponding to the first audio signal into a file; after an initialization event, a callback function between the wireless driving module and the audio driving module is created, a write file function is called by the callback function to send a write request, and the file is sent to the audio driving module by the write request.
In an embodiment of the present application, transmitting a second audio signal to a software layer of a processing unit includes: performing Fourier transform on the second audio signal to obtain a frequency signal of the second audio signal; transmitting the frequency signal of the second audio signal to a software layer of the processing unit; processing the second audio signal and/or the second associated audio signal, comprising: generating a frequency animation signal based on the frequency signal of the second audio signal; or performing Fourier transform on the second associated audio signal to obtain a frequency signal of the second associated audio signal, and performing synchronous comparison processing on the frequency signal of the second audio signal and the frequency signal of the second associated audio signal; or performing Fourier transform on the second associated audio signal to obtain a frequency signal of the second associated audio signal, and performing scoring processing on the frequency signal of the second audio signal and the frequency signal of the second associated audio signal.
In the embodiment of the present application, the synchronization comparison processing for the frequency signal of the second audio signal and the frequency signal of the second associated audio signal includes: acquiring a timestamp of the second audio signal; acquiring a timestamp of the second associated audio signal; and carrying out synchronous comparison processing on the frequency signal of the second audio signal with the same time stamp and the frequency signal of the second associated audio signal.
In the embodiment of the application, the software layer comprises a hardware abstraction (HAL) layer, a Framework layer and an application layer; the HAL layer receives the second audio signal from the kernel layer and receives the second associated audio signal from the application layer via the Framework layer; the Framework layer decodes and/or records the received audio signals; the application layer sends the associated audio signal to the Framework layer, and performs frequency animation signal generation, synchronization comparison processing and/or scoring processing on the received audio signals.
In the embodiment of the application, the audio signal comprises a voice signal, and the associated audio signal comprises an accompaniment signal; the processing unit comprises a vehicle-mounted karaoke system processing unit.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not limit the implementation of the embodiments of the present application in any way.
Fig. 10 is a schematic diagram of an electronic device according to an embodiment of the present application. As shown in fig. 10, the electronic device 10 of this embodiment includes: a processor 1001, a memory 1002 and a computer program 1003 stored in the memory 1002 and executable on the processor 1001. The steps of the various method embodiments described above are implemented by the processor 1001 when executing the computer program 1003. Alternatively, the processor 1001 implements the functions of the modules/units in the above-described respective device embodiments when executing the computer program 1003.
The electronic device 10 may be a desktop computer, a notebook computer, a palm computer, a cloud server, or the like. The electronic device 10 may include, but is not limited to, a processor 1001 and a memory 1002. It will be appreciated by those skilled in the art that fig. 10 is merely an example of the electronic device 10 and is not limiting of the electronic device 10 and may include more or fewer components than shown, or different components.
The processor 1001 may be a central processing unit (Central Processing Unit, CPU) or other general purpose processor, digital signal processor (Digital Signal Processor, DSP), application specific integrated circuit (Application Specific Integrated Circuit, ASIC), field programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, or the like.
The memory 1002 may be an internal storage unit of the electronic device, such as a hard disk or memory of the electronic device 10. The memory 1002 may also be an external storage device of the electronic device 10, for example, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card) or the like, which are provided on the electronic device 10. Memory 1002 may also include both internal and external storage units of electronic device 10. The memory 1002 is used to store computer programs and other programs and data required by the electronic device.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present application may implement all or part of the flow of the method of the above embodiment, or may be implemented by a computer program to instruct related hardware, and the computer program may be stored in a computer readable storage medium, where the computer program, when executed by a processor, may implement the steps of each of the method embodiments described above. The computer program may comprise computer program code, which may be in source code form, object code form, executable file or in some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying computer program code, a recording medium, a U disk, a removable hard disk, a magnetic disk, an optical disk, a computer Memory, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth.
The above embodiments are only for illustrating the technical solution of the present application, and are not limiting; although the application has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (10)

1. A method of audio signal processing, the method being performed by a processing unit comprising at least a kernel layer and a software layer, the method comprising:
receiving the audio signal through a wireless driving module of the processing unit kernel layer;
copying the audio signals to obtain a first audio signal and a second audio signal which are identical;
transmitting the first audio signal to an audio driving module, wherein the audio driving module is positioned at the kernel layer of the processing unit and is used for outputting the received audio signal;
transmitting the second audio signal to a software layer of the processing unit;
the software layer acquires the associated audio signals of the audio signals, and copies the associated audio signals to obtain a first associated audio signal and a second associated audio signal which are identical;
the software layer transmitting the first associated audio signal to the audio driver module;
and the software layer processes the second audio signal and/or the second associated audio signal to obtain and output an audio processing signal.
2. The method of claim 1, wherein the copying the audio signal to obtain identical first and second audio signals comprises:
Calling a mapping function mmap to map in the memory of the kernel layer to obtain a copy buffer area;
and writing the audio signal into the copy buffer area based on an input/output module ioControl to obtain a second audio signal which is completely the same as the audio signal, wherein the audio signal is a first audio signal.
3. The method of claim 1, wherein the transmitting the first audio signal to an audio driver module comprises:
calling a kernel layer driving function winobj, and determining the equipment connection name of the audio driving module based on the winobj function;
calling a creating file function createFile to create a file, and setting the operation access mode of a file object to be asynchronous opening;
writing data corresponding to the first audio signal into the file;
after an initialization event, a callback function between the wireless driving module and the audio driving module is created, a write file function WriteFile is called by the callback function to send a write request, and the write request is used for sending the file to the audio driving module.
4. The method of claim 1, wherein the transmitting the second audio signal to the software layer of the processing unit comprises:
Performing Fourier transform on the second audio signal to obtain a frequency signal of the second audio signal;
transmitting a frequency signal of the second audio signal to a software layer of the processing unit;
the processing of the second audio signal and/or the second associated audio signal comprises:
generating a frequency animation signal based on the frequency signal of the second audio signal; or alternatively
Performing Fourier transform on the second associated audio signal to obtain a frequency signal of the second associated audio signal, and performing synchronous comparison processing on the frequency signal of the second audio signal and the frequency signal of the second associated audio signal; or alternatively
And carrying out Fourier transform on the second associated audio signals to obtain frequency signals of the second associated audio signals, and grading the frequency signals of the second audio signals and the frequency signals of the second associated audio signals.
5. The method of claim 4, wherein the synchronizing the frequency signal of the second audio signal with the frequency signal of the second associated audio signal comprises:
acquiring a timestamp of the second audio signal;
Acquiring a timestamp of the second associated audio signal;
and carrying out synchronous comparison processing on the frequency signal of the second audio signal with the same time stamp and the frequency signal of the second associated audio signal.
6. The method of claim 5, wherein the software layers include a hardware abstraction HAL layer, a Framework layer, and an application layer;
the HAL layer receives a second audio signal from the kernel layer and receives the second associated audio signal from the application layer via the Framework layer;
the Framework layer decodes and/or records the received audio signal;
the application layer sends the associated audio signal to the Framework layer, and performs frequency animation signal generation, synchronization comparison processing and/or scoring processing on the received audio signals.
7. The method of any one of claims 1 to 6, wherein the audio signal comprises a human voice signal and the associated audio signal comprises an accompaniment signal;
the processing unit comprises a vehicle-mounted karaoke system processing unit.
8. An audio signal processing apparatus, comprising:
the receiving module is configured to receive the audio signal through the wireless driving module of the processing unit kernel layer;
The first copying module is configured to copy the audio signals to obtain identical first audio signals and second audio signals;
the first transmission module is configured to transmit the first audio signal to the audio driving module, and the audio driving module is positioned at the kernel layer of the processing unit and is used for outputting the received audio signal;
the first transmission module is further configured to transmit the second audio signal to a software layer of the processing unit;
the second copying module is configured to acquire the associated audio signals of the audio signals at the software layer, copy the associated audio signals and obtain the first associated audio signals and the second associated audio signals which are identical;
a second transmission module configured to transmit the first associated audio signal to the audio driver module at the software layer;
and the processing module is configured to process the second audio signal and/or the second associated audio signal by the software layer to obtain and output an audio processing signal.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 7 when the computer program is executed.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 7.
CN202310936012.4A 2023-07-28 2023-07-28 Audio signal processing method, device, electronic equipment and storage medium Pending CN116665625A (en)


Publications (1)

Publication Number Publication Date
CN116665625A true CN116665625A (en) 2023-08-29


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1106946A (en) * 1993-11-09 1995-08-16 大宇电子株式会社 Karaoke system capable of scoring a singing of a singer on accompaniment thereof
CN106205580A (en) * 2016-06-30 2016-12-07 维沃移动通信有限公司 A kind of audio data processing method and terminal
CN106293659A (en) * 2015-05-21 2017-01-04 阿里巴巴集团控股有限公司 A kind of audio frequency real-time processing method, device and intelligent terminal
CN107992282A (en) * 2017-11-29 2018-05-04 珠海市魅族科技有限公司 Audio data processing method and device, computer installation and readable storage devices
CN111654743A (en) * 2020-05-27 2020-09-11 海信视像科技股份有限公司 Audio playing method and display device
CN113284482A (en) * 2021-04-13 2021-08-20 北京雷石天地电子技术有限公司 Song singing evaluation method and system
CN113518258A (en) * 2021-05-14 2021-10-19 北京天籁传音数字技术有限公司 Low-delay full-scene audio implementation method and device and electronic equipment
CN116142101A (en) * 2023-02-21 2023-05-23 奇瑞汽车股份有限公司 Entertainment system and method for vehicle, vehicle and storage medium

Similar Documents

Publication Publication Date Title
TW571290B (en) Computer-implemented speech recognition system training
US8156184B2 (en) Dialog server for handling conversation in virtual space method and computer program for having conversation in virtual space
CN113763956B (en) Interaction method and device applied to vehicle
JP2009300537A (en) Speech actuation system, speech actuation method and in-vehicle device
CN101405790A (en) Simultaneous sharing of system resources by multiple input devices
CN113689062B (en) Agent coordination device, agent coordination method, and recording medium having agent coordination program recorded thereon
KR101500177B1 (en) Audio system of vehicle
CN116665625A (en) Audio signal processing method, device, electronic equipment and storage medium
CN106471569A (en) Speech synthesis apparatus, phoneme synthesizing method and its program
CN113160824B (en) Information processing system
CN114501296A (en) Audio processing method and vehicle-mounted multimedia equipment
CN112685000A (en) Audio processing method and device, computer equipment and storage medium
CN115223582B (en) Audio noise processing method, system, electronic device and medium
WO2024090007A1 (en) Program, method, information processing device, and system
EP4383751A1 (en) Information processing device, information processing method, and program
CN210295901U (en) Car machine, on-vehicle KTV entertainment system and vehicle
EP4365888A1 (en) Method and apparatus for processing audio data
KR102001314B1 (en) Method and apparatus of enhancing audio quality recorded in karaoke room
CN111381797B (en) Processing method and device for realizing KTV function on client and user equipment
Every et al. A Software-Centric Solution to Automotive Audio for General Purpose CPUs
CN115378986A (en) Vehicle instrument control system and method and device for vehicle instrument control system
CN116347135A (en) Audio data processing method and device for vehicle
CN118098182A (en) Mixing processing method, device, computer equipment and computer readable storage medium
CN116092476A (en) Audio stream playing method and device and vehicle
CN114915943A (en) Audio data processing method and device, vehicle and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20230829