CN111785298A - Acoustic performance testing method and device, electronic equipment and computer readable medium - Google Patents

Acoustic performance testing method and device, electronic equipment and computer readable medium

Info

Publication number
CN111785298A
CN111785298A
Authority
CN
China
Prior art keywords
audio signal
acoustic performance
audio
value corresponding
voice
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010623087.3A
Other languages
Chinese (zh)
Inventor
张在东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202010623087.3A
Publication of CN111785298A
Legal status: Pending

Classifications

    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/60: Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for measuring the quality of voice signals
    • G: PHYSICS
    • G10: MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L: SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00: Speech recognition
    • G10L15/01: Assessment or evaluation of speech recognition systems

Abstract

The disclosure provides an acoustic performance testing method for a voice device, which belongs to the field of voice recognition and can be used for performance testing of an artificial intelligence smart speaker. The method comprises the following steps: acquiring a first audio signal recorded by the voice device, wherein the first audio signal is an audio signal played by the voice device itself or an environmental noise signal generated by an external noise source; acquiring a second audio signal generated after the voice device performs signal processing on the first audio signal; generating first acoustic performance data of the voice device from the first audio signal and the second audio signal; and generating a first test result according to the first acoustic performance data and preset first standard data. The disclosure also provides an acoustic performance testing device, an electronic apparatus, and a computer-readable medium.

Description

Acoustic performance testing method and device, electronic equipment and computer readable medium
Technical Field
The embodiment of the disclosure relates to the technical field of voice recognition, and in particular relates to a method and a device for testing acoustic performance of voice equipment, electronic equipment and a computer readable medium.
Background
Intelligent interactive devices, especially voice interactive devices, are now widely used in people's daily life, work, and even production processes; one example is the vehicle-mounted voice recognition system in the automotive field.
Existing schemes for evaluating voice interaction devices generally evaluate the voice recognition rate and recognition effect of the device, but there is no scheme for testing the performance of the underlying audio hardware of the voice interaction device.
Disclosure of Invention
The embodiment of the disclosure provides a method and a device for testing acoustic performance of voice equipment, electronic equipment and a computer readable medium.
In a first aspect, an embodiment of the present disclosure provides an acoustic performance testing method, where the acoustic performance testing method includes:
acquiring a first audio signal recorded by the voice equipment, wherein the first audio signal is an audio signal played by the voice equipment or an environmental noise signal generated by an external noise source;
acquiring a second audio signal generated after the voice equipment performs signal processing on the first audio signal;
generating first acoustic performance data of the speech device from the first audio signal and the second audio signal;
and generating a first test result according to the first acoustic performance data and preset first standard data.
In a second aspect, an embodiment of the present disclosure provides an acoustic performance testing apparatus for a speech device, the acoustic performance testing apparatus including:
the first acquisition module is used for acquiring a first audio signal recorded by the voice equipment, wherein the first audio signal is an audio signal played by the voice equipment or an environmental noise signal generated by an external noise source;
the second obtaining module is used for obtaining a second audio signal generated after the voice equipment performs signal processing on the first audio signal;
a performance parameter generation module, configured to generate first acoustic performance data of the speech device according to the first audio signal and the second audio signal;
and the evaluation module is used for generating a first test result according to the first acoustic performance data and preset first standard data.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including:
one or more processors;
a memory having one or more programs stored thereon, which when executed by the one or more processors, cause the one or more processors to implement the acoustic performance testing method provided by any of the embodiments above;
one or more I/O interfaces connected between the processor and the memory and configured to enable information interaction between the processor and the memory.
In a fourth aspect, the present disclosure provides a computer-readable medium, on which a computer program is stored, where the computer program, when executed, implements the acoustic performance testing method provided in any one of the above embodiments.
According to the acoustic performance testing method and apparatus for a voice device, the electronic device, and the computer-readable medium provided by the embodiments of the disclosure, the performance of the underlying audio hardware of the voice device is evaluated under different test scenarios to obtain a test result for each scenario. These test results serve as a quality evaluation standard for the underlying audio hardware of the voice device, provide a hardware guarantee for the back-end voice recognition algorithm, move quality risk evaluation of the underlying audio hardware to an earlier stage, and thereby reduce product research and development cost.
Drawings
The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. The above and other features and advantages will become more apparent to those skilled in the art by describing in detail exemplary embodiments thereof with reference to the attached drawings, in which:
fig. 1 is a flowchart of a method for testing acoustic performance of a speech device according to an embodiment of the present disclosure;
FIG. 2 is a flow chart of one embodiment of step 13 of FIG. 1;
FIG. 3 is a flow chart of one embodiment of step 14 of FIG. 1;
FIG. 4 is a flow chart of another acoustic performance testing method provided by the embodiments of the present disclosure;
FIG. 5 is a flowchart of one specific implementation of step 23 in FIG. 4;
FIG. 6 is a flow chart of another specific implementation of step 23 in FIG. 4;
FIG. 7 is a flowchart of one specific implementation of step 24 of FIG. 4;
FIG. 8 is a flow chart of another specific implementation of step 24 in FIG. 4;
fig. 9 is a block diagram of an acoustic performance testing apparatus for a speech device according to an embodiment of the present disclosure;
fig. 10 is a block diagram of an electronic device according to an embodiment of the present disclosure.
Detailed Description
In order to make those skilled in the art better understand the technical solution of the present disclosure, the following describes in detail an acoustic performance testing method and apparatus of a speech device, an electronic device, and a computer readable medium provided in the present disclosure with reference to the drawings.
Example embodiments will be described more fully hereinafter with reference to the accompanying drawings, but they may be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
Embodiments of the present disclosure and features of embodiments may be combined with each other without conflict.
As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and the present disclosure, and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
Fig. 1 is a flowchart of a method for testing acoustic performance of a speech device according to an embodiment of the present disclosure, and as shown in fig. 1, the method may be performed by an acoustic performance testing apparatus, which may be implemented by software and/or hardware, and the apparatus may be integrated in an electronic device such as a server. The acoustic performance testing method includes steps 11 to 14.
Step 11, acquiring a first audio signal recorded by the voice equipment, wherein the first audio signal is an audio signal played by the voice equipment or an environmental noise signal generated by an external noise source.
In the embodiment of the present disclosure, the voice device is an intelligent terminal, apparatus, or system that can provide an intelligent voice interaction service for a user; for example, the voice device may be a vehicle-mounted voice recognition system, a smart speaker, a smart video speaker, a smart story machine, an intelligent interaction platform, or the like. The underlying audio hardware of the voice device includes, but is not limited to, a sound playing component, a sound pickup component, and an audio Digital Signal Processing (DSP) chip, where the sound playing component may include a loudspeaker and the sound pickup component may include a Microphone (MIC) array.
In one test scenario, with the test environment kept quiet, an audio signal for testing is played through the sound playing component of the voice device itself and recorded through the sound pickup component of the voice device itself; the recorded audio signal is used as the first audio signal. In step 11, this first audio signal recorded by the sound pickup component is acquired from the voice device.
In another test scenario, the voice device itself is kept in a quiet state (i.e., a state in which it emits no audio signal), an environmental noise signal (e.g., white noise, air conditioning noise, driving wind noise, etc.) is generated by an external noise source, and recording is performed through the sound pickup component of the voice device itself; the recorded audio signal is used as the first audio signal. In step 11, this first audio signal recorded by the sound pickup component is acquired from the voice device.
Step 12, acquiring a second audio signal generated after the voice device performs signal processing on the first audio signal.
In the embodiment of the present disclosure, after the voice device records the first audio signal, the recorded first audio signal is transmitted to an audio digital signal processing chip (DSP), and the audio digital signal processing chip performs signal processing on the first audio signal, for example, signal processing such as noise reduction, echo cancellation (AEC), and electrical noise (noise generated by a circuit) cancellation, so as to perform hardware noise cancellation, thereby obtaining a second audio signal generated after the signal processing. In step 12, a second audio signal generated after the audio digital signal processing chip of the speech device performs signal processing on the first audio signal is obtained.
Step 13, generating first acoustic performance data of the voice equipment according to the first audio signal and the second audio signal.
In the embodiment of the present disclosure, the signal processing performed by the audio digital signal processing chip on the first audio signal theoretically eliminates the audio signal played by the voice device itself or the environmental noise signal generated by the external noise source; that is, the second audio signal generated after the signal processing theoretically does not include the first audio signal. The signal processing capability of the underlying hardware of the voice device in different test scenarios can therefore be analyzed by comparing the first audio signal recorded by the voice device with the second audio signal generated after signal processing.
In a voice interruption wake-up scenario, that is, a test scenario in which the voice device plays an audio signal, the audio signal (such as music) played by the voice device will interfere with waking up the device. Therefore, to test the noise reduction capability of the voice device against the noise (the first audio signal) generated by the device itself, the first audio signal played by the voice device and the second audio signal generated after the first audio signal is processed are obtained, and the signal processing capability of the underlying hardware of the voice device in the voice interruption wake-up scenario can be obtained through comparison and analysis.
After the voice device is woken up, it no longer plays audio signals, and at that point noise generated by an external noise source is the main factor affecting the word accuracy and sentence accuracy of voice recognition. Therefore, in a word-accuracy recognition scenario, that is, a test scenario in which an external noise source generates an environmental noise signal, the environmental noise signal will interfere with speech recognition by the voice device. To test the noise reduction capability of the voice device against the noise (the first audio signal) generated by the external noise source, the first audio signal generated by the external noise source and the second audio signal generated after the voice device processes it are obtained, and the signal processing capability of the underlying hardware of the voice device in this scenario can be obtained through comparison and analysis.
Fig. 2 is a flow chart of an embodiment of step 13 in fig. 1, and as shown in fig. 2, step 13 includes step 131 and step 132 in some embodiments.
Step 131, performing fast Fourier transform processing on the first audio signal and the second audio signal respectively, and calculating an energy value corresponding to the first audio signal and an energy value corresponding to the second audio signal.
Specifically, Fast Fourier Transform (FFT) processing is performed on the first audio signal to obtain a first frequency spectrum corresponding to the first audio signal. The value at each frequency point in the first frequency spectrum is a complex number; the modulus of the complex value at each frequency point is calculated, and the moduli over all frequency points are statistically averaged to obtain a first statistical average value, which is the energy value corresponding to the first audio signal.
Similarly, Fast Fourier Transform (FFT) processing is performed on the second audio signal to obtain a second frequency spectrum corresponding to the second audio signal. The value at each frequency point in the second frequency spectrum is a complex number; the modulus of the complex value at each frequency point is calculated, and the moduli over all frequency points are statistically averaged to obtain a second statistical average value, which is the energy value corresponding to the second audio signal.
Step 132, calculating to obtain a noise cancellation amount by using the energy value corresponding to the first audio signal and the energy value corresponding to the second audio signal, where the first acoustic performance data includes the noise cancellation amount.
The noise cancellation amount Y is the difference between the energy value S1 corresponding to the first audio signal and the energy value S2 corresponding to the second audio signal, that is, Y = S1 - S2.
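As a concrete illustration of steps 131 and 132, the sketch below computes the energy value of a signal as the statistical average of its FFT magnitudes and takes the difference of two such energies as the noise cancellation amount. This is a minimal sketch using NumPy; the function names and the use of a real-valued FFT over a mono sample array are assumptions of the example, not part of the disclosure.

```python
import numpy as np


def signal_energy(samples: np.ndarray) -> float:
    """Energy value of a signal: the statistical average of the moduli of its
    FFT frequency points (as described for steps 131 and 231a)."""
    spectrum = np.fft.rfft(samples)   # complex value at each frequency point
    moduli = np.abs(spectrum)         # modulus of each complex value
    return float(np.mean(moduli))     # statistical average over all frequency points


def noise_cancellation_amount(first_signal: np.ndarray,
                              second_signal: np.ndarray) -> float:
    """Noise cancellation amount Y = S1 - S2 (step 132)."""
    return signal_energy(first_signal) - signal_energy(second_signal)
```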
For example, in the case that the first audio signal is an audio signal played by the speech apparatus itself, in step 132, a first noise cancellation amount Y1 is calculated by using the energy value corresponding to the first audio signal and the energy value corresponding to the second audio signal, and the first acoustic performance data includes the first noise cancellation amount Y1.
For example, in the case that the first audio signal is an environmental noise signal generated by an external noise source, in step 132, a second noise cancellation amount Y2 is calculated by using the energy value corresponding to the first audio signal and the energy value corresponding to the second audio signal, and the first acoustic performance data includes the second noise cancellation amount Y2.
Step 14, generating a first test result according to the first acoustic performance data and preset first standard data.
Specifically, a first test result is obtained by comparing the first acoustic performance data with preset first standard data, where the first acoustic performance data includes the noise cancellation amount and the first standard data includes a first criterion value.
Fig. 3 is a flowchart of a specific implementation of step 14 in fig. 1, and as shown in fig. 3, step 14 includes step 141 and step 142 in some embodiments.
Step 141, comparing the noise cancellation amount with the first criterion value to obtain a comparison result.
Step 142, generating a first test result according to the comparison result of the noise cancellation amount and the first criterion value.
It should be noted that the first criterion value may differ between test scenarios. For example, in the case where the first audio signal is an audio signal played by the voice device itself, the first criterion value may be set as A1; after the first noise cancellation amount Y1 for this case is obtained through step 132, it is compared with the first criterion value A1 in step 141 to obtain a comparison result. In step 142, when the comparison result is that the first noise cancellation amount Y1 is greater than or equal to the first criterion value A1, the signal processing capability (hardware noise reduction capability) test result of the voice device under this condition is judged to be qualified and the first test result includes information indicating that the test is qualified; otherwise, the test result is judged to be unqualified and the first test result includes information indicating that the test is unqualified.
In the case where the first audio signal is an environmental noise signal generated by an external noise source, the first criterion value may be set as A2; after the second noise cancellation amount Y2 for this case is obtained through step 132, it is compared with the first criterion value A2 in step 141 to obtain a comparison result. In step 142, when the comparison result is that the second noise cancellation amount Y2 is greater than or equal to the first criterion value A2, the signal processing capability (hardware noise reduction capability) test result of the voice device under this condition is judged to be qualified and the first test result includes information indicating that the test is qualified; otherwise, the test result is judged to be unqualified and the first test result includes information indicating that the test is unqualified.
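The pass/fail decision of steps 141 and 142 then reduces to a threshold comparison. The sketch below is illustrative only; the criterion values A1 and A2 are placeholders, since the disclosure does not give concrete numbers.

```python
def first_test_result(noise_cancellation: float, criterion: float) -> dict:
    """Steps 141-142: compare the noise cancellation amount with the criterion
    value for the current scenario and report qualified/unqualified."""
    qualified = noise_cancellation >= criterion
    return {
        "noise_cancellation": noise_cancellation,
        "criterion": criterion,
        "result": "qualified" if qualified else "unqualified",
    }


# Placeholder criterion values for the self-playback scenario (A1) and the
# ambient-noise scenario (A2); a real product would set its own thresholds.
A1, A2 = 20.0, 15.0
print(first_test_result(18.7, A1))  # -> unqualified with these made-up numbers
```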
It can be understood that steps 11 to 14 above, illustrated in fig. 1, constitute the process of testing the signal processing capability (i.e., noise reduction, echo cancellation, electrical noise cancellation, etc.) of the underlying audio hardware of the voice device in these test scenarios.
In some embodiments, the acoustic performance testing method further includes a process of testing the capability of the underlying audio hardware of the voice device to eliminate non-effective audio portions (i.e., noise reduction, blind source separation, etc.). Fig. 4 is a flowchart of another acoustic performance testing method provided in the embodiment of the present disclosure, and as shown in fig. 4, the acoustic performance testing method further includes:
Step 21, acquiring a fourth audio signal generated after the voice device performs signal processing on the recorded third audio signal, wherein the third audio signal is an audio signal played by an external audio playing device.
In some embodiments, the external audio playing device may be a simulated mouth, and the external audio playing device may be placed in each sound zone of the voice device to simulate an actual usage scenario of the voice device. In a test scenario, in a test environment, the voice device itself is kept in a quiet state (i.e., a state where no audio signal is emitted), an external audio playing device is used to play an audio signal, and a sound pickup assembly of the voice device itself is used to record the audio signal, where the recorded audio signal is a third audio signal.
After the voice device records the third audio signal, the recorded third audio signal is transmitted to an audio digital signal processing chip (DSP), and the audio digital signal processing chip performs signal processing on the third audio signal, such as signal processing of noise reduction, blind source separation, and the like, so as to obtain a fourth audio signal generated after the signal processing. In step 21, a fourth audio signal generated after the audio digital signal processing chip of the speech device performs signal processing on the third audio signal is obtained.
Step 22, acquiring a sixth audio signal generated after the voice device performs signal processing on the recorded fifth audio signal, where the fifth audio signal includes an audio signal played by the voice device itself and an audio signal played by an external audio playing device, or the fifth audio signal includes an environmental noise signal generated by an external noise source and an audio signal played by the external audio playing device.
In a test scenario, the sound playing component of the speech device is used to play an audio signal, the external audio playing device is used to play an audio signal (the same as the third audio signal), and the sound pickup component of the speech device is used to record, where the recorded audio signal is a fifth audio signal, and at this time, the fifth audio signal includes an audio signal played by the speech device itself and an audio signal played by the external audio playing device.
In the test scenario, after the voice device records the fifth audio signal, the recorded fifth audio signal is transmitted to an audio digital signal processing chip (DSP), and the audio digital signal processing chip performs signal processing on the fifth audio signal, such as signal processing of noise reduction and blind source separation, so as to obtain a sixth audio signal generated after the signal processing. In step 22, a sixth audio signal generated after the fifth audio signal is signal-processed is obtained from the audio digital signal processing chip of the speech device.
In another test scenario, an external audio playing device is used to play an audio signal (the same audio signal as the third audio signal), an external noise source is used to generate an environmental noise signal, and the sound pickup component of the voice device is used to record; the recorded audio signal is the fifth audio signal, which in this case includes the audio signal played by the external audio playing device and the environmental noise signal generated by the external noise source.
In the test scenario, after the voice device records the fifth audio signal, the recorded fifth audio signal is transmitted to an audio digital signal processing chip (DSP), and the audio digital signal processing chip performs signal processing on the fifth audio signal, such as signal processing of noise reduction and blind source separation, so as to obtain a sixth audio signal generated after the signal processing. In step 22, a sixth audio signal generated after the fifth audio signal is signal-processed is obtained from the audio digital signal processing chip of the speech device.
Step 23, generating second acoustic performance data of the voice equipment according to the fourth audio signal and the sixth audio signal.
In the embodiment of the present disclosure, the audio signal (i.e., the third audio signal) played by the external audio playing device (simulated mouth) is an effective audio portion, and the other noise signals (the audio signal played by the voice device itself, the environmental noise signal generated by the external noise source, etc.) are non-effective audio portions.
Therefore, when the audio digital signal processing chip performs signal processing on the third audio signal, the non-effective audio portion is theoretically eliminated; that is, the fourth audio signal generated after the signal processing theoretically contains only the third audio signal and no other noise signal. In the same way, when the chip performs signal processing on the fifth audio signal, the non-effective audio portion is theoretically eliminated; that is, the sixth audio signal generated after the signal processing theoretically contains only the third audio signal and no other noise signal.
In the embodiment of the present disclosure, the capability of the underlying hardware of the voice device to eliminate the non-effective audio portion can be obtained by comparing and analyzing the fourth audio signal, generated after the third audio signal is signal-processed, with the sixth audio signal, generated after the fifth audio signal is signal-processed.
In a test scenario (such as a voice interruption wake-up scenario), in a situation where the voice device itself does not play any audio signal, an external audio playing device (simulated mouth) is caused to play the audio signal to obtain a fourth audio signal; then, in a case where the voice device itself plays the audio signal, the external audio playing device (simulated mouth) is caused to play the audio signal to obtain a sixth audio signal. By comparing and analyzing the fourth audio signal and the sixth audio signal obtained by the voice equipment in the test scene, the elimination capability of the bottom layer hardware of the voice equipment on the non-effective audio part in the test scene can be analyzed and obtained.
In another test scenario (such as a word-accuracy recognition scenario), in a situation where the voice device itself does not play any audio signal, an external audio playing device (simulated mouth) is caused to play the audio signal to obtain a fourth audio signal; then, in a case where the external noise source generates an environmental noise signal, the external audio playing device (simulated mouth) is caused to play the audio signal to obtain a sixth audio signal. By comparing and analyzing the fourth audio signal and the sixth audio signal recorded by the voice device in this test scenario, the capability of the underlying hardware of the voice device to eliminate the non-effective audio portion in this scenario can be obtained.
Fig. 5 is a flowchart of a specific implementation manner of step 23 in fig. 4, and as shown in fig. 5, in some embodiments, in a case that the fifth audio signal includes an audio signal played by the speech device itself and an audio signal played by the external audio playing device, step 23 includes step 231a and step 232a.
Step 231a, performing fast Fourier transform processing on the fourth audio signal and the sixth audio signal respectively to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal.
Specifically, Fast Fourier Transform (FFT) processing is performed on the fourth audio signal to obtain a fourth frequency spectrum corresponding to the fourth audio signal. The value at each frequency point in the fourth frequency spectrum is a complex number; the modulus of the complex value at each frequency point is calculated, and the moduli over all frequency points are statistically averaged to obtain a fourth statistical average value, which is the energy value corresponding to the fourth audio signal.
Similarly, Fast Fourier Transform (FFT) processing is performed on the sixth audio signal to obtain a sixth frequency spectrum corresponding to the sixth audio signal. The value at each frequency point in the sixth frequency spectrum is a complex number; the modulus of the complex value at each frequency point is calculated, and the moduli over all frequency points are statistically averaged to obtain a sixth statistical average value, which is the energy value corresponding to the sixth audio signal.
Step 232a, calculating to obtain an energy attenuation amount by using an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal, where the second acoustic performance data includes the energy attenuation amount.
The energy attenuation amount Y3 is the difference between the energy value S3 corresponding to the fourth audio signal and the energy value S4 corresponding to the sixth audio signal, that is, Y3 = S3 - S4.
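Under the same assumptions as the earlier sketch (and reusing the signal_energy helper defined after step 132), step 232a reduces to a single difference:

```python
def energy_attenuation_amount(fourth_signal, sixth_signal) -> float:
    """Energy attenuation amount Y3 = S3 - S4 (step 232a)."""
    return signal_energy(fourth_signal) - signal_energy(sixth_signal)
```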
Fig. 6 is a flowchart of another specific implementation manner of step 23 in fig. 4, and as shown in fig. 6, in some embodiments, in a case that the fifth audio signal includes an environmental noise signal generated by an external noise source and an audio signal played by an external audio playing device, step 23 includes step 231b and step 232b.
Step 231b, performing fast Fourier transform processing on the fourth audio signal and the sixth audio signal respectively to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal.
For the calculation process of the energy value corresponding to the fourth audio signal and the energy value corresponding to the sixth audio signal, reference is made to the above description of step 231a, and details are not repeated here.
Step 232b, calculating a speech distortion amount by using the energy value corresponding to the fourth audio signal and the energy value corresponding to the sixth audio signal, wherein the second acoustic performance data comprises the speech distortion amount.
Specifically, the speech distortion amount SDR is calculated from the energy value S3 corresponding to the fourth audio signal and the energy value S4 corresponding to the sixth audio signal using the preset speech distortion formula: SDR = 10·log10[S3/(S4 - S3)].
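A direct transcription of this formula, again as an illustrative sketch; the guard against a non-positive denominator is an added assumption of the example, not something the disclosure specifies.

```python
import math


def speech_distortion(s3: float, s4: float) -> float:
    """Speech distortion amount SDR = 10 * log10(S3 / (S4 - S3)) (step 232b)."""
    if s4 <= s3:
        # The formula is only defined when S4 exceeds S3.
        raise ValueError("S4 must be greater than S3 for the SDR formula")
    return 10.0 * math.log10(s3 / (s4 - s3))
```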
Step 24, generating a second test result according to the second acoustic performance data and preset second standard data.
Specifically, a second test result is obtained by comparing the second acoustic performance data with preset second standard data.
In a case where the fifth audio signal includes an audio signal played by the voice device itself and an audio signal played by the external audio playing device, the second acoustic performance data includes the energy attenuation amount Y3 and the second standard data includes a second criterion value B.
In a case where the fifth audio signal includes an environmental noise signal generated by the external noise source and an audio signal played by the external audio playing device, the second acoustic performance data includes the speech distortion amount SDR and the second standard data includes a third criterion value C.
Fig. 7 is a flowchart of a specific implementation manner of step 24 in fig. 4, and as shown in fig. 7, in some embodiments, in a case that the fifth audio signal includes an audio signal played by the speech device itself and an audio signal played by an external audio playing device, the second acoustic performance data includes the energy attenuation amount Y3, the second standard data includes the second criterion value B, and step 24 includes step 241a and step 242a.
Step 241a, comparing the energy attenuation amount with the second criterion value to obtain a comparison result.
Step 242a, generating a second test result according to the comparison result of the energy attenuation amount and the second criterion value.
Specifically, when the comparison result is that the energy attenuation amount Y3 is smaller than the second criterion value B, the test result for the voice device's capability to eliminate non-effective audio under this condition is judged to be qualified and the second test result includes information indicating that the test is qualified; otherwise, the test result is judged to be unqualified and the second test result includes information indicating that the test is unqualified.
Fig. 8 is a flowchart of another specific implementation manner of step 24 in fig. 4, and as shown in fig. 8, in some embodiments, in a case that the fifth audio signal includes an environmental noise signal generated by an external noise source and an audio signal played by an external audio playing device, the second acoustic performance data includes the speech distortion amount SDR, the second standard data includes the third criterion value C, and step 24 includes step 241b and step 242b.
Step 241b, comparing the speech distortion amount with the third criterion value to obtain a comparison result.
Step 242b, generating a second test result according to the comparison result of the speech distortion amount and the third criterion value.
Specifically, when the comparison result is that the speech distortion amount SDR is greater than or equal to the third criterion value C, the test result for the voice device's capability to eliminate non-effective audio under this condition is judged to be qualified and the second test result includes information indicating that the test is qualified; otherwise, the test result is judged to be unqualified and the second test result includes information indicating that the test is unqualified.
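The two comparisons of step 24 can then be sketched as follows; as before, the criterion values B and C are placeholders rather than values from the disclosure.

```python
def energy_attenuation_result(y3: float, criterion_b: float) -> str:
    """Steps 241a-242a: qualified when the energy attenuation amount stays
    below the second criterion value."""
    return "qualified" if y3 < criterion_b else "unqualified"


def speech_distortion_result(sdr: float, criterion_c: float) -> str:
    """Steps 241b-242b: qualified when the speech distortion amount is at
    least the third criterion value."""
    return "qualified" if sdr >= criterion_c else "unqualified"
```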
According to the acoustic performance testing method provided by the embodiment of the disclosure, the performance of the underlying audio hardware of the voice device is evaluated under different test scenarios to obtain a test result for each scenario. These test results serve as a quality evaluation standard for the underlying audio hardware of the voice device, provide a hardware guarantee for the back-end voice recognition algorithm, move quality risk evaluation of the underlying audio hardware to an earlier stage, and thereby reduce product research and development cost.
Fig. 9 is a block diagram of an acoustic performance testing apparatus for a speech device according to an embodiment of the present disclosure, and as shown in fig. 9, the apparatus is configured to implement the acoustic performance testing method described above, and the acoustic performance testing apparatus includes: a first obtaining module 301, a second obtaining module 302, a performance parameter generating module 303 and an evaluating module 304.
The first obtaining module 301 is configured to obtain a first audio signal recorded by the voice device, where the first audio signal is an audio signal played by the voice device itself or an environmental noise signal generated by an external noise source.
The second obtaining module 302 is configured to obtain a second audio signal generated by the voice device after performing signal processing on the first audio signal.
The performance parameter generating module 303 is configured to generate first acoustic performance data of the speech device according to the first audio signal and the second audio signal.
The evaluating module 304 is configured to generate a first test result according to the first acoustic performance data and preset first standard data.
In some embodiments, the performance parameter generating module 303 is specifically configured to perform fast Fourier transform processing on the first audio signal and the second audio signal respectively to obtain an energy value corresponding to the first audio signal and an energy value corresponding to the second audio signal, and to calculate a noise cancellation amount by using the energy value corresponding to the first audio signal and the energy value corresponding to the second audio signal, where the first acoustic performance data includes the noise cancellation amount.
In some embodiments, the first standard data includes a first criterion value, and the evaluating module 304 is specifically configured to compare the noise cancellation amount with the first criterion value and to generate a first test result according to the comparison result of the noise cancellation amount and the first criterion value.
In some embodiments, the first obtaining module 301 is further configured to obtain a fourth audio signal generated by the voice device after performing signal processing on the recorded third audio signal, where the third audio signal is an audio signal played by an external audio playing device. The second obtaining module 302 is further configured to obtain a sixth audio signal generated by the voice device after performing signal processing on the recorded fifth audio signal, where the fifth audio signal includes an audio signal played by the voice device itself and an audio signal played by an external audio playing device, or the fifth audio signal includes an environmental noise signal generated by an external noise source and an audio signal played by the external audio playing device. The performance parameter generating module 303 is further configured to generate second acoustic performance data of the voice device according to the fourth audio signal and the sixth audio signal. The evaluating module 304 is further configured to generate a second test result according to the second acoustic performance data and preset second standard data.
In some embodiments, in a case that the fifth audio signal includes an audio signal played by the voice device itself and an audio signal played by the external audio playing device, the performance parameter generating module 303 is specifically configured to: perform fast Fourier transform processing on the fourth audio signal and the sixth audio signal respectively to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal; and calculate an energy attenuation amount by using the energy value corresponding to the fourth audio signal and the energy value corresponding to the sixth audio signal, where the second acoustic performance data includes the energy attenuation amount.
In some embodiments, the second standard data includes a second criterion value; the evaluating module 304 is specifically configured to compare the energy attenuation amount with the second criterion value and to generate a second test result according to the comparison result of the energy attenuation amount and the second criterion value.
In some embodiments, in a case that the fifth audio signal includes an environmental noise signal generated by an external noise source and an audio signal played by an external audio playing device, the performance parameter generating module 303 is specifically configured to: perform fast Fourier transform processing on the fourth audio signal and the sixth audio signal respectively to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal; and calculate a speech distortion amount by using the energy value corresponding to the fourth audio signal and the energy value corresponding to the sixth audio signal, where the second acoustic performance data includes the speech distortion amount.
In some embodiments, the second standard data includes a third criterion value; the evaluating module 304 is specifically configured to compare the speech distortion amount with the third criterion value and to generate a second test result according to the comparison result of the speech distortion amount and the third criterion value.
In addition, the acoustic performance testing apparatus provided in the embodiment of the present disclosure is specifically configured to implement the foregoing acoustic performance testing method, and reference may be specifically made to the description of the foregoing acoustic performance testing method, which is not described herein again.
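Read as software, the four modules of fig. 9 map naturally onto a thin wrapper around the helpers sketched earlier. The class below is illustrative only (it reuses noise_cancellation_amount and first_test_result from the previous sketches) and is not code from the disclosure.

```python
class AcousticPerformanceTester:
    """Illustrative wiring of the fig. 9 modules for the first test."""

    def __init__(self, criterion: float):
        self.criterion = criterion  # first criterion value (A1 or A2)

    def run_first_test(self, first_signal, second_signal) -> dict:
        # The first and second acquisition modules are assumed to have already
        # pulled the recorded and DSP-processed signals from the device under test.
        y = noise_cancellation_amount(first_signal, second_signal)  # performance parameter module
        return first_test_result(y, self.criterion)                 # evaluation module
```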
Fig. 10 is a block diagram of an electronic device according to an embodiment of the disclosure, and as shown in fig. 10, the electronic device includes: one or more processors 501; a memory 502 having one or more programs stored thereon that, when executed by the one or more processors 501, cause the one or more processors 501 to implement the acoustic performance testing method described above; one or more I/O interfaces 503 coupled between the processor 501 and the memory 502 and configured to enable information interaction between the processor 501 and the memory 502.
The embodiment of the present disclosure also provides a computer readable storage medium, on which a computer program is stored, wherein the computer program, when executed, implements the foregoing acoustic performance testing method.
It will be understood by those of ordinary skill in the art that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. In a hardware implementation, the division between functional modules/units mentioned in the above description does not necessarily correspond to the division of physical components; for example, one physical component may have multiple functions, or one function or step may be performed by several physical components in cooperation. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application specific integrated circuit. Such software may be distributed on computer readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). The term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer readable instructions, data structures, program modules or other data, as is well known to those of ordinary skill in the art. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, Digital Versatile Disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, communication media typically embodies computer readable instructions, data structures, program modules or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media as known to those skilled in the art.
Example embodiments have been disclosed herein, and although specific terms are employed, they are used and should be interpreted in a generic and descriptive sense only and not for purposes of limitation. In some instances, features, characteristics and/or elements described in connection with a particular embodiment may be used alone or in combination with features, characteristics and/or elements described in connection with other embodiments, unless expressly stated otherwise, as would be apparent to one skilled in the art. Accordingly, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the scope of the disclosure as set forth in the appended claims.

Claims (18)

1. A method for testing acoustic performance of a voice device, comprising:
acquiring a first audio signal recorded by the voice equipment, wherein the first audio signal is an audio signal played by the voice equipment or an environmental noise signal generated by an external noise source;
acquiring a second audio signal generated after the voice equipment performs signal processing on the first audio signal;
generating first acoustic performance data of the speech device from the first audio signal and the second audio signal;
and generating a first test result according to the first acoustic performance data and preset first standard data.
2. The acoustic performance testing method of claim 1, wherein the generating first acoustic performance data of the speech device from the first audio signal and the second audio signal comprises:
respectively carrying out fast Fourier transform processing on the first audio signal and the second audio signal to obtain an energy value corresponding to the first audio signal and an energy value corresponding to the second audio signal;
and calculating to obtain a noise elimination amount by using an energy value corresponding to the first audio signal and an energy value corresponding to the second audio signal, wherein the first acoustic performance data comprises the noise elimination amount.
3. The acoustic performance testing method according to claim 2, wherein the first standard data includes a first criterion value, and the generating a first test result according to the first acoustic performance data and a preset first standard data includes:
comparing the noise elimination amount with the first criterion value;
and generating the first test result according to the comparison result of the noise elimination amount and the first criterion value.
4. The acoustic performance testing method of claim 1, wherein the method further comprises:
acquiring a fourth audio signal generated after the voice device performs signal processing on a recorded third audio signal, wherein the third audio signal is an audio signal played by an external audio playing device;
acquiring a sixth audio signal generated after the voice device performs signal processing on a recorded fifth audio signal, where the fifth audio signal includes an audio signal played by the voice device itself and an audio signal played by the external audio playing device, or the fifth audio signal includes an environmental noise signal generated by an external noise source and an audio signal played by the external audio playing device;
generating second acoustic performance data of the speech device from the fourth audio signal and the sixth audio signal;
and generating a second test result according to the second acoustic performance data and preset second standard data.
5. The acoustic performance testing method of claim 4, wherein in case that the fifth audio signal comprises an audio signal played by the voice device itself and an audio signal played by the external audio playing device,
generating second acoustic performance data of the speech device from the fourth audio signal and the sixth audio signal, comprising:
performing fast fourier transform processing on the fourth audio signal and the sixth audio signal respectively to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal;
and calculating to obtain an energy attenuation amount by using an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal, wherein the second acoustic performance data comprises the energy attenuation amount.
6. The acoustic performance testing method according to claim 5, wherein the second standard data includes a second criterion value, and the generating a second test result according to the second acoustic performance data and a preset second standard data includes:
comparing the energy attenuation amount with the second criterion value;
and generating the second test result according to the comparison result of the energy attenuation amount and the second criterion value.
7. The acoustic performance testing method of claim 4, wherein in a case where the fifth audio signal comprises an ambient noise signal generated by an external noise source and an audio signal played by the external audio playing device,
generating second acoustic performance data of the speech device from the fourth audio signal and the sixth audio signal, comprising:
performing fast fourier transform processing on the fourth audio signal and the sixth audio signal respectively to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal;
and calculating to obtain the voice distortion quantity by using the energy value corresponding to the fourth audio signal and the energy value corresponding to the sixth audio signal, wherein the second acoustic performance data comprises the voice distortion quantity.
8. The acoustic performance testing method according to claim 7, wherein the second standard data includes a third criterion value, and the generating a second test result according to the second acoustic performance data and a preset second standard data includes:
comparing the voice distortion quantity with the third criterion value;
and generating the second test result according to the comparison result of the voice distortion quantity and the third criterion value.
9. An acoustic performance testing apparatus for a speech device, comprising:
the first acquisition module is used for acquiring a first audio signal recorded by the voice equipment, wherein the first audio signal is an audio signal played by the voice equipment or an environmental noise signal generated by an external noise source;
the second obtaining module is used for obtaining a second audio signal generated after the voice equipment performs signal processing on the first audio signal;
a performance parameter generation module, configured to generate first acoustic performance data of the speech device according to the first audio signal and the second audio signal;
and the evaluation module is used for generating a first test result according to the first acoustic performance data and preset first standard data.
10. The acoustic performance testing apparatus according to claim 9, wherein the performance parameter generating module is specifically configured to perform fast fourier transform processing on the first audio signal and the second audio signal, respectively, to obtain an energy value corresponding to the first audio signal and an energy value corresponding to the second audio signal; and calculating to obtain a noise elimination amount by using an energy value corresponding to the first audio signal and an energy value corresponding to the second audio signal, wherein the first acoustic performance data comprises the noise elimination amount.
11. The acoustic performance testing apparatus of claim 10, wherein the first standard data comprises a first criterion value, the evaluation module being configured to compare the noise elimination amount with the first criterion value; and generate the first test result according to the comparison result of the noise elimination amount and the first criterion value.
12. The acoustic performance testing apparatus according to claim 9, wherein:
the first acquisition module is further configured to acquire a fourth audio signal generated after the voice device performs signal processing on a recorded third audio signal, wherein the third audio signal is an audio signal played by an external audio playing device;
the second acquisition module is further configured to acquire a sixth audio signal generated after the voice device performs signal processing on a recorded fifth audio signal, wherein the fifth audio signal comprises an audio signal played by the voice device itself and an audio signal played by the external audio playing device, or the fifth audio signal comprises an environmental noise signal generated by an external noise source and an audio signal played by the external audio playing device;
the performance parameter generation module is further configured to generate second acoustic performance data of the voice device from the fourth audio signal and the sixth audio signal;
and the evaluation module is further configured to generate a second test result according to the second acoustic performance data and preset second standard data.
13. The acoustic performance testing apparatus according to claim 12, wherein, in the case where the fifth audio signal comprises an audio signal played by the voice device itself and an audio signal played by the external audio playing device,
the performance parameter generation module is configured to perform fast Fourier transform processing on the fourth audio signal and the sixth audio signal, respectively, to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal, and to calculate an energy attenuation amount from the energy value corresponding to the fourth audio signal and the energy value corresponding to the sixth audio signal, wherein the second acoustic performance data comprises the energy attenuation amount.
14. The acoustic performance testing apparatus according to claim 13, wherein the second standard data comprises a second criterion value,
and the evaluation module is configured to compare the energy attenuation amount with the second criterion value and to generate the second test result according to the result of the comparison between the energy attenuation amount and the second criterion value.
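Claims 13 and 14 cover the echo-suppression case: the energy attenuation of the device's own playback is estimated and compared with the second criterion value. A short-time, frame-based estimate such as the sketch below is one plausible way to do this; the frame length, hop size, averaging step, and threshold direction are all assumptions rather than anything the claims prescribe.

```python
import numpy as np

def frame_energies(signal: np.ndarray, frame_len: int = 1024, hop: int = 512) -> np.ndarray:
    """Per-frame spectral energies computed with a short-time FFT."""
    n_frames = max(1, 1 + (len(signal) - frame_len) // hop)
    energies = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        energies[i] = np.sum(np.abs(np.fft.rfft(frame)) ** 2)
    return energies

def energy_attenuation_db(fourth_audio: np.ndarray, sixth_audio: np.ndarray) -> float:
    """Average attenuation (dB) of the processed mixed recording relative to the clean one."""
    e4 = frame_energies(fourth_audio)
    e6 = frame_energies(sixth_audio)
    n = min(len(e4), len(e6))
    return float(np.mean(10.0 * np.log10(e4[:n] / e6[:n])))

def second_test_result(attenuation_db: float, second_criterion_db: float) -> str:
    """Compare with the preset second criterion value (threshold direction is assumed)."""
    return "pass" if attenuation_db >= second_criterion_db else "fail"
```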
15. The acoustic performance testing apparatus according to claim 12, wherein, in the case where the fifth audio signal comprises an environmental noise signal generated by an external noise source and an audio signal played by the external audio playing device,
the performance parameter generation module is configured to perform fast Fourier transform processing on the fourth audio signal and the sixth audio signal, respectively, to obtain an energy value corresponding to the fourth audio signal and an energy value corresponding to the sixth audio signal, and to calculate a voice distortion amount from the energy value corresponding to the fourth audio signal and the energy value corresponding to the sixth audio signal, wherein the second acoustic performance data comprises the voice distortion amount.
16. The acoustic performance testing apparatus according to claim 15, wherein the second standard data comprises a third criterion value,
and the evaluation module is configured to compare the voice distortion amount with the third criterion value and to generate the second test result according to the result of the comparison between the voice distortion amount and the third criterion value.
17. An electronic device, comprising:
one or more processors;
a memory having one or more programs stored thereon which, when executed by the one or more processors, cause the one or more processors to implement the acoustic performance testing method of any one of claims 1 to 8;
and one or more I/O interfaces connected between the one or more processors and the memory and configured to enable exchange of information between the one or more processors and the memory.
18. A computer-readable medium having a computer program stored thereon, wherein the computer program, when executed, implements the acoustic performance testing method of any one of claims 1 to 8.
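To show how the computations above might fit together in practice, here is a tiny end-to-end driver using synthetic signals. The attenuation stand-in for the device's processing, the 16 kHz rate, the 15 dB criterion value, and all names are invented for illustration; an actual test would record from the device under test as the claims describe.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: a recorded noise signal and the device's processed output.
first_audio = rng.standard_normal(16000)        # 1 s of noise at an assumed 16 kHz rate
second_audio = 0.1 * first_audio                # pretend the device attenuated the noise

e1 = np.sum(np.abs(np.fft.rfft(first_audio)) ** 2)
e2 = np.sum(np.abs(np.fft.rfft(second_audio)) ** 2)
noise_elimination_db = 10.0 * np.log10(e1 / e2)  # ~20 dB for the 0.1 gain above

first_criterion_db = 15.0                        # invented criterion value
verdict = "pass" if noise_elimination_db >= first_criterion_db else "fail"
print("noise elimination: %.1f dB -> %s" % (noise_elimination_db, verdict))
```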
CN202010623087.3A 2020-06-30 2020-06-30 Acoustic performance testing method and device, electronic equipment and computer readable medium Pending CN111785298A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010623087.3A CN111785298A (en) 2020-06-30 2020-06-30 Acoustic performance testing method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010623087.3A CN111785298A (en) 2020-06-30 2020-06-30 Acoustic performance testing method and device, electronic equipment and computer readable medium

Publications (1)

Publication Number Publication Date
CN111785298A true CN111785298A (en) 2020-10-16

Family

ID=72761473

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010623087.3A Pending CN111785298A (en) 2020-06-30 2020-06-30 Acoustic performance testing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN111785298A (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6999560B1 (en) * 1999-06-28 2006-02-14 Cisco Technology, Inc. Method and apparatus for testing echo canceller performance
CN101661751A (en) * 2008-08-29 2010-03-03 华为技术有限公司 Method and device for evaluating acoustic echo cancellation algorithm
CN106161705A (en) * 2015-04-22 2016-11-23 小米科技有限责任公司 Audio frequency apparatus method of testing and device
CN107360530A (en) * 2017-07-03 2017-11-17 苏州科达科技股份有限公司 The method of testing and device of a kind of echo cancellor
CN108281140A (en) * 2017-12-29 2018-07-13 潍坊歌尔电子有限公司 The test method and system of smart machine noise removing performance
CN109547910A (en) * 2019-01-03 2019-03-29 百度在线网络技术(北京)有限公司 Electronic equipment acoustic assembly performance test methods, device, equipment and storage medium
CN109831733A (en) * 2019-02-26 2019-05-31 北京百度网讯科技有限公司 Test method, device, equipment and the storage medium of audio broadcast performance
CN110430519A (en) * 2019-08-07 2019-11-08 厦门市思芯微科技有限公司 A kind of acoustics of intelligent sound box is tested automatically and analysis system and method
CN110933240A (en) * 2019-10-16 2020-03-27 福建星网智慧软件有限公司 Voice frequency automatic testing device and method of VoIP terminal
CN110853664A (en) * 2019-11-22 2020-02-28 北京小米移动软件有限公司 Method and device for evaluating performance of speech enhancement algorithm and electronic equipment
CN111212372A (en) * 2020-01-09 2020-05-29 广州视声智能科技有限公司 Automatic testing and calibrating method and device for audio call products

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
XU, YAN; MENG, JING: "Research on performance evaluation of speech enhancement algorithms based on pink noise (基于粉红噪声的语音增强算法性能评价研究)", Journal of the China Railway Society (铁道学报), no. 04 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112562740A (en) * 2020-11-25 2021-03-26 厦门亿联网络技术股份有限公司 Noise elimination test method, system, audio and video equipment and storage medium
CN114245281A (en) * 2021-12-09 2022-03-25 深圳市音络科技有限公司 Voice performance test method and system
CN114245281B (en) * 2021-12-09 2024-03-19 深圳市音络科技有限公司 Voice performance test method and system

Similar Documents

Publication Publication Date Title
US11017799B2 (en) Method for processing voice in interior environment of vehicle and electronic device using noise data based on input signal to noise ratio
CN111798852B (en) Voice wakeup recognition performance test method, device, system and terminal equipment
CN108469966A (en) Voice broadcast control method and device, intelligent device and medium
CN106024035B (en) A kind of method and terminal of audio processing
CN108305637B (en) Earphone voice processing method, terminal equipment and storage medium
CN111031463B (en) Microphone array performance evaluation method, device, equipment and medium
CN111785298A (en) Acoustic performance testing method and device, electronic equipment and computer readable medium
CN113259832B (en) Microphone array detection method and device, electronic equipment and storage medium
CN111261195A (en) Audio testing method and device, storage medium and electronic equipment
CN111739512A (en) Voice wake-up rate testing method, system, device and medium based on real vehicle
CN111128167A (en) Far-field voice awakening method and device, electronic product and storage medium
CN113571047A (en) Audio data processing method, device and equipment
CN110475181B (en) Equipment configuration method, device, equipment and storage medium
CN111028838A (en) Voice wake-up method, device and computer readable storage medium
CN105869656B (en) Method and device for determining definition of voice signal
CN110390954B (en) Method and device for evaluating quality of voice product
CN109741761B (en) Sound processing method and device
CN111128216B (en) Audio signal processing method, processing device and readable storage medium
CN113068100A (en) Closed-loop automatic detection vibration reduction method, system, terminal and storage medium
CN108899041B (en) Voice signal noise adding method, device and storage medium
CN109121068A (en) Sound effect control method, apparatus and electronic equipment
CN116107537A (en) Audio quality adjustment method and device, electronic equipment and storage medium
CN112995882B (en) Intelligent equipment audio open loop test method
CN113517000A (en) Echo cancellation test method, terminal and storage device
CN111885474A (en) Microphone testing method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination