CN115278465A - Howling suppression method and device, sound box and sound amplification system - Google Patents


Info

Publication number
CN115278465A
Authority
CN
China
Prior art keywords
signal
loudspeaker
played
microphone
filter coefficient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211028454.0A
Other languages
Chinese (zh)
Inventor
Qin Yaguang (秦亚光)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Eswin Computing Technology Co Ltd
Original Assignee
Beijing Eswin Computing Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Eswin Computing Technology Co Ltd filed Critical Beijing Eswin Computing Technology Co Ltd
Priority to CN202211028454.0A
Publication of CN115278465A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/02 Circuits for transducers, loudspeakers or microphones for preventing acoustic reaction, i.e. acoustic oscillatory feedback
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208 Noise filtering
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R1/00 Details of transducers, loudspeakers or microphones
    • H04R1/02 Casings; Cabinets; Supports therefor; Mountings therein
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R27/00 Public address systems
    • H04R27/04 Electric megaphones

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Quality & Reliability (AREA)
  • Computational Linguistics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

An embodiment of the application provides a howling suppression method and device, a sound box and a sound amplification system, relating to the technical field of digital signal processing. The method comprises: preprocessing an audio signal in a public address system and converting it into the frequency domain; performing the following processing on each frame signal of each frequency point in the converted audio signal: filtering the converted signal played by the loudspeaker twice based on an adaptive filter; determining a signal to be amplified based on the difference between the converted signal collected by the microphone and the twice-filtered signal played by the loudspeaker, and updating the filter coefficient twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal; and amplifying the signal to be amplified and converting it back into the time domain to obtain the target audio. By updating the filter coefficient a second time, the loop gain from the microphone to the loudspeaker and the howling suppression capability of the public address system can be improved.

Description

Howling suppression method and device, sound box and sound amplification system
Technical Field
The present application relates to the field of digital signal processing technologies, and in particular, to a howling suppression method and device, a sound box, and a sound amplification system.
Background
In a public address system, the signal collected by the microphone is transmitted to the loudspeaker, amplified and played, and the audio signal played by the loudspeaker is picked up by the microphone again; this transmission and feedback of the audio signal between the loudspeaker and the microphone forms an acoustic loop. When the volume is high, the feedback loop becomes positive feedback, that is, the gain of the acoustic loop exceeds 1. The sound is then amplified step by step in the continuing feedback, producing a harsh howling that seriously degrades the listening experience.
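As a rough, hypothetical illustration of this loop-gain condition (the gain values and the function name below are not taken from the patent), the growth of a single feedback component can be sketched as follows:

```python
# Illustrative sketch only: how a feedback component grows or decays after
# repeated trips around the loudspeaker-microphone loop.
def amplitude_after_round_trips(loop_gain: float, trips: int, a0: float = 1.0) -> float:
    # Each pass through the acoustic loop multiplies the component by loop_gain.
    return a0 * loop_gain ** trips

print(amplitude_after_round_trips(0.9, 20))  # loop gain < 1: the component decays, no howling
print(amplitude_after_round_trips(1.1, 20))  # loop gain > 1: the component keeps growing -> howling
```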
Current methods for suppressing howling in a public address system include the frequency-shift/phase-shift method, the notch suppression method and the adaptive howling suppression method. The frequency-shift/phase-shift method changes the frequency or phase of the sound in real time during processing, destroying the phase condition required for positive feedback. The notch suppression method forcibly reduces the acoustic loop gain at the frequency points where howling occurs by means of a notch filter. Both approaches, however, alter the frequency response of the sound signal or of the system and introduce a certain amount of distortion.
The adaptive howling suppression method uses an adaptive filter to track the feedback path and cancel its effect, which can prevent howling from arising; however, its howling suppression capability is limited, and the improvement in the loop gain of the system is insufficient.
Disclosure of Invention
Embodiments of the present application provide a howling suppression method and device, a sound box and a sound amplification system, aiming to solve at least one of the technical problems in the prior art.
According to a first aspect of an embodiment of the present application, there is provided a howling suppression method, including:
preprocessing an audio signal in a public address system and converting the audio signal into the frequency domain, wherein the audio signal comprises: signals collected by a microphone and signals played by a loudspeaker;
and performing the following processing on each frame signal of each frequency point in the converted audio signal:
filtering the converted signal played by the loudspeaker twice based on an adaptive filter;
determining a signal to be amplified based on the difference between the converted signal collected by the microphone and the twice-filtered signal played by the loudspeaker, and updating the filter coefficient twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal;
and amplifying the signal to be amplified and converting it back into the time domain to obtain the target audio.
In one possible implementation, updating the filter coefficient twice with the signal played by the loudspeaker in the current frame signal as the reference signal comprises:
taking the signal played by the loudspeaker in the current frame signal as the reference signal, and updating the filter coefficient based on the difference between the signal collected by the microphone in the current frame signal and the reference signal to obtain a first filter coefficient;
and taking the once-filtered signal played by the loudspeaker in the current frame signal as the reference signal, and updating the first filter coefficient based on the difference between the signal collected by the microphone in the current frame signal and the reference signal to obtain a second filter coefficient.
In yet another possible implementation manner, before processing the initial frame signal of any frequency point, the method further includes:
determining an initial filter coefficient and an initial error signal corresponding to a transfer function of a loudspeaker to microphone path in the public address system;
the following processing procedures are carried out on the initial frame signal of any frequency point:
obtaining updated first filter coefficients and first error signals based on the initial filter coefficients, the initial error signals and the initial frame signals;
obtaining updated second filter coefficients and second error signals based on the first filter coefficients, the first error signals and the initial frame signals;
comparing power spectrums corresponding to the first error signal and the second error signal, and determining an error signal corresponding to a smaller power spectrum as a signal to be amplified corresponding to the initial frame signal;
wherein the initial error signal is determined based on the signal played by the speaker and the signal collected by the microphone in the initial frame signal, and the initial filter coefficient.
In another possible implementation, the process of determining the first error signal includes:
determining a corresponding first filter coefficient based on a preset updating step length, the initial filter coefficient, the initial error signal and a signal played by a loudspeaker in the initial frame signal;
based on the first filter coefficient, filtering a signal played by a loudspeaker in the initial frame signal to obtain a corresponding first filtered signal;
and determining the difference between the signal collected by the microphone in the initial frame signal and the first filtering signal as the first error signal.
In another possible implementation, the determining the second error signal includes:
obtaining a corresponding second filter coefficient based on a preset updating step length, the first filter coefficient, the first error signal and a signal played by a loudspeaker in the initial frame signal;
based on the second filter coefficient, filtering the signal played by the loudspeaker in the initial frame signal to obtain a corresponding second filtered signal;
and determining the difference between the signal collected by the microphone in the initial frame signal and the second filtering signal as the second error signal.
In yet another possible implementation manner, the following processing is performed for any non-initial frame signal of any frequency point:
obtaining a first filter coefficient and a first error signal corresponding to a current frame signal based on a first filter coefficient and a first error signal obtained by updating in the previous frame signal processing process and the current frame signal;
obtaining a second filter coefficient and a second error signal corresponding to the current frame signal based on a first filter coefficient and a first error signal corresponding to the current frame signal and the current frame signal;
and comparing the power spectrums corresponding to the first error signal and the second error signal corresponding to the current frame signal, and determining the error signal corresponding to the smaller power spectrum as the signal to be amplified corresponding to the current frame signal.
In another possible implementation, if the preprocessing is a fast Fourier transform, amplifying the signal to be amplified and converting it back into the time domain to obtain the target audio comprises:
performing an inverse Fourier transform on the amplified signal to return it to the time domain, obtaining the target audio.
According to a second aspect of embodiments of the present application, there is provided a howling suppression apparatus, comprising: a signal processing module, an adder, a comparator, an adaptive filter, a fast Fourier transform module and an inverse Fourier transform module, wherein the input end of the adaptive filter is connected with a loudspeaker, the output end of the adaptive filter is connected with the input end of the adder, the output end of the adder is connected with the input end of the comparator, the output end of the comparator is connected with the input end of the signal processing module, and the output end of the signal processing module is connected with the loudspeaker;
the fast Fourier transform module is used for preprocessing the audio signal collected by the microphone and converting it into the frequency domain, wherein the audio signal comprises: signals collected by a microphone and signals played by a loudspeaker;
the adaptive filter is used for performing echo suppression and howling suppression processing twice on the converted audio signal and outputting the processed audio signal to the adder;
the adder is used for subtracting each of the two signals output by the adaptive filter from the converted audio signal and outputting the results to the comparator;
the comparator is used for comparing the two received signals and outputting the smaller one to the signal processing module;
the signal processing module performs local amplification processing on the received signal and then transmits it to the inverse Fourier transform module;
and the inverse Fourier transform module is used for converting the received signal into the time domain to obtain the target audio and transmitting it to the loudspeaker for playing.
According to a third aspect of embodiments herein, there is provided a processor comprising: a signal conversion module and a signal processing update module, wherein,
the signal conversion module is used for preprocessing an audio signal in a public address system and converting the audio signal into the frequency domain, wherein the audio signal comprises: signals collected by a microphone and signals played by a loudspeaker;
the signal processing and updating module is used for performing the following processing on each frame signal of each frequency point in the converted audio signal: filtering the converted signal played by the loudspeaker twice based on the adaptive filter; determining a signal to be amplified based on the difference between the converted signal collected by the microphone and the twice-filtered signal played by the loudspeaker, and updating the filter coefficient twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal;
the signal conversion module is further configured to amplify the signal to be amplified and convert it back into the time domain to obtain the target audio.
According to a fourth aspect of embodiments of the present application, there is provided a sound box, comprising: a loudspeaker and the howling suppression device according to the embodiment of the second aspect, wherein
the output end of the adder is connected with the input end of the comparator, the output end of the comparator is connected with the input end of a signal processing module in the howling suppression device, and the output end of the signal processing module is connected with the loudspeaker.
According to a fifth aspect of embodiments of the present application, there is provided a sound amplification system, comprising: a microphone, a loudspeaker and the above howling suppression device, wherein the howling suppression device is arranged between the microphone and the loudspeaker and is configured to receive the audio signal collected by the microphone and output the generated target audio to the loudspeaker for playing.
The technical scheme provided by the embodiment of the application has the following beneficial effects:
Based on the filter coefficient of the adaptive filter, the frequency-domain audio signal is processed frame by frame and frequency point by frequency point to obtain the corresponding output signal, and the filter coefficient is updated twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal. By updating the filter coefficient a second time, the loop gain from the microphone to the loudspeaker and the howling suppression capability of the public address system can be improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the description of the embodiments of the present application will be briefly described below.
Fig. 1 is a schematic diagram illustrating a signal transmission process in the related art;
fig. 2 is a schematic diagram of a signal transmission process corresponding to a howling suppression method according to an embodiment of the present application;
fig. 3 is a flowchart illustrating a howling suppression method according to an embodiment of the present application;
fig. 4 is a schematic structural diagram of a processor according to an embodiment of the present application.
Reference numerals: 10 - loudspeaker; 20 - microphone; 30 - local public address system; 40 - adder; 50 - adaptive filter; 60 - comparator.
Detailed Description
Embodiments of the present application are described below in conjunction with the drawings in the present application. It should be understood that the embodiments set forth below in connection with the drawings are exemplary descriptions for explaining technical solutions of the embodiments of the present application, and do not limit the technical solutions of the embodiments of the present application.
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
When a public address system picks up sound with a microphone, the sound signal collected by the microphone is transmitted to the loudspeaker, amplified and played, and the sound signal played by the loudspeaker is collected by the microphone again after travelling through the air. Fig. 1 shows the transmission of signals in a public address system, where x is the near-end speech signal, i.e. the actual speech, u is the audio signal finally played by the loudspeaker, k is the feedback signal produced by the loop transfer function H, i.e. the audio signal played by the loudspeaker and collected again by the microphone after spatial transmission, y is the audio signal collected by the microphone, and G is the local public address system.
The adaptive feedback suppression method uses an adaptive filter to track the feedback path and cancel its effect, which can prevent howling; however, its howling suppression capability is limited, and the improvement in the loop gain of the system is insufficient.
In order to solve the above technical problems in the prior art, embodiments of the present application provide a howling suppression method and device, a sound box and a sound amplification system.
The technical solutions of the embodiments of the present application and the technical effects produced by the technical solutions of the present application will be described below through descriptions of several exemplary embodiments. It should be noted that the following embodiments may be referred to, referred to or combined with each other, and the description of the same terms, similar features, similar implementation steps, etc. in different embodiments is not repeated.
Fig. 2 is a schematic diagram of the signal transmission process of the howling suppression method provided in an embodiment of the present application, where n and k denote the frame index and the frequency point respectively. X(n,k) denotes the near-end speech signal, Y(n,k) the audio signal collected by the microphone, U(n,k) the audio signal played by the loudspeaker, and E(n,k) the error signal after the adaptive filter is updated. The filter coefficients of the adaptive filter corresponding to the transfer function H(n,k) of the feedback path from loudspeaker to microphone are denoted Ĥ1(n,k) and Ĥ2(n,k) (for the first and second update respectively). G(n,k) denotes the processing performed by the signal processing module in the local sound amplification system, including automatic gain control, signal amplification, power amplification and the like.
Specifically, this embodiment includes the following steps:
Step 1: estimate, based on debugging audio, the initial filter coefficient corresponding to the transfer function of the loudspeaker-to-microphone path in the public address system.
Step 2: initialize the adaptive filter with the initial filter coefficient.
Step 3: perform a fast Fourier transform on the audio signal collected by the microphone to convert it into the frequency domain, and process it in the frequency domain frequency point by frequency point and frame by frame.
Step 4: during processing the signals are represented as vectors. Specifically, assuming the number of frequency points is K and the number of filters is M, the vector corresponding to the nth frame of frequency point k in the audio signal collected by the microphone is:
Y(n,k) = [Y(n,k), Y(n-1,k), ..., Y(n-M+1,k)]^T
Similarly, X(n,k) and U(n,k) can be written in vector form.
The vector form of the filter coefficients is:
Ĥ(n,k) = [Ĥ(n,k,0), Ĥ(n,k,1), ..., Ĥ(n,k,M-1)]^T
and the vector form of H(n,k) is obtained in the same way.
The input signal of the microphone can then be expressed as:
Y(n,k) = X(n,k) + H^H(n,k)·U(n,k)
where H^H(n,k) denotes the conjugate transpose of H(n,k); Y(n,k) and U(n,k) are known quantities, and the remaining quantities are unknown.
The error signal can be expressed as:
E(n,k) = Y(n,k) - Ĥ^H(n,k)·U(n,k)
That is, after echo suppression and howling suppression are applied to one frame of the audio signal collected by the microphone, the corresponding error signal is obtained: the error signal of a frame of microphone audio equals that frame of audio minus the echo cancellation amount and the howling cancellation amount.
In the scheme of the application, the adaptive filter needs to be updated twice, specifically, a first filter coefficient obtained by updating in the previous frame signal processing process is used as a filter coefficient of the filter in the current frame signal processing process.
Based on the minimum mean square error (MMSE) criterion, the first update formula of the adaptive filter is:
Ĥ1(n,k) = Ĥ(n,k) + α·U(n,k)·E*(n,k) / (U^H(n,k)·U(n,k) + δ)
where δ is a small positive constant, typically 0.0001, which prevents the stability from degrading when the inner product of U(n,k) is too small, and α is the update step size, generally adapted with a variable step size, which may follow the related art and is not described further here.
The first error signal of the current frame (the nth frame) can be expressed as:
E1(n,k) = Y(n,k) - Ĥ1^H(n,k)·U(n,k)
The second update, performed based on the result of the first update, is:
Ĥ2(n,k) = Ĥ1(n,k) + α·U(n,k)·E1*(n,k) / (U^H(n,k)·U(n,k) + δ)
The second error signal of the current frame can be expressed as:
E2(n,k) = Y(n,k) - Ĥ2^H(n,k)·U(n,k)
Finally, the output signal of the public address system is:
U'(n,k) = G(n,k)·min{E1(n,k), E2(n,k)}
where min selects the error signal with the smaller power spectrum.
Step 5: convert the output signal U'(n,k) obtained in step 4 back to the time domain to obtain the target audio.
The scheme provided by this embodiment of the application combines a dual-filter structure so that the filtering works better, and adds a second filtering pass, i.e. the result of the first filtering is filtered again. Compared with the conventional adaptive howling suppression method, the method of the present application therefore improves the loop gain of the system more noticeably.
Fig. 3 is a flowchart illustrating a howling suppression method according to an embodiment of the present application. The method shown in fig. 3 comprises:
s101, preprocessing an audio signal in a public address system, and converting the audio signal into a frequency domain. Wherein the audio signal comprises; signals collected by the microphone and signals played by the loudspeaker.
S102, performing the following processing on each frame signal of each frequency point in the converted audio signal:
filtering the converted signal played by the loudspeaker twice based on the adaptive filter;
determining a signal to be amplified based on the difference between the converted signal collected by the microphone and the twice-filtered signal played by the loudspeaker, and updating the filter coefficient twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal.
S103, amplifying the signal to be amplified and converting it back into the time domain to obtain the target audio.
In this embodiment, the preprocessing in S101 is a fast Fourier transform (FFT), and S101 is implemented as follows: the audio signal is divided into frames, typically 10 to 30 ms per frame, usually with a 50% overlap. A time-domain window function (e.g. a Hanning window) is selected and slid along the signal, the time-domain audio signal is windowed, and a fast Fourier transform is then performed to convert the time-domain signal to the frequency domain.
For example, with a frame length of 256 samples, an overlap of 0.5 and a Hanning window (win) of length 256, the audio signals y(n) and u(n) in the public address system are preprocessed and the resulting frequency-domain signals can be expressed as:
Y(n,k) = FFT(y(n)*win);
U(n,k) = FFT(u(n)*win).
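A minimal sketch of this analysis step (assuming the 256-sample frame, 50% overlap and Hanning window of the example; the helper name `stft_frames` is ours):

```python
import numpy as np

def stft_frames(x, frame_len=256, overlap=0.5):
    """Split a time-domain signal into windowed frames and convert them to the frequency domain."""
    hop = int(frame_len * (1 - overlap))      # 128-sample hop for 50% overlap
    win = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = [x[i * hop:i * hop + frame_len] * win for i in range(n_frames)]
    return np.fft.rfft(np.stack(frames), axis=1)   # row n holds the frame spectrum, e.g. Y(n, k)
```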
Correspondingly, in S103 the signal to be amplified may be returned to the time domain by an inverse fast Fourier transform (IFFT) to obtain the target audio. Specifically, after the inverse FFT of the output signal, each frame is multiplied by the window function and the frames are overlap-added to obtain the target audio.
For example, the resulting target audio signal can be expressed as:
Output = IFFT(OUT(n,k)) * win, where OUT(n,k) is the output signal after the signal to be amplified has been amplified.
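Correspondingly, a sketch of the synthesis step, with windowing and overlap-add after the inverse FFT (same assumed frame length and overlap; `istft_overlap_add` is an illustrative name):

```python
import numpy as np

def istft_overlap_add(OUT, frame_len=256, overlap=0.5):
    """OUT: (n_frames, frame_len // 2 + 1) array of output spectra OUT(n, k)."""
    hop = int(frame_len * (1 - overlap))
    win = np.hanning(frame_len)
    y = np.zeros(hop * (OUT.shape[0] - 1) + frame_len)
    for n in range(OUT.shape[0]):
        frame = np.fft.irfft(OUT[n], n=frame_len) * win   # IFFT, then multiply by the window
        y[n * hop:n * hop + frame_len] += frame           # overlap-add into the output buffer
    return y
```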
With the method of this embodiment of the application, the frequency-domain audio signal is processed frame by frame and frequency point by frequency point based on the filter coefficient of the adaptive filter to obtain the corresponding output signal, and the filter coefficient is updated twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal. By updating the filter coefficient a second time, the loop gain from the microphone to the loudspeaker and the howling suppression capability of the public address system can be improved.
A possible implementation is provided in the embodiment of the application: in S102, updating the filter coefficient twice with the signal played by the loudspeaker in the current frame signal as the reference signal to obtain the second filter coefficient comprises:
taking the signal played by the loudspeaker in the current frame signal as the reference signal, and updating the filter coefficient based on the difference between the signal collected by the microphone in the current frame signal and the reference signal to obtain a first filter coefficient;
and taking the once-filtered signal played by the loudspeaker in the current frame signal as the reference signal, and updating the first filter coefficient based on the difference between the signal collected by the microphone in the current frame signal and the reference signal to obtain a second filter coefficient.
Specifically, this embodiment adds a second filtering pass, i.e. the result of the first filtering is filtered again; compared with the conventional adaptive howling suppression method, the improvement in the loop gain of the system is therefore more noticeable.
A possible implementation is provided in this embodiment: before the initial frame signal of any frequency point is processed in S102, the method may further include:
S100 (not shown in the figures), determining the initial filter coefficient corresponding to the transfer function of the loudspeaker-to-microphone path in the public address system, and the initial error signal.
Wherein the initial error signal is determined based on the signal played by the speaker and the signal collected by the microphone in the initial frame signal, and the initial filter coefficient.
For example, for the nth frame (initial frame) signal of frequency point k, the corresponding initial error signal can be determined according to the following formula:
E(n,k) = Y(n,k) - Ĥ^H(n,k)·U(n,k)
where Ĥ(n,k) is the initial filter coefficient, Y(n,k) is the nth frame of frequency point k in the signals collected by the microphone, and U(n,k) is the nth frame of frequency point k in the signals played by the loudspeaker.
In S102, the following processing is performed on the initial frame signal of any frequency point:
based on the initial filter coefficients, the initial error signal and the initial frame signal, corresponding first filter coefficients and a first error signal are obtained.
Based on the first filter coefficient, the first error signal and the initial frame signal, a corresponding second filter coefficient and a second error signal are obtained.
And comparing the power spectrums corresponding to the first error signal and the second error signal, and determining the error signal corresponding to the smaller power spectrum as the signal to be amplified corresponding to the initial frame signal.
Wherein the initial frame is determined based on the number of adaptive filters.
Specifically, in this embodiment the initial filter coefficient corresponding to the transfer function from loudspeaker to microphone in the public address system may be estimated based on debugging audio; the estimation may use an off-line filter coefficient calculation method, the details of which are omitted here for brevity.
In this embodiment, the adaptive filter used is a normalized least mean square (NLMS) adaptive filter comprising a set of filters, and the initial frame can be determined from the number of filters in the set. For example, if the set contains 10 filters, the initial frame of any frequency point is the 10th frame. Processing the 10th frame signal requires the 1st to 10th frame signals. While the 10th frame signal is processed to obtain the corresponding output signal, the initial filter coefficient is updated twice to obtain the second filter coefficient, for use in processing the 11th frame signal. Similarly, processing the 11th frame signal requires the 2nd to 11th frame signals, and while the 11th frame is processed the filter coefficient is again updated twice for the 12th frame, so that the audio signal is processed frame by frame and frequency point by frequency point.
It should be noted that, in this embodiment, processing each frame signal refers to the M frame signals up to and including that frame, so that echo and howling can be cancelled more thoroughly, i.e. the howling suppression effect is better; M is an integer greater than 1 and equals the number of filters in the filter structure.
Specifically, in this embodiment the initial frame signal of any frequency point is first filtered based on the corresponding initial filter coefficient and initial error signal to obtain the corresponding first filter coefficient and first error signal; the initial frame signal is then filtered again based on the result of the first filtering, namely the first error signal, to obtain the corresponding second filter coefficient and second error signal. Finally, the power spectra of the first error signal and the second error signal are compared, and the error signal with the smaller power spectrum is taken as the signal to be amplified corresponding to the initial frame signal.
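The M-frame history that each frequency point needs can be held in a simple buffer; a sketch assuming M = 10 filters and a 256-point FFT (all names here are illustrative, not from the patent):

```python
import numpy as np

M = 10        # assumed number of filters in the NLMS filter bank
K = 129       # frequency points of a 256-point real FFT (256 // 2 + 1)

# U_hist[k] holds the reference vector [U(n,k), U(n-1,k), ..., U(n-M+1,k)] for the current frame n
U_hist = np.zeros((K, M), dtype=complex)

def push_speaker_frame(U_frame):
    """Shift in the newest loudspeaker spectrum U(n, :); per-frame processing starts once M frames are buffered."""
    U_hist[:, 1:] = U_hist[:, :-1].copy()   # age the history by one frame
    U_hist[:, 0] = U_frame                  # newest frame goes into position 0
```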
A possible implementation manner is provided in the embodiment of the present application, and the process of determining the first error signal includes:
and determining a corresponding first filter coefficient based on the preset updating step length, the initial filter coefficient, the initial error signal and a signal played by the loudspeaker in the initial frame signal.
And based on the first filter coefficient, filtering the signal played by the loudspeaker in the initial frame signal to obtain a corresponding first filtering signal.
The difference between the signal acquired by the microphone in the initial frame signal and the first filtered signal is determined as a first error signal.
Specifically, in this embodiment the filter coefficient obtained by the first update (the first filter coefficient above) corresponding to the nth frame signal of frequency point k may be estimated according to the following formula:
Ĥ1(n,k) = Ĥ(n,k) + α·U(n,k)·E*(n,k) / (U^H(n,k)·U(n,k) + δ)
where Ĥ(n,k) is the estimated initial filter coefficient corresponding to the nth frame signal of frequency point k, E(n,k) is the initial error signal corresponding to the nth frame signal of frequency point k, U(n,k) is the nth frame signal of frequency point k in the signals played by the loudspeaker, α is the update step size, and δ is a small positive constant, generally 0.0001, which prevents the stability from degrading when the inner product of U(n,k) is too small.
In this embodiment, the first error signal corresponding to the nth frame signal of frequency point k may be obtained according to the following formula:
E1(n,k) = Y(n,k) - Ĥ1^H(n,k)·U(n,k)
where Y(n,k) is the nth frame signal of frequency point k in the signals collected by the microphone, and Ĥ1^H(n,k)·U(n,k) is the first filtered signal corresponding to the nth frame signal of frequency point k.
A possible implementation manner is provided in the embodiment of the present application, and the process of determining the second error signal includes:
and obtaining a corresponding second filter coefficient based on the preset updating step length, the first filter coefficient, the first error signal and a signal played by the loudspeaker in the initial frame signal.
And based on the second filter coefficient, filtering the signal played by the loudspeaker in the initial frame signal to obtain a corresponding second filtered signal.
And determining the difference between the signal collected by the microphone in the initial frame signal and the second filtering signal as a second error signal.
Specifically, in this embodiment the second filter coefficient obtained by updating the first filter coefficient a second time may be estimated according to the following formula:
Ĥ2(n,k) = Ĥ1(n,k) + α·U(n,k)·E1*(n,k) / (U^H(n,k)·U(n,k) + δ)
where Ĥ1(n,k) is the first filter coefficient, E1(n,k) is the first error signal corresponding to the nth frame signal of frequency point k, U(n,k) is the nth frame signal of frequency point k in the signals played by the loudspeaker, α is the update step size, and δ is a small positive constant, generally 0.0001, which prevents the stability from degrading when the inner product of U(n,k) is too small.
In this embodiment, the second error signal may be obtained according to the following formula:
E2(n,k) = Y(n,k) - Ĥ2^H(n,k)·U(n,k)
where Y(n,k) is the nth frame signal of frequency point k in the signals collected by the microphone, and Ĥ2^H(n,k)·U(n,k) is the second filtered signal corresponding to the nth frame signal of frequency point k.
The embodiment of the present application provides a possible implementation manner, and a process of processing any non-initial frame signal of any frequency point includes:
obtaining a first filter coefficient and a first error signal corresponding to a current frame signal based on a first filter coefficient and a first error signal obtained by updating in the previous frame signal processing process and the current frame signal;
obtaining a second filter coefficient and a second error signal corresponding to the current frame signal based on a first filter coefficient and a first error signal corresponding to the current frame signal and the current frame signal;
and comparing the power spectrums corresponding to the first error signal and the second error signal corresponding to the current frame signal, and determining the error signal corresponding to the smaller power spectrum as the signal to be amplified corresponding to the current frame signal.
In this embodiment, if the current frame signal (a non-initial frame signal) is the 20th frame signal of a certain frequency point, the 20th frame signal is processed based on the first filter coefficient and the first error signal obtained while processing the 19th frame signal of that frequency point to obtain the corresponding output signal, and the first filter coefficient is simultaneously updated twice for use when the 21st frame signal is processed.
It should be understood that, in this embodiment, the specific process of processing the non-initial frame signal of any frequency point is similar to the specific process of processing the initial frame signal in the above-described embodiment.
It should be noted that, in this embodiment, when the current frame signal of any frequency point is processed, the first filter coefficient obtained in the previous frame's processing and the first error signal obtained after the previous frame's processing are used.
For example, the initial error signal corresponding to the (n+1)th frame (any non-initial frame) signal of frequency point k can be determined according to the following formula:
E(n+1,k) = Y(n+1,k) - Ĥ1^H(n,k)·U(n+1,k)
where Y(n+1,k) is the (n+1)th frame signal of frequency point k in the signals collected by the microphone, and Ĥ1(n,k) is the first filter coefficient obtained by updating the initial filter coefficient corresponding to the nth frame signal of frequency point k.
The first error signal corresponding to the (n+1)th frame signal of frequency point k may be determined according to the following formula:
E1(n+1,k) = Y(n+1,k) - Ĥ1^H(n+1,k)·U(n+1,k)
where Y(n+1,k) is the (n+1)th frame signal of frequency point k in the signals collected by the microphone, Ĥ1^H(n+1,k)·U(n+1,k) is the first filtered signal corresponding to the (n+1)th frame signal of frequency point k, and Ĥ1^H(n+1,k) is the conjugate transpose of the first filter coefficient obtained by updating the filter coefficient for the (n+1)th frame signal of frequency point k.
The first filter coefficient obtained by updating the filter coefficient for the (n+1)th frame signal of frequency point k is obtained according to the following formula:
Ĥ1(n+1,k) = Ĥ1(n,k) + α·U(n+1,k)·E*(n+1,k) / (U^H(n+1,k)·U(n+1,k) + δ)
where Ĥ1(n,k) is the first filter coefficient obtained by updating the initial filter coefficient corresponding to the nth frame signal of frequency point k, and E(n+1,k) is the initial error signal corresponding to the (n+1)th frame signal of frequency point k.
The second error signal corresponding to the (n+1)th frame signal of frequency point k may be determined according to the following formula:
E2(n+1,k) = Y(n+1,k) - Ĥ2^H(n+1,k)·U(n+1,k)
where Y(n+1,k) is the (n+1)th frame signal of frequency point k in the signals collected by the microphone, Ĥ2^H(n+1,k)·U(n+1,k) is the second filtered signal corresponding to the (n+1)th frame signal of frequency point k, and Ĥ2^H(n+1,k) is the conjugate transpose of the second filter coefficient obtained by updating the first filter coefficient for the (n+1)th frame signal of frequency point k.
The second filter coefficient is obtained according to the following formula:
Ĥ2(n+1,k) = Ĥ1(n+1,k) + α·U(n+1,k)·E1*(n+1,k) / (U^H(n+1,k)·U(n+1,k) + δ)
where Ĥ1(n+1,k) is the first filter coefficient obtained by updating the filter coefficient for the (n+1)th frame signal of frequency point k, and E1(n+1,k) is the first error signal corresponding to the (n+1)th frame signal of frequency point k.
In this embodiment, when the error signal is calculated, if n = 1 then U = 0; that is, when the 1st frame input signal is processed the loudspeaker has not yet output any signal.
Once the first error signal and the second error signal are determined, the corresponding output signal may be determined according to the following equation:
U'(n+1,k) = G(n+1,k)·min{E1(n+1,k), E2(n+1,k)}
where G(n+1,k) denotes the signal processing, including automatic gain control, signal amplification, power amplification and the like, applied to the smaller of the first error signal E1(n+1,k) and the second error signal E2(n+1,k) corresponding to the (n+1)th frame of frequency point k, and U'(n+1,k) denotes the output signal of the (n+1)th frame of frequency point k after this signal processing.
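Putting the per-frame pieces together for a single frequency point, the carry-over of the first filter coefficient from frame to frame and the selection/gain stage could be sketched as follows. This is illustrative only; `local_gain` stands in for the unspecified processing G(n,k), and `double_update_bin` is the hypothetical helper sketched earlier.

```python
import numpy as np

def process_bin(Y_frames, U_vectors, M=10, alpha=0.1, delta=1e-4, local_gain=2.0):
    """Y_frames: microphone spectra Y(n,k) of one frequency point, frame by frame.
    U_vectors: matching (M,) loudspeaker reference vectors for the same frequency point."""
    H = np.zeros(M, dtype=complex)             # initial filter coefficient (could be estimated offline)
    outputs = []
    for Y, U in zip(Y_frames, U_vectors):
        E_out, H1, H2 = double_update_bin(H, U, Y, alpha, delta)
        outputs.append(local_gain * E_out)     # G(n,k): amplify the selected (smaller-power) error signal
        H = H1                                 # carry the first filter coefficient to the next frame
    return np.array(outputs)
```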
In summary, in the howling suppression method provided by the embodiments of the application, howling suppression reuses the filter structure of echo cancellation, the reference signal used when updating the filter coefficient is the signal played by the loudspeaker, and the update is performed a second time; the loop gain from the microphone to the loudspeaker can thereby be increased, further improving the howling suppression capability of the sound amplification system.
The embodiment of the present application further provides a howling suppression apparatus, including: a signal processing module, an adder, a comparator, an adaptive filter, a fast Fourier transform module and an inverse Fourier transform module.
The input end of the adaptive filter is connected with the loudspeaker, the output end of the adaptive filter is connected with the input end of the adder, the output end of the adder is connected with the input end of the comparator, the output end of the comparator is connected with the input end of the signal processing module, and the output end of the signal processing module is connected with the loudspeaker.
The fast Fourier transform module is used for preprocessing the audio signal collected by the microphone and converting the audio signal into a frequency domain, wherein the audio signal comprises: signals collected by the microphone and signals played by the loudspeaker. The adaptive filter is used for performing echo suppression and howling suppression processing twice on the converted audio signal and outputting the processed audio signal to the adder.
The adder is used for subtracting each of the two signals output by the adaptive filter from the converted audio signal and outputting the results to the comparator. The comparator is used for comparing the two received signals and outputting the smaller one to the signal processing module. The signal processing module performs local amplification processing on the received signal and transmits it to the inverse Fourier transform module. The inverse Fourier transform module is used for converting the received signal into the time domain to obtain the target audio and transmitting the target audio to the loudspeaker for playing.
An embodiment of the application provides a sound box, comprising a loudspeaker and the howling suppression device provided in the above embodiment. The loudspeaker is connected with the input end of the adaptive filter in the howling suppression device, the output end of the adaptive filter is connected with the input end of the adder in the howling suppression device, the output end of the adder is connected with the input end of the comparator, the output end of the comparator is connected with the input end of the signal processing module in the howling suppression device, and the output end of the signal processing module is connected with the loudspeaker.
The audio signal collected by the microphone may be transmitted to the sound box in a wireless or a wired manner; for example, the sound box in this embodiment may be a Bluetooth sound box that exchanges the audio signal with the microphone over Bluetooth, or the sound box may be connected with the microphone via WiFi or another local area network access mode.
An embodiment of the application provides a public address system, comprising a microphone, a loudspeaker and the howling suppression device provided in the above embodiments, where the howling suppression device is arranged between the microphone and the loudspeaker and is configured to receive the audio signal collected by the microphone and output the generated target audio to the loudspeaker for playing.
Fig. 4 is a schematic structural diagram of a processor according to an embodiment of the present disclosure. The processor 20 shown in Fig. 4 includes a signal conversion module 201 and a signal processing and updating module 202.
The signal conversion module 201 is configured to preprocess an audio signal in the public address system and convert it into the frequency domain, where the audio signal includes the signal collected by the microphone and the signal played by the loudspeaker.
The signal processing and updating module 202 is configured to perform the following processing on each frame signal of each frequency point in the converted audio signal: filtering the converted signal played by the loudspeaker twice based on the adaptive filter; determining a signal to be amplified based on the difference between the converted signal collected by the microphone and the twice-filtered signal played by the loudspeaker, and updating the filter coefficient twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal.
The signal conversion module 201 is further configured to amplify the signal to be amplified and convert it back into the time domain to obtain the target audio.
The processor of the embodiment of the present application may execute the howling suppression method provided by the embodiments of the present application, and the implementation principle is similar; the actions performed by each module and unit in the processor correspond to the steps of the howling suppression method in the embodiments of the present application, and for a detailed functional description of each module of the processor reference may be made to the description of the corresponding howling suppression method above, which is not repeated here.
It should be noted that, the processor provided in the embodiment of the present application can implement all the method steps implemented by the method embodiment and achieve the same technical effect, and detailed descriptions of the same parts and beneficial effects as the method embodiment in this embodiment are omitted here.
It should be noted that, in this specification, each embodiment is described in a progressive manner, and each embodiment focuses on differences from other embodiments, and portions that are the same as and similar to each other in each embodiment may be referred to. For the device-like embodiment, since it is basically similar to the method embodiment, the description is simple, and reference may be made to the partial description of the method embodiment for relevant points.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present application. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s).
It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above description is only an example of the present application and is not intended to limit the scope of the present application, and various modifications and changes may be made to the present application by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present application shall be included in the protection scope of the present application. It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
The above are only optional embodiments of partial implementation scenarios in the present application, and it should be noted that, for those skilled in the art, other similar implementation means based on the technical idea of the present application are also within the scope of protection of the embodiments of the present application without departing from the technical idea of the present application.

Claims (10)

1. A howling suppression method, comprising:
preprocessing an audio signal in a public address system and converting the audio signal into the frequency domain, wherein the audio signal comprises: signals collected by a microphone and signals played by a loudspeaker;
and performing the following processing on each frame signal of each frequency point in the converted audio signal:
filtering the converted signal played by the loudspeaker twice based on an adaptive filter;
determining a signal to be amplified based on the difference between the converted signal collected by the microphone and the twice-filtered signal played by the loudspeaker, and updating the filter coefficient twice, with the signal played by the loudspeaker in the current frame signal as the reference signal, for use in processing the next frame signal;
and amplifying the signal to be amplified and converting it back into the time domain to obtain a target audio.
2. The method of claim 1, wherein updating the filter coefficient twice with the signal played by the loudspeaker in the current frame signal as the reference signal comprises:
taking the signal played by the loudspeaker in the current frame signal as the reference signal, and updating the filter coefficient based on the difference between the signal collected by the microphone in the current frame signal and the reference signal to obtain a first filter coefficient;
and taking the once-filtered signal played by the loudspeaker in the current frame signal as the reference signal, and updating the first filter coefficient based on the difference between the signal collected by the microphone in the current frame signal and the reference signal to obtain a second filter coefficient.
3. The method of claim 2, wherein before processing the initial frame signal of any frequency point, the method further comprises:
determining an initial filter coefficient and an initial error signal corresponding to a transfer function of a loudspeaker to microphone path in the public address system;
the following processing procedures are carried out on the initial frame signal of any frequency point:
obtaining updated first filter coefficients and first error signals based on the initial filter coefficients, the initial error signals and the initial frame signals;
obtaining updated second filter coefficients and second error signals based on the first filter coefficients, the first error signals and the initial frame signals;
comparing power spectrums corresponding to the first error signal and the second error signal, and determining an error signal corresponding to a smaller power spectrum as a signal to be amplified corresponding to the initial frame signal;
wherein the initial error signal is determined based on the signals played by the speaker and the signals collected by the microphone in the initial frame signal, and the initial filter coefficients.
4. The method of claim 3, wherein determining the first error signal comprises:
determining a corresponding first filter coefficient based on a preset update step size, the initial filter coefficient, the initial error signal and the signal played by the loudspeaker in the initial frame signal;
filtering, based on the first filter coefficient, the signal played by the loudspeaker in the initial frame signal to obtain a corresponding first filtered signal;
and determining the difference between the signal collected by the microphone in the initial frame signal and the first filtered signal as the first error signal.
5. The method of claim 3, wherein determining the second error signal comprises:
obtaining a corresponding second filter coefficient based on a preset update step size, the first filter coefficient, the first error signal and the signal played by the loudspeaker in the initial frame signal;
filtering, based on the second filter coefficient, the signal played by the loudspeaker in the initial frame signal to obtain a corresponding second filtered signal;
and determining the difference between the signal collected by the microphone in the initial frame signal and the second filtered signal as the second error signal.
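A non-limiting sketch of the initial-frame processing of claims 3-5 for one frequency point follows; the transfer-function estimate H_path, the preset update step mu and the normalized form of the update are illustrative assumptions:

```python
import numpy as np

def process_initial_frame(X_spk, D_mic, H_path, mu=0.05, eps=1e-8):
    """X_spk, D_mic: loudspeaker and microphone spectra of the initial frame at one
    frequency point; H_path: assumed estimate of the loudspeaker-to-microphone
    transfer function at that frequency point."""
    W0 = H_path                               # initial filter coefficient
    E0 = D_mic - W0 * X_spk                   # initial error signal (claim 3)
    # first filter coefficient and first error signal (claim 4)
    W1 = W0 + mu * np.conj(X_spk) * E0 / (abs(X_spk) ** 2 + eps)
    E1 = D_mic - W1 * X_spk
    # second filter coefficient and second error signal (claim 5)
    W2 = W1 + mu * np.conj(X_spk) * E1 / (abs(X_spk) ** 2 + eps)
    E2 = D_mic - W2 * X_spk
    # the error with the smaller power spectrum becomes the signal to be amplified
    to_amplify = E1 if abs(E1) ** 2 <= abs(E2) ** 2 else E2
    return to_amplify, (W1, E1), (W2, E2)
```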
6. The method according to any one of claims 3-5, wherein the following processing is performed for any non-initial frame signal of any frequency point:
obtaining a first filter coefficient and a first error signal corresponding to a current frame signal based on the first filter coefficient and the first error signal updated in the processing of the previous frame signal, and the current frame signal;
obtaining a second filter coefficient and a second error signal corresponding to the current frame signal based on the first filter coefficient and the first error signal corresponding to the current frame signal, and the current frame signal;
and comparing the power spectra corresponding to the first error signal and the second error signal of the current frame signal, and determining the error signal corresponding to the smaller power spectrum as the signal to be amplified corresponding to the current frame signal.
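For one frequency point, this frame-to-frame recursion can be written as follows (frame index k; a sketch under the same assumed normalized update as above, since the claim itself does not fix the update rule):

$$W_1(k) = W_1(k-1) + \mu\,\frac{X^{*}(k)\,E_1(k-1)}{\lvert X(k)\rvert^{2}+\varepsilon},\qquad E_1(k) = D(k) - W_1(k)\,X(k),$$
$$W_2(k) = W_1(k) + \mu\,\frac{X^{*}(k)\,E_1(k)}{\lvert X(k)\rvert^{2}+\varepsilon},\qquad E_2(k) = D(k) - W_2(k)\,X(k),$$

where X(k) and D(k) are the loudspeaker and microphone spectra of frame k at the given frequency point, and the signal to be amplified for frame k is whichever of E_1(k) and E_2(k) has the smaller power spectrum.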
7. A howling suppression device, comprising: a signal processing module, an adder, a comparator, an adaptive filter, a fast Fourier transform module and an inverse transform module;
the input end of the adaptive filter is connected with a loudspeaker, the output end of the adaptive filter is connected with the input end of the adder, the output end of the adder is connected with the input end of the comparator, the output end of the comparator is connected with the input end of the signal processing module, and the output end of the signal processing module is connected with the loudspeaker;
the fast Fourier transform module is used for preprocessing an audio signal and converting the audio signal into a frequency domain, wherein the audio signal comprises: a signal collected by a microphone and a signal played by a loudspeaker;
the adaptive filter is used for performing echo suppression and howling suppression processing twice on the converted audio signal and outputting the processed signals to the adder;
the adder is used for subtracting each of the two signals output by the adaptive filter from the converted audio signal and outputting the two resulting signals to the comparator;
the comparator is used for comparing the two received signals and outputting the smaller signal to the signal processing module;
the signal processing module is used for performing amplification processing on the received signal and transmitting the processed signal to the inverse Fourier transform module;
and the inverse Fourier transform module is used for converting the received signal into a time domain to obtain a target audio and transmitting the target audio to the loudspeaker for playing.
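As a non-limiting illustration of how the claimed modules cooperate on one frame, the following sketch maps each module to one step of the chain; the class, method and parameter names are hypothetical, and the adaptive filter reuses the assumed normalized two-pass update from the earlier sketches:

```python
import numpy as np

class HowlingSuppressor:
    """One frame through the module chain; bins must equal frame_length // 2 + 1."""
    def __init__(self, bins, mu=0.05, gain=2.0, eps=1e-8):
        self.W1 = np.zeros(bins, dtype=complex)   # adaptive filter state
        self.E1 = np.zeros(bins, dtype=complex)
        self.mu, self.gain, self.eps = mu, gain, eps

    def process(self, mic_frame, spk_frame):
        # fast Fourier transform module: time domain -> frequency domain
        D, X = np.fft.rfft(mic_frame), np.fft.rfft(spk_frame)
        # adaptive filter: two filtering passes on the loudspeaker signal
        self.W1 = self.W1 + self.mu * np.conj(X) * self.E1 / (np.abs(X) ** 2 + self.eps)
        self.E1 = D - self.W1 * X                 # adder, first pass
        W2 = self.W1 + self.mu * np.conj(X) * self.E1 / (np.abs(X) ** 2 + self.eps)
        E2 = D - W2 * X                           # adder, second pass
        # comparator: forward the smaller of the two error signals
        sel = np.where(np.abs(self.E1) ** 2 <= np.abs(E2) ** 2, self.E1, E2)
        # signal processing module: amplification; inverse transform module: back to time
        return np.fft.irfft(self.gain * sel, n=len(mic_frame))
```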
8. A processor, comprising: a signal conversion module and a signal processing and updating module, wherein
the signal conversion module is used for preprocessing an audio signal in a public address system and converting the audio signal into a frequency domain, wherein the audio signal comprises: a signal collected by a microphone and a signal played by a loudspeaker;
the signal processing and updating module is used for performing the following processing on each frame signal of each frequency point in the converted audio signal: filtering the converted signal played by the loudspeaker twice based on an adaptive filter; determining a signal to be amplified based on a difference between the converted signal collected by the microphone and the twice-filtered signal played by the loudspeaker; and, in the processing of a next frame signal, updating the filter coefficient twice by taking the signal played by the loudspeaker in the current frame signal as a reference signal;
and the signal conversion module is further configured to convert the signal to be amplified into a time domain after amplification processing is performed on the signal to be amplified, so as to obtain a target audio.
9. A sound box, comprising: a loudspeaker, and the howling suppression device according to claim 7, wherein
the loudspeaker is connected with the input end of the adaptive filter in the howling suppression device, the output end of the adaptive filter is connected with the input end of the adder in the howling suppression device, the output end of the adder is connected with the input end of the comparator, the output end of the comparator is connected with the input end of the signal processing module in the howling suppression device, and the output end of the signal processing module is connected with the loudspeaker.
10. A sound amplification system, comprising: a microphone, a loudspeaker, and the howling suppression device according to claim 7, wherein the howling suppression device is arranged between the microphone and the loudspeaker, and is configured to receive an audio signal collected by the microphone and output a target audio obtained by signal processing to the loudspeaker for playing.
CN202211028454.0A 2022-08-25 2022-08-25 Howling suppression method and device, sound box and sound amplification system Pending CN115278465A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211028454.0A CN115278465A (en) 2022-08-25 2022-08-25 Howling suppression method and device, sound box and sound amplification system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211028454.0A CN115278465A (en) 2022-08-25 2022-08-25 Howling suppression method and device, sound box and sound amplification system

Publications (1)

Publication Number Publication Date
CN115278465A true CN115278465A (en) 2022-11-01

Family

ID=83754483

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211028454.0A Pending CN115278465A (en) 2022-08-25 2022-08-25 Howling suppression method and device, sound box and sound amplification system

Country Status (1)

Country Link
CN (1) CN115278465A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115835092A (en) * 2023-02-15 2023-03-21 南昌航天广信科技有限责任公司 Audio amplification feedback suppression method, system, computer and storage medium
CN115835092B (en) * 2023-02-15 2023-05-09 南昌航天广信科技有限责任公司 Audio amplification feedback suppression method, system, computer and storage medium
CN118016042B (en) * 2024-04-09 2024-05-31 成都启英泰伦科技有限公司 Howling suppression method and device

Similar Documents

Publication Publication Date Title
CN109727604B (en) Frequency domain echo cancellation method for speech recognition front end and computer storage medium
US20190222691A1 (en) Data driven echo cancellation and suppression
JP4161628B2 (en) Echo suppression method and apparatus
JP4377952B1 (en) Adaptive filter and echo canceller having the same
CN110176244B (en) Echo cancellation method, device, storage medium and computer equipment
JP4957810B2 (en) Sound processing apparatus, sound processing method, and sound processing program
WO2006040734A1 (en) Echo cancellation
JPWO2007049644A1 (en) Echo suppression method and apparatus
WO2012153452A1 (en) Echo erasing device and echo detection device
JP2002528995A (en) Method and apparatus for providing echo suppression using frequency domain non-linear processing
Gil-Cacho et al. Wiener variable step size and gradient spectral variance smoothing for double-talk-robust acoustic echo cancellation and acoustic feedback cancellation
CN115175063A (en) Howling suppression method and device, sound box and sound amplification system
JP5469564B2 (en) Multi-channel echo cancellation method, multi-channel echo cancellation apparatus and program thereof
CN115278465A (en) Howling suppression method and device, sound box and sound amplification system
US8804981B2 (en) Processing audio signals
WO2012157788A1 (en) Audio processing device, audio processing method, and recording medium on which audio processing program is recorded
JP3756828B2 (en) Reverberation elimination method, apparatus for implementing this method, program, and recording medium therefor
CN113194385B (en) Subband self-adaptive feedback elimination method and system based on step size control
JP5937451B2 (en) Echo canceling apparatus, echo canceling method and program
JP2004349796A (en) Sound echo canceling method, apparatus thereof, program and recording medium thereof
JP6143702B2 (en) Echo canceling apparatus, method and program
JP4903843B2 (en) Adaptive filter and echo canceller having the same
JP4964267B2 (en) Adaptive filter and echo canceller having the same
JPWO2012157783A1 (en) Audio processing apparatus, audio processing method, and recording medium recording audio processing program
JP2012205161A (en) Voice communication device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination