WO2016053019A1 - Method and apparatus for processing an audio signal containing noise - Google Patents
Method and apparatus for processing an audio signal containing noise
- Publication number
- WO2016053019A1 (PCT/KR2015/010370)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- signal
- audio signal
- energy
- noise
- frequency
- Prior art date
Classifications
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L21/0232—Processing in the frequency domain
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R25/00—Deaf-aid sets, i.e. electro-acoustic or electro-mechanical hearing aids; Electric tinnitus maskers providing an auditory perception
- H04R25/50—Customised settings for obtaining desired overall acoustical characteristics
- H04R25/505—Customised settings for obtaining desired overall acoustical characteristics using digital signal processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02166—Microphone arrays; Beamforming
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R1/00—Details of transducers, loudspeakers or microphones
- H04R1/20—Arrangements for obtaining desired frequency or directional characteristics
- H04R1/32—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only
- H04R1/40—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers
- H04R1/403—Arrangements for obtaining desired frequency or directional characteristics for obtaining desired directional characteristic only by combining a number of identical transducers loud-speakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/11—Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/15—Transducers incorporated in visual displaying devices, e.g. televisions, computer displays, laptops
Definitions
- the present invention relates to a method and apparatus for processing an audio signal containing noise.
- Hearing devices may amplify and transmit external sounds to a user.
- the user can better recognize the sound through the hearing aid.
- since the user may be exposed to various noise environments in everyday life, the user may feel uncomfortable unless the hearing aid properly removes the noise included in the audio signal before outputting it.
- the present invention relates to a method and apparatus for processing an audio signal including noise, and more particularly, to an audio signal processing method and apparatus for removing noise while minimizing sound distortion.
- noise included in an audio signal may be effectively removed while minimizing distortion of sound quality of the audio signal.
- FIG. 1 is a diagram illustrating an internal structure of a terminal device for processing an audio signal according to an exemplary embodiment.
- FIG. 2 is a flowchart illustrating a method of processing an audio signal according to an exemplary embodiment.
- FIG. 3 is a diagram illustrating an example of an impact sound and a target signal according to an exemplary embodiment.
- FIG. 4 is a diagram illustrating an example of an audio signal processed according to an exemplary embodiment.
- FIG. 5 is a block diagram illustrating a method of processing an audio signal for removing noise according to an exemplary embodiment.
- FIG. 6 is a block diagram illustrating a method of processing an audio signal for removing noise according to an exemplary embodiment.
- FIG. 7 is a flowchart illustrating a method of processing an audio signal for removing noise according to an exemplary embodiment.
- FIG. 8 is an exemplary diagram illustrating an example of processing an audio signal for removing noise according to an exemplary embodiment.
- FIG. 9 is a block diagram illustrating an internal structure of an apparatus for processing an audio signal according to an exemplary embodiment.
- a method of processing an audio signal, comprising: obtaining an audio signal in a frequency domain for a plurality of frames; dividing a frequency band into a plurality of sections; obtaining energy for the plurality of sections; detecting an audio signal including noise based on an energy difference between the plurality of sections; and applying a suppression gain to the detected audio signal.
- the detecting of the audio signal including the noise may include: obtaining energy for the plurality of frames; and detecting an audio signal including noise based on at least one of an energy difference between the plurality of frames and an energy value of a predetermined frame.
- Applying the suppression gain includes determining the suppression gain based on the energy of the audio signal from which the noise was detected.
- the energy difference between the frequency bands is a difference between the energy of the first frequency section and the energy of the second frequency section, and the second frequency section is a section of the frequency band higher than the first frequency section.
- a method of processing an audio signal, comprising: acquiring a front signal and a rear signal; obtaining a coherence between the delayed rear signal and the front signal; determining a gain value based on the coherence; obtaining a fixed beamforming signal by taking a difference between the rear signal to which the delay is applied and the front signal; and outputting a signal obtained by applying the gain value to the fixed beamforming signal.
- the acquiring of the coherence may include dividing a frequency band into at least two sections and acquiring the coherence for a high frequency section among the divided sections; and the determining of the gain value may include determining the directionality of a target signal of the audio signal based on the coherence for the high frequency section, and determining a gain value for a low frequency section of the divided sections based on the directionality.
- the determining of the gain value may include: estimating noise of the front signal; and determining a gain value for the low frequency section based on the estimated noise.
- a terminal apparatus for processing an audio signal, comprising: a receiver configured to acquire an audio signal in a frequency domain for a plurality of frames; a controller configured to divide a frequency band into a plurality of sections, obtain energy for the plurality of sections, detect an audio signal containing noise based on an energy difference between the plurality of sections, and apply a suppression gain to the detected audio signal; and an output unit configured to convert the audio signal processed by the controller into a time-domain signal and output the converted signal.
- a terminal apparatus for processing an audio signal, comprising: a receiver configured to acquire a front signal and a rear signal; a controller configured to obtain a coherence between the delayed rear signal and the front signal, determine a gain value based on the coherence, obtain a fixed beamforming signal by taking a difference between the delayed rear signal and the front signal, and apply the gain value to the fixed beamforming signal; and an output unit configured to convert the fixed beamforming signal to which the gain value is applied into a time-domain signal and output it.
- when any part of the specification is said to “include” a component, this means that the part may further include other components, rather than excluding them, unless specifically stated otherwise.
- when a part is “connected” with another part, this includes not only the case where it is “directly connected” but also the case where it is “electrically connected” with another element interposed between them.
- a “part” refers to a software or hardware component, such as an FPGA or an ASIC, and a “part” plays certain roles. However, a “part” is not limited to software or hardware.
- a “part” may be configured to reside in an addressable storage medium and may be configured to run on one or more processors.
- a “part” includes components such as software components, object-oriented software components, class components, and task components, as well as processes, functions, attributes, procedures, subroutines, segments of program code, drivers, firmware, microcode, circuits, data, databases, data structures, tables, arrays, and variables.
- the functionality provided within the components and “parts” may be combined into a smaller number of components and “parts” or further separated into additional components and “parts”.
- FIG. 1 is a diagram illustrating an internal structure of a terminal device for processing an audio signal according to an exemplary embodiment.
- the terminal device 100 may include converters 110 and 160, a band energy obtainer 120, a noise detector 130, a gain determiner 140, and a calculator 150.
- the terminal device 100 may be a terminal device that can be used by a user.
- the terminal device 100 may include a hearing device, a smart television, an ultra high definition (UHD) TV, a monitor, a personal computer, a laptop computer, a mobile phone, a tablet PC, a navigation terminal, a smartphone, a personal digital assistant (PDA), a portable multimedia player (PMP), or a digital broadcast receiver.
- the terminal device 100 may include various types of devices.
- the terminal device 100 may include a microphone capable of receiving an externally generated sound, and may receive an audio signal through the microphone or may receive an audio signal from an external device.
- the terminal device 100 may remove noise included in the audio signal by detecting noise from the received audio signal and applying a suppression gain to the section where the noise is detected. As the suppression gain is applied to the audio signal, the size of the audio signal can be reduced.
- noise that may be included in the audio signal refers to any signal other than the target signal.
- the target signal may be, for example, a speech signal that the user wants to listen to.
- the noise may include, for example, everyday living noise or an impact sound other than the target signal.
- the terminal device 100 may detect a section including the noise excluding the target signal from the audio signal and apply a suppression gain to remove the noise to the audio signal.
- the converter 110 may convert the received audio signal of the time domain into an audio signal of the frequency domain.
- the transform unit 110 may perform a discrete Fourier transform (DFT) on the audio signal in the time domain to obtain an audio signal in the frequency domain composed of a plurality of frames.
- when noise is processed in the time domain, a delay time may occur because the impact sound generated at the beginning is not removed.
- the terminal device 100 may process the audio signal in the frequency domain on a frame-by-frame basis, and may remove and output the noise of the audio signal in real time without a delay time, as compared to the method of processing the noise in the time domain.
- the band energy obtainer 120 may obtain energy for a predetermined frequency section by using the audio signal in the frequency domain.
- the band energy obtainer 120 may divide a frequency band into two or more frequency sections and obtain energy for each frequency section.
- Energy can be expressed as power, a norm value, intensity, amplitude, a decibel value, and so on. For example, the energy for each frequency section may be obtained as in Equation 1 below.
- Y(w, n) represents the energy value of frequency w in frame n.
- Log conversion is performed on the average of the energy values included in a predetermined frequency section, whereby Y_ch,N(n) may have an energy value in decibel (dB) units.
- the energy for a predetermined frequency section may be determined as a representative value such as an average value, a median value, etc. of energy values for each frequency included in the predetermined frequency section.
- the energy for a predetermined frequency interval may be determined in various ways.
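The per-section energy described above can be sketched as follows. The exact form of Equation 1 is not reproduced in this text, so the average-then-log shape, the 10·log10 scaling, and all names here are illustrative assumptions consistent with the surrounding description:

```python
import math

def band_energy_db(mag, lo, hi):
    # Average the per-bin energies |Y(w, n)|^2 over the section's bins
    # [lo, hi), then log-convert so that the result is in decibel (dB)
    # units, as the description of Equation 1 suggests.
    avg = sum(mag[w] ** 2 for w in range(lo, hi)) / (hi - lo)
    return 10.0 * math.log10(avg)
```

For example, a section with a flat magnitude of 1 yields 0 dB and a flat magnitude of 10 yields 20 dB; a median or another representative value could replace the average, as the text notes.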
- the noise detector 130 may detect a section in which noise exists based on the energy for each frequency section obtained by the band energy obtainer 120.
- the noise detector 130 may detect an audio signal including noise based on an energy difference between frequency sections.
- the noise detector 130 may determine whether noise is included in the audio signal in units of frames.
- an audio signal including an impact sound has very large energy for a short time, so when it is transmitted to the user, the user may feel discomfort due to the very loud sound.
- the impact sound has very large energy for a short time, and its energy tends to be concentrated in the high frequency band. Therefore, when an impact sound is included in the audio signal, the energy of the high frequency section may be larger than the energy of the low frequency section.
- the noise detector 130 may detect the audio signal including the impact sound by using the features of the audio signal including the impact sound described above.
- the noise detector 130 may detect an audio signal including an impact sound using energy for each frequency section obtained by the band energy obtainer 120.
- the noise detector 130 may detect an audio signal including an impact sound based on a difference or a ratio between energy for a low frequency section and energy for a high frequency section. For example, an energy difference between frequency sections may be obtained as shown in Equation 2 below.
- in Equation 2, Y_ch,L(n) and Y_ch,H(n) denote the energy of the low frequency section and the energy of the high frequency section, respectively.
- the difference between the energy of the low frequency section and the energy of the high frequency section may be used to detect the impact sound; alternatively, the ratio of the two energies may be used instead of the difference.
- the energy of the low frequency or high frequency section may be determined as a representative value of the energy for each frequency included in each section obtained according to Equation 1 described above.
- when the energy difference between the frequency sections is greater than or equal to a reference value, the noise detector 130 may determine that the corresponding audio signal includes an impact sound.
- since the impact sound is detected based on the difference or ratio of energy between frequency sections, sound-quality degradation caused by erroneously judging a sudden increase in the target signal as an impact sound can be minimized. For example, even if the speaker's voice suddenly becomes loud, the difference or ratio of energy between frequency sections is likely to be maintained, so the possibility of the voice being incorrectly judged as an impact sound may be lowered.
- the noise detector 130 may detect the audio signal including the impact sound in consideration of the fact that the energy of the audio signal including the impact sound rapidly increases for a short time.
- the noise detector 130 may further determine whether an energy difference of the audio signal between frames is greater than or equal to a reference value, and determine whether the corresponding audio signal includes an impact sound.
- the energy for the predetermined frame may be obtained from the sum of the energy for each frequency section obtained by the band energy obtainer 120. For example, an energy difference between frames may be obtained as shown in Equation 3 below.
- in Equation 3, Y_ch,N(n) and Y_ch,N(n-1) denote the energy of frame n and the energy of frame n-1, respectively.
- the energy for a given frame can be obtained according to Equation 1 described above.
- the noise detector 130 may determine whether the energy of the current frame is greater than or equal to a predetermined reference value in consideration of the fact that the audio signal including the impact sound has absolutely large energy.
- the noise detector 130 may determine whether the audio signal of the current frame includes an impact sound based on the energy difference between frames, the energy difference between frequency sections, and the energy size of the current frame.
- in Equation 4, Y_th, fd_th, and bd_th denote reference values for the energy magnitude of the current frame, the energy difference between frames, and the energy difference between frequency sections, respectively.
- an impact sound may be detected based on an energy difference between frames, an energy difference between frequency sections, and an energy size of a current frame.
- alternatively, the impact sound may be detected based on only one or two of the three values described above.
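Putting Equations 2 to 4 together, the three-way test can be sketched as below. The threshold values, and the use of a whole-frame average as the frame energy, are illustrative assumptions rather than values taken from the patent:

```python
import math

def band_energy_db(mag, lo, hi):
    # Same Equation-1-style sketch as before: average per-bin energy
    # over a section, log-converted to dB (exact form is an assumption).
    avg = sum(mag[w] ** 2 for w in range(lo, hi)) / (hi - lo)
    return 10.0 * math.log10(avg)

def is_impact_frame(mag, prev_frame_db, split, y_th=20.0, fd_th=10.0, bd_th=3.0):
    # Three criteria for declaring frame n an impact sound:
    #   1. the frame energy Y_ch,N(n) is absolutely large        (>= y_th)
    #   2. the frame-to-frame rise Y_ch,N(n) - Y_ch,N(n-1)       (>= fd_th, cf. Eq. 3)
    #   3. the high band exceeds the low band, Y_ch,H - Y_ch,L   (>= bd_th, cf. Eq. 2)
    low_db = band_energy_db(mag, 0, split)
    high_db = band_energy_db(mag, split, len(mag))
    frame_db = band_energy_db(mag, 0, len(mag))
    return (frame_db >= y_th
            and frame_db - prev_frame_db >= fd_th
            and high_db - low_db >= bd_th)
```

A loud voice frame whose energy sits in the low band fails criterion 3 and is therefore not suppressed, which is the robustness property the text describes.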
- the gain determiner 140 may determine the suppression gain value.
- the suppression gain value may be applied to the audio signal determined by the noise detector 130 to include the impact sound. As the suppression gain value is applied to the audio signal, the size of the audio signal including the impact sound may be reduced.
- the suppression gain value may be determined by, for example, Equation 5 below.
- G (w, n) represents a suppression gain value that can be applied to the frequency w of the audio signal of the frame n.
- Y_ch,N(w_N, n) represents the audio signal to which the suppression gain is applied.
- the suppression gain may be determined according to the energy level of the audio signal to which it is applied, as shown in Equation 5.
- the suppression gain may be determined to be less than or equal to a maximum value, MaxGain.
- the suppression gain can be determined in various ways, without being limited to the examples described above.
- the suppression gain determined by the gain determiner 140 may be applied to the audio signal in the frequency domain by the calculator 150.
- the audio signal to which the suppression gain is applied may be converted into an audio signal in the time domain by the converter 160 and output.
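Since the exact form of Equation 5 is not reproduced in this text, the following sketch only illustrates the stated constraints: the gain depends on the energy level of the detected signal, it only attenuates, and it is bounded. The names target_db and max_gain, and the level-matching rule, are assumptions:

```python
def suppression_gain(frame_db, target_db=20.0, max_gain=1.0):
    # Attenuate a detected frame down toward an assumed target level;
    # never amplify, and never exceed the ceiling max_gain (cf. MaxGain).
    gain_db = min(0.0, target_db - frame_db)   # attenuation only
    gain = 10.0 ** (gain_db / 20.0)            # dB -> linear magnitude gain
    return min(gain, max_gain)

def apply_gain(mag, gain):
    # Scale every frequency bin of the detected frame by the gain.
    return [gain * m for m in mag]
```

A quiet frame (below target_db) passes through with gain 1, while a 40 dB impact frame is scaled by 0.1, reducing its magnitude before the inverse transform back to the time domain.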
- FIG. 2 is a flowchart illustrating a method of processing an audio signal according to an exemplary embodiment.
- the terminal device 100 may obtain audio signals of frequency domains for a plurality of frames.
- the terminal device 100 may convert the received audio signal of the time domain into an audio signal of the frequency domain.
- the terminal device 100 may divide the frequency band into a plurality of sections, and in operation S230, obtain energy for the plurality of sections.
- the energy for each section may be determined as representative values such as an average value, a median value, and the like of energy values for each frequency.
- the terminal device 100 may detect an audio signal including noise based on the energy difference between the plurality of sections. For example, the terminal device 100 may detect an audio signal including an impact sound based on a difference or a ratio between energy for a low frequency section and energy for a high frequency section. The terminal device 100 may detect an audio signal including an impact sound on a frame basis.
- the terminal device 100 may apply a suppression gain to the audio signal detected in operation S240. As the suppression gain is applied to the audio signal, the energy level of the audio signal may be reduced. As the size of the audio signal including the impact sound is reduced, the audio signal from which the impact sound is removed may be output.
- FIG. 3 is a diagram illustrating an example of an impact sound and a target signal according to an exemplary embodiment.
- Reference numeral 310 denotes an impact sound in the time domain, and 320 illustrates a voice signal, i.e., the target signal, in the time domain. Referring to 310 and 320, both show a sharp increase in amplitude for a short time.
- 330 illustrates the frequency-domain signals corresponding to the impact sound 310 and the voice signal 320.
- in the case of the voice signal, the energy of the high frequency region is not greater than that of the low frequency region, and the energy is spread evenly over a predetermined frequency range.
- in the case of the impact sound, the energy of the high frequency region is larger than the energy of the low frequency region, and the energy is concentrated in the high frequency section compared to the voice signal.
- the terminal device 100 may detect an audio signal including the impact sound by using the fact that the energy of the impact sound is concentrated in a higher frequency section than that of the voice signal. For example, the terminal device 100 may detect an audio signal including an impact sound based on a difference or a ratio between the energy of the high frequency region and the energy of the low frequency region.
- FIG. 4 is a diagram illustrating an example of an audio signal processed according to an exemplary embodiment.
- 410 shows the audio signal before processing, and 420 shows the audio signal from which the impact sound has been removed by applying the suppression gain.
- since an audio signal including an impact sound is detected based on the difference or ratio between the energy of the high frequency region and that of the low frequency region, the suppression gain is not applied to sections such as 411 and 412, in which the magnitude of the energy increases rapidly but which do not correspond to an impact sound.
- FIG. 5 is a block diagram illustrating a method of processing an audio signal for removing noise according to an exemplary embodiment.
- the method of processing the audio signal illustrated in FIG. 5 may be performed by the terminal device 100 described above.
- the terminal device 100 may include a microphone capable of receiving an externally generated sound, and may receive an audio signal through the microphone or may receive an audio signal from an external device.
- the terminal device 100 may process the audio signal according to the method illustrated in FIG. 5.
- the audio signal may be acquired divided into a front signal and a rear signal.
- the terminal device 100 may process the audio signal according to the method shown in FIG. 5, and then remove the impact sound of the audio signal according to the method shown in FIGS. 1 to 2.
- the terminal device 100 may include a front microphone capable of receiving a front signal and a rear microphone capable of receiving a rear signal.
- the front microphone and the rear microphone are located at a distance from each other and may receive different audio signals according to the direction of the audio signal.
- the terminal device 100 may remove noise of the audio signal by using the directionality of the audio signal.
- the front and rear microphones of the terminal device 100 may collect sounds coming from various directions. For example, in a situation where the user talks face to face with another speaker, the terminal device 100 may set the sound coming from the front of the user as the target signal, and process the sound having no directivity as noise. The terminal device 100 may perform audio signal processing to remove noise based on the difference between the audio signals collected through the front and rear microphones.
- the terminal device 100 may perform audio signal processing for noise removal based on a coherence indicating the degree to which the front and rear signals match. The more the front and rear signals coincide, the more likely the signal is to be noise having no directivity. Accordingly, the larger the coherence value, the more likely the terminal device 100 is to determine that the corresponding audio signal includes noise and to apply a gain value smaller than 1 to the audio signal.
- the distance between the front and rear microphones may be designed to be about 0.7 to 1 cm for miniaturization.
- in this case, the correlation between the audio signals received through the two microphones increases, so that the noise canceling performance using the directionality of the signals may be degraded.
- the terminal device 100 may apply a delay to a rear signal and perform noise cancellation based on coherence between the rear signal and the front signal to which the delay is applied.
- with the delay applied, the coherence value is smaller for an audio signal arriving from the forward direction and larger for an audio signal arriving from the backward direction. Therefore, even if the distance between the front and rear microphones is narrowed and the correlation between the audio signals is increased, the coherence value of the forward-direction audio signal including the target signal is determined to be a smaller value, so that the noise canceling performance may be improved.
- fast Fourier transforms (FFT) 510 and 520 may be performed to convert the front signal and the rear signal to which the delay 515 is applied into frequency-domain signals.
- various methods, not limited to the above-described FFT, may be used to convert an audio signal into a frequency-domain signal.
- the delay application 515 and the FFT transform 520 for the rear signal may be performed in the reverse of the order shown.
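A minimal frequency-domain sketch of blocks 515 to 530: delaying the rear signal corresponds to multiplying its spectrum by a linear phase, and the fixed beamforming signal is the difference between the front spectrum and the delayed rear spectrum. The function name, the sampling-rate parameter, and the per-bin phase formula are assumptions:

```python
import cmath

def fixed_beamform(front, rear, delay, fs, nfft):
    # front, rear: complex spectra of one frame; delay in seconds.
    # A time delay becomes the phase factor e^{-j*2*pi*f*delay} per bin,
    # and the beamformer output is front minus the delayed rear, as in
    # the claim ("difference between the rear signal to which the delay
    # is applied and the front signal").
    out = []
    for k in range(len(front)):
        f = k * fs / nfft                        # bin frequency in Hz
        phase = cmath.exp(-2j * cmath.pi * f * delay)
        out.append(front[k] - rear[k] * phase)   # front - delayed rear
    return out
```

With zero delay and identical front/rear spectra, the output cancels completely, which is why a deliberate delay is needed to preserve forward-direction components.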
- the terminal device 100 may obtain a gain value for the low frequency band based on the coherence value of the high frequency band instead of obtaining the coherence value of the low frequency band.
- the terminal device 100 may divide the frequency band into at least two sections at 525 and 530, and obtain a coherence value between the front signal and the rear signal to which the delay is applied in the high frequency section.
- the terminal device 100 may divide the frequency band into a plurality of sections based on the frequency band in which correlation is high due to the narrow spacing between the front and rear microphones.
- the coherence value Γ_fb may be determined as a value between 0 and 1. The higher the correlation between the front and rear signals, the closer the coherence value is determined to be to 1.
- in Equation 6, Φ_ff and Φ_bb denote the power spectral density (PSD) of the front signal and of the rear signal to which the delay (τ) is applied, respectively, and Φ_fb denotes the cross power spectral density (CPSD) of the two signals.
- this value may be determined as a value between 0 and 1.
- a coherence value indicating the correlation of the two signals may be determined.
- the coherence value can be determined in various ways.
- since the coherence value is determined using the delayed rear signal, the coherence value for an audio signal in the forward direction is determined to be smaller, and the coherence value for an audio signal in the backward direction is determined to be larger. Therefore, even if the distance between the front and rear microphones is narrowed and the correlation between the audio signals increases, the coherence value of the forward-direction audio signal including the target signal is determined to be a smaller value, thereby improving noise reduction performance.
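The coherence of Equation 6 can be sketched as a magnitude-squared coherence computed from PSD and CPSD estimates. Here the densities are estimated by averaging over the bins of a frequency section rather than by recursive smoothing, which is an assumption; identical signals give a value near 1 and orthogonal signals give 0:

```python
def coherence(front, rear_delayed, eps=1e-12):
    # Magnitude-squared coherence |Phi_fb|^2 / (Phi_ff * Phi_bb) for two
    # complex spectra (cf. Equation 6). eps guards against division by
    # zero; the averaging-over-bins estimator is an assumption.
    n = len(front)
    phi_ff = sum(abs(f) ** 2 for f in front) / n
    phi_bb = sum(abs(b) ** 2 for b in rear_delayed) / n
    phi_fb = sum(f * b.conjugate() for f, b in zip(front, rear_delayed)) / n
    return abs(phi_fb) ** 2 / (phi_ff * phi_bb + eps)
```

A forward-direction target, which arrives at the front microphone earlier, decorrelates from the artificially delayed rear signal and therefore scores a low coherence.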
- the terminal device 100 may determine a gain value that may be applied in the high frequency band based on the coherence value.
- the gain G h may be determined as in Equation 7 below.
- the G h value may be determined differently for each frequency w h. Since the coherence value for a frequency component containing the forward-direction audio signal may be close to 0, the gain for that component may be determined to be close to 1, so the frequency component containing the forward-direction audio signal can be preserved intact. Conversely, since the coherence value for a frequency component containing the backward-direction audio signal may be close to 1, the gain may be determined to be close to 0, so that the frequency component containing the backward-direction audio signal is reduced in magnitude.
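One plausible realization of this mapping (Equation 7 itself is not reproduced; the linear form below is an assumption) is:

```python
import numpy as np

def high_band_gain(gamma_fb):
    """Map per-bin coherence to a suppression gain for the high band:
    coherence near 0 (forward/target component) -> gain near 1, so the
    component is preserved; coherence near 1 (rear component) -> gain
    near 0, so the component is attenuated."""
    return np.clip(1.0 - np.asarray(gamma_fb), 0.0, 1.0)
```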
- the G h value may be determined based on the real part, the imaginary part, or the magnitude of the coherence value.
- the gain G h value can be determined in various ways based on the coherence value, without being limited to the example described above.
- at 550, the gain value for the low frequency band may be determined based on the coherence value of the high frequency band, as described above. For example, a gain G ′ l value for the low frequency band may be determined according to Equation 8 below.
- the gain G l value may be determined by estimating a noise signal N f included in the front signal Y f at 535.
- the noise included in the front audio signal can be estimated in various ways.
- the terminal device 100 may detect noise included in the front audio signal based on the characteristic of the noise signal.
- the larger the estimated noise signal, the smaller the gain G l value may be determined to be, so that the magnitude of the corresponding frequency component is reduced.
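For illustration, a Wiener-style suppression gain derived from the estimated noise (the patent does not fix the estimator or the gain rule; the spectral-floor value is an assumption):

```python
import numpy as np

def low_band_gain(Y_f, N_f, floor=0.05):
    """Per-bin gain G_l for the low band from the front spectrum Y_f
    and an estimated noise spectrum N_f: the larger the estimated
    noise in a bin, the smaller the gain applied to that bin."""
    clean = np.maximum(np.abs(Y_f) ** 2 - np.abs(N_f) ** 2, 0.0)
    g = clean / (np.abs(Y_f) ** 2 + 1e-12)   # Wiener-style ratio in [0, 1]
    return np.maximum(g, floor)              # floor limits musical noise
```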
- the gain G ′ l value may be determined at 550 based on the gain G l value and the coherence Γ fb value of the high frequency band.
- at 540, the terminal device 100 may estimate the direction of the target signal from the variation of the coherence Γ fb value, and determine the gain G ′ l value of the low frequency band based on the direction of the target signal.
- the coherence value at a predetermined frequency component may be close to zero.
- the predetermined frequency component may be determined according to the characteristics of the target signal.
- the predetermined frequency component may be selected within the 200 to 3500 Hz interval, which is the frequency range of speech.
- the coherence value may be close to 1 in a predetermined frequency section.
- when the target signal is in the forward direction, the terminal device 100 may set the gain G ′ l value of the low frequency band to the gain G l, so that only the noise components corresponding to the estimated noise signal are suppressed. When the target signal is in the backward direction, the terminal device 100 may determine the gain G ′ l value of the low frequency band to be smaller than the gain G l value, so that the backward-direction target signal and the noise components are suppressed together.
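The selection logic above can be sketched as follows; the coherence threshold and the extra attenuation factor are illustrative assumptions:

```python
import numpy as np

def low_band_gain_directed(G_l, gamma_fb_high, thresh=0.5, atten=0.3):
    """Determine G'_l from the high-band coherence: a low mean
    coherence indicates a forward target, so only the estimated noise
    is suppressed (G'_l = G_l); a high mean coherence indicates a
    rear target, so the low band is attenuated further."""
    if np.mean(gamma_fb_high) < thresh:   # target in the forward direction
        return np.asarray(G_l)
    return atten * np.asarray(G_l)        # target in the backward direction
```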
- the terminal device 100 may obtain a fixed beamforming signal by obtaining a difference between the front signal and the rear signal to which the delay is applied.
- in the fixed beamforming signal, the audio signal in the backward direction is removed and the audio signal in the forward direction is enhanced.
- the fixed beamforming signal may be obtained according to Equation 9 below.
- the terminal device 100 may remove the backward noise signal by applying the gains obtained at 540 and 550 to the fixed beamforming signal at 560.
- the gain may be applied to the fixed beamforming signal according to Equation 10 below.
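A per-frame sketch of these steps (Equations 9 and 10 are not reproduced; windowing and overlap-add are omitted for brevity):

```python
import numpy as np

def beamform_and_suppress(F, B_delayed, G):
    """For one frequency-domain frame: subtract the delayed rear
    spectrum from the front spectrum (fixed beamforming, which nulls
    the rear direction), apply the per-bin gain to remove residual
    rear noise, and return the time-domain frame via the inverse FFT."""
    Y_fixed = F - B_delayed           # fixed beamforming signal
    return np.fft.irfft(G * Y_fixed)  # gain applied, then back to time domain
```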
- the terminal device 100 may convert the signal in the frequency domain into a signal in the time domain by performing an inverse FFT, and output the converted signal.
- the gain of the low frequency band may also be determined without estimating the direction of the target signal (540). As shown in FIG. 6, the gain of the low frequency band may be the gain G l determined based on the estimated noise of the front signal.
- FIG. 7 is a flowchart illustrating a method of processing an audio signal for removing noise according to an exemplary embodiment.
- the terminal device 100 may acquire a front signal and a rear signal of an audio signal.
- the terminal device 100 may acquire the front signal and the rear signal through the front and rear microphones.
- the terminal device 100 may obtain a coherence value between the front signal and the rear signal to which the delay is applied. After applying the delay to the rear signal, the terminal device 100 may obtain the coherence value between the delayed rear signal and the front signal. Therefore, even when the narrow spacing of the front and rear microphones increases the correlation between the audio signals, the coherence value of the forward-direction audio signal containing the target signal may still be determined to be small, improving noise removal performance.
- the terminal device 100 may determine a gain value based on the coherence. A coherence value close to 1 corresponds to a signal in the backward direction, so the gain value may be determined so as to remove that signal; a coherence value close to 0 corresponds to a signal in the forward direction, so the gain value may be determined so as to preserve it.
- the terminal device 100 may obtain a difference between the delayed rear signal and the front signal to obtain a fixed beamforming signal.
- in the fixed beamforming signal, the audio signal in the backward direction is removed and the audio signal in the forward direction is enhanced.
- the terminal device 100 may apply the gain value determined in operation S730 to the fixed beamforming signal and output the same.
- the terminal device 100 may convert the fixed beamforming signal to which the gain value is applied into a time-domain signal.
- the terminal device 100 may estimate a noise signal of the front signal in the low frequency band and obtain a gain value for noise reduction in the low frequency band based on the estimated noise signal.
- the terminal device 100 may determine the directivity of the target signal based on the coherence value of the high frequency band, and obtain a gain value for the low frequency band based on the directivity of the target signal.
- FIG. 8 is an exemplary diagram illustrating an example of processing an audio signal for removing noise according to an exemplary embodiment.
- 810 illustrates an audio signal before removing noise according to the exemplary embodiment illustrated in FIGS. 5 to 7.
- 820 illustrates an audio signal after removing noise according to the exemplary embodiment shown in FIGS. 5 to 7.
- the rearward signal may be effectively removed by applying a delay to the rearward signal.
- FIG. 9 is a block diagram illustrating an internal structure of an apparatus for processing an audio signal according to an exemplary embodiment.
- the terminal device 900 that processes an audio signal may include a receiver 910, a controller 920, and an output unit 930.
- the receiver 910 may receive an audio signal through a microphone. Alternatively, the receiver 910 may receive an audio signal from an external device. The receiver 910 may receive the front signal and the rear signal through the front and rear microphones.
- the controller 920 may detect noise from the audio signal received by the receiver 910, and apply noise suppression to the audio signal of the region where the noise is detected, thereby performing noise reduction.
- the controller 920 may detect a region containing a shock sound based on the energy difference between frequency bands, and apply a suppression gain to the detected region.
- the controller 920 may remove the backward signal from the audio signal by determining a gain value to be applied to the audio signal based on the coherence between the delayed rear signal and the front signal.
- the output unit 930 may convert the audio signal processed by the controller 920 into a signal in the time domain and then output the converted signal.
- the output unit 930 may convert the audio signal in which the controller 920 has applied the gain value to a partial section, and then output the converted audio signal.
- the output unit 930 may apply the gain value determined based on the coherence to the fixed beamforming signal of the audio signal and output the same.
- the output unit 930 may output an audio signal in the time domain through the speaker.
- noise included in an audio signal may be effectively removed while minimizing distortion of sound quality of the audio signal.
- the method according to some embodiments may be embodied in the form of program instructions that may be executed by various computer means and recorded on a computer readable medium.
- the computer readable medium may include program instructions, data files, data structures, etc. alone or in combination.
- Program instructions recorded on the media may be those specially designed and constructed for the purposes of the present invention, or they may be of the kind well-known and available to those having skill in the computer software arts.
- Examples of computer-readable recording media include magnetic media such as hard disks, floppy disks, and magnetic tape; optical media such as CD-ROMs and DVDs; and magneto-optical media such as floptical disks.
- Examples of program instructions include not only machine code generated by a compiler, but also high-level language code that can be executed by a computer using an interpreter or the like.
Landscapes
- Engineering & Computer Science (AREA)
- Signal Processing (AREA)
- Acoustics & Sound (AREA)
- Physics & Mathematics (AREA)
- Health & Medical Sciences (AREA)
- Otolaryngology (AREA)
- General Health & Medical Sciences (AREA)
- Computational Linguistics (AREA)
- Quality & Reliability (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Neurosurgery (AREA)
- Circuit For Audible Band Transducer (AREA)
Abstract
Description
Claims (15)
- A method of processing an audio signal, the method comprising: obtaining a frequency-domain audio signal for a plurality of frames; dividing a frequency band into a plurality of sections; obtaining energy of the plurality of sections; detecting an audio signal containing noise based on an energy difference between the plurality of sections; and applying a suppression gain to the detected audio signal.
- The method of claim 1, wherein the detecting of the audio signal containing noise comprises: obtaining energy of the plurality of frames; and detecting the audio signal containing noise based on at least one of an energy difference between the plurality of frames and an energy value of a predetermined frame.
- The method of claim 1, wherein the applying of the suppression gain comprises determining the suppression gain based on energy of the audio signal in which the noise is detected.
- The method of claim 1, wherein the energy difference between the frequency bands is a difference between energy of a first frequency section and energy of a second frequency section, and the second frequency section is a section of a higher frequency band than the first frequency section.
- A method of processing an audio signal, the method comprising: obtaining a front signal and a back signal; obtaining coherence between the front signal and the back signal to which a delay is applied; determining a gain value based on the coherence; obtaining a fixed beamforming signal by obtaining a difference between the front signal and the back signal to which the delay is applied; and applying the gain value to the fixed beamforming signal and outputting the result.
- The method of claim 5, wherein the obtaining of the coherence comprises: dividing a frequency band into at least two sections; and obtaining the coherence for a high-frequency section among the divided sections, and wherein the determining of the gain value comprises: determining a directivity of a target signal of the audio signal based on the coherence for the high-frequency section; and determining, based on the directivity, a gain value for a low-frequency section among the divided sections.
- The method of claim 6, wherein the determining of the gain value comprises: estimating noise of the front signal; and determining the gain value for the low-frequency section based on the estimated noise.
- A terminal device for processing an audio signal, the terminal device comprising: a receiver configured to obtain a frequency-domain audio signal for a plurality of frames; a controller configured to divide a frequency band into a plurality of sections, obtain energy of the plurality of sections, detect an audio signal containing noise based on an energy difference between the plurality of sections, and apply a suppression gain to the detected audio signal; and an output unit configured to convert the audio signal processed by the controller into a time-domain signal and output the converted signal.
- The terminal device of claim 8, wherein the controller obtains energy of the plurality of frames and detects the audio signal containing noise based on at least one of an energy difference between the plurality of frames and an energy value of a predetermined frame.
- The terminal device of claim 8, wherein the controller determines the suppression gain based on energy of the audio signal in which the noise is detected.
- The terminal device of claim 8, wherein the energy difference between the frequency bands is a difference between energy of a first frequency section and energy of a second frequency section, and the second frequency section is a section of a higher frequency band than the first frequency section.
- A terminal device for processing an audio signal, the terminal device comprising: a receiver configured to obtain a front signal and a back signal; a controller configured to obtain coherence between the front signal and the back signal to which a delay is applied, determine a gain value based on the coherence, obtain a fixed beamforming signal by obtaining a difference between the front signal and the back signal to which the delay is applied, and apply the gain value to the fixed beamforming signal; and an output unit configured to convert the fixed beamforming signal to which the gain value is applied into a time-domain signal and output the converted signal.
- The terminal device of claim 12, wherein the controller divides a frequency band into at least two sections, obtains the coherence for a high-frequency section among the divided sections, determines a directivity of a target signal of the audio signal based on the coherence for the high-frequency section, and determines, based on the directivity, a gain value for a low-frequency section among the divided sections.
- The terminal device of claim 13, wherein the controller estimates noise of the front signal and determines the gain value for the low-frequency section based on the estimated noise.
- A computer-readable recording medium having recorded thereon a program for implementing the method of any one of claims 1 to 7.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US15/516,071 US10366703B2 (en) | 2014-10-01 | 2015-10-01 | Method and apparatus for processing audio signal including shock noise |
KR1020177003323A KR102475869B1 (ko) | 2014-10-01 | 2015-10-01 | 잡음이 포함된 오디오 신호를 처리하는 방법 및 장치 |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201462058252P | 2014-10-01 | 2014-10-01 | |
US201462058267P | 2014-10-01 | 2014-10-01 | |
US62/058,252 | 2014-10-01 | ||
US62/058,267 | 2014-10-01 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016053019A1 true WO2016053019A1 (ko) | 2016-04-07 |
Family
ID=55630968
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/KR2015/010370 WO2016053019A1 (ko) | 2014-10-01 | 2015-10-01 | 잡음이 포함된 오디오 신호를 처리하는 방법 및 장치 |
Country Status (3)
Country | Link |
---|---|
US (1) | US10366703B2 (ko) |
KR (1) | KR102475869B1 (ko) |
WO (1) | WO2016053019A1 (ko) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106205628B (zh) * | 2015-05-06 | 2018-11-02 | 小米科技有限责任公司 | 声音信号优化方法及装置 |
US10629226B1 (en) * | 2018-10-29 | 2020-04-21 | Bestechnic (Shanghai) Co., Ltd. | Acoustic signal processing with voice activity detector having processor in an idle state |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20080002990A (ko) * | 2005-04-21 | 2008-01-04 | 에스알에스 랩스, 인크. | 오디오 잡음을 감소시키는 시스템 및 방법 |
WO2010146711A1 (ja) * | 2009-06-19 | 2010-12-23 | 富士通株式会社 | 音声信号処理装置及び音声信号処理方法 |
KR20110057596A (ko) * | 2009-11-24 | 2011-06-01 | 삼성전자주식회사 | 잡음 환경의 입력신호로부터 잡음을 제거하는 방법 및 그 장치, 잡음 환경에서 음성 신호를 강화하는 방법 및 그 장치 |
KR101254989B1 (ko) * | 2011-10-14 | 2013-04-16 | 한양대학교 산학협력단 | 2채널 디지털 보청기 및 2채널 디지털 보청기의 빔포밍 방법 |
KR20130045867A (ko) * | 2010-07-15 | 2013-05-06 | 비덱스 에이/에스 | 보청기 시스템에서의 신호 처리 방법 및 보청기 시스템 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20030007657A1 (en) | 2001-07-09 | 2003-01-09 | Topholm & Westermann Aps | Hearing aid with sudden sound alert |
US8098844B2 (en) * | 2002-02-05 | 2012-01-17 | Mh Acoustics, Llc | Dual-microphone spatial noise suppression |
US7492889B2 (en) * | 2004-04-23 | 2009-02-17 | Acoustic Technologies, Inc. | Noise suppression based on bark band wiener filtering and modified doblinger noise estimate |
KR100716984B1 (ko) * | 2004-10-26 | 2007-05-14 | 삼성전자주식회사 | 복수 채널 오디오 신호의 잡음 제거 방법 및 장치 |
US7983425B2 (en) | 2006-06-13 | 2011-07-19 | Phonak Ag | Method and system for acoustic shock detection and application of said method in hearing devices |
JP5093108B2 (ja) * | 2006-07-21 | 2012-12-05 | 日本電気株式会社 | 音声合成装置、方法、およびプログラム |
US8515097B2 (en) * | 2008-07-25 | 2013-08-20 | Broadcom Corporation | Single microphone wind noise suppression |
WO2011049515A1 (en) * | 2009-10-19 | 2011-04-28 | Telefonaktiebolaget Lm Ericsson (Publ) | Method and voice activity detector for a speech encoder |
US20140193009A1 (en) | 2010-12-06 | 2014-07-10 | The Board Of Regents Of The University Of Texas System | Method and system for enhancing the intelligibility of sounds relative to background noise |
JP6069830B2 (ja) * | 2011-12-08 | 2017-02-01 | ソニー株式会社 | 耳孔装着型収音装置、信号処理装置、収音方法 |
- 2015
- 2015-10-01 US US15/516,071 patent/US10366703B2/en active Active
- 2015-10-01 KR KR1020177003323A patent/KR102475869B1/ko active IP Right Grant
- 2015-10-01 WO PCT/KR2015/010370 patent/WO2016053019A1/ko active Application Filing
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018137731A (ja) * | 2016-12-23 | 2018-08-30 | ジーエヌ ヒアリング エー/エスGN Hearing A/S | 音響インパルス抑制を用いる聴覚デバイスおよび関連する方法 |
US11304010B2 (en) | 2016-12-23 | 2022-04-12 | Gn Hearing A/S | Hearing device with sound impulse suppression and related method |
CN109643554A (zh) * | 2018-11-28 | 2019-04-16 | 深圳市汇顶科技股份有限公司 | 自适应语音增强方法和电子设备 |
Also Published As
Publication number | Publication date |
---|---|
KR102475869B1 (ko) | 2022-12-08 |
US20170309293A1 (en) | 2017-10-26 |
KR20170065488A (ko) | 2017-06-13 |
US10366703B2 (en) | 2019-07-30 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 15847581 Country of ref document: EP Kind code of ref document: A1 |
ENP | Entry into the national phase |
Ref document number: 20177003323 Country of ref document: KR Kind code of ref document: A |
WWE | Wipo information: entry into national phase |
Ref document number: 15516071 Country of ref document: US |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 15847581 Country of ref document: EP Kind code of ref document: A1 |