CN110876097B - Signal processing method, signal processing apparatus, and recording medium - Google Patents


Info

Publication number
CN110876097B
CN110876097B
Authority
CN
China
Prior art keywords
microphone
sound
microphones
speaker
output
Prior art date
Legal status
Active
Application number
CN201910787652.7A
Other languages
Chinese (zh)
Other versions
CN110876097A
Inventor
杠慎一
金森丈郎
Current Assignee
Panasonic Intellectual Property Corp of America
Original Assignee
Panasonic Intellectual Property Corp of America
Priority date
Filing date
Publication date
Application filed by Panasonic Intellectual Property Corp of America
Publication of CN110876097A
Application granted
Publication of CN110876097B

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04R: LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 3/00: Circuits for transducers, loudspeakers or microphones
    • H04R 2430/00: Signal processing covered by H04R, not provided for in its groups
    • H04R 2430/20: Processing of the output signals of the acoustic transducers of an array for obtaining a desired directivity characteristic
    • H04R 2430/25: Array processing for suppression of unwanted side-lobes in directivity characteristics, e.g. a blocking matrix

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Signal Processing (AREA)
  • Circuit For Audible Band Transducer (AREA)

Abstract

Provided are a signal processing method, a signal processing apparatus, and a recording medium. The signal processing method includes: a gain control step (S1001) of multiplying at least one of M signals output from M microphones by a gain so that the sound pressure levels of the sounds arriving at the M microphones from a sound source located within a predetermined distance from the M microphones become equal to each other, where M is an integer of 2 or more; a delay step (S1002) of delaying at least one of the M signals to cancel the temporal difference among the M signals caused by the difference between the arrival times of the sounds arriving at the M microphones from the sound source; and a filter application step (S1003) of suppressing, from the M signals obtained in the gain control step (S1001) and the delay step (S1002), a signal representing the sound output from the sound source located within the predetermined distance.

Description

Signal processing method, signal processing apparatus, and recording medium
Technical Field
The present disclosure relates to a signal processing method, a signal processing apparatus, and the like.
Background
As prior art, there is a technique that weights the sounds output from individual sound sources based on the distance attenuation of the sounds input to the plurality of microphones constituting an array microphone, suppressing sounds output from locations far from the microphones and emphasizing sounds output from nearby locations (see patent document 1). There is also a technique that, in order to suppress sound output far from the microphone and emphasize sound output nearby, or to suppress sound output near the microphone, applies to the acquired sound a filter designed using the transfer characteristic of the direct sound and the transfer characteristic of the reflected sound (see patent document 2).
Documents of the prior art
Patent literature
Patent document 1: Japanese Patent No. 5123595
Patent document 2: Japanese Patent No. 5486694
Disclosure of Invention
However, in the above-described related art, it is difficult to effectively suppress output sounds from sound sources existing in the vicinity of the microphone.
Accordingly, the present disclosure provides a signal processing method and the like capable of suppressing the output sound from a sound source existing in the vicinity of a microphone more effectively than conventional techniques.
A signal processing method according to an aspect of the present disclosure includes: a gain control step of multiplying at least one of M signals output from M microphones by a gain so that sound pressure levels of the M signals indicating sounds arriving at the M microphones from a sound source located within a predetermined distance from the M microphones are equal, where M is an integer of 2 or more; a delaying step of delaying at least one of the M signals to cancel a difference in time of the M signals generated by a difference between arrival times of respective sounds arriving at the M microphones from the sound source; and a filter application step of applying a filter to the M signals obtained in the gain control step and the delay step to generate a signal in which a sound output from the sound source located within the predetermined distance from a microphone located closest to the sound source among the M microphones is suppressed.
These general and specific aspects may be realized by an apparatus, a method, an integrated circuit, a computer program, or a computer-readable recording medium such as a CD-ROM, or may be realized by any combination of an apparatus, a method, an integrated circuit, a computer program, and a recording medium.
Effects of the invention
The signal processing method and the like according to one aspect of the present disclosure can suppress the output sound from a sound source present in the vicinity of a microphone more effectively than conventional techniques.
Drawings
Fig. 1 is a diagram showing the positional relationship between the speaker 1 and the speaker 10a in the embodiment.
Fig. 2 is a block diagram of a signal processing device that performs signal processing in the time domain in the embodiment.
Fig. 3 is a block diagram of a signal processing device that performs signal processing in the frequency domain in the embodiment.
Fig. 4 is a flowchart showing the sequence of a signal processing method in the embodiment.
Fig. 5 is a graph showing frequency characteristics after signal processing for a nearby sound source in the embodiment.
Fig. 6 is a graph showing frequency characteristics after signal processing for a distant sound source in the embodiment.
Fig. 7 is a diagram showing a specific example of an emergency notification system (e-call) to which the signal processing method according to the embodiment is applied.
Fig. 8 is a diagram showing a case where an emergency notification system (e-call) to which the signal processing method according to the embodiment is applied is installed in a vehicle interior.
Description of the reference numerals
1 speaker
2, 3 signal processing device
10a, 91 speaker
10b output destination
20a, 20b, 92a, 92b microphones
30 gain control unit
40 delay unit
50 filter application unit
60 distance information unit
70a, 70b frequency conversion unit
80 time signal conversion unit
90 emergency notification system (e-call)
93 push button
94 driver
100, 200, 300, 400 lines
Detailed Description
(insight underlying the present disclosure)
To mount the now-mandatory emergency notification system on a vehicle, a relatively small module in which a microphone and a speaker are integrated has been used; such a module has the advantage that tuning can be performed independently of the vehicle type. In the emergency notification system, a passenger such as the driver is assumed to communicate with an operator or the like in an emergency. In a relatively small module in which the microphone is close to the speaker, the sound output from the speaker is input to the microphone. Therefore, such a module adopts an echo cancellation technique that suppresses, among the sounds output from the speaker, the component that is input to the microphone.
However, since the speaker is small, when it outputs a relatively loud sound, the output sound is easily distorted relative to the input signal from the sound source. Such distortion is difficult to cancel with conventional echo cancellation techniques. Therefore, when the output sound from the speaker is input to the microphone, the distorted portion is not suppressed by the echo cancellation technique.
In addition, with the technique described in patent document 1, when sounds are output simultaneously both near the microphone and far from it, it is difficult to extract and emphasize only the sound output nearby, and the result is easily affected by surrounding noise. The technique described in patent document 2 has the problem that the transfer characteristics of the sound output from the sound source must be measured in advance for the positions of the microphone and the sound source. Moreover, since the way sound is reflected changes greatly with the environment, the sound pickup environment must be taken into account when using those transfer characteristics.
Therefore, a signal processing method according to an aspect of the present disclosure includes: a gain control step of multiplying at least one of M signals output from M microphones by a gain so that sound pressure levels of the M signals indicating sounds arriving at the M microphones from a sound source located within a predetermined distance from the M microphones are equal, where M is an integer of 2 or more; a delaying step of delaying at least one of the M signals to eliminate a difference in time of the M signals generated by a difference between arrival times of sounds arriving at the M microphones from the sound source; and a filter application step of applying a filter to the M signals obtained in the gain control step and the delay step to generate a signal in which a sound output from the sound source located within the predetermined distance from a microphone located closest to the sound source among the M microphones is suppressed.
Thus, the signal processing method according to one aspect of the present disclosure can suppress the output sound itself from a sound source located within the predetermined distance from the microphone closest to it. Moreover, because the same gain is applied to a microphone signal regardless of whether the sound in it comes from a near or a distant source, only the sound from the source close to the microphones is equalized across the microphones and suppressed, while the output sound from a distant source is not. The signal processing method according to one aspect of the present disclosure can also suppress sound that is output from a source such as a small speaker and cannot be suppressed by echo cancellation techniques; that is, it can suppress the output sound from a sound source close to the microphone, including its distortion.
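As a concrete illustration, the three steps can be sketched for M = 2 microphones. The helper below, its signal layout, and the use of an integer sample delay are assumptions for illustration, not the patent's own implementation:

```python
import numpy as np

def suppress_near_source(near_sig, far_sig, gain, delay):
    """Sketch of the gain control, delay, and filter application steps
    for M = 2 microphones (a hypothetical helper). near_sig: signal of
    the microphone closest to the source; far_sig: signal of the other
    microphone; gain: factor applied to far_sig; delay: integer sample
    delay applied to near_sig."""
    scaled = gain * far_sig                  # gain control step
    aligned = np.roll(near_sig, delay)       # delay step
    aligned[:delay] = 0.0                    # discard wrapped-around samples
    return scaled - aligned                  # filter application step
```

With an impulse from the near source arriving at the far microphone at half amplitude and one sample later, a gain of 2 and a one-sample delay cancel the near-source component exactly.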
For example, the predetermined distance may be a distance within 3 times of the longest distance among the spatial intervals of the M microphones.
Thus, the signal processing method according to one aspect of the present disclosure can suppress the output sound from a sound source located, from the microphone closest to it, within 3 times the longest of the intervals between the M microphones. Within that range, the sound pressure levels of the sounds input to the M microphones differ significantly. The signal processing method according to one aspect of the present disclosure can therefore effectively suppress sound output from a sound source located within the predetermined distance from the microphone.
For example, the predetermined distance may be a distance at which the sound pressure level of the sound reaching the microphone located closest to the sound source from the sound source is 4/3 times or more the sound pressure level of the sound reaching the microphone located farthest from the sound source among the M microphones.
Thus, the signal processing method according to one aspect of the present disclosure can suppress the sound from a sound source located at such a distance that the sound pressure level input to the microphone closest to the sound source is 4/3 times or more the sound pressure level input to the microphone farthest from it. This factor of 4/3 is roughly the lower limit of the inter-microphone sound pressure difference needed to suppress the sound from a source close to the microphones. The signal processing method according to one aspect of the present disclosure can therefore effectively suppress sound output from a sound source located within the predetermined distance from the microphone.
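Under a 1/r attenuation model, the two criteria are consistent with each other: for an inter-microphone interval s and a source at distance d from the nearer microphone, with the farther microphone in line at d + s (an illustrative geometry, not one specified by the text), the level ratio (d + s)/d reaches 4/3 exactly when d = 3s.

```python
def level_ratio(d, s):
    """Ratio of the sound pressure level at the near microphone to that
    at the far microphone under the 1/r law, for a source at distance d
    from the near microphone and an in-line interval s (illustrative
    geometry)."""
    return (d + s) / d
```

At d = 3s the ratio is exactly 4/3, tying the 3-times-interval criterion to the 4/3-times-level criterion.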
For example, the signal processing method according to an aspect of the present disclosure may further include a step of calculating a spatial position where the sound source exists.
Thus, the signal processing method of an aspect of the present disclosure can calculate the distances from the M microphones to the sound source. Thus, the signal processing method according to one aspect of the present disclosure can autonomously suppress output sound from a sound source within a predetermined distance from a microphone closest to the sound source.
For example, the gain control step, the delay step, and the filter application step may be performed in the frequency domain.
Thus, the signal processing method of an aspect of the present disclosure can process a signal output from a microphone in a frequency domain. This makes it possible to relatively easily process the signal.
For example, the gain control step, the delay step, and the filter application step may be performed in the time domain.
Thus, the signal processing method of an aspect of the present disclosure can process a signal output from a microphone in a time domain. This enables processing of the signal based on the time and the intensity of the signal.
In addition, a signal processing device according to an aspect of the present disclosure includes: a gain control unit that multiplies at least one of M signals output from M microphones, where M is an integer equal to or greater than 2, by a gain so that sound pressure levels of the M signals indicating sounds arriving at the M microphones from a sound source located within a predetermined distance from the M microphones are equal to each other; a delay section that delays at least one of the M signals to cancel a difference in time of the M signals generated by a difference between arrival times of respective sounds arriving at the M microphones from the sound source; and a filter application unit configured to apply a filter to the M signals obtained from the gain control unit and the delay unit, thereby generating a signal in which a sound output from the sound source located within the predetermined distance from a microphone located closest to the sound source among the M microphones is suppressed.
Thus, the signal processing device according to one aspect of the present disclosure can achieve the same effects as those of the signal processing method described above.
In addition, the program according to an embodiment of the present disclosure may cause a computer to execute the signal processing method.
Thus, the program according to one embodiment of the present disclosure can provide the same effects as those of the signal processing method described above.
Hereinafter, embodiments will be specifically described with reference to the drawings.
The embodiments described below each show a general or specific example. The numerical values, shapes, materials, constituent elements, arrangement positions and connection modes of the constituent elements, steps, order of the steps, and the like shown in the following embodiments are merely examples and do not limit the scope of the claims. Among the components in the following embodiments, components not recited in the independent claims, which represent the broadest concept, are described as optional components. The drawings are not necessarily to scale. In the drawings, substantially identical components are denoted by the same reference numerals, and redundant description is omitted or simplified.
(embodiment mode)
[ Positional relationship between the speaker 1 and the speaker 10a in one embodiment of the present disclosure ]
Fig. 1 is a diagram showing the positional relationship between the speaker 1 and the speaker 10a in the embodiment. As shown in fig. 1, the speaker 10a is disposed close to the microphone 20a. For example, the distance between the speaker 10a and the microphone 20a is about 1 cm; it may also be several cm. The microphone 20b is likewise disposed close to the microphone 20a. For example, the distance between the microphone 20a and the microphone 20b is about 1 cm; it may also be several cm.
Here, the distance between the speaker 10a and the microphone 20b is longer than the distance between the speaker 10a and the microphone 20 a. The distance between the speaker 10a and the microphone 20b is about several times the distance between the speaker 10a and the microphone 20 a. For example, the distance between the speaker 10a and the microphone 20b is about twice the distance between the speaker 10a and the microphone 20a, and is about 2 cm.
Next, as shown in fig. 1, the speaker 1 is located at a place distant from the speaker 10a as viewed from the microphone 20a. For example, the distance between the speaker 1 and the microphone 20a is about 50 cm; it may also be several tens of cm or several m. Likewise, the speaker 1 is located at a place distant from the speaker 10a as viewed from the microphone 20b. For example, the distance between the speaker 1 and the microphone 20b is about 51 cm; it may also be several tens of cm or several m. Here, there is little difference between the distance from the speaker 1 to the microphone 20b and the distance from the speaker 1 to the microphone 20a. For example, the distance between the speaker 1 and the microphone 20b, about 51 cm, is about 1.02 times the distance between the speaker 1 and the microphone 20a.
Generally, the sound pressure level of a sound attenuates in inverse proportion to the distance from its source. Since the distance between the speaker 10a and the microphone 20b is about 2 times the distance between the speaker 10a and the microphone 20a, the output sound of the speaker 10a entering the microphone 20a is about 2 times as large as that entering the microphone 20b. On the other hand, since the distance between the speaker 1 and the microphone 20b is about 1.02 times the distance between the speaker 1 and the microphone 20a, the sound uttered by the speaker 1 entering the microphone 20a is only about 1.02 times as large as that entering the microphone 20b. That is, the sound of the speaker 1 enters the microphone 20a and the microphone 20b at almost the same magnitude.
In this way, for the sound emitted from the speaker 10a, a difference in sound pickup sensitivity arises between the microphone 20a and the microphone 20b; for example, the sensitivity difference is about 6 dB. On the other hand, for the sound uttered by the speaker 1, almost no sound pickup sensitivity difference arises between the microphone 20a and the microphone 20b.
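The sensitivity differences quoted here follow from the distances in fig. 1 under the 1/r attenuation law, since a level ratio r corresponds to 20·log10(r) dB:

```python
import math

# Level difference in dB between the two microphones for each source,
# using the distances from this example and the 1/r attenuation law.
def level_diff_db(d_near, d_far):
    return 20.0 * math.log10(d_far / d_near)

near_source_db = level_diff_db(1.0, 2.0)    # speaker 10a: about 6 dB
far_source_db = level_diff_db(50.0, 51.0)   # speaker 1: about 0.17 dB
```

The roughly 6 dB difference for the nearby speaker 10a, against a fraction of a dB for the distant speaker 1, is what the filter described below exploits.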
As long as the relationship that the speaker 10a, the microphone 20a, and the microphone 20b are arranged close to one another while the speaker 1 is sufficiently distant from the microphones 20a and 20b is maintained, the distances among the speaker 1, the speaker 10a, the microphone 20a, and the microphone 20b may be arbitrary. The speaker 1 may be another speaker different from the speaker 10a, or a sound source other than a speaker.
[ Signal processing method and Signal processing device in one embodiment of the present disclosure ]
Fig. 2 is a block diagram showing signal processing in the time domain in the embodiment. The signal processing device 2 in one embodiment of the present disclosure includes a microphone 20a, a microphone 20b, a gain control unit 30, a delay unit 40, a filter application unit 50, a distance information unit 60, and an output destination 10b.
The above configuration is a configuration in the case of performing signal processing in the time domain. Here, the speaker 10a is a specific example of a sound source.
The microphones 20a and 20b acquire sound and the like and convert it into signals. Each microphone may be a moving-coil microphone, an aluminum-ribbon microphone, a condenser microphone, a laser optical microphone, or the like.
Here, the microphone 20a is located closer to the speaker 10a than the microphone 20b is.
The delay unit 40 delays the signal output from the microphone 20a by a predetermined time. Since the microphone 20a is located closer to the speaker 10a than the microphone 20b, the output sound from the speaker 10a reaches the microphone 20a before it reaches the microphone 20b. By delaying the signal output from the microphone 20a, it can be made to coincide in time with the signal output from the microphone 20b; that is, the phases of the two signals can be matched.
The time by which the signal output from the microphone 20a is delayed may be determined according to the spatial interval between the microphones 20a and 20b. The delay unit 40 may delay the signal by a predetermined time, or may determine the delay as needed by a predetermined algorithm or the like. To delay the signal output from the microphone 20a, the delay unit 40 may perform a convolution operation using an all-pass filter, which is a kind of IIR (Infinite Impulse Response) filter, or an FIR (Finite Impulse Response) filter. A filter other than an FIR filter or an IIR filter may also be designed and used.
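One way to determine the delay from the interval, assumed here for illustration rather than stated in the text, is to convert the extra propagation path into samples at the sampling rate:

```python
SPEED_OF_SOUND_M_S = 343.0   # approximate speed of sound at room temperature

def delay_in_samples(interval_m, sample_rate_hz):
    """Integer sample delay corresponding to the extra time sound needs
    to travel the inter-microphone interval (illustrative calculation)."""
    return round(interval_m / SPEED_OF_SOUND_M_S * sample_rate_hz)
```

For the 1 cm interval of this embodiment at 48 kHz this gives about 1.4 samples, rounded to 1; the non-integer part is one reason a fractional-delay filter such as the all-pass mentioned above may be used.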
The gain control unit 30 multiplies the signal output from the microphone 20b by a predetermined gain. The gain by which the signal output from the microphone 20b is multiplied is determined by the positional relationship of the microphone 20a, the microphone 20b, and the speaker 10a. The positional relationship may be stored as data in the distance information unit 60. The gain control unit 30 may read the data on the positional relationship among the microphone 20a, the microphone 20b, and the speaker 10a stored in the distance information unit 60 and use it to determine the gain.
For example, the gain control unit 30 may determine the value of the gain by which the signal output from the microphone 20b is multiplied based on the ratio of the distance between the microphone 20b and the speaker 10a to the distance between the microphone 20a and the speaker 10a.
Here, a specific example of how the gain control unit 30 determines the gain will be described. For example, with the microphone 20a as a reference, the gain by which the signal output from the microphone 20b is multiplied is 2, because the interval between the speaker 10a and the microphone 20a is 1 cm while the interval between the speaker 10a and the microphone 20b is 2 cm. The gain by which the signal output from the microphone 20b is multiplied is therefore calculated as 2 cm / 1 cm = 2.
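The same calculation as a small helper; the 1 cm and 2 cm figures are the ones used in this example:

```python
def gain_for_far_signal(d_near_cm, d_far_cm):
    """Gain multiplied onto the far microphone's signal so that the
    near sound source appears at equal sound pressure level in both
    signals (1/r law; distances measured from the speaker 10a)."""
    return d_far_cm / d_near_cm
```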
The filter application unit 50 performs filter processing on the signals output from the microphones 20a and 20b. For example, the signal that was output from the microphone 20a and delayed by the delay unit 40 is subtracted from the signal that was output from the microphone 20b and multiplied by the gain in the gain control unit 30.
The delay unit 40, the gain control unit 30, and the filter application unit 50 are implemented by a processor and a memory. In this case, the delay unit 40, the gain control unit 30, and the filter application unit 50 are implemented by a program stored in a memory. The functionality of the processor and the memory may utilize the functionality provided by cloud computing. The delay unit 40, the gain control unit 30, and the filter application unit 50 may be implemented by dedicated logic circuits without using a processor.
The distance information unit 60 is a storage unit that holds data relating to the positional relationship of the microphone 20a, the microphone 20b, and the speaker 10a. The distance information unit 60 may hold this positional relationship in the form of a database. The distance information unit 60 is implemented by a memory.
The output destination 10b receives the signal processed by the filter application unit 50. The output destination 10b may be a speaker that outputs the signal as sound, or a memory that stores it. The output destination 10b may be the same as or different from the speaker 10a.
Fig. 3 is a block diagram showing signal processing in the frequency domain in the embodiment. The signal processing device 3 in one embodiment of the present disclosure includes a microphone 20a, a microphone 20b, a delay unit 40, a gain control unit 30, a filter application unit 50, a distance information unit 60, a frequency conversion unit 70a, a frequency conversion unit 70b, a time signal conversion unit 80, and an output destination 10b. The above configuration is for the case of performing signal processing in the frequency domain.
The microphone 20a, the microphone 20b, the delay unit 40, the gain control unit 30, the filter application unit 50, the distance information unit 60, and the output destination 10b are the same as those described with reference to fig. 2.
The frequency conversion units 70a and 70b convert a time-domain signal into a frequency-domain signal. As an algorithm for this conversion, the Fourier transform is used; the discrete Fourier transform or the fast Fourier transform may also be used.
The time signal conversion unit 80 converts a frequency-domain signal into a time-domain signal. As an algorithm for this conversion, the inverse Fourier transform is used.
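In the frequency domain, the delay of the delay unit 40 becomes a per-bin phase factor and the gain a real scale. A minimal round trip through the conversions of the units 70a/70b and 80, using NumPy's FFT for illustration:

```python
import numpy as np

N = 8                                   # number of frequency bins
x = np.arange(N, dtype=float)           # a time-domain signal
X = np.fft.fft(x)                       # frequency conversion (70a/70b)
k = np.arange(N)
X_delayed = X * np.exp(-2j * np.pi * k / N)   # circular one-sample delay
y = np.fft.ifft(X_delayed).real         # time signal conversion (80)
```

By the shift theorem of the discrete Fourier transform, y equals x circularly delayed by one sample.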
The frequency conversion unit 70a, the frequency conversion unit 70b, and the time signal conversion unit 80 are implemented by a processor and a memory. The functionality of the processor and memory may also utilize the functionality provided by cloud computing. The frequency conversion unit 70a, the frequency conversion unit 70b, and the time signal conversion unit 80 may be implemented by dedicated logic circuits without using a processor.
N shown in the figure indicates the number of frequency bins.
Here, the memory described with reference to fig. 2 and 3 may be a RAM (Random Access Memory) or a DRAM (Dynamic Random Access Memory). The memory may be an SRAM (Static Random Access Memory) or another semiconductor integrated circuit, or a ROM (Read-Only Memory) or a flash memory. The functionality of the memory may also utilize functionality provided by cloud computing.
Fig. 4 is a flowchart showing the procedure of the signal processing method of the present disclosure in the embodiment.
First, the microphones 20a and 20b acquire sounds (step S1000). The microphones 20a and 20b convert the acquired sound into a signal and output the signal. In this case, the number of microphones for acquiring sound may be M (M is an integer of 2 or more).
Next, the gain control unit 30 multiplies the signal output from the microphone 20b by a gain so that the sound pressure levels of the signal output from the microphone 20b and the signal output from the microphone 20a become equal (step S1001). At this time, the gain control unit 30 may apply a gain to at least one of the signals so that the sound pressure levels of the signals output from the M microphones become equal.
The gain control unit 30 may perform the process of step S1001 when the speaker 10a is located within 3 times the interval between the microphone 20a and the microphone 20b from the microphone closest to the speaker 10a. Alternatively, the gain control unit 30 may perform the process of step S1001 when the speaker 10a is located at a position where the sound pressure level of its output sound entering the microphone closest to it is 4/3 times or more the sound pressure level of its output sound entering the microphone 20b. This is because, unless there is at least a certain difference between the sound pressure level of the output sound from the speaker 10a input to the microphone 20a and that input to the microphone 20b, the signal processing method of the present disclosure cannot obtain a useful effect. When the speaker 10a is located at a distance such that the sound pressure level of its output sound entering the nearest microphone is 4/3 times or more that entering the microphone 20b, a considerable suppression effect on the sound output from the speaker 10a can be obtained while the influence on the output sounds from other sound sources is kept small.
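The two conditions under which step S1001 is carried out can be expressed as a simple check (a hypothetical helper; under the 1/r law and an in-line geometry the two tests coincide):

```python
def near_source_condition(d_near, d_far, longest_interval):
    """True when the source counts as 'near': it lies within 3 times
    the longest inter-microphone interval of the nearest microphone,
    and its level at that microphone is at least 4/3 times its level
    at the farthest microphone (levels taken as 1/r of distance)."""
    within_3x = d_near <= 3.0 * longest_interval
    level_ratio_ok = (d_far / d_near) >= 4.0 / 3.0
    return within_3x and level_ratio_ok
```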
The signal processing device of the present disclosure may further include a calculation unit that calculates the distance between the microphone 20a or the microphone 20b and the speaker 10a. The calculation unit may be, for example, a laser distance sensor. Alternatively, in a calibration stage, the calculation unit may calculate the distance between the microphone 20a or the microphone 20b and the speaker 10a based on the sound pressure levels of the sound output from the speaker 10a and input to the microphone 20a and the microphone 20b.
Here, the gain control unit 30 may calculate the value of the gain using the data on the positional relationship of the microphone 20a, the microphone 20b, and the speaker 10a stored in the distance information unit 60. The gain control unit 30 may select an appropriate value from predetermined gain values.
When the signal is processed in the frequency domain, the time-domain signals are converted into frequency-domain signals by the frequency converter 70a and the frequency converter 70b before step S1001 is performed. For the conversion from the time domain to the frequency domain, the Fourier transform, the discrete Fourier transform, or the fast Fourier transform may be used.
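As an illustration of this conversion step, the following sketch uses NumPy's real FFT in the role of the frequency converters 70a/70b and the inverse FFT in the role of the later time signal conversion unit 80; the sample rate and test tone are assumptions, not values from the disclosure:

```python
import numpy as np

fs = 16000                          # assumed sample rate in Hz
t = np.arange(fs) / fs
m1 = np.sin(2 * np.pi * 440 * t)    # stand-in for the microphone 20a signal

# Time domain -> frequency domain (role of frequency converters 70a/70b)
M1 = np.fft.rfft(m1)

# Frequency domain -> time domain (role of time signal conversion unit 80)
m1_back = np.fft.irfft(M1, n=len(m1))

round_trip_error = float(np.max(np.abs(m1 - m1_back)))
```

The transform pair is lossless up to floating-point rounding, so processing in the frequency domain and converting back does not by itself alter the signal.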
Then, the delay unit 40 delays the signal output from the microphone 20a so that the temporal difference between the signal output from the microphone 20a and the signal output from the microphone 20b is eliminated (step S1002). That is, the delay unit 40 delays the signal output from the microphone 20a so that its phase matches the phase of the signal output from the microphone 20b. More generally, the delay unit 40 may delay at least one of the M signals to eliminate the temporal differences among the M signals output from the M microphones.
Next, the filter application unit 50 applies, to the signals output from the gain control unit 30 and the delay unit 40, a filter that suppresses the signal representing the sound output from the speaker 10a located within a predetermined distance from the microphone 20a (step S1003). Here, the applied filter may perform a process of subtracting the signal output from the delay unit 40 from the signal output from the gain control unit 30; that is, it may add the signal output from the delay unit 40 multiplied by -1 to the signal output from the gain control unit 30. The filter application unit 50 may apply the filter to the obtained signals to suppress a signal representing the sound of a sound source within the predetermined distance. Here, the number of signals may be M.
In the case of signal processing in the frequency domain, after step S1003 is performed, the time signal conversion unit 80 converts the frequency-domain signal into a time-domain signal. For the conversion from the frequency domain to the time domain, the inverse Fourier transform may be used.
Here, the sequence of the signal processing method ends.
The signal processing in steps S1000 to S1003 shown in fig. 4 may be performed in the time domain or in the frequency domain.
Further, step S1001 and step S1002 may be executed in reverse order.
In addition, the signal processing method according to an embodiment of the present disclosure performs processing corresponding to BF (Beam Former) processing by using the gain control unit 30 and the delay unit 40. Therefore, the signal processing method according to an aspect of the present disclosure can suppress output sound from a sound source close to the microphone even when sound is output simultaneously from both the sound source close to the microphone and the sound source far from the microphone.
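The gain-delay-subtract chain of steps S1001 to S1003 can be sketched for a simulated near source as follows; modelling the two microphone signals as a half-amplitude copy offset by an integer-sample delay is an idealising assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
n, tau = 1000, 3             # tau: arrival-time difference in samples (assumed)
s = rng.standard_normal(n)   # white noise emitted by the nearby speaker 10a

# Microphone 20a is modelled as twice as close as microphone 20b, so its
# signal has twice the amplitude and arrives tau samples earlier.
m1 = np.concatenate([s, np.zeros(tau)])        # output of microphone 20a
m2 = 0.5 * np.concatenate([np.zeros(tau), s])  # output of microphone 20b

# Step S1001: gain control equalises the sound pressure levels.
G = 2.0
m2_g = G * m2

# Step S1002: delay microphone 20a's signal so the phases match.
m1_d = np.concatenate([np.zeros(tau), s])

# Step S1003: the filter subtracts the aligned signals, which suppresses
# the near source.
y1 = m2_g - m1_d
residual = float(np.max(np.abs(y1)))
```

In this idealised model the residual is exactly zero; with real microphones the near source would only be strongly attenuated, as the measurement in Fig. 5 shows.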
Frequency characteristics after signal processing for a nearby sound source and a distant sound source
Fig. 5 is a graph showing the frequency characteristics after signal processing for a nearby sound source in the embodiment. Line 100 represents the signal output from the microphone 20a when sound is output from the speaker 10a located in the vicinity of the microphone 20a. Line 300 represents the signal output from the filter application unit 50 in the same situation. White noise is used as the sound output from the speaker 10a.
When sound is output from the speaker 10a located in the vicinity of the microphone 20a, let the signal output from the microphone 20a be m1(t) and the signal output from the microphone 20b be m2(t). Here, t represents a discretized time sample. The delay time τ is the difference between the time when the output sound from the speaker 10a reaches the microphone 20a and the time when it reaches the microphone 20b. The gain given to m2(t) by the gain control unit 30 is denoted by G, and the signal output from the filter application unit 50 is denoted by y1(t). The filter application unit 50 performs the processing y1(t) = G × m2(t) - m1(t) × H(τ), where H(τ) represents a filter that delays the signal by time τ.
As described above, the gain G is determined by the ratio of the distance between the speaker 10a and the microphone 20a and the distance between the speaker 10a and the microphone 20 b. Specifically, the gain G is determined such that the sound pressure level of the output sound from the speaker 10a indicated by the signal output from the microphone 20b and the sound pressure level of the output sound from the speaker 10a indicated by the signal output from the microphone 20a are at the same level.
Further, the phase of the signal indicating the output sound from the speaker 10a output from the microphone 20b and the phase of the signal indicating the output sound from the speaker 10a output from the microphone 20a match each other through the processing of the delay unit 40.
Thus, the signal y1(t) results from subtracting signals whose sound pressure levels and phases match, so the sound pressure level of the signal y1(t) becomes lower than that of m1(t). As shown in Fig. 5, line 100 shows -40 dB while line 300 shows -80 dB. That is, the output sound from the speaker 10a is suppressed by the signal processing method of the present disclosure.
For example, consider a case where the distance between the speaker 10a and the microphone 20a is 1 cm, the distance between the microphone 20a and the microphone 20b is 1 cm, and the distance between the speaker 10a and the microphone 20b is 2 cm. In this case, the gain G by which the signal m2(t) is multiplied is 2, according to the positional relationship of the speaker 10a, the microphone 20a, and the microphone 20b. In addition, the sound pressure level of the signal m2(t) is 1/2 of that of the signal m1(t) according to the same positional relationship. Thus, when the signal m2(t) is multiplied by the gain 2, its sound pressure level becomes substantially equal to that of the signal m1(t). Further, the delay unit 40 gives the signal m1(t) a delay time equal to the difference between the time when the output sound from the speaker 10a reaches the microphone 20a and the time when it reaches the microphone 20b, so that the phases of the signal m1(t) and the signal m2(t) match. Thus, by subtracting the phase-matched signal m1(t) from the signal m2(t) to which the gain 2 has been applied, the sound pressure of the signal representing the output sound from the speaker 10a approaches 0. In this way, the signal processing method according to one aspect of the present disclosure can suppress the signal representing the output sound from the speaker 10a.
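The arithmetic of this example can be restated in a few lines, using normalised sound pressures and the assumed free-field 1/r attenuation:

```python
d1, d2 = 1.0, 2.0    # cm: speaker 10a to microphone 20a and to microphone 20b
G = d2 / d1          # gain that equalises the two sound pressure levels -> 2
level_m1 = 1.0       # sound pressure of m1(t), normalised
level_m2 = d1 / d2   # 1/r attenuation: m2(t) arrives at half the pressure
residual_level = G * level_m2 - level_m1   # after phase alignment
```

The residual level is 0, matching the cancellation described above.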
Fig. 6 is a graph showing the frequency characteristics after signal processing for a distant sound source in the embodiment. Line 200 represents the signal output from the microphone 20a when speaker 1, located far from the microphone 20a, makes a sound. Line 400 represents the signal output from the filter application unit 50 in the same situation. White noise is used as the sound generated by speaker 1.
When speaker 1 located far from the microphone 20a makes a sound, let the signal output from the microphone 20a be m'1(t) and the signal output from the microphone 20b be m'2(t). Here, t represents a discretized time sample. The delay time τ is the difference between the time when the sound generated by speaker 1 reaches the microphone 20a and the time when it reaches the microphone 20b. The gain given to m'2(t) by the gain control unit 30 is G, and the signal output from the filter application unit 50 is y2(t). The filter application unit 50 performs the processing y2(t) = G × m'2(t) - m'1(t) × H(τ), where H(τ) represents a filter that delays the signal by time τ.
The gain G is determined as described above. Further, the phase of the signal representing the sound generated by speaker 1 input to the microphone 20b and the phase of the signal representing the sound generated by speaker 1 input to the microphone 20a match each other through the processing of the delay unit 40.
Thus, the sound pressure level of the signal y2(t) is substantially equal to that of m'1(t). This is because m'1(t) and m'2(t) have almost no difference in sound pressure level: the difference between the distance from speaker 1 to the microphone 20a and the distance from speaker 1 to the microphone 20b occupies only a small proportion of either distance.
Therefore, the sound pressure level of the signal y2(t) does not approach 0. For example, the sound pressure level of the signal y2(t) is approximately equal to that of the signal m'1(t). As shown in Fig. 6, line 200 shows -40 dB, and likewise line 400 shows -40 dB. That is, the sound uttered by speaker 1 is not suppressed by the signal processing method of the present disclosure.
For example, consider a case where the distance between speaker 1 and the microphone 20a is 50 cm, the distance between the microphone 20a and the microphone 20b is 1 cm, and the distance between speaker 1 and the microphone 20b is 51 cm. At this time, the gain G by which the signal m'2(t) is multiplied is still 2, since it is determined by the positional relationship of the speaker 10a, the microphone 20a, and the microphone 20b. In addition, the sound pressure level of the signal m'2(t) is 50/51 of that of the signal m'1(t) according to the positional relationship of speaker 1, the microphone 20a, and the microphone 20b. Thus, when the signal m'2(t) is multiplied by the gain 2, its sound pressure level becomes about 2 times that of the signal m'1(t). Further, the delay unit 40 gives the signal m'1(t) a delay time equal to the difference between the time when the sound from speaker 1 reaches the microphone 20a and the time when it reaches the microphone 20b, so that the phases of the signal m'1(t) and the signal m'2(t) match. Thus, even when the phase-matched signal m'1(t) is subtracted from the signal m'2(t) to which the gain 2 has been applied, the sound pressure of the signal representing the sound generated by speaker 1 remains close to the sound pressure of the signal m'1(t) or the signal m'2(t). In this way, the signal processing method according to an aspect of the present disclosure does not suppress a signal representing a sound generated by speaker 1.
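The corresponding arithmetic for the distant speaker 1, with the gain still fixed at 2 by the near-speaker geometry, shows why no suppression occurs (normalised pressures, 1/r attenuation assumed):

```python
d1, d2 = 50.0, 51.0   # cm: speaker 1 to microphone 20a and to microphone 20b
G = 2.0               # gain fixed by the geometry of the nearby speaker 10a
level_m1p = 1.0       # sound pressure of m'1(t), normalised
level_m2p = d1 / d2   # about 50/51: almost no level difference
residual_level = G * level_m2p - level_m1p   # comparable to m'1(t) itself
```

The residual is about 0.96 of the original level, so the distant speaker's voice passes through essentially unattenuated.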
That is, the signal processing method according to an aspect of the present disclosure can form a blind spot in a range within a specific distance in sound collection using a plurality of microphones.
Fig. 7 is a diagram showing a specific example of an emergency notification system (e-call) to which the signal processing method of the present disclosure in the embodiment is applied. The emergency notification system 90 includes a speaker 91, a microphone 92a, a microphone 92b, and a button 93. For example, the emergency notification system 90 may be a box-shaped module such as a cube. However, the shape of the emergency notification system 90 is not limited to this; it may also have a shape such as a rectangular box, another polyhedron, a cylinder, or a sphere.
The user can press the button 93 to report an emergency. The user can then speak to an operator or the like through the microphones 92a and 92b, and hear the voice of the operator or the like from the speaker 91.
Since the emergency notification system 90 is a relatively small-sized module, the speaker 91 is also small-sized. The speaker 91, the microphone 92a, and the microphone 92b are disposed close to each other. As shown in fig. 7, the positional relationship of the speaker 91, the microphone 92a, and the microphone 92b in the emergency notification system 90 is similar to the positional relationship of the speaker 10a, the microphone 20a, and the microphone 20b in the signal processing apparatus of an aspect of the present disclosure.
Fig. 8 is a diagram showing a case where an emergency notification system (e-call) to which the signal processing method of the present disclosure in the embodiment is applied is installed in a vehicle interior. In fig. 8, an emergency notification system 90 is mounted on the ceiling between the driver's seat and the passenger seat in the vehicle interior. Specifically, the emergency notification system 90 is installed in the vicinity of a place where a cabin light or the like is installed between a driver's seat and a front passenger seat in the vehicle cabin. The place where the emergency notification system 90 is installed in the vehicle interior is not limited to the ceiling. The emergency notification system 90 may be mounted on an instrument panel or the like in the vehicle interior.
Next, the positional relationship among the speaker 91, the microphone 92a, the microphone 92b, and the driver 94 when the emergency notification system 90 is actually installed in the vehicle interior will be described. In the emergency notification system 90, the microphone 92a and the microphone 92b are provided close to each other. The speaker 91 is also provided close to the microphones 92a and 92b. The distances among the speaker 91, the microphone 92a, and the microphone 92b are several millimetres to several centimetres. On the other hand, the driver 94, who speaks toward the emergency notification system 90, is far from the microphones 92a and 92b. Specifically, the driver 94 is at a distance of several tens of centimetres from the microphones 92a and 92b.
As shown in fig. 8, the positional relationship of the speaker 91, the microphone 92a, the microphone 92b, and the driver 94 in the emergency notification system 90 is similar to the positional relationship of the speaker 10a, the microphone 20a, the microphone 20b, and speaker 1 in the signal processing apparatus of an embodiment of the present disclosure.
[ supplement ]
The configuration of the signal processing device 2 shown in fig. 2 is a configuration in which the microphones are two microphones 20a and 20b, but is not limited to this. The number of microphones may be M (M is an integer of 2 or more). Here, the M microphones include at least two microphones having different distances from the speaker 10 a.
When M microphones are provided, two of the M microphones may be selected, and the configurations shown in fig. 2 to 4 may be applied. A plurality of sets of two microphones selected from the M microphones may be selected, and the configurations shown in fig. 2 to 4 may be applied to each of the plurality of sets.
In the case of M microphones, the microphone located closest to the speaker 10a is connected to a delay unit. Each of the (M-1) other microphones is connected to a delay unit and a gain control unit. The filter application unit then applies the filtering process to the signals output from each of the (M-1) microphones other than the microphone closest to the speaker 10a after they have passed through the (M-1) delay units and the (M-1) gain control units, and to the signal output from the microphone located closest to the speaker 10a after it has passed through its delay unit. The filtered signal is input to the output destination 10b and output from the output destination 10b.
The order of the delay units and the gain control units for the (M-1) microphones other than the microphone closest to the speaker 10a may be reversed; that is, their signals may pass through the (M-1) gain control units first and the (M-1) delay units second.
For example, the filtering process performed here may be as follows. First, the signals output from each of the (M-1) microphones other than the microphone closest to the speaker 10a are each multiplied by an appropriate gain value so that their sound pressure levels become equal to the sound pressure level of the signal output from the microphone located closest to the speaker 10a and input to the delay unit. Next, the filtering process may subtract the signals that were output from each of the (M-1) microphones other than the microphone closest to the speaker 10a and then passed through the (M-1) delay units and the (M-1) gain control units from the signal that was output from the microphone located closest to the speaker 10a and then passed through its delay unit. Conversely, the filtering process may subtract the signal that was output from the microphone located closest to the speaker 10a and then passed through its delay unit from the signals that were output from each of the (M-1) microphones other than the microphone closest to the speaker 10a and then passed through the (M-1) delay units and the (M-1) gain control units.
The filtering process performed here may also be the following process: the signals output from each of the (M-1) microphones other than the microphone closest to the speaker 10a, after passing through the (M-1) delay units and the (M-1) gain control units, are subtracted from a signal obtained by multiplying by M the signal output from the microphone closest to the speaker 10a and input to the delay unit.
The filtering process performed here may be another process as long as it is a process capable of suppressing a signal indicating output sound from the speaker 10a acquired by the M microphones using signals acquired by the M microphones.
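One possible reading of the M-microphone filtering above can be sketched as follows; the per-microphone alignment and the averaging of the (M-1) differences are combination rules assumed for illustration, not the disclosure's exact process:

```python
import numpy as np

def suppress_near_source(m_signals, delays, gains):
    """Subtract each aligned, gain-equalised signal of the (M-1) far
    microphones from the signal of the microphone closest to speaker 10a,
    then average the differences (averaging is an assumed combination rule).
    """
    closest = m_signals[0]           # microphone nearest to speaker 10a
    diffs = []
    for sig, d, g in zip(m_signals[1:], delays, gains):
        # Apply a d-sample delay to remove the arrival-time difference
        # (d = 0 when the arrivals already coincide), then equalise the
        # sound pressure level with gain g.
        aligned = np.concatenate([np.zeros(d), sig])[: len(closest)]
        diffs.append(closest - g * aligned)
    return np.mean(diffs, axis=0)
```

For a near source whose copies at the microphones differ only in level, each difference vanishes, so the near source is suppressed for any M.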
When the signal processing device 2 is configured by M microphones, the above configuration may be provided for each of N (N is an integer of 2 or more) sound sources (speakers, etc.).
The configuration of the signal processing device 3 shown in fig. 3 uses two microphones 20a and 20b, but is not limited to this. The number of microphones may be M (M is an integer of 2 or more). When M microphones are provided, the microphone located closest to the speaker 10a is connected to a frequency conversion unit and a delay unit. Each of the (M-1) other microphones is connected to a frequency conversion unit, a delay unit, and a gain control unit. The filter application unit then applies the filtering process to the signals output from each of the (M-1) microphones other than the microphone closest to the speaker 10a after they have passed, via the frequency conversion units, through the (M-1) delay units and the (M-1) gain control units, and to the signal output from the microphone located closest to the speaker 10a after it has passed, via its frequency conversion unit, through its delay unit. The filtered signal is input to the time signal conversion unit, output from the time signal conversion unit, input to the output destination 10b, and output from the output destination 10b.
Likewise, for the (M-1) microphones other than the microphone closest to the speaker 10a, the order of the delay units and the gain control units after the frequency conversion units may be reversed; that is, their signals may pass through the (M-1) gain control units first and the (M-1) delay units second.
For example, the filtering process performed here may be as follows. First, the signals output from each of the (M-1) microphones other than the microphone closest to the speaker 10a and passed through the frequency conversion units are each multiplied by an appropriate gain value so that their sound pressure levels become equal to the sound pressure level of the signal output from the microphone located closest to the speaker 10a and input to the delay unit via the frequency conversion unit. Next, the filtering process may subtract the signals that were output from each of the (M-1) microphones other than the microphone closest to the speaker 10a and then passed, via the frequency conversion units, through the (M-1) delay units and the (M-1) gain control units from the signal that was output from the microphone located closest to the speaker 10a and then passed, via the frequency conversion unit, through its delay unit. Conversely, the filtering process may subtract the signal that was output from the microphone located closest to the speaker 10a and then passed, via the frequency conversion unit, through its delay unit from the signals that were output from each of the (M-1) microphones other than the microphone closest to the speaker 10a and then passed, via the frequency conversion units, through the (M-1) delay units and the (M-1) gain control units.
The filtering process performed here may also be the following process: the signals output from each of the (M-1) microphones other than the microphone closest to the speaker 10a, after passing, via the frequency conversion units, through the (M-1) delay units and the (M-1) gain control units, are subtracted from a signal obtained by multiplying by M the signal output from the microphone closest to the speaker 10a, input to the delay unit via the frequency conversion unit, and output from the delay unit.
The filtering process performed here may be another process as long as it can suppress a signal indicating the output sound from the speaker 10a acquired by the M microphones using the signals acquired by the M microphones.
When the signal processing device 3 is configured by M microphones, the above configuration may be provided for each of N (N is an integer of 2 or more) sound sources (speakers, etc.).
In the present embodiment, each component may be configured by dedicated hardware, or may be realized by executing a software program suitable for each component. Each component may be realized by a program execution unit such as a CPU or a processor reading out and executing a software program recorded in a recording medium such as a hard disk or a semiconductor memory.
Specifically, each of the signal processing devices 2 and 3 may include a processing circuit and a storage device that is electrically connected to the processing circuit and accessible from the processing circuit.
The processing circuit includes at least one of dedicated hardware and a program execution unit, and executes processing using the storage device. In addition, in the case where the processing circuit includes a program execution unit, the storage device stores a software program executed by the program execution unit.
Here, software for realizing the signal processing method of the present embodiment is a program as follows.
That is, the program causes a computer to execute a signal processing method including: a gain control step of multiplying at least one of M signals output from M (M is an integer equal to or greater than 2) microphones by a gain so that sound pressure levels of the M signals indicating sounds arriving at the M microphones from a sound source located within a predetermined distance from the M microphones are equal to each other; a delaying step of delaying at least one of the M signals to eliminate a difference in time of the M signals generated by a difference between arrival times of sounds arriving at the M microphones from the sound source; and a filter application step of generating a signal in which a sound output from the sound source located within the predetermined distance from a microphone located closest to the sound source among the M microphones is suppressed by applying a filter to the M signals obtained in the gain control step and the delay step.
Each component may be a circuit as described above. These circuits may be integrated into one circuit or may be different circuits. Each component may be implemented by a general-purpose processor, or may be implemented by a dedicated processor.
In addition, the processing executed by a specific component may be executed by another component.
While the signal processing method, the signal processing device 2, and the signal processing device 3 have been described above based on the embodiments, they are not limited to these embodiments. Forms obtained by applying various modifications conceived by those skilled in the art, or by combining components of different embodiments, may also be included within the scope of the present disclosure without departing from its spirit.
Industrial applicability of the invention
The present disclosure can be applied to, for example, an emergency call system (e-call), a smartphone, a TV conference system, or a microphone and a speaker for a conference.

Claims (7)

1. A signal processing method, comprising:
a gain control step of multiplying at least one of M signals output from M microphones by a gain so that sound pressure levels of the M signals indicating sounds arriving at the M microphones from a sound source located within a predetermined distance from the M microphones are equal, where M is an integer of 2 or more;
a delaying step of delaying at least one of the M signals to cancel a difference in time of the M signals generated by a difference between arrival times of respective sounds arriving at the M microphones from the sound source; and
a filter application step of applying a filter to the M signals obtained in the gain control step and the delay step to generate a signal in which a sound output from the sound source located within the predetermined distance is suppressed,
the sound source is located within the prescribed distance from a microphone, among the M microphones, located closest to the sound source,
the predetermined distance is a distance at which the sound pressure level of the sound reaching, from the sound source, the microphone located closest to the sound source is 4/3 times or more the sound pressure level of the sound reaching the microphone located farthest from the sound source among the M microphones.
2. The signal processing method according to claim 1,
the prescribed distance is a distance within 3 times the longest of the spatial intervals between the M microphones.
3. The signal processing method according to claim 1 or 2,
further comprising the step of calculating the spatial position at which the sound source is present.
4. The signal processing method according to claim 1 or 2,
the gain control step, the delay step, and the filter application step are performed in the frequency domain.
5. The signal processing method according to claim 1 or 2,
the gain control step, the delay step and the filter application step are performed in the time domain.
6. A signal processing device is characterized by comprising:
a gain control unit that multiplies at least one of M signals output from M microphones, where M is an integer equal to or greater than 2, by a gain so that sound pressure levels of the M signals indicating sounds arriving at the M microphones from a sound source located within a predetermined distance from the M microphones are equal to each other;
a delay section that delays at least one of the M signals to cancel a difference in time of the M signals generated by a difference between arrival times of respective sounds arriving at the M microphones from the sound source; and
a filter application unit configured to apply a filter to the M signals obtained from the gain control unit and the delay unit to generate a signal in which a sound output from the sound source located within the predetermined distance is suppressed,
the sound source is located within the prescribed distance from a microphone located closest to the sound source among the M microphones,
the predetermined distance is a distance at which the sound pressure level of the sound reaching, from the sound source, the microphone located closest to the sound source is 4/3 times or more the sound pressure level of the sound reaching the microphone located farthest from the sound source among the M microphones.
7. A computer-readable recording medium having recorded thereon a program for causing a computer to execute the signal processing method according to claim 1 or 2.
CN201910787652.7A 2018-08-29 2019-08-26 Signal processing method, signal processing apparatus, and recording medium Active CN110876097B (en)

Applications Claiming Priority (4)

- US 62/724,234 (US201862724234P), filed 2018-08-29
- JP 2019-078676 (published as JP 2020036304 A), filed 2019-04-17 - Signal processing method and signal processor

Publications (2)

- CN110876097A, published 2020-03-10
- CN110876097B, granted 2022-07-26

Family

ID=69668839

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910787652.7A Active CN110876097B (en) 2018-08-29 2019-08-26 Signal processing method, signal processing apparatus, and recording medium

Country Status (2)

Country Link
JP (1) JP2020036304A (en)
CN (1) CN110876097B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112153547A (en) * 2020-09-03 2020-12-29 Haier Uplus Intelligent Technology (Beijing) Co., Ltd. Audio signal correction method, audio signal correction device, storage medium and electronic device

Citations (4)

Publication number Priority date Publication date Assignee Title
EP1640971A1 (en) * 2004-09-23 2006-03-29 Harman Becker Automotive Systems GmbH Multi-channel adaptive speech signal processing with noise reduction
CN101543089A (en) * 2006-11-22 2009-09-23 Funai Electric Advanced Applied Technology Research Institute Inc. Voice input device, its manufacturing method and information processing system
CN104041073A (en) * 2011-12-06 2014-09-10 Apple Inc. Near-field null and beamforming
CN104717587A (en) * 2013-12-13 2015-06-17 GN Netcom A/S Apparatus And A Method For Audio Signal Processing

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
US7577262B2 (en) * 2002-11-18 2009-08-18 Panasonic Corporation Microphone device and audio player
JP4990981B2 (en) * 2007-10-04 2012-08-01 パナソニック株式会社 Noise extraction device using a microphone
US9183844B2 (en) * 2012-05-22 2015-11-10 Harris Corporation Near-field noise cancellation


Also Published As

Publication number Publication date
CN110876097A (en) 2020-03-10
JP2020036304A (en) 2020-03-05

Similar Documents

Publication Publication Date Title
US9721583B2 (en) Integrated sensor-array processor
KR101449433B1 (en) Noise cancelling method and apparatus from the sound signal through the microphone
US8189810B2 (en) System for processing microphone signals to provide an output signal with reduced interference
JP4664116B2 (en) Active noise suppression device
KR101798120B1 (en) Apparatus and method for improving the perceived quality of sound reproduction by combining active noise cancellation and perceptual noise compensation
CN104715750B (en) Sound system including engine sound synthesizer
CN111261138B (en) Noise reduction system determination method and device, and noise processing method and device
US8462962B2 (en) Sound processor, sound processing method and recording medium storing sound processing program
EP3627851B1 (en) Signal processing method and signal processing device
JP4973655B2 (en) Adaptive array control device, method, program, and adaptive array processing device, method, program using the same
CN111128210A (en) Audio signal processing with acoustic echo cancellation
JP2004187283A (en) Microphone unit and reproducing apparatus
JP7411576B2 (en) Proximity compensation system for remote microphone technology
US20040258255A1 (en) Post-processing scheme for adaptive directional microphone system with noise/interference suppression
JP6645322B2 (en) Noise suppression device, speech recognition device, noise suppression method, and noise suppression program
CN110876097B (en) Signal processing method, signal processing apparatus, and recording medium
JP7124506B2 (en) Sound collector, method and program
JP2005514668A (en) Speech enhancement system with a spectral power ratio dependent processor
CN110689900A (en) Signal enhancement method and device, computer readable storage medium and electronic equipment
CN109308907B (en) single channel noise reduction
US11765504B2 (en) Input signal decorrelation
JP2015070291A (en) Sound collection/emission device, sound source separation unit and sound source separation program
US11122366B2 (en) Method and apparatus for attenuation of audio howling
JP7147849B2 (en) Sound collector, method and program
JP4893317B2 (en) Audio signal processing apparatus, audio signal processing method, and audio signal processing program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant