GB2453118A - Generating a speech audio signal from multiple microphones with suppressed wind noise


Info

Publication number: GB2453118A
Application number: GB0718683A
Authority: GB (United Kingdom)
Other versions: GB0718683D0, GB2453118B
Inventors: Holly Francois, David Pearce
Original assignee: Motorola Inc
Current assignee: Motorola Solutions Inc
Legal status: Granted; Active
Related application: PCT/US2008/075701 (WO2009042385A1)

Classifications

    • H04R 3/005: Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • G10L 19/00: Speech or audio signal analysis-synthesis techniques for redundancy reduction; coding or decoding of speech or audio signals
    • G10L 21/02: Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0208: Noise filtering
    • G10L 21/0216: Noise filtering characterised by the method used for estimating noise
    • G10L 2021/02161: Number of inputs available containing the signal or the noise to be suppressed
    • G10L 2021/02165: Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
    • H04S 1/00: Two-channel systems
    • H04R 2410/07: Mechanical or electrical reduction of wind noise generated by wind passing a microphone
    • H04R 2430/03: Synergistic effects of band splitting and sub-band processing
    • H04R 2499/11: Transducers incorporated or for use in hand-held devices, e.g. mobile phones


Abstract

An apparatus comprises input processors (105, 107) for receiving audio signals from at least a first microphone (101) and a second microphone (103). FFT processors (109, 111) generate first and second frequency domain subband signals from the audio signals. A combine processor (113) then generates a combined subband signal from the frequency domain subband signals, and a synthesis processor (117) generates an output audio signal in response to the combined subband signal. The combine processor (113) generates the combined subband signal by selecting, for each subband, the subband magnitude of the combined subband signal as the lower of the subband magnitudes of the first and second frequency domain subband signals. The phase of all subbands may be selected as the phase from one of the first and second frequency domain subband signals. The invention may in particular provide efficient suppression of wind noise in multi-microphone systems.

Description

METHOD AND APPARATUS FOR GENERATING AN AUDIO SIGNAL FROM
MULTIPLE MICROPHONES
Field of the invention
The invention relates to a method and apparatus for generating an audio signal from multiple microphones and in particular, but not exclusively, to generating a speech audio signal with suppressed wind noise.
Background of the Invention
Capture of audio signals by microphones increasingly takes place in diverse, natural environments rather than in studio environments where the audio environment is closely controlled. For example, mobile phones are often used in noisy outdoor environments. A particularly significant noise contributor in such cases is wind noise, which can cause severe problems when a device is used outside. Wind noise can be extremely annoying to a listener. For speech communication, it can reduce intelligibility at low wind levels, and at high wind levels it can make the speech completely unintelligible.
In order to provide an improved speech signal with reduced background noise some devices use more than one microphone.
For example, some speech devices have more than one microphone thereby allowing some directional audio beamforming towards the speech source to be implemented.
However, it has been found that directional filtering amplifies the effects of wind noise which accordingly becomes an even more significant problem for multi-microphone systems.
Wind noise is predominantly caused by turbulence at the microphone ports and therefore has a different characteristic to acoustic background noise. This fact has been exploited by different processing algorithms to attempt to detect and suppress the wind noise. However, existing algorithms tend to be suboptimal and specifically tend to be inefficient, complex, resource demanding, impractical and/or to provide suboptimal performance.
Hence, an improved system would be advantageous and in particular a system allowing generation of an improved quality audio signal from microphones, increased flexibility, reduced complexity, facilitated implementation, improved suppression of wind noise and/or improved performance would be advantageous.
Summary of the Invention
Accordingly, the invention seeks preferably to mitigate, alleviate or eliminate one or more of the above mentioned disadvantages singly or in any combination.
According to an aspect of the invention there is provided an apparatus for generating an audio signal, the apparatus comprising: means for receiving a first audio signal from a first microphone; means for receiving a second audio signal from a second microphone; first frequency means for generating a first frequency domain subband signal from the first audio signal; second frequency means for generating a second frequency domain subband signal from the second audio signal; combining means for generating a combined frequency domain subband signal from the first frequency domain subband signal and the second frequency domain subband signal; generating means for generating the audio signal in response to the combined frequency domain subband signal; wherein the combining means is arranged to select, for each subband of the combined frequency domain subband signal, a subband magnitude for the combined frequency domain subband signal as a lowest magnitude of a subband magnitude for the first frequency domain subband signal and a subband magnitude for the second frequency domain subband signal.
The invention may provide improved performance and may in particular allow an improved quality audio signal to be generated from at least two microphones and/or may facilitate implementation and/or reduce complexity and/or resource demand. The invention may in particular allow an effective suppression of wind noise without requiring high complexity and resource demanding suppression algorithms to be executed.
The subbands of the frequency domain subband signals may for example be Fourier transform subbands generated e.g. by applying a Discrete Fourier Transform (DFT) or specifically a Fast Fourier Transform (FFT) to the time domain signal. As another example the subbands may be QMF (Quadrature Mirror Filter) subbands resulting from filtering of the time domain signals using a QMF filter bank. The subbands may be of equal bandwidth or the bandwidth of the individual subbands may vary for different subbands. For example, the bandwidth of each subband may be selected to reflect the psycho-acoustic importance of frequencies within the subband.
The generating means may comprise means for synthesizing the audio signal by a conversion from the frequency domain to the time domain. Such conversion may include windowing, overlap-and-add techniques etc. The processing may be performed in individual time intervals. Specifically, the audio signals from the microphones may be divided into time frames with each time frame subsequently being individually processed to generate an output audio signal for the frame.
According to an optional feature of the invention, the combining means is arranged to select one of the first frequency domain subband signal and the second frequency domain subband signal as a phase reference frequency domain subband signal, and for each subband of the combined frequency domain subband signal to set a subband phase as a subband phase of a corresponding subband of the phase reference frequency domain subband signal.
In some embodiments, the combined frequency domain subband signal may thus comprise subband values with all phases selected from one microphone and magnitudes selected from both microphones depending on e.g. which magnitude is the lowest. E.g. the phases for all subbands may be set equal to the phases of the subbands of either the first frequency domain subband signal or of the second frequency domain subband signal. The feature may in particular allow improved audio quality and may e.g. reduce or eliminate perceptive artefacts introduced by the processing. A simple, low resource implementation may furthermore be achieved.
According to an aspect of the invention there is provided a method of generating an audio signal, the method comprising: receiving a first audio signal from a first microphone; receiving a second audio signal from a second microphone; generating a first frequency domain subband signal from the first audio signal; generating a second frequency domain subband signal from the second audio signal; generating a combined frequency domain subband signal from the first frequency domain subband signal and the second frequency domain subband signal; and generating the audio signal in response to the combined frequency domain subband signal; and wherein generating the combined frequency domain subband signal comprises for each subband of the combined frequency domain subband signal selecting a subband magnitude for the combined frequency domain subband signal as a lowest magnitude of a subband magnitude for the first frequency domain subband signal and a subband magnitude for the second frequency domain subband signal.
These and other aspects, features and advantages of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter.
Brief Description of the Drawings
Embodiments of the invention will be described, by way of example only, with reference to the drawings, in which:
FIG. 1 illustrates an example of a device for generating an audio signal from a plurality of microphones in accordance with some embodiments of the invention;
FIG. 2 illustrates an example of a method of generating an audio signal in accordance with some embodiments of the invention; and
FIG. 3 illustrates an example of a high pass filter suitable for the device of FIG. 1.
Detailed Description of Some Embodiments of the Invention
The following description focuses on embodiments of the invention applicable to suppression of wind noise in a multi-microphone apparatus, and in particular to suppression of wind noise in a mobile phone having two microphones.
However, it will be appreciated that the invention is not limited to this application but may be applied to many other systems and applications including for example systems using more than two microphones.
FIG. 1 illustrates an example of a device for generating an audio signal from a plurality of microphones in accordance with some embodiments of the invention.
In the described example, the device is a mobile phone with two microphones 101, 103 where the first microphone 101 is mounted at the front of the mobile phone and the second microphone 103 is mounted at the back of the mobile phone.
In the example, the two microphones 101, 103 are substantially omni-directional microphones which are not designed to have a specific beam-pattern or directional preference. The use of two microphones 101, 103 allows the use of other signal processing algorithms known in the art to reduce acoustic noise (e.g. background noise), as well as the use of algorithms to reduce wind noise.
The first microphone 101 is coupled to a first input processor 105 and the second microphone 103 is coupled to a second input processor 107. The input processors 105, 107 comprise functionality for amplifying and sampling the microphone signals as well as for equalising and compensating the received signals for imbalances between the two microphones, the processing paths and/or the audio channels from a speech source to each of the two microphones 101, 103 as will be described in more detail later. The output signals of the first input processor 105 and second input processor 107 are thus sampled balanced time domain audio signals.
The first input processor 105 is coupled to a first FFT processor 109 and the second input processor 107 is coupled to a second FFT processor 111. The FFT processors 109, 111 convert the time domain audio signals into a frequency subband domain. Specifically, the audio signals are divided into time frames and the samples of each frame are converted to frequency domain subband samples by applying a suitable Fast Fourier Transform (FFT) to the time domain samples. For example, each frame may have a duration of 10 msec and a 128 point FFT may be applied. The resulting frequency domain subband samples are complex values having both a magnitude value and a phase.
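As a non-authoritative sketch of this analysis stage (assuming numpy, an 8 kHz sample rate so that a 10 msec frame holds 80 samples, and a zero-padded 128 point FFT; the function name is illustrative):

```python
import numpy as np

def frames_to_subbands(x, frame_len=80, fft_len=128):
    """Split a time domain signal into frames and convert each frame to
    complex frequency domain subband samples with an FFT.
    frame_len=80 corresponds to 10 msec at an assumed 8 kHz sample rate."""
    n_frames = len(x) // frame_len
    window = np.hanning(frame_len)  # analysis window (optional per the text)
    subbands = np.empty((n_frames, fft_len // 2 + 1), dtype=complex)
    for i in range(n_frames):
        frame = x[i * frame_len:(i + 1) * frame_len] * window
        subbands[i] = np.fft.rfft(frame, n=fft_len)  # zero-padded 128 point FFT
    return subbands
```

Each row of the result holds the complex subband samples (magnitude and phase) for one frame.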
It will be appreciated that the conversion to the frequency domain may also include e.g. the application of a window function as will be well known to the person skilled in the art. Also, it will be appreciated that although the described example uses an FFT resulting in subbands with equal bandwidths, other transforms may be used. For example, the subbands may be generated using a Quadrature Mirror Filter bank to generate QMF subbands. In some embodiments, the subbands are generated with different bandwidths and especially the bandwidth of each individual subband may be selected depending on the psycho-acoustic relevance of frequencies in the subband.
Thus the output of the first FFT processor 109 is a first frequency domain subband signal representing the audio signal from the first microphone 101 and the output of the second FFT processor 111 is a second frequency domain subband signal representing the audio signal from the second microphone 103. The first FFT processor 109 and the second FFT processor 111 are coupled to a combine processor 113 which is arranged to generate a combined frequency domain subband signal from the first frequency domain subband signal and the second frequency domain subband signal.
In the example the combine processor 113 is arranged to generate the combined frequency domain subband signal by individually generating the subband values for each subband.
Specifically, for each subband, the combine processor 113 selects the subband magnitude for the combined frequency domain subband signal as the lowest magnitude of the subband magnitude for the first subband signal and the subband magnitude of the subband for the second frequency domain subband signal. Thus, each subband magnitude is selected as the lowest value of the subband magnitudes of the first and second frequency domain subband signals.
In some embodiments, the phase of the subband value may also be selected as the subband phase of the frequency domain subband signal having the lowest subband magnitude. However, in the described example, the phase of all subbands is selected from only one of the frequency domain subband signals. For example, all subband phases of the combined frequency domain subband signal can be set to the phase value of the corresponding subband of the first frequency domain subband signal. This approach may in many embodiments provide improved sound quality due to the preservation of phase coherence between the individual subbands of the combined frequency domain subband signal.
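The per-subband minimum-magnitude combination with a single phase reference might be sketched as follows (a minimal illustration assuming numpy arrays of complex subband values; names are illustrative):

```python
import numpy as np

def combine_subbands(s1, s2, phase_from_first=True):
    """For each subband, take the lower of the two magnitudes; take the
    phase of every subband from one reference signal, which preserves
    phase coherence across subbands."""
    magnitude = np.minimum(np.abs(s1), np.abs(s2))
    phase = np.angle(s1 if phase_from_first else s2)
    return magnitude * np.exp(1j * phase)
```

The alternative mentioned above, taking each phase from whichever signal has the lower magnitude, would replace the single `phase` array with a per-subband selection.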
In the example of FIG. 1, the output of the combine processor 113 is fed to an optional filter processor 115 which is arranged to perform an optional high pass filtering of the combined frequency domain subband signal. The high pass filtering of the combined frequency domain subband signal may in many scenarios improve sound quality; in particular, wind noise tends to have a strong low frequency component which can be effectively suppressed by high pass filtering. The high pass filtering can be efficiently implemented as only the combined frequency domain subband signal needs to be filtered. As the signal is a frequency domain signal, this may be achieved simply by multiplying the subband values by appropriate weights reflecting the desired frequency response of the filter.
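Such a frequency domain high pass filter could be sketched as a per-subband weighting (the cut-off bin, attenuation floor and ramp length below are illustrative assumptions, not values from the patent):

```python
import numpy as np

def highpass_subbands(s, cutoff_bin=8, floor=0.1, ramp=4):
    """Apply a high pass response by weighting subband values: bins below
    cutoff_bin are attenuated to `floor`, with a short linear ramp up to
    unity gain to avoid an abrupt transition."""
    weights = np.ones(len(s))
    weights[:cutoff_bin] = floor
    ramp_end = min(cutoff_bin + ramp, len(s))
    weights[cutoff_bin:ramp_end] = np.linspace(floor, 1.0, ramp_end - cutoff_bin)
    return s * weights
```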
The filter processor 115 is coupled to a synthesis processor 117 which generates a time domain audio signal from the combined frequency domain subband signal. Specifically, the synthesis processor 117 may comprise functionality for performing the inverse transform of the time domain to frequency domain transform that was applied to the audio signals from the microphones 101, 103. In the example, the synthesis processor 117 performs an inverse FFT (iFFT) on the combined frequency domain subband signal to generate the time domain audio signal. It will be appreciated that the synthesis processor 117 may comprise functionality for e.g. windowing and applying overlap and add techniques to ensure e.g. coherency between the different frames.
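A minimal synthesis sketch using an inverse FFT with windowed 50% overlap-add (the patent leaves the exact windowing and overlap scheme open; the frame length and hop size here are illustrative):

```python
import numpy as np

def synthesize(subband_frames, frame_len=128, hop=64):
    """Reconstruct a time domain signal from per-frame subband samples via
    an inverse FFT and windowed 50% overlap-add."""
    out = np.zeros(hop * (len(subband_frames) - 1) + frame_len)
    window = np.hanning(frame_len)
    for i, s in enumerate(subband_frames):
        frame = np.fft.irfft(s, n=frame_len) * window  # back to time domain
        out[i * hop:i * hop + frame_len] += frame      # overlap and add
    return out
```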
Thus, the apparatus of FIG. 1 generates an output audio signal from the audio signals captured by two microphones.
The output signal is generated by selecting, for each subband, the magnitude with the lowest wind noise (i.e. the lowest magnitude, as wind noise will generally increase the magnitude). As the wind noise will generally be incoherent between the two microphones 101, 103, selecting the lowest magnitude for each bin/subband will tend to minimise the total amount of wind noise in the combined frequency domain subband signal. Specifically, the device of FIG. 1 compares the signals from the two microphones and for each subband always selects the signal with the least wind noise. The wind turbulence generated at the two microphones is typically fairly independent, so at one extreme there are times when one microphone experiences turbulence and the other does not. As the noise increases both microphones tend to be affected, but typically there remain areas of the short term spectrum where the impact on one microphone is lower than on the other. Therefore, by selecting the speech level in each subband (and for each frame) from the spectrum of the microphone that is least affected, a much improved output speech signal can be obtained.
Furthermore, the algorithm is easy to implement and does not require high complexity or demand significant storage or computational resources.
The operation of the apparatus of FIG. 1 will be described in more detail with reference to FIG. 2 which illustrates an example of a method of generating an audio signal in accordance with some embodiments of the invention.
The method initiates in step 201 wherein the first input processor 105 receives the first audio signal from the first microphone 101 and the second input processor 107 receives the second audio signal from the second microphone 103. The first input processor 105 and second input processor 107 may specifically amplify and sample the received signal to generate sampled signals which are then divided into frames and fed to the first FFT processor 109 and second FFT processor 111.
Step 201 is followed by step 203 wherein the first FFT processor 109 and second FFT processor 111 convert the signals into the frequency domain using an FFT algorithm.
Thus, in step 203, the first frequency domain subband signal and the second frequency domain subband signal are generated.
Step 203 is followed by step 205 wherein the first and second frequency domain subband signals are equalised. The equalisation may be achieved by modifying one signal or by modifying both signals.
The equalisation between the signals seeks to compensate for, reduce or eliminate differences between the two microphones and the associated processing and audio paths. The equalisation may seek to compensate for differences inherent in the microphones and/or for differences in the audio channel from the speech source to each microphone and/or for differences in the two processing paths of the device (e.g. amplifier gain differences). The equalisation can comprise a gain equalisation and/or a phase equalisation, and in the specific example both gain and phase equalisation are performed.
Specifically, in the example, at least the first FFT processor 109 comprises functionality for providing a gain adjustment of the second frequency domain subband signal relative to the first frequency domain subband signal prior to generating the combined frequency domain subband signal.
In the example, the gain compensation is performed by scaling the subband values (e.g. the complex subband value or the magnitude of the subband value for a polar representation) of at least some of the subbands of the second frequency domain subband signal. Specifically, the subband value of the subbands may be multiplied by a gain compensation factor. If only gain compensation is performed, the gain compensation factor may be a single scalar value.
In some embodiments, the gain compensation may include a static gain compensation which seeks to compensate for static differences between the two audio capture means.
Specifically, a sensitivity indication may be determined to reflect the difference in sensitivity between the first and second microphones 101, 103. This sensitivity indication may for example be included as a calibration factor measured during manufacture of the device and the value can be stored in the device. The second FFT processor 111 can then scale all subband values of the frequency domain subband signal such that the effective sensitivity becomes the same for the two microphones.
It will be appreciated that in some embodiments a single gain factor may be determined to reflect the sensitivity difference between the first and second microphones 101, 103. However, in other embodiments, the frequency dependent sensitivities of the microphones 101, 103 may be measured and a different gain compensation value may be used for each subband.
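A static calibration of this kind could be sketched as follows (illustrative helper names; the reference and measured magnitudes would be obtained during manufacture):

```python
import numpy as np

def calibration_gain(ref_mag, meas_mag):
    """Per-subband gain mapping measured magnitudes onto reference ones.
    A scalar pair gives the single-factor case; arrays give the
    frequency dependent case."""
    return np.asarray(ref_mag, dtype=float) / np.asarray(meas_mag, dtype=float)

def apply_static_calibration(s2, gain):
    """Scale the second microphone's subband values so its effective
    sensitivity matches the first microphone's."""
    return s2 * gain
```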
In some embodiments, the gain compensation may alternatively or additionally include a dynamic gain compensation which is adapted to the current conditions. For example, as a user moves the mobile phone relative to his mouth, the audio channels between the user's mouth (the speech source) and the microphones 101, 103 will change. Thus, the signal level at each of the microphones 101, 103 will change dynamically.
In some embodiments, the device comprises functionality for dynamically estimating the signal level of the desired speech component for each of the microphones 101, 103 and for dynamically compensating for differences in these signal levels.
Specifically, the first FFT processor 109 and second FFT processor 111 comprise functionality for measuring a power measure for the speech component of the first and second frequency domain subband signals respectively. The gain compensation is then set to compensate for the difference between these measures. Thus, following the gain compensation, the speech components of the first and second frequency domain subband signals are approximately equal.
It will be appreciated that in different embodiments, different algorithms can be used to determine the power measures for the speech component and that any measure indicative of the signal level may be used, such as e.g. an average amplitude or energy estimate for the frame.
In the example, the power measures are determined in response to a subband magnitude of a subset of subbands of the first frequency domain subband signal. Specifically, the FFT processors 109, 111 comprise functionality for detecting a speech segment in the captured signals. It will be appreciated that many different algorithms for detecting the presence of speech will be known to the skilled person. As a simple example, a filtered signal level may be compared to a threshold and speech may be considered to be present if the threshold is exceeded.
During the detected speech segment, the magnitude values of a number of the high frequency subbands are filtered (over several frames and e.g. using a leaky filter) to generate a filtered high frequency signal level indication for the signal. As the wind noise is predominantly low frequency noise, this signal level indication provides a good estimate of the signal level of the speech component (isolated from the wind noise component) and can thus be used as a power measure.
The number of high frequency subbands that are used may depend on the individual embodiment. However, it has been found that particularly good results can be achieved in many scenarios by using a low number of subbands. In particular, it has been found that using fewer than twenty subbands provides high performance. In the specific example, a subset of fifteen high frequency subbands is used.
The power measures generated in this way by the first FFT processor 109 and the second FFT processor 111 for the first frequency domain subband signal and the second frequency domain subband signal respectively are then used to dynamically and adaptively adjust the gain for the first (and/or second) frequency domain subband signal such that the power measures following the scaling are approximately equal.
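The leaky-filtered high frequency power measure and the resulting adaptive gain might be sketched as follows (the 15-band subset matches the specific example above, but `alpha` is an assumed smoothing factor and the helper names are illustrative):

```python
import numpy as np

def update_power_measure(prev, subband_values, n_hf_bands=15, alpha=0.95):
    """Leaky (first order IIR) filter over the mean magnitude of the
    highest-frequency subbands, updated during detected speech segments.
    Because wind noise is predominantly low frequency, this tracks the
    speech level largely unaffected by wind."""
    hf_level = np.mean(np.abs(subband_values)[-n_hf_bands:])
    return alpha * prev + (1.0 - alpha) * hf_level

def adaptive_gain(power_1, power_2, eps=1e-12):
    """Gain for the second signal so its speech level matches the first."""
    return power_1 / (power_2 + eps)
```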
Thus, the mobile phone illustrated in FIG. 1 comprises means for equalising the captured signals such that the signal levels of the speech components of the frequency domain subband signals are approximately equal. This ensures that the subband value with the lowest magnitude is likely to correspond to the subband value comprising the least wind noise. The equalisation thus improves performance in environments where the characteristics of the microphones 101, 103 may differ and/or where the audio paths between the speech source and the microphones 101, 103 are unknown and/or dynamically varying. The gain equalisation may therefore increase the range of environments in which efficient wind noise suppression can be achieved.
It will be appreciated that equalisation is an optional feature and that e.g. in fixed environments, where matched microphones are used in a static relationship to each other and to the speech source, the system may be implemented without equalisation.
In the device of FIG. 1, the first FFT processor 109 is furthermore arranged to provide a phase adjustment of the first frequency domain subband signal relative to the second frequency domain subband signal prior to generating the combined frequency domain subband signal. The phase adjustment may specifically be arranged to compensate for the difference in the delay from the speech source to each of the two microphones 101, 103. A delay in the time domain will correspond to a linear phase variation in the frequency domain which can easily be compensated by applying a corresponding phase compensation to the subband values of the frequency domain subband signals. Specifically, the second FFT processor 111 can perform a phase rotation by multiplying the complex subband values by a unity gain complex value with the desired phase. Alternatively, if polar representation is used, the desired phase compensation may simply be performed by subtracting or adding the desired phase rotation.
In some embodiments a fixed predetermined phase compensation value can be used. For example, during the design phase a typical difference in distance from the speech source to each microphone can be calculated based on the position of the microphones 101, 103 in the mobile phone. This distance difference can be converted into a typical delay difference, and the frequency domain phase values corresponding to the delay difference can be calculated and stored in the phone. Although this equalisation may only be approximate if the actual position deviates from the assumed position, it will typically be sufficiently accurate to result in a high quality output audio signal.
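The frequency domain compensation of a fixed delay difference could be sketched as follows (assuming an rfft bin layout; `delay_samples` would be precomputed from the microphone spacing):

```python
import numpy as np

def phase_compensation(s, delay_samples, fft_len=128):
    """Remove a fixed delay in the frequency domain. A delay of d samples
    corresponds to a linear phase term exp(-2j*pi*k*d/N) across FFT bins
    k; multiplying by the conjugate rotation (a unity gain complex value
    per bin) compensates it without changing any magnitude."""
    k = np.arange(len(s))  # bin indices in rfft layout
    return s * np.exp(2j * np.pi * k * delay_samples / fft_len)
```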
The phase equalisation may provide improved audio quality and may in particular allow the phase of the combined frequency domain subband signal to be selected from different frequency domain subband signals in consecutive frames without introducing unacceptable quality degradation due to phase steps between the frames.
It will be appreciated that although the described example performs the equalisation in the frequency domain, some or all of the equalisation may in other embodiments be performed in the time domain and may specifically be performed by the first input processor 105 and/or the second input processor 107 prior to the conversion to the frequency domain. For example, in many embodiments, the gain equalisation may be performed in the time domain (e.g. by setting the gain value of a microphone amplifier) with the phase compensation being performed directly in the frequency domain. In some embodiments, the phase compensation may e.g. be performed by introduction of a time domain delay in the first input processor 105 or the second input processor 107.
Step 205 is followed by step 207 wherein the first subband of the first and second frequency domain subband signals is selected. Step 207 is followed by step 209 wherein a magnitude of the subband value in the selected first subband is calculated by the combine processor 113 for both the first and the second frequency domain subband signal. Step 209 is followed by step 211 wherein the two calculated magnitudes are compared to each other. The lowest calculated magnitude is then selected and the magnitude of the subband value of the first subband of the combined frequency domain subband signal is set to the selected value. Thus, for the selected subband, the combine processor 113 calculates the magnitude values and selects the lowest value for the combined frequency domain subband signal.
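The per-subband loop of steps 207 to 213 can be expressed vectorised over all subbands at once. A minimal numpy sketch, assuming `sub1` and `sub2` are the complex subband arrays of one frame:

```python
import numpy as np

def select_min_magnitudes(sub1, sub2):
    # Steps 207-213 as a sketch: for every subband, take the lower of
    # the two subband magnitudes as the combined signal's magnitude.
    return np.minimum(np.abs(sub1), np.abs(sub2))
```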
Step 211 is followed by step 213 wherein the combine processor 113 determines if all subbands have been processed. If not, the method returns to step 207 wherein the next subband is selected and the process of selecting the lowest magnitude for the selected subband is repeated.
When all the subbands have been processed, the combine processor 113 continues to select the phase of the subband values of the combined frequency domain subband signal. In the specific example, the subband phases are for each frame selected as the subband phases from either the first frequency domain subband signal or the second frequency domain subband signal. Thus, all subband phases within a given frame are selected from a single frequency domain subband signal thereby ensuring that phase discrepancies between the subbands are not introduced.
In some embodiments, the phase is always selected from the same frequency domain subband signal. E.g. the subband phases may always be selected as the subband phases of the frequency domain subband signal from the front microphone.
However, in the example of FIG. 1, a phase reference signal is individually selected for each frame and may thus vary from one frame to the next. Due to the phase equalisation performed by the first FFT processor 109 and/or the second FFT processor 111, the phase discrepancy between the two signals is kept low, so no unacceptable audio artefacts are introduced by switching between the signals in different frames. In the specific example, the phase reference signal is selected between the first frequency domain subband signal and the second frequency domain subband signal depending on the total power of each signal within the frame.
More specifically, the method continues in step 215 wherein a power measure is generated for each of the first and second frequency domain subband signals by combining the subband magnitude values for the subbands of the signals. In the specific example, the magnitude values generated in step 209 are simply summed for each signal resulting in a total accumulated magnitude value which is used as the power measure.
Step 215 is followed by step 217 wherein the combine processor 113 selects one of the frequency domain subband signals as the phase reference frequency domain subband signal depending on the power measure. Specifically, the frequency domain subband signal with the lowest power measure is selected as this is likely to have the least wind noise and thus is likely to have the most accurate phase values resulting in improved audio quality. Thus, the phase reference frequency domain subband signal is selected as the first frequency domain subband signal if the power measure is lower for this signal than the power measure for the second frequency domain subband signal. Otherwise, the phase reference frequency domain subband signal is selected as the second frequency domain subband signal.
Step 217 is followed by step 219 wherein the phase value of each subband of the combined frequency domain subband signal is set to the subband phase of the corresponding subband of the phase reference frequency domain subband signal. Thus, the phases of the subbands of the combined frequency domain subband signal are set equal to the phases of the captured frequency domain subband signal which is considered to have the lowest wind noise component.
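Putting steps 207 to 219 together, the combine processor for one frame can be sketched as below. This is an illustrative numpy sketch under the stated scheme: summed subband magnitudes serve as the power measure, and the lower-power signal supplies all phases for the frame.

```python
import numpy as np

def combine_frame(sub1, sub2):
    # Per-subband minimum magnitude (steps 207-213), with all phases
    # for the frame taken from the signal whose summed magnitude
    # (the power measure of step 215) is lowest (steps 217-219).
    mag = np.minimum(np.abs(sub1), np.abs(sub2))
    ref = sub1 if np.sum(np.abs(sub1)) <= np.sum(np.abs(sub2)) else sub2
    return mag * np.exp(1j * np.angle(ref))
```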
The combine processor 113 accordingly generates a combined frequency domain subband signal wherein the magnitude and phase are generated differently for each subband value. The combine processor 113 may, based on the selected magnitude and phase values, generate the subband values as complex values using Cartesian or polar representations.
Step 219 is followed by step 221 wherein an optional filtering of the combined frequency domain subband signal is performed by the filter processor 115.
The filtering of the combined frequency domain subband signal may in some embodiments be performed by applying a fixed predetermined high pass filter. For example, a set of frequency domain filter coefficients may be stored having one coefficient for each subband. The filtering may then be achieved simply by multiplying the subband values of the combined frequency domain subband signal and the stored coefficients. A suitable filter characteristic has been found to be a high pass filter having a gain as a function of frequency which substantially follows a quarter sine wave curve as illustrated in FIG. 3.
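The fixed frequency-domain high pass filter can be sketched as a per-subband gain table. A minimal numpy sketch; the exact cut-off bin and the unity gain above it are assumptions about the quarter-sine shape of FIG. 3:

```python
import numpy as np

def quarter_sine_highpass(n_bins, cutoff_bin):
    # Frequency-domain high pass gains: a quarter sine wave rising
    # from 0 at DC towards 1 at cutoff_bin, unity above (assumed shape).
    gains = np.ones(n_bins)
    k = np.arange(cutoff_bin)
    gains[:cutoff_bin] = np.sin(0.5 * np.pi * k / cutoff_bin)
    return gains
```

Filtering is then a simple element-wise multiplication of the combined subband values with `gains`, as the description indicates.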
Wind noise tends to be most significant at lower frequencies, where it will often affect both microphones simultaneously. The high pass filtering may accordingly remove significant amounts of wind noise, thereby resulting in improved quality. At higher frequencies, in contrast, it is more likely that only one of the microphones is affected by wind noise.
Accordingly, the described wind noise suppression algorithm will tend to be most effective at higher frequencies and a synergistic effect is thus achieved between the selection of the lowest magnitude for individual subbands and the subsequent high pass filtering of the resulting combined signal.
In some embodiments, the filter processor 115 may dynamically vary the frequency response of the high pass filtering. Specifically, the cut-off frequency of the high pass filter may be varied in response to a noise indication for at least one of the captured audio signals. The cut-off frequency may be taken as any frequency representing the transition from a lower frequency stop band region to a higher frequency pass band region. Thus, for example, the cut-off frequency may be the 3 dB frequency wherein the gain has dropped 3 dB from the maximum gain.
The noise indication may be any indication of the amount of noise present at the captured signals. In the specific example, the noise indication is generated from the subband values of the first and second frequency domain subband signal. Specifically, the magnitude difference between the first and second frequency domain subband signal subband values is typically due to the presence of wind noise. Thus, an increasing magnitude variation between subband values of the first frequency domain subband signal and the second frequency domain subband signal is indicative of an increasing amount of wind noise. As a specific example, the noise indication may be generated as the sum of the ratio between the difference and the sum of the two subband values for all, or a low frequency subset of, subbands of the first and second frequency domain subband signal.
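The specific noise indication — the sum over (a low-frequency subset of) subbands of the ratio between the magnitude difference and the magnitude sum — can be sketched as follows. The `n_low` parameter and the epsilon guard against all-zero subbands are assumptions, not taken from the description:

```python
import numpy as np

def noise_indication(sub1, sub2, n_low=None):
    # Sum over (a low-frequency subset of) subbands of
    # |m1 - m2| / (m1 + m2), where m1, m2 are subband magnitudes.
    m1 = np.abs(sub1[:n_low])
    m2 = np.abs(sub2[:n_low])
    return float(np.sum(np.abs(m1 - m2) / (m1 + m2 + 1e-12)))
```

Identical capture yields an indication near zero, while a large magnitude discrepancy between the microphones — indicative of wind noise — drives the indication up.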
The dynamic variation of the cut-off frequency of the high pass filter may allow an improved adaptation of the strength of the filtering to the current noise indications. Thus, the dynamic variation may in many scenarios provide improved quality of the resulting audio signal.
Step 221 is followed by step 223 wherein the output audio signal is generated from the (optionally filtered) combined frequency domain subband signal. In particular, the synthesis processor 117 converts the combined frequency domain subband signal to the time domain by applying an iFFT to the signal. In the example, the synthesis processor 117 also employs windowing and overlap-and-add techniques as is well known in the field of digital signal processing.
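The iFFT, windowing and overlap-and-add reconstruction of step 223 is a standard technique; a minimal sketch is given below. The Hann window and the hop size are assumptions, as is the half-spectrum (`rfft`-style) frame layout:

```python
import numpy as np

def overlap_add_synthesis(frames, hop):
    # Convert each frame's spectrum back to the time domain with an
    # iFFT, apply a synthesis window, and overlap-add into the output.
    n_fft = (frames.shape[1] - 1) * 2
    win = np.hanning(n_fft)
    out = np.zeros(hop * (frames.shape[0] - 1) + n_fft)
    for i in range(frames.shape[0]):
        out[i * hop:i * hop + n_fft] += np.fft.irfft(frames[i], n_fft) * win
    return out
```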
Thus, the described method generates an audio output signal with suppressed noise. Specifically, in the example an easy to implement algorithm is used to generate a speech signal having suppressed wind noise. The algorithm may thus provide improved speech quality for e.g. a mobile phone.
It will be appreciated that the above description for clarity has described embodiments of the invention with reference to different functional units and processors.
However, it will be apparent that any suitable distribution of functionality between different functional units or processors may be used without detracting from the invention. For example, functionality illustrated to be performed by separate processors or controllers may be performed by the same processor or controllers. Hence, references to specific functional units are only to be seen as references to suitable means for providing the described functionality rather than indicative of a strict logical or physical structure or organization.
The invention can be implemented in any suitable form including hardware, software, firmware or any combination of these. The invention may optionally be implemented at least partly as computer software running on one or more data processors and/or digital signal processors. The elements and components of an embodiment of the invention may be physically, functionally and logically implemented in any suitable way. Indeed the functionality may be implemented in a single unit, in a plurality of units or as part of other functional units. As such, the invention may be implemented in a single unit or may be physically and functionally distributed between different units and processors.
Although the present invention has been described in connection with some embodiments, it is not intended to be limited to the specific form set forth herein. Rather, the scope of the present invention is limited only by the accompanying claims. Additionally, although a feature may appear to be described in connection with particular embodiments, one skilled in the art would recognize that various features of the described embodiments may be combined in accordance with the invention. In the claims, the term comprising does not exclude the presence of other elements or steps.
Furthermore, although individually listed, a plurality of means, elements or method steps may be implemented by e.g. a single unit or processor. Additionally, although individual features may be included in different claims, these may possibly be advantageously combined, and the inclusion in different claims does not imply that a combination of features is not feasible and/or advantageous. Also the inclusion of a feature in one category of claims does not imply a limitation to this category but rather indicates that the feature is equally applicable to other claim categories as appropriate. Furthermore, the order of features in the claims does not imply any specific order in which the features must be worked and in particular the order of individual steps in a method claim does not imply that the steps must be performed in this order. Rather, the steps may be performed in any suitable order.

Claims (20)

1. An apparatus for generating an audio signal, the apparatus comprising: means for receiving a first audio signal from a first microphone; means for receiving a second audio signal from a second microphone; first frequency means for generating a first frequency domain subband signal from the first audio signal; second frequency means for generating a second frequency domain subband signal from the second audio signal; combining means for generating a combined frequency domain subband signal from the first frequency domain subband signal and the second frequency domain subband signal; generating means for generating the audio signal in response to the combined frequency domain subband signal; wherein the combining means is arranged to select, for each subband of the combined frequency domain subband signal, a subband magnitude for the combined frequency domain subband signal as a lowest magnitude of a subband magnitude for the first frequency domain subband signal and a subband magnitude for the second frequency domain subband signal.
2. The apparatus of claim 1 wherein the combining means is arranged to select one of the first frequency domain subband signal and the second frequency domain subband signal as a phase reference frequency domain subband signal and to, for each subband of the combined frequency domain subband signal, set a subband phase to correspond to a subband phase of a corresponding subband of the phase reference frequency domain subband signal.
3. The apparatus of claim 2 wherein the combining means is arranged to select the phase reference frequency domain subband signal as the first frequency domain subband signal if a power measure for the first frequency domain subband signal is lower than a power measure for the second frequency domain subband signal and to select the phase reference frequency domain subband signal as the second frequency domain subband signal if the power measure for the second frequency domain subband signal is lower than the power measure for the first frequency domain subband signal.
4. The apparatus of claim 3 wherein the combining means is arranged to determine the power measure for the first frequency domain subband signal by combining subband magnitude values for subbands of the first frequency domain subband signal, and to determine the power measure for the second frequency domain subband signal by combining subband magnitude values for subbands of the second frequency domain subband signal.
5. The apparatus of claim 1 further comprising: gain adjustment means for providing a gain adjustment of the first frequency domain subband signal relative to the second frequency domain subband signal prior to generating the combined frequency domain subband signal.
6. The apparatus of claim 5 wherein the gain adjustment means is arranged to scale a subband value of at least some subbands of the first frequency domain subband signal by a gain compensation factor.
7. The apparatus of claim 6 wherein the gain adjustment means is arranged to determine the gain compensation factor in response to a sensitivity indication for the first microphone relative to the second microphone.
8. The apparatus of claim 6 wherein the first audio signal comprises a first speech component, the second audio signal comprises a second speech component and the gain adjustment means is arranged to determine a first power measure for the first speech component and a second power measure for the second speech component and to determine the gain compensation factor in response to the first power measure and the second power measure.
9. The apparatus of claim 8 wherein the gain adjustment means is arranged to determine the first power measure in response to a subband magnitude of a subset of subbands of the first frequency domain subband signal.
10. The apparatus of claim 9 wherein the subset of subbands comprises less than 20 subbands.
11. The apparatus of claim 9 wherein the subset of subbands comprises high frequency subbands.
12. The apparatus of claim 1 further comprising phase adjustment means for providing a phase adjustment of the first frequency domain subband signal relative to the second frequency domain subband signal prior to generating the combined frequency domain subband signal.
13. The apparatus of claim 5 wherein the gain adjustment means is arranged to perform a phase rotation of a subband value of at least some subbands of the first frequency domain subband signal by a phase compensation value.
14. The apparatus of claim 1 wherein the generating means is arranged to high pass filter the combined frequency domain subband signal.
15. The apparatus of claim 14 wherein the generating means is arranged to vary a cut-off frequency of the high pass filtering in response to a noise indication for at least one of the first audio signal and the second audio signal.
16. The apparatus of claim 15 wherein the generating means is arranged to generate the noise indication in response to a magnitude variation between subband magnitudes of the first frequency domain subband signal and subband magnitudes of the second frequency domain subband signal.
17. The apparatus of claim 1 wherein the first audio signal comprises a wind noise component and a speech component from a speech source and the second audio signal comprises a different wind noise component and a speech component from the speech source, and wherein the combined frequency domain subband signal has a reduced wind noise component relative to both the first frequency domain subband signal and the second frequency domain subband signal.
18. The apparatus of claim 1 wherein the first and second microphones are substantially omni-directional microphones.
19. A method of generating an audio signal, the method comprising: receiving a first audio signal from a first microphone; receiving a second audio signal from a second microphone; generating a first frequency domain subband signal from the first audio signal; generating a second frequency domain subband signal from the second audio signal; generating a combined frequency domain subband signal from the first frequency domain subband signal and the second frequency domain subband signal; and generating the audio signal in response to the combined frequency domain subband signal; and wherein generating the combined frequency domain subband signal comprises for each subband of the combined frequency domain subband signal selecting a subband magnitude for the combined frequency domain subband signal as a lowest magnitude of a subband magnitude for the first frequency domain subband signal and a subband magnitude for the second frequency domain subband signal.
20. A computer program product enabling the carrying out of a method according to claim 19.
GB0718683A 2007-09-25 2007-09-25 Method and apparatus for generating an audio signal from multiple microphones Active GB2453118B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
GB0718683A GB2453118B (en) 2007-09-25 2007-09-25 Method and apparatus for generating an audio signal from multiple microphones
PCT/US2008/075701 WO2009042385A1 (en) 2007-09-25 2008-09-09 Method and apparatus for generating an audio signal from multiple microphones

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB0718683A GB2453118B (en) 2007-09-25 2007-09-25 Method and apparatus for generating an audio signal from multiple microphones

Publications (3)

Publication Number Publication Date
GB0718683D0 GB0718683D0 (en) 2007-10-31
GB2453118A true GB2453118A (en) 2009-04-01
GB2453118B GB2453118B (en) 2011-09-21

Family

ID=38670459

Family Applications (1)

Application Number Title Priority Date Filing Date
GB0718683A Active GB2453118B (en) 2007-09-25 2007-09-25 Method and apparatus for generating and audio signal from multiple microphones

Country Status (2)

Country Link
GB (1) GB2453118B (en)
WO (1) WO2009042385A1 (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012109019A1 (en) * 2011-02-10 2012-08-16 Dolby Laboratories Licensing Corporation System and method for wind detection and suppression
CN103945291A (en) * 2014-03-05 2014-07-23 北京飞利信科技股份有限公司 Method and device for achieving orientation voice transmission through two microphones
EP2765787A1 (en) * 2013-02-07 2014-08-13 Sennheiser Communications A/S A method of reducing un-correlated noise in an audio processing device
EP2641346A4 (en) * 2010-11-18 2015-10-28 Hear Ip Pty Ltd Systems and methods for reducing unwanted sounds in signals received from an arrangement of microphones
IT201700040732A1 (en) * 2017-04-12 2018-10-12 Inst Rundfunktechnik Gmbh VERFAHREN UND VORRICHTUNG ZUM MISCHEN VON N INFORMATIONSSIGNALEN

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8724829B2 (en) 2008-10-24 2014-05-13 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for coherence detection
US8620672B2 (en) 2009-06-09 2013-12-31 Qualcomm Incorporated Systems, methods, apparatus, and computer-readable media for phase-based processing of multichannel signal
US8861745B2 (en) * 2010-12-01 2014-10-14 Cambridge Silicon Radio Limited Wind noise mitigation
US11665482B2 (en) 2011-12-23 2023-05-30 Shenzhen Shokz Co., Ltd. Bone conduction speaker and compound vibration device thereof
WO2014049192A1 (en) * 2012-09-26 2014-04-03 Nokia Corporation A method, an apparatus and a computer program for creating an audio composition signal
US10623854B2 (en) 2015-03-25 2020-04-14 Dolby Laboratories Licensing Corporation Sub-band mixing of multiple microphones
GB2555139A (en) 2016-10-21 2018-04-25 Nokia Technologies Oy Detecting the presence of wind noise
US10192566B1 (en) 2018-01-17 2019-01-29 Sorenson Ip Holdings, Llc Noise reduction in an audio system
BR112021004719A2 (en) 2018-09-12 2021-06-22 Shenzhen Voxtech Co., Ltd. signal processing device with multiple acoustic electrical transducers
CN110910893B (en) * 2019-11-26 2022-07-22 北京梧桐车联科技有限责任公司 Audio processing method, device and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06269084A (en) * 1993-03-16 1994-09-22 Sony Corp Wind noise reduction device
US20020037088A1 (en) * 2000-09-13 2002-03-28 Thomas Dickel Method for operating a hearing aid or hearing aid system, and a hearing aid and hearing aid system
US20040161120A1 (en) * 2003-02-19 2004-08-19 Petersen Kim Spetzler Device and method for detecting wind noise
US20070058822A1 (en) * 2005-09-12 2007-03-15 Sony Corporation Noise reducing apparatus, method and program and sound pickup apparatus for electronic equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1264382C (en) * 1999-12-24 2006-07-12 皇家菲利浦电子有限公司 Multichannel audio signal processing device
TW200305854A (en) * 2002-03-27 2003-11-01 Aliphcom Inc Microphone and voice activity detection (VAD) configurations for use with communication system
EP1581026B1 (en) * 2004-03-17 2015-11-11 Nuance Communications, Inc. Method for detecting and reducing noise from a microphone array
US20060013412A1 (en) * 2004-07-16 2006-01-19 Alexander Goldin Method and system for reduction of noise in microphone signals


Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2641346A4 (en) * 2010-11-18 2015-10-28 Hear Ip Pty Ltd Systems and methods for reducing unwanted sounds in signals received from an arrangement of microphones
EP2641346B1 (en) 2010-11-18 2016-10-05 Hear Ip Pty Ltd Systems and methods for reducing unwanted sounds in signals received from an arrangement of microphones
CN105792071A (en) * 2011-02-10 2016-07-20 杜比实验室特许公司 System and method for wind detection and suppression
CN103348686A (en) * 2011-02-10 2013-10-09 杜比实验室特许公司 System and method for wind detection and suppression
CN105792071B (en) * 2011-02-10 2019-07-05 杜比实验室特许公司 The system and method for detecting and inhibiting for wind
US9313597B2 (en) 2011-02-10 2016-04-12 Dolby Laboratories Licensing Corporation System and method for wind detection and suppression
CN103348686B (en) * 2011-02-10 2016-04-13 杜比实验室特许公司 For the system and method that wind detects and suppresses
US9761214B2 (en) 2011-02-10 2017-09-12 Dolby Laboratories Licensing Corporation System and method for wind detection and suppression
WO2012109019A1 (en) * 2011-02-10 2012-08-16 Dolby Laboratories Licensing Corporation System and method for wind detection and suppression
EP2765787A1 (en) * 2013-02-07 2014-08-13 Sennheiser Communications A/S A method of reducing un-correlated noise in an audio processing device
US9325285B2 (en) 2013-02-07 2016-04-26 Oticon A/S Method of reducing un-correlated noise in an audio processing device
EP2765787B1 (en) 2013-02-07 2019-12-11 Sennheiser Communications A/S A method of reducing un-correlated noise in an audio processing device
CN103945291A (en) * 2014-03-05 2014-07-23 北京飞利信科技股份有限公司 Method and device for achieving orientation voice transmission through two microphones
IT201700040732A1 (en) * 2017-04-12 2018-10-12 Inst Rundfunktechnik Gmbh VERFAHREN UND VORRICHTUNG ZUM MISCHEN VON N INFORMATIONSSIGNALEN
WO2018188697A1 (en) * 2017-04-12 2018-10-18 Institut für Rundfunktechnik GmbH Method and device for mixing n information signals
CN110720226A (en) * 2017-04-12 2020-01-21 无线电广播技术研究所有限公司 Method and apparatus for mixing N information signals
US10834502B2 (en) 2017-04-12 2020-11-10 Institut Fur Rundfunktechnik Gmbh Method and device for mixing N information signals
CN110720226B (en) * 2017-04-12 2021-12-31 无线电广播技术研究所有限公司 Method and apparatus for mixing N information signals

Also Published As

Publication number Publication date
GB0718683D0 (en) 2007-10-31
WO2009042385A4 (en) 2009-05-22
GB2453118B (en) 2011-09-21
WO2009042385A1 (en) 2009-04-02

Similar Documents

Publication Publication Date Title
GB2453118A (en) Generating a speech audio signal from multiple microphones with suppressed wind noise
US10327088B2 (en) Spatial audio processor and a method for providing spatial parameters based on an acoustic input signal
US6717991B1 (en) System and method for dual microphone signal noise reduction using spectral subtraction
US9173025B2 (en) Combined suppression of noise, echo, and out-of-location signals
US8249861B2 (en) High frequency compression integration
EP2673777B1 (en) Combined suppression of noise and out-of-location signals
US6549586B2 (en) System and method for dual microphone signal noise reduction using spectral subtraction
JP6014259B2 (en) Percentile filtering of noise reduction gain
KR101597752B1 (en) Apparatus and method for noise estimation and noise reduction apparatus employing the same
EP1806739B1 (en) Noise suppressor
RU2760097C2 (en) Method and device for capturing audio information using directional diagram formation
US20120314885A1 (en) Signal processing using spatial filter
WO2015139938A2 (en) Noise suppression
US8712076B2 (en) Post-processing including median filtering of noise suppression gains
JP2004502977A (en) Subband exponential smoothing noise cancellation system
JP2002530922A (en) Apparatus and method for processing signals
EP2597639A2 (en) Sound processing device
WO2007123047A1 (en) Adaptive array control device, method, and program, and its applied adaptive array processing device, method, and program
JP2005514668A (en) Speech enhancement system with a spectral power ratio dependent processor
JP2002538650A (en) Antenna processing method and antenna processing device
Martin et al. Binaural speech enhancement with instantaneous coherence smoothing using the cepstral correlation coefficient
Löllmann et al. Efficient Speech Dereverberation for Binaural Hearing Aids
EP3516653A1 (en) Apparatus and method for generating noise estimates

Legal Events

Date Code Title Description
732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20110127 AND 20110202

732E Amendments to the register in respect of changes of name or changes affecting rights (sect. 32/1977)

Free format text: REGISTERED BETWEEN 20170831 AND 20170906