US9692379B2 - Adaptive audio capturing
- Publication number
- US9692379B2
- Authority
- US
- United States
- Prior art keywords
- audio
- amplitude
- audio channel
- signal amplitude
- signal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03G—CONTROL OF AMPLIFICATION
- H03G3/00—Gain control in amplifiers or frequency changers
- H03G3/20—Automatic control
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/005—Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L21/00—Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
- G10L21/02—Speech enhancement, e.g. noise reduction or echo cancellation
- G10L21/0208—Noise filtering
- G10L21/0216—Noise filtering characterised by the method used for estimating noise
- G10L2021/02161—Number of inputs available containing the signal or the noise to be suppressed
- G10L2021/02165—Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L25/00—Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
- G10L25/78—Detection of presence or absence of voice signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/11—Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
Definitions
- Embodiments of the present invention generally relate to audio processing, and more particularly, to a method, apparatus, computer program, and user terminal for adaptive audio capturing.
- A user terminal such as a mobile phone, a tablet computer, or a personal digital assistant (PDA) may have a plurality of audio capturing elements, such as multiple microphones.
- Such a configuration has become popular in the past several years.
- Commercially available smart mobile phones are usually equipped with two or more microphones.
- Some of them are designed to function as primary audio capturing elements and are used to capture, for example, foreground audio signals, while the other audio capturing elements may function as reference or auxiliary ones and are used to capture, for example, background audio signals.
- A microphone located in the lower part of a mobile phone is generally expected to capture high-quality voice signals from a speaker.
- This microphone is usually used as a primary audio capturing element to capture the user's speech signal in a voice call.
- A microphone at another location may function as an auxiliary audio capturing element that may be used to capture background noise for environmental noise estimation, noise suppression, and the like.
- The spatial location of a user terminal with respect to the audio signal source and the surrounding environment will have an impact on the quality of audio capturing.
- The originally designated primary audio capturing element might be blocked, or located on the side of the user terminal facing away from the audio signal source, rendering it incapable of capturing audio signals of high quality.
- An auxiliary or reference audio capturing element cannot be activated to function as a primary one in this situation, even if this element is now in a better or even optimal position.
- This is because the functionalities of the audio capturing elements on a user terminal are fixed at design and manufacturing time and cannot be changed or switched adaptively in use. As a result, the quality of audio capturing will degrade.
- Embodiments of the present invention propose a method, apparatus, computer program, and user terminal for adaptive audio capturing.
- Embodiments of the present invention provide a method for adaptive audio capturing.
- The method comprises: obtaining an audio signal through an audio channel associated with an audio capturing element on a user terminal; calculating a signal amplitude for the audio channel by processing the obtained audio signal; and determining a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
- Other embodiments in this aspect include a corresponding computer program product.
- Embodiments of the present invention provide an apparatus for adaptive audio capturing.
- The apparatus comprises: an obtaining unit configured to obtain an audio signal through an audio channel associated with an audio capturing element on a user terminal; a calculating unit configured to calculate a signal amplitude for the audio channel by processing the obtained audio signal; and a determining unit configured to determine a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
- Embodiments of the present invention provide a user terminal.
- The user terminal comprises at least one processor; a plurality of audio capturing elements; and at least one memory coupled to the at least one processor and storing a program of computer executable instructions, the computer executable instructions configured, with the at least one processor, to cause the user terminal to at least perform according to the method outlined in the above paragraphs.
- The functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed.
- The optimal audio capturing element may be adaptively selected as the primary element, while one or more other audio capturing elements may function as reference audio capturing elements accordingly. In this way, the quality of captured audio signals may be maintained at a high level under various conditions of use.
- FIG. 1 is a flowchart illustrating a method for adaptive audio capturing in accordance with an exemplary embodiment of the present invention.
- FIG. 2 is a flowchart illustrating a method for adaptive audio capturing in accordance with another exemplary embodiment of the present invention.
- FIGS. 3A and 3B are schematic diagrams illustrating an example of adaptive audio capturing in accordance with an exemplary embodiment of the present invention.
- FIG. 4 is a block diagram illustrating an apparatus for adaptive audio capturing in accordance with an exemplary embodiment of the present invention.
- FIG. 5 is a block diagram illustrating a user terminal in accordance with an exemplary embodiment of the present invention.
- Embodiments of the present invention provide a method, apparatus, and computer program product for adaptive audio capturing.
- The functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed. As such, the quality of captured audio signals may be maintained at a high level under various conditions of use.
- Referring to FIG. 1 , a flowchart illustrating a method 100 for adaptive audio capturing in accordance with an exemplary embodiment of the present invention is shown.
- An audio signal is obtained at step S 101 through an audio channel associated with an audio capturing element on a user terminal.
- The user terminal is equipped with a plurality of audio capturing elements.
- The term "audio capturing element" refers to any suitable device that may be configured to capture, record, or otherwise obtain audio signals, such as a microphone.
- Each audio capturing element is associated with an audio channel through which the audio signals captured by that audio capturing element may be passed to, for example, the processor or controller of the user terminal.
- At step S 103 , a signal amplitude for the audio channel is calculated by processing the obtained audio signal. The signal amplitude for the audio channel may comprise any information indicating the magnitude of the audio signals on that channel.
- The signal amplitude calculated at step S 103 may comprise the signal magnitude in the time domain, which may be expressed by the root mean square value of the audio signal, for example.
- Alternatively or additionally, the amplitude in the frequency domain, such as the spectrum amplitude and/or power spectrum of the obtained audio signal, may also be used as the signal amplitude. It will be appreciated that these are only some examples of signal amplitude and should not be construed as limiting the present invention. Any information capable of indicating the signal amplitude for an audio channel, whether currently known or developed in the future, may be used in connection with embodiments of the present invention. Specific examples in this regard will be detailed below with reference to FIG. 2 .
- The location of the audio signal source (e.g., a speaker) with respect to the audio capturing elements on the user terminal will generally remain stable at least for a certain period of time. Therefore, in some exemplary embodiments, the signal amplitude calculated at step S 103 may comprise an average of the signal amplitudes accumulated over a given time interval. In these embodiments, the average signal amplitudes may be used to determine the functionality of the audio capturing elements for a next time interval, for example. Specific examples in this regard will be detailed below with reference to FIG. 2 .
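As a sketch of the time-domain variant described above, the following illustrative Python computes a per-frame root-mean-square amplitude and its average over an accumulation interval. The function names and framing scheme are assumptions for illustration, not taken from the patent:

```python
import math

def rms_amplitude(frame):
    """Root-mean-square amplitude of one frame of samples (time-domain magnitude)."""
    return math.sqrt(sum(x * x for x in frame) / len(frame))

def average_amplitude(frames):
    """Average of the per-frame RMS amplitudes accumulated over a time interval,
    usable for deciding element functionality in the next interval."""
    amplitudes = [rms_amplitude(f) for f in frames]
    return sum(amplitudes) / len(amplitudes)
```

Any comparable magnitude measure (peak amplitude, energy) could be substituted here without changing the overall method.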
- At step S 104 , a functionality of the audio capturing element is determined based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
- The user terminal is equipped with one or more other audio capturing elements, each of which is associated with a corresponding audio channel.
- The signal amplitudes for one or more of these audio channels may be calculated in a similar way as described above.
- The signal amplitude for another audio channel may be calculated by method 100 or by a similar process associated with or dedicated to that audio channel.
- The functionality of the audio capturing element may be determined based on the signal amplitude for the associated audio channel and the further signal amplitudes for one or more further audio channels on the same user terminal.
- If the signal amplitude for an audio channel is relatively high, the associated audio capturing element may be used as a primary element and configured to, for example, capture foreground audio signals (e.g., the user's speech signal in a voice call).
- Otherwise, the associated audio capturing element may be used as an auxiliary or reference audio capturing element and configured to capture background audio signals for the purpose of noise estimation, for example.
- Method 100 ends after step S 104 .
- With method 100, the functionalities of multiple audio capturing elements may be determined adaptively according to the specific conditions in real time. For example, assume that a mobile phone has two microphones, one of which is a primary one for capturing the user's speech signal while the other is an auxiliary one for capturing background noise.
- When the conditions change, the functionalities of these two microphones can be swapped accordingly. That is, the original auxiliary element is changed to function as the primary audio capturing element, while the original primary audio capturing element may be changed to function as the auxiliary one or may be directly disabled.
- FIG. 2 shows a flowchart illustrating a method 200 for adaptive audio capturing in accordance with another exemplary embodiment of the present invention.
- An audio signal is obtained at step S 201 through an audio channel associated with an audio capturing element on a user terminal.
- The audio signal may be obtained from an audio channel associated with one of the microphones.
- Step S 201 corresponds to step S 101 described above with reference to FIG. 1 and will not be detailed here.
- At step S 202 , voice activity detection (VAD) is performed to determine whether there is voice activity on one or more audio channels of the user terminal. If not, method 200 returns to step S 201 .
- The subsequent steps are performed only if voice activity exists, primarily for reasons of energy saving. That is, if there is no voice activity on the audio channel(s) of the user terminal, it is unnecessary to calculate the signal amplitudes and determine or change the functionalities of the audio capturing elements. In this way, the user terminal may operate more efficiently.
- The voice activity detection may be performed on only a single audio channel.
- For example, the voice activity detection may be performed on the audio channel associated with the current primary audio capturing element on the user terminal.
- Alternatively, the voice activity detection may be performed on more than one audio channel. Only for the purpose of illustration, embodiments where the voice activity detection is performed on multiple audio channels will be described below.
- Suppose the voice activity detection is to be performed on a subset of voice channels (denoted as L sub ) which may comprise some or all of the voice channels on the user terminal.
- The voice activity state of each of the voice channels in the set may be detected.
- The voice activity may be detected based on certain features of the audio signals, including but not limited to short-term energy, zero crossing rate, cepstral features, Itakura LPC spectrum distance, and/or periodicity measurements of vowels.
- One or more of such features may be extracted from the audio signal and then compared with predetermined threshold(s) to determine whether the current frame is a voice frame or a noise frame.
- Any suitable voice activity detection algorithm or process may be used in connection with embodiments of the present invention.
- The current overall voice activity state for the user terminal may be calculated as the sum of the per-channel decisions VAD j (n) over the voice channels in the set L sub , which can be expressed as VAD(n) = Σ j∈L sub VAD j (n), where VAD j (n) equals 1 when voice activity is detected in frame n on channel j and 0 otherwise.
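A minimal per-frame VAD in the spirit of the features listed above (short-term energy and zero crossing rate compared against thresholds), together with the per-terminal sum over the subset L sub, might be sketched as follows. The threshold values are illustrative assumptions, not from the patent:

```python
def short_term_energy(frame):
    """Mean squared amplitude of a frame."""
    return sum(x * x for x in frame) / len(frame)

def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ."""
    crossings = sum(1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0))
    return crossings / (len(frame) - 1)

def vad(frame, energy_threshold=0.01, zcr_threshold=0.5):
    """1 if the frame looks like voice (sufficient energy and the low zero
    crossing rate typical of voiced speech), 0 if it looks like a noise frame."""
    voiced = (short_term_energy(frame) > energy_threshold
              and zero_crossing_rate(frame) < zcr_threshold)
    return 1 if voiced else 0

def overall_vad(channel_frames, **thresholds):
    """Overall voice activity state: the sum of per-channel VAD decisions
    over the subset L_sub of audio channels."""
    return sum(vad(frame, **thresholds) for frame in channel_frames)
```

A production detector would normally combine several of the listed features and smooth decisions across frames.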
- The voice activity detection is optional.
- The signal amplitudes for different audio channels may be calculated and compared with each other to determine the functionalities of the associated audio capturing elements, as will be described below with respect to steps S 203 and S 204 , without detecting the voice activities on the audio channels.
- At step S 203 , the signal amplitude for the audio channel is calculated by processing the obtained audio signal.
- As discussed above, the signal amplitude for the audio channel may comprise any information indicating the magnitude of the audio signals on that channel, including but not limited to the spectrum amplitude, the power spectrum, or any other information (either in the time domain or in the frequency domain) derived from the obtained audio signal.
- Only for the purpose of illustration, the power spectrum will be described below as the signal amplitude.
- The obtained audio signal may be processed on a frame-by-frame basis. A windowing operation may be applied to each frame of the audio signal, and the windowed signal is subjected to a discrete Fourier transform to obtain the spectrum of the frame, which may be denoted as X j (n,k), where n is the sequence number of the frame and k indicates the serial number of the frequency point after the discrete Fourier transform.
- For an N-point transform with window function w(m), X j (n,k) may be calculated, for example, as X j (n,k) = Σ m=0..N−1 w(m) x j (n,m) e^(−i2πkm/N), where x j (n,m) is the m-th sample of frame n on channel j.
- The window function may be any window function suitable for audio signal processing, such as the Hamming window, Hanning window, rectangular window, etc.
- The frame length may be within the range of 10-40 ms, for example, 20 ms.
- The discrete Fourier transform may be implemented through a Fast Fourier Transform (FFT).
- The FFT may be directly applied to the windowed audio signal.
- Alternatively, zero padding may be performed to enhance the frequency resolution and/or to meet the condition that the FFT length is a power of two. For example, applying the FFT to N points will yield spectrum values at N points.
- The sampling rate F s may be 16 kHz.
- The Hamming window may be selected.
- The frame length may be 20 ms.
- The inter-frame overlap may be 50%.
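With the parameters above (16 kHz sampling, Hamming window, 20 ms frames, 50% overlap, zero padding to a power-of-two length), the per-frame spectrum X j (n,k) could be computed as sketched below. A direct DFT is used here for clarity; in practice an FFT computes the same values, and all names are illustrative:

```python
import cmath
import math

FS = 16000                    # sampling rate: 16 kHz
FRAME_LEN = int(0.020 * FS)   # 20 ms frame -> 320 samples
HOP = FRAME_LEN // 2          # 50% inter-frame overlap
NFFT = 512                    # zero-pad to the next power of two >= FRAME_LEN

def hamming(length):
    """Hamming window of the given length."""
    return [0.54 - 0.46 * math.cos(2 * math.pi * m / (length - 1))
            for m in range(length)]

def frame_spectrum(samples, n):
    """Spectrum X(n, k) of frame n: slice, window, zero-pad, and transform.
    A direct DFT is used for clarity; an FFT would be used in practice."""
    frame = samples[n * HOP:n * HOP + FRAME_LEN]
    window = hamming(FRAME_LEN)
    padded = [w * s for w, s in zip(window, frame)]
    padded += [0.0] * (NFFT - len(padded))   # zero padding at the end of the frame
    return [sum(padded[m] * cmath.exp(-2j * math.pi * k * m / NFFT)
                for m in range(NFFT))
            for k in range(NFFT)]
```

For a constant (DC) frame, bin k = 0 simply accumulates the window values, which provides a quick sanity check of the transform.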
- The auto-power spectrum of each audio channel may be estimated with recursive smoothing across frames, for example as P X j X j (n,k) = α j ·P X j X j (n−1,k) + (1−α j )·|X j (n,k)|^2, wherein:
- n denotes the sequence number of the current frame;
- j denotes the sequence number of the audio channel in consideration;
- P X j X j (n,k) denotes the auto-power spectrum of the audio channel of the user terminal; and
- α j denotes the smoothing factor of the audio channel, which may be set within the range of 0 to 1.
- The user terminal may have a primary audio capturing element, and the audio channel associated with this primary audio capturing element may be referred to as the primary audio channel (denoted as j m , for example).
- For each other audio channel, the relative signal amplitude of that audio channel with respect to the primary audio channel may be calculated, and may optionally be normalized.
- Such a relative signal amplitude indicates the amplitude difference between the primary channel j m and another audio channel and may be used as the analysis criterion.
- The normalized relative signal amplitude of channel j with respect to the primary channel j m may be calculated, for example, as a ratio of their auto-power spectra: φ j (n,k) = P X j X j (n,k)/P X j m X j m (n,k).
- The average signal amplitude for an audio channel within a time interval may be calculated. It can be appreciated that the spatial location of the audio source with respect to the user terminal and its audio capturing elements probably would not change much within a short time period. Therefore, it is possible to improve the decision accuracy at the following step by detecting and analyzing the channel condition within a certain time interval. Only for the purpose of illustration, in the exemplary embodiments where the voice activity detection is performed and the relative power spectrum value is calculated as the signal amplitude, the average signal amplitude φ j (t) for an audio channel j may be calculated, for example, by averaging φ j (n,k) over the frames n∈T VAD and the frequency bins from k 1 to k 2 within the interval, where:
- T denotes the length of the time interval, which may be in the range of 1-10 s and is typically 2 s in some exemplary embodiments;
- n∈T VAD denotes each frame having voice activity within the time interval T; and
- k 1 and k 2 are the lower and upper limits of a frequency band, respectively.
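The interval average described above could then be formed over the voiced frames and a frequency band, as in the sketch below. The ratio-based relative amplitude and all names are assumed forms for illustration:

```python
def average_relative_amplitude(power, power_primary, voiced_frames, k1, k2):
    """Average, over voiced frames (n in T_VAD) and frequency bins k1..k2,
    of the per-bin power ratio between channel j and the primary channel j_m.
    power[n][k] is the (smoothed) auto-power spectrum of frame n, bin k."""
    total = 0.0
    count = 0
    for n in voiced_frames:
        for k in range(k1, k2 + 1):
            total += power[n][k] / power_primary[n][k]
            count += 1
    return total / count
```

Restricting the band [k1, k2] to the speech range keeps the comparison focused on the frequencies that matter for the voice signal.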
- In general, any information capable of indicating the signal amplitude for the audio channel, and any combination thereof, may be calculated at step S 203 .
- At step S 204 , the functionalities of the audio capturing elements may be determined based on the signal amplitude for the current audio channel and a further signal amplitude for one or more other audio channels of the user terminal.
- The functionalities of the audio capturing elements are determined based on their audio capturing capabilities in the specific situation. The audio capturing elements with higher capability in the current situation will play a major role in audio capturing.
- After the average relative power spectrum values within the time interval T are calculated for one or more audio channels, these values may be ranked in descending order: { φ a 1 (t), φ a 2 (t), . . . , φ a L (t) }, wherein {a 1 , a 2 , . . . , a L } is obtained by reordering {1, 2, . . . , j, . . . , L}. The audio capturing elements associated with the top-ranked M audio channels, which are expected to have higher capturing capabilities in the current situation, may be classified into the primary group of audio capturing elements, which are used to capture foreground audio signals (e.g., the voice signal from the speaker) in the next time interval.
- The audio capturing elements associated with the lower-ranked audio channels may be classified into the auxiliary group of audio capturing elements, which are used to capture background audio signals (e.g., the background noise) in the next time interval.
- In this way, the functionalities of the audio capturing elements on the user terminal may be set adaptively and dynamically according to the current situation.
- Note that the decision at step S 204 is not necessarily based on the average signal amplitude.
- Alternatively, the functionality may be determined based on the instantaneous state of the audio channels. For example, the calculation of the signal amplitude (step S 203 ) may be performed periodically, and the instantaneous signal amplitudes for the different audio channels at the time instant when the calculation is performed may be compared to determine the functionalities of the audio capturing elements.
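The ranking and grouping at step S 204 can be sketched as follows; the function name and dictionary interface are illustrative assumptions:

```python
def assign_functionalities(avg_amplitudes, m):
    """Rank channels by averaged (relative) amplitude in descending order;
    the top m channels form the primary group (foreground capture) and the
    rest form the auxiliary group (background capture) for the next interval.
    avg_amplitudes maps channel id -> averaged amplitude over interval T."""
    ranked = sorted(avg_amplitudes, key=avg_amplitudes.get, reverse=True)
    return {"primary": ranked[:m], "auxiliary": ranked[m:]}
```

For the two-microphone phone example, m = 1 simply selects whichever microphone currently sees the stronger (relative) amplitude as the primary one.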
- In an example, the mobile phone comprises a first microphone at the lower part of the front face of the phone and a second microphone at the top of the back face as the audio capturing elements.
- The first and second microphones have associated first and second audio channels, respectively.
- The sampling rate may be set to 16 kHz, and the sample depth is 16 bits.
- The audio signals are captured in a large open office with surrounding background noise.
- The speaker first speaks facing the front face of the mobile phone, and then speaks facing the back face of the mobile phone.
- The time domain signals as captured are shown in FIG. 3A , where the X-axis denotes time and the Y-axis denotes the signal amplitude.
- The signal amplitudes for the first and second microphones are shown by plots 301 and 302 , respectively.
- The Hamming window is used as the window function.
- The frame length is 20 ms.
- The inter-frame overlap is 50%.
- Zero padding is performed at the end of every frame of the audio signal.
- The smoothing factor of the power spectrum α j is 0.8.
- The length of the time interval T is selected as 2 s.
- The processed results are shown in FIG. 3B .
- As shown by plot 303 , when the speaker faces the front face of the mobile phone (before the time instant T 1 in FIG. 3A ), the signal amplitude for the first audio channel is higher than that of the second audio channel.
- Accordingly, the associated first microphone (MIC- 1 ) will function as the primary microphone.
- After the speaker turns to face the back face of the mobile phone (after the time instant T 1 ), the second microphone (MIC- 2 ) will be changed to be the primary microphone, while the first microphone will instead function as the auxiliary one.
- Referring to FIG. 4 , a block diagram illustrating an apparatus 400 for adaptive audio capturing in accordance with an exemplary embodiment of the present invention is shown.
- The apparatus 400 may be configured to carry out methods 100 and/or 200 as described above.
- The apparatus 400 comprises an obtaining unit 401 configured to obtain an audio signal through an audio channel associated with an audio capturing element on a user terminal.
- The apparatus 400 further comprises a calculating unit 402 configured to calculate a signal amplitude for the audio channel by processing the obtained audio signal.
- The apparatus 400 comprises a determining unit 403 configured to determine a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
- The apparatus 400 may further comprise a voice activity detecting unit configured to detect whether voice activity exists on one or more audio channels of the user terminal, wherein the determining unit is configured to determine the functionality of the audio capturing element if the voice activity exists on the one or more audio channels.
- The calculating unit 402 may comprise at least one of: a time domain amplitude calculating unit configured to calculate a time domain amplitude of the obtained audio signal; and a frequency domain amplitude calculating unit configured to calculate a frequency domain amplitude of the obtained audio signal.
- The calculating unit 402 may comprise an average amplitude calculating unit configured to calculate an average signal amplitude for the audio channel within a time interval.
- The further signal amplitude may comprise a further average signal amplitude for the at least one further audio channel within the time interval.
- The determining unit 403 may comprise an average amplitude comparing unit configured to compare the average signal amplitude with the further average signal amplitude.
- In some embodiments, the user terminal has a primary audio channel.
- In such embodiments, the calculating unit 402 may comprise a relative amplitude calculating unit configured to calculate a relative amplitude of the audio channel with respect to the primary audio channel, and the further signal amplitude comprises a further relative amplitude of the at least one further audio channel with respect to the primary audio channel.
- The determining unit 403 may comprise a relative amplitude comparing unit configured to compare the relative amplitude with the further relative amplitude.
- The determining unit 403 may comprise a classifying unit configured to classify the audio capturing element into a primary group of audio capturing elements used to capture foreground audio signals or an auxiliary group of audio capturing elements used to capture background audio signals.
- FIG. 5 is a block diagram illustrating a user terminal in accordance with an exemplary embodiment of the present invention.
- the user terminal 500 may be embodied as a mobile phone. It should be understood, however, that a mobile phone is merely illustrative of one type of apparatus that would benefit from embodiments of the present invention and, therefore, should not be taken to limit the scope of embodiments of the present invention.
- The user terminal 500 includes an antenna(s) 512 in operable communication with a transmitter 514 and a receiver 516 .
- The user terminal 500 further includes at least one processor or controller 520 .
- The controller 520 may comprise a digital signal processor, a microprocessor, and various analog-to-digital converters, digital-to-analog converters, and other support circuits. Control and information processing functions of the user terminal 500 are allocated between these devices according to their respective capabilities.
- The user terminal 500 also comprises a user interface including output devices such as a ringer 522 , an earphone or speaker 524 , a plurality of microphones 526 as audio capturing elements, and a display 528 , and user input devices such as a keyboard 530 , a joystick, or another user input interface, all of which are coupled to the controller 520 .
- The user terminal 500 further includes a battery 534 , such as a vibrating battery pack, for powering the various circuits required to operate the user terminal 500 , as well as optionally providing mechanical vibration as a detectable output.
- The user terminal 500 includes a media capturing element, such as a camera, video, and/or audio module, in communication with the controller 520 .
- The media capturing element may be any means for capturing an image, video, and/or audio for storage, display, or transmission.
- For example, the camera module 536 may include a digital camera capable of forming a digital image file from a captured image.
- The user terminal 500 may further include a universal identity module (UIM) 538 .
- The UIM 538 is typically a memory device having a processor built in.
- The UIM 538 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), etc.
- The UIM 538 typically stores information elements related to a subscriber.
- The user terminal 500 may be equipped with at least one memory.
- The user terminal 500 may include volatile memory 540 , such as volatile Random Access Memory (RAM), including a cache area for the temporary storage of data.
- The user terminal 500 may also include other non-volatile memory 542 , which may be embedded and/or removable.
- The non-volatile memory 542 can additionally or alternatively comprise an EEPROM, flash memory, or the like.
- The memories can store any number of pieces of information, programs, and data used by the user terminal 500 to implement its functions.
- The memories may store a program of computer executable code, which may be configured, with the controller 520 , to cause the user terminal 500 to at least perform the steps of methods 100 and/or 200 as described above.
- With the user terminal 500 , the functionalities of the multiple audio capturing elements may be dynamically determined and changed.
- The optimal audio capturing element may be adaptively selected as the primary element, while one or more other audio capturing elements may function as reference audio capturing elements accordingly. In this way, the quality of captured audio signals may be maintained at a high level under various conditions of use.
- The various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic, or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor, or other computing device. While various aspects of the exemplary embodiments of the present invention are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques, or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controllers or other computing devices, or some combination thereof.
- The apparatus 400 described above may be implemented as hardware, software/firmware, or any combination thereof.
- One or more units in the apparatus 400 may be implemented as software modules.
- Alternatively or additionally, some or all of the units may be implemented using hardware modules such as integrated circuits (ICs), application specific integrated circuits (ASICs), systems-on-chip (SOCs), field programmable gate arrays (FPGAs), and the like.
- FIGS. 1-2 may be viewed as method steps, and/or as operations that result from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function(s).
- methods 100 and/or 200 may be implemented by computer program codes contained in a computer program tangibly embodied on a machine readable medium.
- a machine readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
- the machine readable medium may be a machine readable signal medium or a machine readable storage medium.
- a machine readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
- Computer program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
- the program code may execute entirely on a computer, partly on the computer as a stand-alone software package, partly on the computer and partly on a remote computer, or entirely on the remote computer or server.
Description
wherein R denotes the number of updating samples for the signal, N denotes the number of discrete Fourier transform points, and w(m) denotes a window function. In accordance with embodiments of the present invention, the window function may be any window function suitable for audio signal processing, such as a Hamming, Hanning, or rectangular window. The frame length may be within the range of 10-40 ms, for example, 20 ms.
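The framing and windowing described above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the hop size R = 320 (a 20 ms update interval at 16 kHz), the Hamming window choice, and all function names are assumptions.

```python
import numpy as np

def frame_spectrum(x, n, R=320, N=512):
    """Spectrum of frame n of signal x, using a hop of R samples,
    a window w(m), and an N-point DFT (names follow the description)."""
    w = np.hamming(N)                    # window function w(m)
    frame = x[n * R : n * R + N] * w     # windowed frame starting at sample n*R
    return np.fft.rfft(frame, N)         # one-sided N-point DFT

# Example: a 1 s, 440 Hz test tone sampled at FS = 16 kHz.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
X = frame_spectrum(x, n=10)
peak_bin = int(np.argmax(np.abs(X)))     # strongest bin near 440 * N / FS
```

With FS = 16 kHz and N = 512, the bin spacing is 31.25 Hz, so the 440 Hz tone peaks at bin 14.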
P_{X_j}(n, k) denotes the power spectrum, where n denotes the sequence number of the current frame, j denotes the sequence number of the audio channel in consideration, and k denotes the frequency bin. The relative signal amplitude λ_j(n, k) satisfies −1 ≤ λ_j(n, k) ≤ 1. It can be seen that when P_{X_j}(n, k) is much larger than that of channel j_m, λ_j(n, k) approaches 1; when P_{X_j}(n, k) is much smaller, λ_j(n, k) approaches −1; and when j = j_m, λ_j(n, k) ≈ 0. The relative signal amplitudes for different audio channels may be compared to make the decision at step S204, as will be detailed below.
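A relative signal amplitude with these properties can be sketched as below. The normalized-difference definition is an assumption (the patent's exact formula is not reproduced here); it is chosen only to satisfy the stated bounds of [−1, 1] and the value ≈0 for channel j_m, and the names are illustrative.

```python
import numpy as np

def relative_amplitude(P, j_ref):
    """Relative signal amplitude lambda_j(n, k) of each channel j against
    channel j_ref (denoted j_m in the text). Normalized difference: values
    lie in [-1, 1], and channel j_ref itself maps to ~0.
    P has shape (L, K): per-channel power spectra over K frequency bins."""
    eps = 1e-12                            # guard against division by zero
    ref = P[j_ref]
    return (P - ref) / (P + ref + eps)

P = np.array([[4.0, 9.0],                  # channel 0: stronger than channel 1
              [1.0, 1.0]])                 # channel 1: the reference channel
lam = relative_amplitude(P, j_ref=1)
# lam[1] is ~0 everywhere; lam[0] is positive (0.6 and 0.8)
```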
wherein T denotes the length of the time interval, which may have a range of 1-10 s, typically 2 s in some exemplary embodiments, n ∈ T_VAD denotes each frame having voice activity within the time interval T, and k1 and k2 are the lower and upper thresholds of a frequency band, respectively. The frequency band may be the one where voice energy is mainly concentrated. For example, if the sampling rate FS = 16 kHz and the number of FFT points N = 512, then the frequency band may be 200-3500 Hz. Accordingly,
wherein {a_1, a_2, . . . , a_L} is obtained by reordering {1, 2, . . . , j, . . . , L}. Then the audio capturing elements associated with the top-ranked M audio channels, which are expected to have higher capturing capabilities in the current situation, may be classified into the primary group of audio capturing elements, used to capture foreground audio signals (e.g., the voice signal from the speaker) in the next time interval. Conversely, the audio capturing elements associated with the lower-ranked audio channels may be classified into the auxiliary group of audio capturing elements, used to capture background audio signals (e.g., the background noise) in the next time interval. In this way, the functionalities of the audio capturing elements on the user terminal may be set adaptively and dynamically according to the current situation.
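The averaging and grouping steps above can be sketched together as follows. The band bounds follow from k = f·N/FS for the 200-3500 Hz band; the plain mean over voice-active frames, the rounding choices, and all function and variable names are illustrative assumptions rather than the patent's method.

```python
import numpy as np

FS, N = 16000, 512                         # sampling rate and FFT size from the text

def band_bins(f_lo=200.0, f_hi=3500.0):
    """Map the 200-3500 Hz voice band to DFT bin bounds k1, k2 via
    k = f * N / FS (the rounding direction is an implementation choice)."""
    return int(np.ceil(f_lo * N / FS)), int(np.floor(f_hi * N / FS))

def channel_scores(lam, vad):
    """Average lambda_j(n, k) over voice-active frames (n in T_VAD) and
    over the voice band k1..k2. lam has shape (L, frames, bins); vad is
    a boolean mask over the frames of the time interval T."""
    k1, k2 = band_bins()
    return lam[:, vad, k1:k2 + 1].mean(axis=(1, 2))

def classify_channels(scores, M):
    """Rank channels by score to get {a_1, ..., a_L}, then split into a
    primary group (top M, foreground capture) and an auxiliary group."""
    order = np.argsort(scores)[::-1]       # descending: best channel first
    return order[:M].tolist(), order[M:].tolist()

# Toy example: 4 channels, 6 frames, one-sided spectrum of 257 bins.
lam = np.zeros((4, 6, 257))
lam[0], lam[1], lam[2], lam[3] = 0.1, 0.8, -0.3, 0.4
vad = np.array([True, True, False, True, False, True])
primary, auxiliary = classify_channels(channel_scores(lam, vad), M=2)
# primary -> [1, 3]; auxiliary -> [0, 2]
```

With FS = 16 kHz and N = 512, `band_bins()` yields k1 = 7 and k2 = 112 for the 200-3500 Hz band.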
Claims (12)
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2012/087963 WO2014101156A1 (en) | 2012-12-31 | 2012-12-31 | Adaptive audio capturing |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20150341006A1 US20150341006A1 (en) | 2015-11-26 |
| US9692379B2 true US9692379B2 (en) | 2017-06-27 |
Family
ID=49911154
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US14/758,026 Active US9692379B2 (en) | 2012-12-31 | 2012-12-31 | Adaptive audio capturing |
Country Status (4)
| Country | Link |
|---|---|
| US (1) | US9692379B2 (en) |
| EP (1) | EP2797080B1 (en) |
| CN (1) | CN104025699B (en) |
| WO (1) | WO2014101156A1 (en) |
Families Citing this family (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US9685156B2 (en) * | 2015-03-12 | 2017-06-20 | Sony Mobile Communications Inc. | Low-power voice command detector |
| KR20170024913A (en) * | 2015-08-26 | 2017-03-08 | Samsung Electronics Co., Ltd. | Noise Cancelling Electronic Device and Noise Cancelling Method Using Plurality of Microphones |
| WO2017035771A1 (en) * | 2015-09-01 | 2017-03-09 | Huawei Technologies Co., Ltd. | Voice path check method, device, and terminal |
| BR112019013666A2 (en) * | 2017-01-03 | 2020-01-14 | Koninklijke Philips Nv | beam-forming audio capture device, operation method for a beam-forming audio capture device, and computer program product |
| WO2018173266A1 (en) * | 2017-03-24 | 2018-09-27 | Yamaha Corporation | Sound pickup device and sound pickup method |
| US10455319B1 (en) * | 2018-07-18 | 2019-10-22 | Motorola Mobility Llc | Reducing noise in audio signals |
| CN108965600B (en) * | 2018-07-24 | 2021-05-04 | OPPO (Chongqing) Intelligent Technology Co., Ltd. | Voice pickup method and related product |
| CN112925502B (en) * | 2021-02-10 | 2022-07-08 | Goertek Technology Co., Ltd. | Audio channel switching equipment, method and device and electronic equipment |
Citations (8)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US4241329A (en) * | 1978-04-27 | 1980-12-23 | Dialog Systems, Inc. | Continuous speech recognition method for improving false alarm rates |
| US6125288A (en) * | 1996-03-14 | 2000-09-26 | Ricoh Company, Ltd. | Telecommunication apparatus capable of controlling audio output level in response to a background noise |
| CN1732872A (en) | 2005-06-24 | 2006-02-15 | Tsinghua University | Bidirectional digital modulating multi-channel artificial cochlea system |
| CN1794758A (en) | 2004-12-22 | 2006-06-28 | Broadcom Corporation | Wireless telephone and method for processing audio signal in the wireless telephone |
| CN101595452A (en) | 2006-12-22 | 2009-12-02 | Step Labs, Inc. | Near Field Vector Signal Enhancement |
| US20100081487A1 (en) * | 2008-09-30 | 2010-04-01 | Apple Inc. | Multiple microphone switching and configuration |
| US20120051548A1 (en) * | 2010-02-18 | 2012-03-01 | Qualcomm Incorporated | Microphone array subset selection for robust noise reduction |
| US20130344788A1 (en) * | 2012-06-22 | 2013-12-26 | GM Global Technology Operations LLC | Hvac system zone compensation for improved communication performance |
Family Cites Families (7)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US7464029B2 (en) * | 2005-07-22 | 2008-12-09 | Qualcomm Incorporated | Robust separation of speech signals in a noisy environment |
| JP2009089133A (en) * | 2007-10-01 | 2009-04-23 | Yamaha Corp | Sound emission and collection device |
| US8175291B2 (en) * | 2007-12-19 | 2012-05-08 | Qualcomm Incorporated | Systems, methods, and apparatus for multi-microphone based speech enhancement |
| US8411880B2 (en) * | 2008-01-29 | 2013-04-02 | Qualcomm Incorporated | Sound quality by intelligently selecting between signals from a plurality of microphones |
| US8041054B2 (en) * | 2008-10-31 | 2011-10-18 | Continental Automotive Systems, Inc. | Systems and methods for selectively switching between multiple microphones |
| US20110058683A1 (en) * | 2009-09-04 | 2011-03-10 | Glenn Kosteva | Method & apparatus for selecting a microphone in a microphone array |
| US20110317848A1 (en) * | 2010-06-23 | 2011-12-29 | Motorola, Inc. | Microphone Interference Detection Method and Apparatus |
2012
- 2012-12-31 CN CN201280017109.5A patent/CN104025699B/en active Active
- 2012-12-31 WO PCT/CN2012/087963 patent/WO2014101156A1/en not_active Ceased
- 2012-12-31 US US14/758,026 patent/US9692379B2/en active Active

2013
- 2013-12-12 EP EP13196818.2A patent/EP2797080B1/en active Active
Non-Patent Citations (1)
| Title |
|---|
| International Search Report corresponding to Application No. PCT/CN2012/087963; Date of Mailing: Oct. 10, 2013. |
Also Published As
| Publication number | Publication date |
|---|---|
| US20150341006A1 (en) | 2015-11-26 |
| EP2797080B1 (en) | 2016-09-28 |
| CN104025699A (en) | 2014-09-03 |
| CN104025699B (en) | 2018-05-22 |
| WO2014101156A1 (en) | 2014-07-03 |
| EP2797080A2 (en) | 2014-10-29 |
| EP2797080A3 (en) | 2015-04-22 |
Similar Documents
| Publication | Title |
|---|---|
| US9692379B2 (en) | Adaptive audio capturing | |
| US11094323B2 (en) | Electronic device and method for processing audio signal by electronic device | |
| US20150350395A1 (en) | Detecting and switching between noise reduction modes in multi-microphone mobile devices | |
| EP3127114B1 (en) | Situation dependent transient suppression | |
| EP3526979B1 (en) | Method and apparatus for output signal equalization between microphones | |
| US9251804B2 (en) | Speech recognition | |
| US20130090926A1 (en) | Mobile device context information using speech detection | |
| US20230116052A1 (en) | Array geometry agnostic multi-channel personalized speech enhancement | |
| WO2019112468A1 (en) | Multi-microphone noise reduction method, apparatus and terminal device | |
| US20150095027A1 (en) | Key phrase detection | |
| CN102187388A (en) | Methods and apparatus for noise estimation in audio signals | |
| KR20130101943A (en) | Endpoints detection apparatus for sound source and method thereof | |
| CN113284504B (en) | Attitude detection method, device, electronic device and computer readable storage medium | |
| EP3140831B1 (en) | Audio signal discriminator and coder | |
| CN112233689B (en) | Audio noise reduction method, device, equipment and medium | |
| US20160027432A1 (en) | Speaker Dependent Voiced Sound Pattern Template Mapping | |
| CN112233688A (en) | Audio noise reduction method, device, equipment and medium | |
| CN103915099B (en) | Voice fundamental periodicity detection methods and device | |
| US10832687B2 (en) | Audio processing device and audio processing method | |
| US10136235B2 (en) | Method and system for audio quality enhancement | |
| CN110941455B (en) | Active wake-up method and device and electronic equipment | |
| CN115995234A (en) | Audio noise reduction method, device, electronic device and readable storage medium | |
| CN110431625B (en) | Voice detection method, voice detection device, voice processing chip and electronic equipment | |
| CN108182948B (en) | Voice acquisition processing method and device capable of improving voice recognition rate | |
| WO2017119901A1 (en) | System and method for speech detection adaptation |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: SPREADTRUM COMMUNICATIONS (SHANGHAI) CO., LTD., CH. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JIANG, BIN;WU, SHENG;LIN, FUHUEI;AND OTHERS;SIGNING DATES FROM 20150612 TO 20150616;REEL/FRAME:035915/0984 |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 4 |
| | MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY. Year of fee payment: 8 |