US9692379B2 - Adaptive audio capturing - Google Patents

Adaptive audio capturing

Info

Publication number
US9692379B2
Authority
US
United States
Prior art keywords
audio
amplitude
audio channel
signal amplitude
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US14/758,026
Other versions
US20150341006A1 (en)
Inventor
Bin Jiang
Sheng Wu
Fuhuei Lin
Jingming Xu
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spreadtrum Communications Shanghai Co Ltd
Original Assignee
Spreadtrum Communications Shanghai Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spreadtrum Communications Shanghai Co Ltd filed Critical Spreadtrum Communications Shanghai Co Ltd
Assigned to SPREADTRUM COMMUNICATIONS (SHANGHAI) CO., LTD. reassignment SPREADTRUM COMMUNICATIONS (SHANGHAI) CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: XU, JINGMING, JIANG, BIN, LIN, FUHUEI, WU, SHENG
Publication of US20150341006A1 publication Critical patent/US20150341006A1/en
Application granted granted Critical
Publication of US9692379B2 publication Critical patent/US9692379B2/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03GCONTROL OF AMPLIFICATION
    • H03G3/00Gain control in amplifiers or frequency changers
    • H03G3/20Automatic control
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/005Circuits for transducers, loudspeakers or microphones for combining the signals of two or more microphones
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0208Noise filtering
    • G10L21/0216Noise filtering characterised by the method used for estimating noise
    • G10L2021/02161Number of inputs available containing the signal or the noise to be suppressed
    • G10L2021/02165Two microphones, one receiving mainly the noise signal and the other one mainly the speech signal
    • GPHYSICS
    • G10MUSICAL INSTRUMENTS; ACOUSTICS
    • G10LSPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/78Detection of presence or absence of voice signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/11Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's

Definitions

  • Embodiments of the present invention generally relate to audio processing, and more particularly, to a method, apparatus, computer program, and user terminal for adaptive audio capturing.
  • a user terminal like a mobile phone, a tablet computer or a personal digital assistant (PDA) may have a plurality of audio capturing elements such as multiple microphones.
  • Such a configuration has become popular in the past several years.
  • commercially available smart mobile phones are usually equipped with two or more microphones.
  • some of them are designed to function as primary audio capturing elements and used to capture, for example, the foreground audio signals; while the other audio capturing elements may function as reference or auxiliary ones and be used to capture, for example, the background audio signals.
  • a microphone located in the lower part of a mobile phone is generally supposed to be capable of capturing high-quality voice signals from a speaker.
  • this microphone is usually used as a primary audio capturing element to capture the user's speech signal in a voice call.
  • a microphone at another location may function as an auxiliary audio capturing element that may be used to capture the background noise for environmental noise estimation, noise inhibition, and the like.
  • the spatial location of a user terminal with respect to the audio signal source and the surrounding environment will have an impact on the effects of audio capturing.
  • the originally designed primary audio capturing element might be blocked or located opposite to the audio signal source of the user terminal, thereby rendering the originally designed primary audio capturing element incapable of capturing audio signals of high quality.
  • an auxiliary or reference audio capturing element cannot be activated to function as a primary one in this situation, even if this element is now in a better or optimal position.
  • the functionalities of audio capturing elements on the user terminal are fixed in design and manufacturing, and cannot be changed or switched adaptively in use. As a result, the quality of audio capturing will degrade.
  • embodiments of the present invention propose a method, apparatus, computer program, and user terminal for adaptive audio capturing.
  • embodiments of the present invention provide a method for adaptive audio capturing.
  • the method comprising: obtaining an audio signal through an audio channel associated with an audio capturing element on a user terminal; calculating a signal amplitude for the audio channel by processing the obtained audio signal; and determining a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
  • Other embodiments in this aspect include a corresponding computer program product.
  • embodiments of the present invention provide an apparatus for adaptive audio capturing.
  • the apparatus comprising: an obtaining unit configured to obtain an audio signal through an audio channel associated with an audio capturing element on a user terminal; a calculating unit configured to calculate a signal amplitude for the audio channel by processing the obtained audio signal; and a determining unit configured to determine a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
  • embodiments of the present invention provide a user terminal.
  • the user terminal comprises at least one processor; a plurality of audio capturing elements; and at least one memory coupled to the at least one processor and storing a program of computer executable instructions, the computer executable instructions configured, with the at least one processor, to cause the user terminal to at least perform according to the method outlined in the above paragraphs.
  • the functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed.
  • the optimal audio capturing element may be adaptively determined as the primary element, while one or more other audio capturing elements may function as reference audio capturing elements accordingly. In this way, the quality of captured audio signals may be maintained at a high level in various conditions in use.
  • FIG. 1 is a flowchart illustrating a method for adaptive audio capturing in accordance with an exemplary embodiment of the present invention
  • FIG. 2 is a flowchart illustrating a method for adaptive audio capturing in accordance with another exemplary embodiment of the present invention
  • FIGS. 3A and 3B are schematic diagrams illustrating an example of adaptive audio capturing in accordance with an exemplary embodiment of the present invention
  • FIG. 4 is a block diagram illustrating an apparatus for adaptive audio capturing in accordance with an exemplary embodiment of the present invention.
  • FIG. 5 is a block diagram illustrating a user terminal in accordance with an exemplary embodiment of the present invention.
  • embodiments of the present invention provide a method, apparatus, and computer program product for adaptive audio capturing.
  • the functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed. As such the quality of captured audio signals may be maintained at a high level in various conditions in use.
  • Referring to FIG. 1 , a flowchart illustrating a method 100 for adaptive audio capturing in accordance with an exemplary embodiment of the present invention is shown.
  • an audio signal is obtained at step S 101 through an audio channel associated with an audio capturing element on a user terminal.
  • the user terminal is equipped with a plurality of audio capturing elements.
  • audio capturing element refers to any suitable device that may be configured to capture, record, or otherwise obtain audio signals, such as microphones.
  • Each audio capturing element is associated with an audio channel through which the audio signals captured by that audio capturing element may be passed to, for example, the processor or controller of the user terminal.
  • the signal amplitude for the audio channel may comprise any information indicating the magnitude of the audio signals on that channel.
  • the signal amplitude calculated at step S 103 may comprise the signal magnitude in the time domain, which may be expressed by the root mean square value of the audio signal, for example.
  • the amplitude in the frequency domain, like the spectrum amplitude and/or power spectrum of the obtained audio signal, may also be used as the signal amplitude. It will be appreciated that these are only some examples of signal amplitude and should not be construed as limiting the present invention. Any information capable of indicating the signal amplitude for an audio channel, whether currently known or developed in the future, may be used in connection with embodiments of the present invention. Specific examples in this regard will be detailed below with reference to FIG. 2 .
  • the location of the audio signal source (e.g., a speaker) with respect to the audio capturing elements on the user terminal will generally remain stable at least for a certain period of time. Therefore, in some exemplary embodiments, the signal amplitude calculated at step S 103 may comprise an average of the signal amplitudes accumulated over a given time interval. In these embodiments, the average signal amplitudes may be used to determine the functionality of audio capturing elements for a next time interval, for example. Specific examples in this regard will be detailed below with reference to FIG. 2 .
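As a minimal illustration of a time-domain signal amplitude and its accumulation over an interval, the sketch below computes the root mean square value per frame and averages it across frames; the function names are illustrative assumptions, not part of the patent.

```python
import math

def rms_amplitude(frame):
    """Root mean square amplitude of one frame of samples (time-domain magnitude)."""
    return math.sqrt(sum(s * s for s in frame) / len(frame))

def average_amplitude(frames):
    """Average of the per-frame amplitudes accumulated over a given time interval."""
    amps = [rms_amplitude(f) for f in frames]
    return sum(amps) / len(amps)
```

Such an averaged amplitude could then be compared across channels at the start of the next interval, as the method describes.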
  • a functionality of the audio capturing element is determined at step S 104 based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
  • the user terminal is equipped with one or more other audio capturing elements, each of which is associated with a corresponding audio channel.
  • the signal amplitudes for one or more of these audio channels may be calculated in a similar way as described above.
  • the signal amplitude for another audio channel may be calculated by method 100 or by a similar process associated with or dedicated to that audio channel.
  • the functionality of the audio capturing element may be determined based on the signal amplitude for the associated audio channel and the further signal amplitudes for one or more further audio channels on the same user terminal.
  • the associated audio capturing element may be used as a primary element and configured to, for example, capture the foreground audio signals (e.g., the user's speech signal in a voice call).
  • the associated audio capturing element may be used as an auxiliary or reference audio capturing element and configured to capture background audio signals for the purpose of noise estimation, for example.
  • Method 100 ends after step S 104 .
  • the functionalities of multiple audio capturing elements may be determined adaptively according to the specific condition in real time. For example, assuming that a mobile phone has two microphones, one of which is a primary one for capturing the user's speech signal while the other is an auxiliary one for capturing background noise.
  • the functionalities of these two microphones can be swapped accordingly. That is, the original auxiliary element is now changed to function as the primary audio capturing element, while the original primary audio capturing element may be changed to function as the auxiliary one or directly disabled.
  • FIG. 2 shows a flowchart illustrating a method 200 for adaptive audio capturing in accordance with another exemplary embodiment of the present invention.
  • an audio signal is obtained at step S 201 through an audio channel associated with an audio capturing element on a user terminal.
  • the audio signal may be obtained from an audio channel associated with one of the microphones.
  • Step S 201 corresponds to step S 101 described above with reference to FIG. 1 and will not be detailed here.
  • at step S 202 , voice activity detection (VAD) is performed to determine whether there is a voice activity on one or more audio channels of the user terminal. If not, method 200 returns to step S 201 .
  • the subsequent steps are performed only if a voice activity exists. This is primarily due to energy-saving concerns. That is, if there is no voice activity on the audio channel(s) of the user terminal, then it is unnecessary to calculate the signal amplitude or to determine or change the functionalities of the audio capturing elements. In this way, the user terminal may operate more efficiently.
  • the voice activity detection may be performed only on a single audio channel.
  • the voice activity detection may be performed on the audio channel which is associated with the current primary audio capturing element on the user terminal.
  • the voice activity detection may be performed on more than one audio channel. Only for the purpose of illustration, embodiments where the voice activity detection is performed on multiple audio channels will be described below.
  • the voice activity detection is to be performed on a subset of voice channels (denoted as L sub ) which may comprise some or all of the voice channels on the user terminal.
  • the voice activity state in each of the voice channels in the set may be detected.
  • the voice activity may be detected based on a certain feature(s) of the audio signals, for example, including but not limited to short-term energy, zero crossing rate, Cepstra feature, Itakura LPC spectrum distance, and/or periodical measurement of vowels.
  • One or more of such features may be extracted from the audio signal and then compared with a predetermined threshold(s) to determine if the current frame is a voice frame or a noise frame.
  • Any suitable voice activity detection algorithms or processes may be used in connection with embodiments of the present invention.
  • the current overall voice activity state for the user terminal may be calculated as the sum of VAD(n) of each voice channel in the set L sub , which can be expressed as follows: VAD sum (n)=Σ j∈L sub VAD j (n), wherein VAD j (n) is 1 if frame n on channel j is determined to be a voice frame and 0 otherwise.
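The patent leaves the VAD algorithm open, naming short-term energy among the usable features. The sketch below uses that feature with a hypothetical threshold and sums the per-channel decisions over the subset L sub; all function names and the threshold value are illustrative assumptions.

```python
def vad_frame(frame, energy_threshold=0.01):
    """Per-channel, per-frame VAD: 1 (voice frame) if the short-term
    energy exceeds a threshold, else 0 (noise frame)."""
    energy = sum(s * s for s in frame) / len(frame)
    return 1 if energy > energy_threshold else 0

def vad_sum(frames_per_channel, l_sub, energy_threshold=0.01):
    """Overall voice activity state: sum of the VAD decisions over
    the subset L_sub of voice channels."""
    return sum(vad_frame(frames_per_channel[j], energy_threshold) for j in l_sub)
```

A nonzero sum would indicate voice activity on at least one monitored channel, allowing method 200 to proceed past step S 202.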
  • the voice activity detection is optional.
  • the signal amplitudes for different audio channels may be calculated and compared with each other to determine the functionalities of the associated audio capturing elements, as will be described below at steps S 203 and S 204 , without detecting the voice activities on the audio channels.
  • at step S 203 , the signal amplitude for the audio channel is calculated by processing the obtained audio signal.
  • the signal amplitude for the audio channel may comprise any information indicating the magnitude of the audio signals on that channel, including but not limited to the spectrum amplitude, the power spectrum, or any other information (either in time domain or in frequency domain) of the obtained audio signal.
  • the power spectrum will be described as the signal amplitude.
  • the obtained audio signal may be processed on a frame-by-frame basis. A windowing operation may be applied to each frame of the audio signal, and the windowed signal is subjected to a discrete Fourier transform to obtain the spectrum of the frame, which may be indicated as X j (n,k), wherein n is the sequence number of the frame and k indicates the serial number of the frequency point after the discrete Fourier transform.
  • X j (n,k) may be calculated as follows: X j (n,k)=Σ m=0 N−1 w(m)x j (n,m)e −i2πkm/N , wherein w(m) denotes the window function, x j (n,m) denotes the m-th sample of the n-th frame on channel j, and N denotes the frame length in samples.
  • the window function may be any window function suitable for audio signal processing, such as a Hamming window, Hanning window, rectangular window, etc.
  • the frame length may be within the range of 10-40 ms, for example, 20 ms.
  • the discrete Fourier transform may be implemented through a Fast Fourier Transform (FFT).
  • the FFT may be directly applied to the windowed audio signal.
  • zero padding may be performed to enhance the frequency resolution and/or to meet the condition that the FFT length is a power of two. For example, applying the FFT to N points will obtain the spectrum values for N points.
  • the sampling rate F s may be 16 kHz
  • the Hamming window may be selected
  • the frame length may be 20 ms
  • the inter-frame overlap may be 50%.
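Under the example parameters above (16 kHz sampling, 20 ms Hamming-windowed frames, 50% inter-frame overlap, zero padding to a power-of-two FFT length), the per-frame spectrum computation might be sketched as follows. The function name and the choice of NFFT = 512 are assumptions, not part of the patent.

```python
import numpy as np

FS = 16000              # sampling rate F_s (Hz)
FRAME_LEN = 320         # 20 ms frame at 16 kHz
HOP = FRAME_LEN // 2    # 50% inter-frame overlap
NFFT = 512              # next power of two >= FRAME_LEN (zero padding)

def frame_spectra(x):
    """Split x into overlapping frames, apply a Hamming window, zero-pad
    to NFFT samples, and return the DFT spectrum X(n, k) of each frame."""
    window = np.hamming(FRAME_LEN)
    n_frames = 1 + (len(x) - FRAME_LEN) // HOP
    spectra = np.empty((n_frames, NFFT // 2 + 1), dtype=complex)
    for n in range(n_frames):
        frame = x[n * HOP : n * HOP + FRAME_LEN] * window
        spectra[n] = np.fft.rfft(frame, NFFT)   # rfft zero-pads up to NFFT
    return spectra
```

For a real-valued frame, only the NFFT/2 + 1 non-redundant frequency points are kept.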
  • n denotes the sequence number of the current frame
  • j denotes a sequence number of the audio channel in consideration
  • P X j X j (n,k) denotes the auto-power spectrum of the audio channel of the user terminal
  • α j denotes the smoothing factor of the audio channel, which could be set within the range of 0 to 1
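The smoothing recursion itself is not reproduced in this excerpt; a common reading of an auto-power spectrum smoothed with a factor α j in (0, 1) is the first-order recursion sketched below, which should be taken as an assumed form rather than the patent's exact formula.

```python
import numpy as np

def smoothed_power_spectrum(spectra, alpha=0.8):
    """Recursively smoothed auto-power spectrum of one audio channel:
    P(n, k) = alpha * P(n-1, k) + (1 - alpha) * |X(n, k)|**2."""
    power = np.abs(spectra) ** 2
    smoothed = np.empty_like(power)
    smoothed[0] = power[0]                      # initialize with the first frame
    for n in range(1, len(power)):
        smoothed[n] = alpha * smoothed[n - 1] + (1 - alpha) * power[n]
    return smoothed
```

A larger α gives a slower-varying estimate, which suits the assumption that the source location is stable over short periods.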
  • the user terminal may have a primary audio capturing element, and the audio channel associated with this primary audio capturing element may be referred to as primary audio channel (denoted as j m , for example).
  • the relative signal amplitude of that audio channel with respect to the primary audio channel may be calculated, and may be optionally normalized.
  • Such relative signal amplitude indicates the amplitude difference between the primary channel j m and another audio channel and may be used as the analysis criterion.
  • the normalized relative signal amplitude of the channel j and the primary channel j m may be calculated as follows:
  • the average signal amplitude for an audio channel(s) within a time interval may be calculated. It can be appreciated that the spatial location of the audio source with respect to the user terminal and its audio capturing elements is unlikely to change significantly within a short time period. Therefore, it is possible to improve the decision accuracy at the following step by detecting and analyzing the channel condition within a certain time interval. Only for the purpose of illustration, in the exemplary embodiments where the voice activity detection is performed and the relative power spectrum value is calculated as the signal amplitude, the average signal amplitude for an audio channel j may be calculated as follows:
  • T denotes the length of the time interval, which may be within the range of 1-10 s, and typically 2 s in some exemplary embodiments
  • n ⁇ T VAD denotes each frame having a voice activity within the time interval T
  • k 1 and k 2 are the lower and upper thresholds of a frequency band, respectively.
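The formula for the averaged relative signal amplitude is not reproduced in this excerpt. The sketch below illustrates one plausible form: a normalized difference of the per-band power of channel j against the primary channel j m, averaged over the voice-active frames n ∈ T VAD of the interval T and the band [k1, k2]. The normalization and every name here are assumptions.

```python
import numpy as np

def average_relative_amplitude(p_j, p_primary, vad_flags, k1, k2):
    """Average relative amplitude of channel j with respect to the primary
    channel j_m, over voice-active frames and the band [k1, k2].
    p_j, p_primary: (n_frames, n_bins) smoothed power spectra.
    vad_flags: per-frame VAD decisions (1 = voice activity)."""
    active = [n for n, v in enumerate(vad_flags) if v]
    if not active:
        return 0.0                      # no voice activity in this interval
    vals = []
    for n in active:
        band_j = p_j[n, k1:k2 + 1].sum()
        band_m = p_primary[n, k1:k2 + 1].sum()
        vals.append((band_j - band_m) / max(band_m, 1e-12))
    return float(np.mean(vals))
```

A positive value would suggest that channel j currently receives a stronger signal than the primary channel.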
  • any information capable of indicating the signal amplitude for the audio channel and any combination thereof may be calculated at step S 203 .
  • at step S 204 , the functionalities of the audio capturing elements may be determined based on the signal amplitude for the current audio channel and a further signal amplitude for one or more other audio channels of the user terminal.
  • the functionalities of the audio capturing elements are determined based on their audio capturing capabilities in the specific situation. The audio capturing elements with higher capability in the current situation will play a major role in audio capturing.
  • the average relative power spectrum values within the time interval T are calculated for one or more audio channels, these values may be ranked in a descending order
  • the average relative amplitudes { Δ j (t) } may be reordered in a descending order as { Δ a1 (t), Δ a2 (t), . . . , Δ aL (t) }, wherein {a 1 , a 2 , . . . , a L } is obtained by reordering {1, 2, . . . , j, . . . , L}. Then the audio capturing elements associated with the top ranked M audio channels, which are expected to have higher capturing capabilities in the current situation, may be classified into the primary group of audio capturing elements which are used to capture foreground audio signals (e.g., the voice signal from the speaker) in the next time interval.
  • those audio capturing elements associated with lower ranked audio channels may be classified into the auxiliary group of audio capturing elements used to capture background audio signals (e.g., the background noise) in the next time interval.
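The ranking-and-classification step can be sketched as below: channels are sorted by their average relative amplitude in descending order, the top M form the primary group, and the rest form the auxiliary group. The function name and the dictionary-based interface are illustrative choices, not from the patent.

```python
def assign_roles(avg_rel_amp, m=1):
    """Classify audio channels for the next time interval.
    avg_rel_amp: {channel_index: average relative amplitude over interval T}.
    Returns (primary_group, auxiliary_group) as lists of channel indices."""
    order = sorted(avg_rel_amp, key=avg_rel_amp.get, reverse=True)
    return order[:m], order[m:]
```

With M = 1 this reduces to the two-microphone example: whichever microphone shows the higher average amplitude becomes the primary one, and the other becomes auxiliary.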
  • the functionalities of audio capturing elements on the user terminal may be set adaptively and dynamically according to the current situation.
  • the decision at step S 204 is not necessarily based on the average signal amplitude.
  • the functionality may be determined based on instantaneous state of the audio channels. For example, the calculation of the signal amplitude (step S 203 ) may be performed periodically, and the instantaneous signal amplitudes for different audio channels at the time instant when the calculation is performed may be compared to determine the functionalities of the audio capturing elements.
  • the mobile phone comprises a first microphone at the lower part of the front face of the phone and a second microphone at the top of the back face as the audio capturing elements.
  • the first and second microphones have associated first and second audio channels, respectively.
  • the sampling rate may be set to 16 kHz, and the sampling precision is 16 bits.
  • the audio signals are captured in a large open office, surrounded by background noise.
  • the speaker first speaks facing the front face of the mobile phone, and then speaks facing the back face of the mobile phone.
  • the time domain signals as captured are shown in FIG. 3A .
  • the X-axis denotes time and the Y-axis coordinate denotes the signal amplitudes.
  • the signal amplitudes for the first and second microphones are shown by plots 301 and 302 , respectively.
  • the Hamming window is used as the window function
  • the frame length is 20 ms
  • the inter-frame overlap is 50%
  • the zero padding is performed at the end of every frame of the audio signal
  • the smoothing factor of the power spectrum α j is set to 0.8
  • the length of time interval T is selected as 2 s.
  • the processed results are shown in FIG. 3B .
  • as shown by the plot 303 , when the speaker faces the front face of the mobile phone (before the time instant T 1 in FIG. 3A ), the signal amplitude for the first audio channel is higher than that of the second audio channel.
  • the associated first microphone (MIC- 1 ) will function as the primary microphone.
  • after the time instant T 1 , when the speaker faces the back face, the second microphone (MIC- 2 ) will be changed to function as the primary microphone while the first microphone will instead function as the auxiliary one.
  • Referring to FIG. 4 , a block diagram illustrating an apparatus 400 for adaptive audio capturing in accordance with an exemplary embodiment of the present invention is shown.
  • the apparatus 400 may be configured to carry out methods 100 and/or 200 as described above.
  • the apparatus 400 comprises an obtaining unit 401 configured to obtain an audio signal through an audio channel associated with an audio capturing element on a user terminal.
  • the apparatus 400 further comprises a calculating unit 402 configured to calculate a signal amplitude for the audio channel by processing the obtained audio signal.
  • the apparatus 400 comprises a determining unit 403 configured to determine a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
  • the apparatus 400 may further comprise: a voice activity detecting unit configured to detect whether a voice activity exists on one or more audio channels of the user terminal, wherein the determining unit is configured to determine the functionality of the audio capturing element if the voice activity exists on the one or more audio channels.
  • the calculating unit 402 may comprise at least one of: a time domain amplitude calculating unit configured to calculate a time domain amplitude of the obtained audio signal; and a frequency domain amplitude calculating unit configured to calculate a frequency domain amplitude of the obtained audio signal.
  • the calculating unit 402 may comprise an average amplitude calculating unit configured to calculate an average signal amplitude for the audio channel within a time interval.
  • the further signal amplitude may comprise a further average signal amplitude for the at least one further audio channel within the time interval
  • the determining unit 403 may comprise an average amplitude comparing unit configured to compare the average signal amplitude with the further average signal amplitude.
  • the user terminal has a primary audio channel.
  • the calculating unit 402 may comprise a relative amplitude calculating unit configured to calculate a relative amplitude of the audio channel with respect to the primary audio channel, and the further signal amplitude comprises a further relative amplitude of the at least one further audio channel with respect to the primary audio channel.
  • the determining unit 403 may comprise a relative amplitude comparing unit configured to compare the relative amplitude with the further relative amplitude.
  • the determining unit 403 may comprise a classifying unit configured to classify the audio capturing element into a primary group of audio capturing elements used to capture foreground audio signals or an auxiliary group of audio capturing elements used to capture background audio signals.
  • FIG. 5 is a block diagram illustrating a user terminal in accordance with an exemplary embodiment of the present invention.
  • the user terminal 500 may be embodied as a mobile phone. It should be understood, however, that a mobile phone is merely illustrative of one type of apparatus that would benefit from embodiments of the present invention and, therefore, should not be taken to limit the scope of embodiments of the present invention.
  • the user terminal 500 includes an antenna(s) 512 in operable communication with a transmitter 514 and a receiver 516 .
  • the user terminal 500 further includes at least one processor or controller 520 .
  • the controller 520 may comprise a digital signal processor, a microprocessor, and various analog-to-digital converters, digital-to-analog converters, and other support circuits. Control and information processing functions of the user terminal 500 are allocated between these devices according to their respective capabilities.
  • the user terminal 500 also comprises a user interface including output devices such as a ringer 522 , an earphone or speaker 524 , a plurality of microphones 526 as audio capturing elements and a display 528 , and user input devices such as a keyboard 530 , a joystick or other user input interface, all of which are coupled to the controller 520 .
  • the user terminal 500 further includes a battery 534 , such as a vibrating battery pack, for powering various circuits that are required to operate the user terminal 500 , as well as optionally providing mechanical vibration as a detectable output.
  • the user terminal 500 includes a media capturing element, such as a camera, video and/or audio module, in communication with the controller 520 .
  • the media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission.
  • the camera module 536 may include a digital camera capable of forming a digital image file from a captured image.
  • the user terminal 500 may further include a universal identity module (UIM) 538 .
  • the UIM 538 is typically a memory device having a processor built in.
  • the UIM 538 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), etc.
  • the UIM 538 typically stores information elements related to a subscriber.
  • the user terminal 500 may be equipped with at least one memory.
  • the user terminal 500 may include volatile memory 540 , such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data.
  • the user terminal 500 may also include other non-volatile memory 542 , which can be embedded and/or may be removable.
  • the non-volatile memory 542 can additionally or alternatively comprise an EEPROM, flash memory or the like.
  • the memories can store any of a number of pieces of information, programs, and data used by the user terminal 500 to implement the functions of the user terminal 500 .
  • the memories may store a program of computer executable code, which may be configured, with the controller 520 , to cause the user terminal 500 to at least perform the steps of methods 100 and/or 200 as described above.
  • the functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed.
  • the optimal audio capturing element may be adaptively determined as the primary element, while one or more other audio capturing elements may function as reference audio capturing elements accordingly. In this way, the quality of captured audio signals may be maintained at a high level in various conditions in use.
  • the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of the exemplary embodiments of the present invention are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
  • the apparatus 400 described above may be implemented as hardware, software/firmware, or any combination thereof.
  • one or more units in the apparatus 400 may be implemented as software modules.
  • some or all of the units may be implemented using hardware modules like integrated circuits (ICs), application specific integrated circuits (ASICs), system-on-chip (SOCs), field programmable gate arrays (FPGAs), and the like.
  • the blocks shown in FIGS. 1-2 may be viewed as method steps, and/or as operations that result from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function(s).
  • methods 100 and/or 200 may be implemented by computer program codes contained in a computer program tangibly embodied on a machine readable medium.
  • a machine readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine readable medium may be a machine readable signal medium or a machine readable storage medium.
  • a machine readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • Computer program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • the program code may execute entirely on a computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server.

Abstract

Embodiments of the present invention relate to adaptive audio capturing. A method for adaptive audio capturing is disclosed. The method comprises obtaining an audio signal through an audio channel associated with an audio capturing element on a user terminal; calculating a signal amplitude for the audio channel by processing the obtained audio signal; and determining a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal. A corresponding apparatus, computer program product, and user terminal are also disclosed.

Description

The present application is a National Stage Application of International Application No. PCT/CN2012/087963, filed on Dec. 31, 2012, and entitled “ADAPTIVE AUDIO CAPTURING”, the entire disclosure of which is incorporated herein by reference.
TECHNICAL FIELD
Embodiments of the present invention generally relate to audio processing, and more particularly, to a method, apparatus, computer program, and user terminal for adaptive audio capturing.
BACKGROUND
A user terminal like a mobile phone, a tablet computer or a personal digital assistant (PDA) may have a plurality of audio capturing elements, such as multiple microphones. Such a configuration has become popular in the past several years. For example, commercially available smart mobile phones are usually equipped with two or more microphones. Generally speaking, among the plurality of audio capturing elements on a single user terminal, some are designed to function as primary audio capturing elements and used to capture, for example, the foreground audio signals, while the other audio capturing elements may function as reference or auxiliary ones and be used to capture, for example, the background audio signals. As an example, a microphone located in the lower part of a mobile phone is generally supposed to be capable of capturing high-quality voice signals from a speaker. Therefore, this microphone is usually used as a primary audio capturing element to capture the user's speech signal in a voice call. A microphone at another location may function as an auxiliary audio capturing element that may be used to capture the background noise for environmental noise estimation, noise suppression, and the like.
Those skilled in the art will appreciate that the spatial location of a user terminal with respect to the audio signal source and the surrounding environment has an impact on the effect of audio capturing. For example, in some situations, the originally designed primary audio capturing element might be blocked, or located on the side of the user terminal opposite to the audio signal source, rendering it incapable of capturing audio signals of high quality. However, in the prior art, an auxiliary or reference audio capturing element cannot be activated to function as a primary one in this situation, even if this element is now in a better or optimal position. In other words, the functionalities of the audio capturing elements on the user terminal are fixed at design and manufacturing time and cannot be changed or switched adaptively in use. As a result, the quality of audio capturing degrades.
In view of the foregoing, there is a need in the art for an audio capturing solution that may be adaptive to various conditions in use.
SUMMARY
In order to address the foregoing and other potential problems, embodiments of the present invention propose a method, apparatus, computer program, and user terminal for adaptive audio capturing.
In one aspect, embodiments of the present invention provide a method for adaptive audio capturing. The method comprises: obtaining an audio signal through an audio channel associated with an audio capturing element on a user terminal; calculating a signal amplitude for the audio channel by processing the obtained audio signal; and determining a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal. Other embodiments in this aspect include a corresponding computer program product.
In another aspect, embodiments of the present invention provide an apparatus for adaptive audio capturing. The apparatus comprises: an obtaining unit configured to obtain an audio signal through an audio channel associated with an audio capturing element on a user terminal; a calculating unit configured to calculate a signal amplitude for the audio channel by processing the obtained audio signal; and a determining unit configured to determine a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
In yet another aspect, embodiments of the present invention provide a user terminal. The user terminal comprises at least one processor; a plurality of audio capturing elements; and at least one memory coupled to the at least one processor and storing a program of computer executable instructions, the computer executable instructions configured, with the at least one processor, to cause the user terminal to at least perform according to the method outlined in the above paragraphs.
These and other optional embodiments of the present invention can be implemented to realize one or more of the following advantages. For a user terminal equipped with a plurality of audio capturing elements, by processing and analyzing the audio signals in real time, the functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed. For example, according to various factors, such as the relative position of the user terminal with respect to the audio signal source and/or the orientation of the user terminal itself, the optimal audio capturing element may be adaptively determined as the primary element, while one or more other audio capturing elements may function as reference audio capturing elements accordingly. In this way, the quality of captured audio signals may be maintained at a high level in various conditions in use.
Other features and advantages of embodiments of the present invention will also be understood from the following description of exemplary embodiments when read in conjunction with the accompanying drawings, which illustrate, by way of example, the spirit and principles of the present invention.
BRIEF DESCRIPTION OF THE DRAWINGS
The details of one or more embodiments of the present invention are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages of the invention will become apparent from the description, the drawings, and the claims, wherein:
FIG. 1 is a flowchart illustrating a method for adaptive audio capturing in accordance with an exemplary embodiment of the present invention;
FIG. 2 is a flowchart illustrating a method for adaptive audio capturing in accordance with another exemplary embodiment of the present invention;
FIGS. 3A and 3B are schematic diagrams illustrating an example of adaptive audio capturing in accordance with an exemplary embodiment of the present invention;
FIG. 4 is a block diagram illustrating an apparatus for adaptive audio capturing in accordance with an exemplary embodiment of the present invention; and
FIG. 5 is a block diagram illustrating a user terminal in accordance with an exemplary embodiment of the present invention.
Throughout the drawings, same or similar reference numbers indicate same or similar elements.
DETAILED DESCRIPTION
In general, embodiments of the present invention provide a method, apparatus, and computer program product for adaptive audio capturing. In accordance with embodiments of the present invention, for a user terminal equipped with a plurality of audio capturing elements, by processing and analyzing the audio signals in real time, the functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed. As such, the quality of captured audio signals may be maintained at a high level in various conditions in use.
Reference is first made to FIG. 1, where a flowchart illustrating a method 100 for adaptive audio capturing in accordance with an exemplary embodiment of the present invention is shown. As shown, after method 100 starts, an audio signal is obtained at step S101 through an audio channel associated with an audio capturing element on a user terminal. In accordance with embodiments of the present invention, the user terminal is equipped with a plurality of audio capturing elements. As used herein, the term “audio capturing element” refers to any suitable device that may be configured to capture, record, or otherwise obtain audio signals, such as microphones. Each audio capturing element is associated with an audio channel through which the audio signals captured by that audio capturing element may be passed to, for example, the processor or controller of the user terminal.
Method 100 then proceeds to step S103 where the signal amplitude for the audio channel is calculated by processing the obtained audio signal. In accordance with embodiments of the present invention, the signal amplitude for the audio channel may comprise any information indicating the magnitude of the audio signals on that channel. In some exemplary embodiments, the signal amplitude calculated at step S103 may comprise the signal magnitude in the time domain, which may be expressed by the root mean square value of the audio signal, for example. Alternatively or additionally, an amplitude in the frequency domain, like the spectrum amplitude and/or power spectrum of the obtained audio signal, may also be used as the signal amplitude. It will be appreciated that these are only some examples of signal amplitude and should not be construed as limiting the present invention. Any information capable of indicating the signal amplitude for an audio channel, whether currently known or developed in the future, may be used in connection with embodiments of the present invention. Specific examples in this regard will be detailed below with reference to FIG. 2.
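As a minimal sketch (not the patented implementation itself), the time-domain root mean square magnitude of one frame could be computed as follows; the representation of a frame as a plain sequence of sample values is an assumption:

```python
import math

def rms_amplitude(frame):
    """Root-mean-square amplitude of one frame of audio samples.

    `frame` is assumed to be a sequence of sample values (e.g. floats
    in [-1, 1]); a larger RMS indicates a louder channel.
    """
    return math.sqrt(sum(s * s for s in frame) / len(frame))
```

Comparing the RMS values of the frames captured on two channels then gives a simple time-domain indicator of which channel currently carries the stronger signal.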
Furthermore, in some situations, such as a voice call, the location of the audio signal source (e.g., a speaker) with respect to the audio capturing elements on the user terminal will generally remain stable for at least a certain period of time. Therefore, in some exemplary embodiments, the signal amplitude calculated at step S103 may comprise an average of the signal amplitudes accumulated over a given time interval. In these embodiments, the average signal amplitudes may be used to determine the functionality of the audio capturing elements for a next time interval, for example. Specific examples in this regard will be detailed below with reference to FIG. 2.
Next, at step S104, a functionality of the audio capturing element is determined based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal. As described above, in addition to the audio channel which is considered at steps S101 and S103, the user terminal is equipped with one or more other audio capturing elements, each of which is associated with a corresponding audio channel. The signal amplitudes for one or more of these audio channels may be calculated in a similar way as described above. In accordance with embodiments of the present invention, the signal amplitude for another audio channel may be calculated by method 100 or by a similar process associated with or dedicated to that audio channel.
The functionality of the audio capturing element may be determined based on the signal amplitude for the associated audio channel and the further signal amplitudes for one or more further audio channels on the same user terminal. Generally speaking, if the audio channel has a higher signal amplitude, the associated audio capturing element may be used as a primary element and configured to, for example, capture the foreground audio signals (e.g., the user's speech signal in a voice call). Conversely, if the audio channel has a lower signal amplitude, the associated audio capturing element may be used as an auxiliary or reference audio capturing element and configured to capture background audio signals for the purpose of noise estimation, for example.
Method 100 ends after step S104. By using method 100, the functionalities of multiple audio capturing elements may be determined adaptively according to the specific conditions in real time. For example, assume that a mobile phone has two microphones, one of which is a primary one for capturing the user's speech signal while the other is an auxiliary one for capturing background noise. If the original primary microphone is blocked by an object and the signal magnitude on the associated audio channel therefore degrades to a level below that of the audio channel associated with the original auxiliary microphone, then the functionalities of these two microphones can be swapped accordingly. That is, the original auxiliary element is changed to function as the primary audio capturing element, while the original primary audio capturing element may be changed to function as the auxiliary one or directly disabled.
Now a more specific example will be described with reference to FIG. 2, which shows a flowchart illustrating a method 200 for adaptive audio capturing in accordance with another exemplary embodiment of the present invention.
After method 200 starts, an audio signal is obtained at step S201 through an audio channel associated with an audio capturing element on a user terminal. Assuming that the user terminal comprises a plurality of microphones as audio capturing elements, the audio signal may be obtained from an audio channel associated with one of the microphones. Step S201 corresponds to step S101 described above with reference to FIG. 1 and will not be detailed here.
Next, method 200 proceeds to step S202 where voice activity detection (VAD) is performed to determine whether there is a voice activity on one or more audio channels of the user terminal. If not, method 200 returns to step S201. In other words, according to the embodiments shown in FIG. 2, the subsequent steps are performed only if a voice activity exists. This is primarily due to energy-saving concerns. That is, if there is no voice activity on the audio channel(s) of the user terminal, then it is unnecessary to calculate the signal amplitudes or to determine or change the functionalities of the audio capturing elements. In this way, the user terminal may operate more efficiently.
In accordance with embodiments of the present invention, various strategies may be utilized to implement the voice activity detection. In some exemplary embodiments, the voice activity detection may be performed only on a single audio channel. For example, the voice activity detection may be performed on the audio channel which is associated with the current primary audio capturing element on the user terminal. Alternatively, the voice activity detection may be performed on more than one audio channel. Only for the purpose of illustration, embodiments where the voice activity detection is performed on multiple audio channels will be described below.
In these embodiments, assume that the voice activity detection is to be performed on a set of audio channels, the number of which is denoted as L_sub, and which may comprise some or all of the audio channels on the user terminal. The voice activity state in each of the audio channels in the set may be detected. In general, the voice activity may be detected based on certain features of the audio signals, including but not limited to short-term energy, zero crossing rate, cepstral features, Itakura LPC spectrum distance, and/or periodical measurement of vowels. One or more of such features may be extracted from the audio signal and then compared with a predetermined threshold(s) to determine whether the current frame is a voice frame or a noise frame. Any suitable voice activity detection algorithm or process may be used in connection with embodiments of the present invention.
If a voice activity exists on the jth audio channel, then for the signal frame n, a voice activity state associated with the jth audio channel may be set as VAD_j(n) = 1, indicating that the current frame is a voice frame. Otherwise, the voice activity state associated with the jth channel is marked as VAD_j(n) = 0, indicating that the current frame is a noise frame. The current overall voice activity state for the user terminal may be calculated from the sum of VAD_j(n) over the audio channels in the set, which can be expressed as follows:
$$\overline{VAD}(n) = \begin{cases} 1, & \displaystyle\sum_{j=1}^{L_{sub}} VAD_j(n) \geq 1 \\[6pt] 0, & \displaystyle\sum_{j=1}^{L_{sub}} VAD_j(n) = 0 \end{cases}$$
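This OR-combination of per-channel flags can be sketched in a few lines; the representation of the per-frame channel states as a list of 0/1 flags is an assumption:

```python
def overall_vad(per_channel_vad):
    """Overall voice activity state for one frame: 1 if any monitored
    audio channel reports a voice frame (flag 1), 0 if all channels
    report noise frames, mirroring the sum-based rule above."""
    return 1 if sum(per_channel_vad) >= 1 else 0
```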
It will be appreciated that the voice activity detection is optional. The signal amplitudes for different audio channels may be calculated and compared with each other to determine the functionalities of the associated audio capturing elements, as will be described below at steps S203 and S204, without detecting the voice activities on the audio channels.
Returning to FIG. 2, method 200 then proceeds to step S203 where the signal amplitude for the audio channel is calculated by processing the obtained audio signal. As described above with reference to step S103 in FIG. 1, in accordance with embodiments of the present invention, the signal amplitude for the audio channel may comprise any information indicating the magnitude of the audio signals on that channel, including but not limited to the spectrum amplitude, the power spectrum, or any other information (either in the time domain or in the frequency domain) of the obtained audio signal. In the embodiments shown in FIG. 2, only for the purpose of illustration, the power spectrum will be described as the signal amplitude.
In order to calculate the power spectrum of the obtained audio signal, in some exemplary embodiments, the obtained audio signal may be processed on a frame-by-frame basis. A windowing operation may be applied to each frame of the audio signal, and the windowed signal is subjected to a discrete Fourier transform to obtain the spectrum of the frame, which may be denoted as X_j(n,k), where n is the sequence number of the frame and k is the index of the frequency point after the discrete Fourier transform. In some exemplary embodiments, X_j(n,k) may be calculated as follows:

$$X_j(n,k) = \sum_{m=-\infty}^{+\infty} x_j(m)\, w(nR - m)\, e^{-i 2\pi k m / N},$$

wherein R denotes the number of updating samples for the signal, N denotes the number of discrete Fourier transform points, and w(m) denotes a window function. In accordance with embodiments of the present invention, the window function may be any window function suitable for audio signal processing, such as the Hamming window, Hanning window, rectangular window, etc. The frame length may be within the range of 10-40 ms, for example, 20 ms.
In some exemplary embodiments, there may be an overlap between a frame and its preceding frame, and the amount of overlap may be selected according to the specific situation. Additionally, the discrete Fourier transform may be implemented through a Fast Fourier Transform (FFT). The FFT may be applied directly to the windowed audio signal. Alternatively, zero padding may be performed to enhance the frequency resolution and/or to meet the requirement that the FFT length be a power of two. For example, applying an N-point FFT will obtain spectrum values at N frequency points.
In some exemplary embodiments, the sampling rate F_s may be 16 kHz, the Hamming window may be selected, the frame length may be 20 ms, and the inter-frame overlap may be 50%. In these embodiments, each frame has 320 sampling points in total, and the number of updating samples R = 160. By padding zeros at the end of the audio signal, a total of 512 sampling points may be obtained. As such, an N-point FFT (N = 512) yields 512 frequency points. Based on the spectrum of the current frame of the audio signal and the power spectrum of the preceding frame, the power spectrum value of the current frame may be estimated as follows:

$$P_{X_j X_j}(n,k) = \alpha_j \cdot P_{X_j X_j}(n-1,k) + (1 - \alpha_j) \cdot |X_j(n,k)|^2,$$

where n denotes the sequence number of the current frame, j denotes the sequence number of the audio channel in consideration, P_{X_j X_j}(n,k) denotes the auto-power spectrum of that audio channel of the user terminal, α_j denotes the smoothing factor for the audio channel, which may be set within the range of 0 to 1, and |·| denotes the magnitude (modulus) operation.
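As a sketch (not the patented implementation itself), the windowing, zero-padded FFT, and recursive smoothing steps described above may be combined as follows. NumPy is assumed to be available for the FFT, and the framing and inter-frame overlap are assumed to be handled by the caller:

```python
import numpy as np

def smoothed_power_spectrum(frames, alpha=0.8, n_fft=512):
    """Recursively smoothed auto-power spectrum of one audio channel,
    following P(n,k) = alpha * P(n-1,k) + (1 - alpha) * |X(n,k)|^2.

    `frames` is an iterable of equal-length 1-D sample arrays.
    Each frame is Hamming-windowed and zero-padded to n_fft points.
    """
    p = np.zeros(n_fft)
    for frame in frames:
        windowed = frame * np.hamming(len(frame))
        spectrum = np.fft.fft(windowed, n=n_fft)  # zero-pads up to n_fft
        p = alpha * p + (1.0 - alpha) * np.abs(spectrum) ** 2
    return p
```

With the numerical example above (20 ms frames at 16 kHz, i.e. 320 samples), each frame would be zero-padded from 320 to 512 points before the FFT.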
It will be understood that the above describes just exemplary embodiments of calculating the power spectrum as the signal amplitude for the audio channel. Any other suitable processes or algorithms, whether currently known or developed in the future, may be used in connection with embodiments of the present invention to calculate the power spectrum of audio signals. Moreover, as described above, other information may be used to indicate the signal amplitude of an audio channel.
Furthermore, the user terminal may have a primary audio capturing element, and the audio channel associated with this primary audio capturing element may be referred to as the primary audio channel (denoted as j_m, for example). In these embodiments, at step S203, for any given audio channel of the user terminal, the relative signal amplitude of that audio channel with respect to the primary audio channel may be calculated, and may optionally be normalized. Such a relative signal amplitude indicates the amplitude difference between the primary channel j_m and another audio channel and may be used as the analysis criterion. Still considering the above exemplary embodiments where the power spectrum is used as the signal amplitude, the normalized relative signal amplitude of a channel j with respect to the primary channel j_m may be calculated as follows:
$$\lambda_j(n,k) = \frac{P_{X_j X_j}(n,k) - P_{X_{j_m} X_{j_m}}(n,k)}{P_{X_j X_j}(n,k) + P_{X_{j_m} X_{j_m}}(n,k)}, \quad j = 1, \ldots, L,$$

wherein −1 ≤ λ_j(n,k) ≤ 1. It can be seen that when P_{X_j X_j}(n,k) is far less than P_{X_{j_m} X_{j_m}}(n,k), λ_j(n,k) approaches −1; when P_{X_j X_j}(n,k) is far greater than P_{X_{j_m} X_{j_m}}(n,k), λ_j(n,k) approaches 1; and when j = j_m, λ_j(n,k) = 0. The relative signal amplitudes for different audio channels may then be compared to make the decision at step S204, as will be detailed below.
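A minimal sketch of this normalized relative amplitude follows; the `eps` safeguard against division by zero is an addition not present in the formula above:

```python
import numpy as np

def relative_amplitude(p_j, p_jm, eps=1e-12):
    """Normalized relative amplitude lambda_j(n,k) of channel j with
    respect to the primary channel j_m, given their auto-power spectra.

    `eps` (an added safeguard, not part of the original formula) avoids
    division by zero when both spectra vanish. The result lies in
    [-1, 1]: near -1 when channel j is much weaker than the primary,
    near 1 when much stronger, and 0 when j is the primary itself.
    """
    p_j = np.asarray(p_j, dtype=float)
    p_jm = np.asarray(p_jm, dtype=float)
    return (p_j - p_jm) / (p_j + p_jm + eps)
```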
Additionally or alternatively, at step S203, the average signal amplitude for an audio channel within a time interval may be calculated. It can be appreciated that the spatial location of the audio source with respect to the user terminal and its audio capturing elements is unlikely to change much within a short time period. Therefore, it is possible to improve the decision accuracy at the following step by detecting and analyzing the channel condition within a certain time interval. Only for the purpose of illustration, in the exemplary embodiments where the voice activity detection is performed and the relative power spectrum value is calculated as the signal amplitude, the average signal amplitude for an audio channel j may be calculated as follows:

$$\overline{\lambda_j(t)} = \sum_{n \in T_{VAD}}\; \sum_{k=k_1}^{k_2} \lambda_j(n,k),$$

wherein T denotes the length of the time interval, which may range from 1 to 10 s and is typically 2 s in some exemplary embodiments; n ∈ T_VAD denotes each frame having a voice activity within the time interval T; and k_1 and k_2 are the lower and upper bounds of a frequency band, respectively. The frequency band may be the one where voice energy is mainly concentrated. For example, if the sampling rate F_S = 16 kHz and the number of FFT points N = 512, then the frequency band may be 200-3500 Hz. Accordingly, k_1 = floor(200 / (F_S / N)) = 6 and k_2 = floor(3500 / (F_S / N)) = 112.
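The bin-bound computation and the accumulation over voiced frames can be sketched as follows; the representation of each frame's λ values as a plain list, and of the VAD states as 0/1 flags, are assumptions:

```python
import math

def band_limits(f_low, f_high, fs, n_fft):
    """Frequency-bin bounds k1, k2 for a band [f_low, f_high] Hz,
    per k = floor(f / (fs / n_fft))."""
    step = fs / n_fft
    return math.floor(f_low / step), math.floor(f_high / step)

def interval_amplitude(lam_frames, vad_flags, k1, k2):
    """Accumulate lambda_j(n,k) over the voiced frames (vad flag 1) of
    a time interval and over bins k1..k2 inclusive, yielding the
    per-interval statistic described above."""
    return sum(
        sum(lam[k1:k2 + 1])
        for lam, flag in zip(lam_frames, vad_flags)
        if flag
    )
```

With F_S = 16 kHz and N = 512, `band_limits(200, 3500, 16000, 512)` reproduces the bounds k_1 = 6 and k_2 = 112 given above.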
It will be understood that the above exemplary embodiments, either considered alone or in combination, should not be construed as limitations of the present invention. Any information capable of indicating the signal amplitude for the audio channel, and any combination thereof, may be calculated at step S203.
Next, method 200 proceeds to step S204, where the functionalities of the audio capturing elements may be determined based on the signal amplitude for the current audio channel and a further signal amplitude for one or more other audio channels of the user terminal. Generally speaking, the functionalities of the audio capturing elements are determined based on their audio capturing capabilities in the specific situation. The audio capturing elements with higher capability in the current situation will play a major role in audio capturing.
For example, in the embodiments where the average relative power spectrum values within the time interval T are calculated for one or more audio channels, these values may be ranked in descending order:

$$\operatorname{sort}_j\, \overline{\lambda_j(t)} = \left\{ \overline{\lambda_{a_1}},\ \overline{\lambda_{a_2}},\ \ldots,\ \overline{\lambda_{a_L}} \right\},$$

wherein {a_1, a_2, ..., a_L} is obtained by reordering {1, 2, ..., j, ..., L}. Then the audio capturing elements associated with the top-ranked M audio channels, which are expected to have higher capturing capabilities in the current situation, may be classified into the primary group of audio capturing elements used to capture foreground audio signals (e.g., the voice signal from the speaker) in the next time interval. Conversely, the audio capturing elements associated with lower-ranked audio channels may be classified into the auxiliary group of audio capturing elements used to capture background audio signals (e.g., the background noise) in the next time interval. In this way, the functionalities of the audio capturing elements on the user terminal may be set adaptively and dynamically according to the current situation.
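A sketch of this descending ranking and primary/auxiliary grouping follows; the representation of the per-interval statistics as a dictionary mapping channel index to score is an assumed, hypothetical format:

```python
def assign_roles(scores, m=1):
    """Rank audio channels by their interval amplitude (descending) and
    classify the top m as primary, the rest as auxiliary.

    `scores` maps channel index -> accumulated relative amplitude for
    the elapsed time interval; the returned groups apply to the next
    interval, mirroring the scheme above.
    """
    ranked = sorted(scores, key=scores.get, reverse=True)
    return ranked[:m], ranked[m:]
```

For a dual-microphone terminal this reduces to a simple swap: whichever channel scored higher over the last interval becomes the primary for the next one.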
It will be appreciated that the decision at step S204 is not necessarily based on the average signal amplitude. In some alternative embodiments, the functionality may be determined based on the instantaneous state of the audio channels. For example, the calculation of the signal amplitude (step S203) may be performed periodically, and the instantaneous signal amplitudes for the different audio channels at the time instant when the calculation is performed may be compared to determine the functionalities of the audio capturing elements.
Now consider a specific example of a dual-microphone mobile phone. In this example, the mobile phone comprises a first microphone at the lower part of the front face of the phone and a second microphone at the top of the back face as the audio capturing elements. The first and second microphones are associated with first and second audio channels, respectively. In the embodiments where the average relative power spectrum values are calculated as the signal amplitude, the sampling rate may be set to 16 kHz with a sample depth of 16 bits. The audio signals are captured in a large open office surrounded by background noise. The speaker first speaks facing the front face of the mobile phone, and then speaks facing the back face of the mobile phone. The captured time domain signals are shown in FIG. 3A, where the X-axis denotes time and the Y-axis denotes the signal amplitudes. In FIG. 3A, the signal amplitudes for the first and second microphones are shown by plots 301 and 302, respectively.
In some exemplary embodiments, the Hamming window is used as the window function, the frame length is 20 ms, the inter-frame overlap is 50%, zero padding is performed at the end of every frame of the audio signal, and the FFT is performed with N = 512 points. Furthermore, the smoothing factor of the power spectrum is α_j = 0.8, the frequency thresholds are k_1 = 6 and k_2 = 112, and the length of the time interval T is selected as 2 s. The processed results are shown in FIG. 3B. As shown by the plot 303, when the speaker faces the front face of the mobile phone (before the time instant T1 in FIG. 3A), the signal amplitude for the first audio channel is higher than that for the second audio channel. As a result, the associated first microphone (MIC-1) functions as the primary microphone. When the speaker faces the back face of the mobile phone (after the time instant T1), due to the change of the signal amplitudes for the first and second audio channels, the second microphone (MIC-2) is changed to serve as the primary microphone while the first microphone instead functions as the auxiliary one.
Referring to FIG. 4, a block diagram illustrating an apparatus 400 for adaptive audio capturing in accordance with an exemplary embodiment of the present invention is shown. In accordance with embodiments of the present invention, the apparatus 400 may be configured to carry out methods 100 and/or 200 as described above.
As shown, the apparatus 400 comprises an obtaining unit 401 configured to obtain an audio signal through an audio channel associated with an audio capturing element on a user terminal. The apparatus 400 further comprises a calculating unit 402 configured to calculate a signal amplitude for the audio channel by processing the obtained audio signal. In addition, the apparatus 400 comprises a determining unit 403 configured to determine a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal.
In some exemplary embodiments, the apparatus 400 may further comprise: a voice activity detecting unit configured to detect whether a voice activity exists on one or more audio channels of the user terminal, wherein the determining unit is configured to determine the functionality of the audio capturing element if the voice activity exists on the one or more audio channels.
In some exemplary embodiments, the calculating unit 402 may comprise at least one of: a time domain amplitude calculating unit configured to calculate a time domain amplitude of the obtained audio signal; and a frequency domain amplitude calculating unit configured to calculate a frequency domain amplitude of the obtained audio signal.
In some exemplary embodiments, the calculating unit 402 may comprise an average amplitude calculating unit configured to calculate an average signal amplitude for the audio channel within a time interval. In these embodiments, the further signal amplitude may comprise a further average signal amplitude for the at least one further audio channel within the time interval, and the determining unit 403 may comprise an average amplitude comparing unit configured to compare the average signal amplitude with the further average signal amplitude.
In some exemplary embodiments, the user terminal has a primary audio channel. In these embodiments, the calculating unit 402 may comprise a relative amplitude calculating unit configured to calculate a relative amplitude of the audio channel with respect to the primary audio channel, and the further signal amplitude comprises a further relative amplitude of the at least one further audio channel with respect to the primary audio channel. The determining unit 403 may comprise a relative amplitude comparing unit configured to compare the relative amplitude with the further relative amplitude.
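As a minimal, non-limiting sketch of what the relative amplitude calculating unit computes, the relative amplitude of a channel with respect to the primary audio channel can be written as the ratio of the difference between the two intermediate signal amplitudes to their summation (as recited in claim 1); the `eps` guard against a zero denominator is an added assumption:

```python
def relative_amplitude(channel_amp, primary_amp, eps=1e-12):
    """Ratio of (A_channel - A_primary) to (A_channel + A_primary).

    Positive when the channel is louder than the primary audio
    channel, negative when quieter; values lie in (-1, 1) for
    non-negative amplitudes. eps is a hypothetical safeguard
    against a zero denominator, not part of the described method.
    """
    return (channel_amp - primary_amp) / (channel_amp + primary_amp + eps)
```

Because the ratio is normalized by the summation, channels of very different absolute levels can still be compared on a common bounded scale.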
In some exemplary embodiments, the determining unit 403 may comprise a classifying unit configured to classify the audio capturing element into a primary group of audio capturing elements used to capture foreground audio signals or an auxiliary group of audio capturing elements used to capture background audio signals.
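The classifying unit's decision can be sketched as follows; the single-primary policy (the loudest channel alone forms the primary group) is an illustrative assumption, since the embodiments equally allow groups containing several elements:

```python
def assign_roles(avg_amplitudes):
    """Split audio capturing elements into primary/auxiliary groups.

    avg_amplitudes maps an element name to its average (relative)
    signal amplitude over the time interval. The loudest element is
    assumed to face the foreground source and becomes the primary
    group; the rest capture background audio as the auxiliary group.
    """
    primary = max(avg_amplitudes, key=avg_amplitudes.get)
    return {"primary": [primary],
            "auxiliary": [m for m in avg_amplitudes if m != primary]}
```

For instance, `assign_roles({"MIC-1": 0.42, "MIC-2": 0.17})` groups MIC-1 as primary, matching the dual-microphone example of FIG. 3 before the time instant T1.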
It will be understood that various units in the apparatus 400 correspond to the steps of methods 100 and/or 200 described above. Accordingly, the optional units are not shown in FIG. 4 and the corresponding features are not detailed here.
FIG. 5 is a block diagram illustrating a user terminal in accordance with an exemplary embodiment of the present invention. In some embodiments, the user terminal 500 may be embodied as a mobile phone. It should be understood, however, that a mobile phone is merely illustrative of one type of apparatus that would benefit from embodiments of the present invention and, therefore, should not be taken to limit the scope of embodiments of the present invention.
The user terminal 500 includes an antenna(s) 512 in operable communication with a transmitter 514 and a receiver 516. The user terminal 500 further includes at least one processor or controller 520. For example, the controller 520 may comprise a digital signal processor, a microprocessor, and various analog-to-digital converters, digital-to-analog converters, and other support circuits. Control and information processing functions of the user terminal 500 are allocated between these devices according to their respective capabilities.
The user terminal 500 also comprises a user interface including output devices such as a ringer 522, an earphone or speaker 524, a plurality of microphones 526 as audio capturing elements and a display 528, and user input devices such as a keyboard 530, a joystick or other user input interface, all of which are coupled to the controller 520. The user terminal 500 further includes a battery 534, such as a vibrating battery pack, for powering various circuits that are required to operate the user terminal 500, as well as optionally providing mechanical vibration as a detectable output.
In some embodiments, the user terminal 500 includes a media capturing element, such as a camera, video and/or audio module, in communication with the controller 520. The media capturing element may be any means for capturing an image, video and/or audio for storage, display or transmission. For example, in an exemplary embodiment in which the media capturing element is a camera module 536, the camera module 536 may include a digital camera capable of forming a digital image file from a captured image.
When embodied as a mobile terminal, the user terminal 500 may further include a universal identity module (UIM) 538. The UIM 538 is typically a memory device having a processor built in. The UIM 538 may include, for example, a subscriber identity module (SIM), a universal integrated circuit card (UICC), a universal subscriber identity module (USIM), a removable user identity module (R-UIM), etc. The UIM 538 typically stores information elements related to a subscriber.
The user terminal 500 may be equipped with at least one memory. For example, the user terminal 500 may include volatile memory 540, such as volatile Random Access Memory (RAM) including a cache area for the temporary storage of data. The user terminal 500 may also include other non-volatile memory 542, which can be embedded and/or may be removable. The non-volatile memory 542 can additionally or alternatively comprise an EEPROM, flash memory or the like. The memories can store any of a number of pieces of information, programs, and data used by the user terminal 500 to implement its functions. For example, the memories may store programs of computer executable code which may be configured, with the controller 520, to cause the user terminal 500 to at least perform the steps of methods 100 and/or 200 as described above.
For the purpose of illustrating the spirit and principle of the present invention, some specific embodiments thereof have been described above. For a user terminal equipped with a plurality of audio capturing elements, by processing and analyzing the audio signals in real time, the functionalities of the multiple audio capturing elements on a single user terminal may be dynamically determined and changed. For example, according to various factors such as the relative position of the user terminal with respect to the audio signal source and/or the orientation of the user terminal itself, the optimal audio capturing element may be adaptively determined as the primary element, while one or more other audio capturing elements may function as reference audio capturing elements accordingly. In this way, the quality of captured audio signals may be maintained at a high level under various conditions of use.
In general, the various exemplary embodiments may be implemented in hardware or special purpose circuits, software, logic or any combination thereof. Some aspects may be implemented in hardware, while other aspects may be implemented in firmware or software which may be executed by a controller, microprocessor or other computing device. While various aspects of the exemplary embodiments of the present invention are illustrated and described as block diagrams, flowcharts, or using some other pictorial representation, it will be appreciated that the blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
For example, the apparatus 400 described above may be implemented as hardware, software/firmware, or any combination thereof. In some exemplary embodiments, one or more units in the apparatus 400 may be implemented as software modules. Alternatively or additionally, some or all of the units may be implemented using hardware modules like integrated circuits (ICs), application specific integrated circuits (ASICs), system-on-chip (SOCs), field programmable gate arrays (FPGAs), and the like. The scope of the present invention is not limited in that regard. Additionally, various blocks shown in FIGS. 1-2 may be viewed as method steps, and/or as operations that result from operation of computer program code, and/or as a plurality of coupled logic circuit elements constructed to carry out the associated function(s). For example, methods 100 and/or 200 may be implemented by computer program codes contained in a computer program tangibly embodied on a machine readable medium.
In the context of this specification, a machine readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine readable medium may be a machine readable signal medium or a machine readable storage medium. A machine readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of the machine readable storage medium would include an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or Flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Computer program code for carrying out methods of the present invention may be written in any combination of one or more programming languages. These computer program codes may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor of the computer or other programmable data processing apparatus, cause the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a computer, partly on the computer, as a stand-alone software package, partly on the computer and partly on a remote computer or entirely on the remote computer or server.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are contained in the above discussions, these should not be construed as limitations on the scope of any invention or of what may be claimed, but rather as descriptions of features that may be specific to particular embodiments of particular inventions. Certain features that are described in this specification in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable sub-combination.
Various modifications and adaptations to the foregoing exemplary embodiments of this invention may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings. Any and all modifications will still fall within the scope of the non-limiting and exemplary embodiments of this invention. Furthermore, other embodiments of the inventions set forth herein will come to mind to one skilled in the art to which these embodiments of the invention pertain, having the benefit of the teachings presented in the foregoing descriptions and the drawings.
Therefore, it will be appreciated that the embodiments of the invention are not to be limited to the specific embodiments disclosed and that modifications and other embodiments are intended to be included within the scope of the appended claims. Although specific terms are used herein, they are used in a generic and descriptive sense only and not for purposes of limitation.

Claims (12)

What is claimed is:
1. A method for adaptive audio capturing, the method comprising:
obtaining an audio signal through an audio channel associated with an audio capturing element on a user terminal;
detecting whether a voice activity exists on one or more audio channels of the user terminal; and
if the voice activity exists on the one or more audio channels, determining a functionality of the audio capturing element, which comprises:
calculating a signal amplitude for the audio channel by processing the obtained audio signal; and
determining the functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element on the user terminal;
wherein determining the functionality of the audio capturing element comprises: classifying the audio capturing element into a primary group of audio capturing elements used to capture foreground audio signals or an auxiliary group of audio capturing elements used to capture background audio signals;
the user terminal has a primary audio channel;
calculating the signal amplitude comprises calculating a relative amplitude of the audio channel with respect to the primary audio channel and averaging the relative amplitude of the audio channel within a time period to acquire an average relative amplitude of the audio channel;
the further signal amplitude comprises a further average relative amplitude of the at least one further audio channel with respect to the primary audio channel; and
determining the functionality of the audio capturing element comprises comparing the average relative amplitude with the further average relative amplitude;
wherein calculating the relative amplitude of the audio channel with respect to the primary audio channel comprises: acquiring a ratio of a difference between a first intermediate signal amplitude of the audio channel and a second intermediate signal amplitude of the primary audio channel to a summation of the first intermediate signal amplitude and the second intermediate signal amplitude, wherein the first intermediate signal amplitude and the second intermediate signal amplitude are obtained by processing the obtained audio signal.
2. The method according to claim 1,
wherein calculating the signal amplitude comprises calculating a time domain amplitude or a frequency domain amplitude by processing the obtained audio signal.
3. The method according to claim 2, wherein the time domain amplitude includes a root-mean-square value; the frequency domain amplitude includes a spectrum amplitude or a power spectrum.
4. The method according to claim 1,
wherein calculating the signal amplitude comprises calculating an average signal amplitude for the audio channel within a time interval;
the further signal amplitude comprises a further average signal amplitude for the at least one further audio channel within the time interval; and
determining the functionality of the audio capturing element comprises comparing the average signal amplitude with the further average signal amplitude.
5. The method according to claim 1,
wherein
the further signal amplitude comprises a further relative amplitude of the at least one further audio channel with respect to the primary audio channel; and
determining the functionality of the audio capturing element comprises comparing the relative amplitude of the audio channel with respect to the primary audio channel with the further relative amplitude.
6. The method according to claim 1,
wherein averaging the relative amplitude of the audio channel within the time period comprises: summating the relative amplitude of the audio channel over a set of frequencies within the time period.
7. An apparatus for adaptive audio capturing, the apparatus comprising:
an obtaining unit configured to obtain an audio signal through an audio channel associated with an audio capturing element of the obtaining unit;
a voice activity detecting unit configured to detect whether a voice activity exists on one or more audio channels associated with the obtaining unit;
a calculating unit configured to, if the voice activity exists on the one or more audio channels, calculate a signal amplitude for the audio channel by processing the obtained audio signal; and
a determining unit configured to determine a functionality of the audio capturing element based on the signal amplitude and a further signal amplitude for at least one further audio channel associated with at least one further audio capturing element of the obtaining unit;
wherein the determining unit comprises: a classifying unit configured to classify the audio capturing element into a primary group of audio capturing elements used to capture foreground audio signals or an auxiliary group of audio capturing elements used to capture background audio signals;
the apparatus has a primary audio channel;
the calculating unit comprises a relative amplitude calculating unit configured to calculate a relative amplitude of the audio channel with respect to the primary audio channel; and an averaging unit configured to average the relative amplitude of the audio channel within a time period to acquire an average relative amplitude of the audio channel;
the further signal amplitude comprises a further average relative amplitude of the at least one further audio channel with respect to the primary audio channel; and
the determining unit comprises an average relative amplitude comparing unit configured to compare the average relative amplitude with the further average relative amplitude;
wherein the relative amplitude calculating unit calculates the relative amplitude by acquiring a ratio of a difference between a first intermediate signal amplitude of the audio channel and a second intermediate signal amplitude of the primary audio channel to a summation of the first intermediate signal amplitude and the second intermediate signal amplitude, wherein the first intermediate signal amplitude and the second intermediate signal amplitude are obtained by processing the obtained audio signal.
8. The apparatus according to claim 7, wherein the calculating unit comprises at least one of:
a time domain amplitude calculating unit configured to calculate a time domain amplitude by processing the obtained audio signal; and
a frequency domain amplitude calculating unit configured to calculate a frequency domain amplitude by processing the obtained audio signal.
9. The apparatus according to claim 8, wherein the time domain amplitude includes a root-mean-square value; the frequency domain amplitude includes a spectrum amplitude or a power spectrum.
10. The apparatus according to claim 7,
wherein the calculating unit comprises an average amplitude calculating unit configured to calculate an average signal amplitude for the audio channel within a time interval;
the further signal amplitude comprises a further average signal amplitude for the at least one further audio channel within the time interval; and
the determining unit comprises an average amplitude comparing unit configured to compare the average signal amplitude with the further average signal amplitude.
11. The apparatus according to claim 7,
wherein
the further signal amplitude comprises a further relative amplitude of the at least one further audio channel with respect to the primary audio channel; and
the determining unit comprises a relative amplitude comparing unit configured to compare the relative amplitude of the audio channel with respect to the primary audio channel with the further relative amplitude.
12. The apparatus according to claim 7,
wherein the averaging unit is configured to summate the relative amplitude of the audio channel over a set of frequencies within the time period.
US14/758,026 2012-12-31 2012-12-31 Adaptive audio capturing Active US9692379B2 (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2012/087963 WO2014101156A1 (en) 2012-12-31 2012-12-31 Adaptive audio capturing

Publications (2)

Publication Number Publication Date
US20150341006A1 US20150341006A1 (en) 2015-11-26
US9692379B2 true US9692379B2 (en) 2017-06-27

Family

ID=49911154

Family Applications (1)

Application Number Title Priority Date Filing Date
US14/758,026 Active US9692379B2 (en) 2012-12-31 2012-12-31 Adaptive audio capturing

Country Status (4)

Country Link
US (1) US9692379B2 (en)
EP (1) EP2797080B1 (en)
CN (1) CN104025699B (en)
WO (1) WO2014101156A1 (en)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9685156B2 (en) * 2015-03-12 2017-06-20 Sony Mobile Communications Inc. Low-power voice command detector
KR20170024913A (en) * 2015-08-26 2017-03-08 삼성전자주식회사 Noise Cancelling Electronic Device and Noise Cancelling Method Using Plurality of Microphones
WO2017035771A1 (en) * 2015-09-01 2017-03-09 华为技术有限公司 Voice path check method, device, and terminal
WO2018127412A1 (en) * 2017-01-03 2018-07-12 Koninklijke Philips N.V. Audio capture using beamforming
CN110447237B (en) 2017-03-24 2022-04-15 雅马哈株式会社 Sound pickup device and sound pickup method
US10455319B1 (en) * 2018-07-18 2019-10-22 Motorola Mobility Llc Reducing noise in audio signals
CN108965600B * 2018-07-24 2021-05-04 OPPO (Chongqing) Intelligent Technology Co., Ltd. Voice pickup method and related product
CN112925502B * 2021-02-10 2022-07-08 Goertek Technology Co., Ltd. Audio channel switching equipment, method and device and electronic equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4241329A (en) * 1978-04-27 1980-12-23 Dialog Systems, Inc. Continuous speech recognition method for improving false alarm rates
US6125288A (en) * 1996-03-14 2000-09-26 Ricoh Company, Ltd. Telecommunication apparatus capable of controlling audio output level in response to a background noise
CN1732872 2005-06-24 2006-02-15 Tsinghua University Bidirectional digital modulating multi-channel artificial cochlea system
CN1794758 2004-12-22 2006-06-28 Broadcom Corp. Wireless telephone and method for processing audio signal in the wireless telephone
CN101595452 2006-12-22 2009-12-02 Step Labs, Inc. Near-field vector signal enhancement
US20100081487A1 (en) * 2008-09-30 2010-04-01 Apple Inc. Multiple microphone switching and configuration
US20120051548A1 (en) * 2010-02-18 2012-03-01 Qualcomm Incorporated Microphone array subset selection for robust noise reduction
US20130344788A1 (en) * 2012-06-22 2013-12-26 GM Global Technology Operations LLC Hvac system zone compensation for improved communication performance

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7464029B2 (en) * 2005-07-22 2008-12-09 Qualcomm Incorporated Robust separation of speech signals in a noisy environment
JP2009089133A (en) * 2007-10-01 2009-04-23 Yamaha Corp Sound emission and collection device
US8175291B2 (en) * 2007-12-19 2012-05-08 Qualcomm Incorporated Systems, methods, and apparatus for multi-microphone based speech enhancement
US8411880B2 (en) * 2008-01-29 2013-04-02 Qualcomm Incorporated Sound quality by intelligently selecting between signals from a plurality of microphones
US8041054B2 (en) * 2008-10-31 2011-10-18 Continental Automotive Systems, Inc. Systems and methods for selectively switching between multiple microphones
US20110058683A1 (en) * 2009-09-04 2011-03-10 Glenn Kosteva Method & apparatus for selecting a microphone in a microphone array
US20110317848A1 (en) * 2010-06-23 2011-12-29 Motorola, Inc. Microphone Interference Detection Method and Apparatus


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
International Search Report corresponding to Application No. PCT/CN2012/087963; Date of Mailing: Oct. 10, 2013.

Also Published As

Publication number Publication date
EP2797080B1 (en) 2016-09-28
US20150341006A1 (en) 2015-11-26
WO2014101156A1 (en) 2014-07-03
EP2797080A3 (en) 2015-04-22
CN104025699B (en) 2018-05-22
CN104025699A (en) 2014-09-03
EP2797080A2 (en) 2014-10-29


Legal Events

Date Code Title Description
AS Assignment

Owner name: SPREADTRUM COMMUNICATIONS (SHANGHAI) CO., LTD., CH

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JIANG, BIN;WU, SHENG;LIN, FUHUEI;AND OTHERS;SIGNING DATES FROM 20150612 TO 20150616;REEL/FRAME:035915/0984

STCF Information on status: patent grant

Free format text: PATENTED CASE

MAFP Maintenance fee payment

Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

Year of fee payment: 4