USRE48371E1 - Microphone array system - Google Patents

Microphone array system

Info

Publication number
USRE48371E1
Authority
US
United States
Prior art keywords
sound
target sound
signals
sound signal
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active, expires
Application number
US16/052,623
Inventor
Manli Zhu
Qi Li
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vocalife LLC
Original Assignee
Vocalife LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=45870681&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=USRE48371(E1). “Global patent litigation dataset” by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Application filed by Vocalife LLC filed Critical Vocalife LLC
Priority to US16/052,623 priority Critical patent/USRE48371E1/en
Assigned to VOCALIFE LLC reassignment VOCALIFE LLC ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: LI, QI, ZHU, MANLI
Application granted granted Critical
Publication of USRE48371E1 publication Critical patent/USRE48371E1/en
Legal status: Active (expiration adjusted)

Classifications

    • G01S3/8055 Systems for determining direction or deviation from a predetermined direction, using adjustment of real or effective orientation of the directivity characteristic of a transducer or transducer system, adjusting the orientation of a single directivity characteristic to produce a maximum or minimum signal
    • G01S3/801 Direction-finders for determining the direction from which ultrasonic, sonic or infrasonic waves are being received: details
    • G01S5/22 Position of source determined by co-ordinating a plurality of position lines defined by path-difference measurements
    • H04R1/406 Arrangements for obtaining a desired directional characteristic only by combining a number of identical microphones
    • H04R3/005 Circuits for combining the signals of two or more microphones
    • H04M3/568 Arrangements for connecting several subscribers to a common circuit, i.e. affording conference facilities: audio processing specific to telephonic conferencing, e.g. spatial distribution, mixing of participants
    • H04R2201/401 2D or 3D arrays of transducers
    • H04R2201/403 Linear arrays of transducers

Definitions

  • Microphones constitute an important element in today's speech acquisition devices.
  • Most hands-free speech acquisition devices, for example, mobile devices, lapel microphones, headsets, etc., convert sound into electrical signals using a microphone embedded within the speech acquisition device.
  • The paradigm of a single microphone often does not work effectively, because the microphone picks up many ambient noise signals in addition to the desired sound, particularly when the distance between the user and the microphone exceeds a few inches. There is therefore a need for a microphone system that operates under a variety of ambient noise conditions and places fewer constraints on the user with respect to the microphone, eliminating the need to wear the microphone or remain in close proximity to it.
  • A microphone array addresses this need by achieving directional gain in a preferred spatial direction while suppressing ambient noise from other directions.
  • Conventional microphone arrays are typically developed for applications such as radar and sonar and are generally not suitable for hands-free or handheld speech acquisition devices. The main reason is that the desired sound signal has an extremely wide bandwidth relative to its center frequency, rendering the narrowband techniques employed in conventional microphone arrays unsuitable.
  • To cover such a bandwidth, the array size needs to be vastly increased, making conventional microphone arrays large and bulky and precluding them from broader applications, for example, in mobile and handheld communication devices.
  • There is a need for a microphone array system that provides an effective response over a wide spectrum of frequencies while being unobtrusive in terms of size.
  • The term “target sound signal” refers to a sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced.
  • A microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit is provided. The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors.
  • The array of sound sensors is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors.
  • The array of sound sensors, herein referred to as a “microphone array”, receives sound signals from multiple disparate sound sources.
  • The method disclosed herein can be applied to a microphone array with an arbitrary number of sound sensors having, for example, an arbitrary two dimensional (2D) configuration.
  • The sound signals received by the sound sensors in the microphone array comprise the target sound signal from the target sound source among the disparate sound sources, and ambient noise signals.
  • The sound source localization unit estimates a spatial location of the target sound signal from the received sound signals, for example, using a steered response power-phase transform.
  • The adaptive beamforming unit performs adaptive beamforming for steering a directivity pattern of the microphone array in a direction of the spatial location of the target sound signal.
  • The adaptive beamforming unit thereby enhances the target sound signal from the target sound source and partially suppresses the ambient noise signals.
  • The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal received from the target sound source.
  • When the target sound source is in the same plane as the microphone array, a delay between each of the sound sensors and an origin of the microphone array is determined as a function of the distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a reference axis, and an azimuth angle between the reference axis and the target sound signal.
  • When the target sound source is in a three dimensional plane, the delay between each of the sound sensors and the origin of the microphone array is determined as a function of the distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a first reference axis, an elevation angle between a second reference axis and the target sound signal, and an azimuth angle between the first reference axis and the target sound signal.
  • This method of determining the delay enables beamforming for an arbitrary number of sound sensors and multiple arbitrary microphone array configurations. The delay is determined, for example, in terms of the number of samples. Once the delay is determined, the microphone array can be aligned to enhance the target sound signal from a specific direction.
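  • The closed-form delay expression itself is not reproduced in this extract. As a point of reference only, under a standard far-field (plane-wave) assumption and with the symbols used in the figures below (a reconstruction, not the patent's own equation), the planar delay in samples takes the form

        \tau_n = \frac{f_s \, d_n \cos(\theta - \Phi_n)}{c}

    where d_n is the distance of sound sensor n from the origin, \Phi_n its predefined angle from the reference axis, \theta the azimuth angle of the target sound signal, c the speed of sound, and f_s the sampling frequency.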
  • The adaptive beamforming unit comprises a fixed beamformer, a blocking matrix, and an adaptive filter.
  • The fixed beamformer steers the directivity pattern of the microphone array in the direction of the spatial location of the target sound signal from the target sound source for enhancing the target sound signal, when the target sound source is in motion.
  • The blocking matrix feeds the ambient noise signals to the adaptive filter by blocking the target sound signal from the target sound source.
  • The adaptive filter adaptively filters the ambient noise signals in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources.
  • The fixed beamformer performs fixed beamforming, for example, by filtering and summing output sound signals from the sound sensors.
  • The adaptive filtering comprises sub-band adaptive filtering.
  • The adaptive filter comprises an analysis filter bank, an adaptive filter matrix, and a synthesis filter bank.
  • The analysis filter bank splits the enhanced target sound signal from the fixed beamformer and the ambient noise signals from the blocking matrix into multiple frequency sub-bands.
  • The adaptive filter matrix adaptively filters the ambient noise signals in each of the frequency sub-bands in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources.
  • The synthesis filter bank synthesizes a full-band sound signal using the frequency sub-bands of the enhanced target sound signal.
  • The adaptive beamforming unit further comprises an adaptation control unit for detecting the presence of the target sound signal and adjusting a step size for the adaptive filtering in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources.
  • The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal from the target sound source.
  • The noise reduction unit performs noise reduction, for example, using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm.
  • The noise reduction unit performs noise reduction in the multiple frequency sub-bands employed for sub-band adaptive beamforming by the analysis filter bank of the adaptive beamforming unit.
  • The microphone array system disclosed herein, comprising the microphone array with an arbitrary number of sound sensors positioned in arbitrary configurations, can be implemented in handheld devices, for example, the iPad® of Apple Inc., the iPhone® of Apple Inc., smart phones, tablet computers, laptop computers, etc.
  • The microphone array system disclosed herein can further be implemented in conference phones, video conferencing applications, or any device or equipment that needs better speech inputs.
  • FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals.
  • FIG. 2 illustrates a system for enhancing a target sound signal from multiple sound signals.
  • FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array having N sound sensors arbitrarily distributed on a circle.
  • FIG. 4 exemplarily illustrates a graphical representation of a filter-and-sum beamforming algorithm for determining output of the microphone array having N sound sensors.
  • FIG. 5 exemplarily illustrates the distances between an origin of the microphone array and sound sensors M1 and M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis.
  • FIG. 6A exemplarily illustrates a table showing the distance of each sound sensor in a circular microphone array configuration from the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.
  • FIG. 6B exemplarily illustrates a table showing the relationship between the position of each sound sensor in the circular microphone array configuration and its distance to the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.
  • FIG. 7A exemplarily illustrates a graphical representation of a microphone array, when the target sound source is in a three dimensional plane.
  • FIG. 7B exemplarily illustrates a table showing the delay between each sound sensor in a circular microphone array configuration and the origin of the microphone array, when the target sound source is in a three dimensional plane.
  • FIG. 7C exemplarily illustrates a three dimensional working space of the microphone array, where the target sound signal is incident at an elevation angle φ ± δφ, where φ is a specific angle and δφ is a variable representing the elevation angle.
  • FIG. 9A exemplarily illustrates a graph showing the value of the steered response power-phase transform for every 10°.
  • FIG. 9B exemplarily illustrates a graph representing the estimated target sound signal from the target sound source.
  • FIG. 10 exemplarily illustrates a system for performing adaptive beamforming by an adaptive beamforming unit.
  • FIG. 11 exemplarily illustrates a system for sub-band adaptive filtering.
  • FIG. 12 exemplarily illustrates a graphical representation showing the performance of a perfect reconstruction filter bank.
  • FIG. 13 exemplarily illustrates a block diagram of a noise reduction unit that performs noise reduction using a Wiener-filter based noise reduction algorithm.
  • FIG. 14 exemplarily illustrates a hardware implementation of the microphone array system.
  • FIGS. 15A-15C exemplarily illustrate a conference phone comprising an eight-sensor microphone array.
  • FIG. 16A exemplarily illustrates a layout of an eight-sensor microphone array for a conference phone.
  • FIG. 16B exemplarily illustrates a graphical representation of eight spatial regions to which the eight-sensor microphone array of FIG. 16A responds.
  • FIGS. 16C-16D exemplarily illustrate computer simulations showing the steering of the directivity patterns of the eight-sensor microphone array of FIG. 16A in the directions of 15° and 60° respectively, in the frequency range 300 Hz to 5 kHz.
  • FIGS. 16E-16L exemplarily illustrate graphical representations showing the directivity patterns of the eight-sensor microphone array of FIG. 16A in each of the eight spatial regions, where each directivity pattern is an average response from 300 Hz to 5000 Hz.
  • FIG. 17A exemplarily illustrates a graphical representation of four spatial regions to which a four-sensor microphone array for a wireless handheld device responds.
  • FIGS. 17B-17I exemplarily illustrate computer simulations showing the directivity patterns of the four-sensor microphone array of FIG. 17A with respect to azimuth and frequency.
  • FIGS. 18A-18B exemplarily illustrate a microphone array configuration for a tablet computer.
  • FIG. 18C exemplarily illustrates an acoustic beam formed using the microphone array configuration of FIGS. 18A-18B according to the method and system disclosed herein.
  • FIGS. 18D-18G exemplarily illustrate graphs showing processing results of the adaptive beamforming unit and the noise reduction unit for the microphone array configuration of FIG. 18B , in both a time domain and a spectral domain for the tablet computer.
  • FIGS. 19A-19F exemplarily illustrate tables showing different microphone array configurations and the corresponding values of the delay τn for the sound sensors in each of the microphone array configurations.
  • FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals.
  • The term “target sound signal” refers to a desired sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced.
  • The method disclosed herein provides 101 a microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit.
  • The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors.
  • The microphone array system disclosed herein employs the array of sound sensors positioned in an arbitrary configuration, the sound source localization unit, the adaptive beamforming unit, and the noise reduction unit for enhancing a target sound signal by acoustic beamforming in the direction of the target sound signal in the presence of ambient noise signals.
  • The array of sound sensors, herein referred to as a “microphone array”, comprises multiple or an arbitrary number of sound sensors, for example, microphones, operating in tandem.
  • The microphone array refers to an array of an arbitrary number of sound sensors positioned in an arbitrary configuration.
  • The sound sensors are transducers that detect sound and convert the sound into electrical signals.
  • The sound sensors are, for example, condenser microphones, piezoelectric microphones, etc.
  • The sound sensors receive 102 sound signals from multiple disparate sound sources and directions.
  • The target sound source that emits the target sound signal is one of the disparate sound sources.
  • The term “sound signals” refers to composite sound energy from multiple disparate sound sources in the environment of the microphone array.
  • The sound signals comprise the target sound signal from the target sound source and the ambient noise signals.
  • The sound sensors are positioned in an arbitrary planar configuration, herein referred to as a “microphone array configuration”, for example, a linear configuration, a circular configuration, any arbitrarily distributed coplanar array configuration, etc.
  • The microphone array provides a higher response to the target sound signal received from a particular direction than to the sound signals from other directions.
  • A plot of the response of the microphone array versus the frequency and direction of arrival of the sound signals is referred to as a directivity pattern of the microphone array.
  • The sound source localization unit estimates 103 a spatial location of the target sound signal from the received sound signals.
  • The sound source localization unit estimates the spatial location of the target sound signal from the target sound source, for example, using a steered response power-phase transform as disclosed in the detailed description of FIG. 8.
  • The adaptive beamforming unit performs adaptive beamforming 104 by steering the directivity pattern of the microphone array in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals.
  • Beamforming refers to a signal processing technique used in the microphone array for directional signal reception, that is, spatial filtering. This spatial filtering is achieved by using adaptive or fixed methods. Spatial filtering refers to separating two signals with overlapping frequency content that originate from different spatial locations.
  • The noise reduction unit performs noise reduction by further suppressing 105 the ambient noise signals, thereby further enhancing the target sound signal.
  • The noise reduction unit performs the noise reduction, for example, using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm.
  • FIG. 2 illustrates a system 200 for enhancing a target sound signal from multiple sound signals.
  • The system 200, herein referred to as a “microphone array system”, comprises the array 201 of sound sensors positioned in an arbitrary configuration, the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207.
  • The array 201 of sound sensors is in operative communication with the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207.
  • The microphone array 201 is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors.
  • The microphone array 201 achieves directional gain in any preferred spatial direction and frequency band while suppressing signals from other spatial directions and frequency bands.
  • The sound sensors receive the sound signals comprising the target sound signal and ambient noise signals from multiple disparate sound sources, where one of the disparate sound sources is the target sound source that emits the target sound signal.
  • The sound source localization unit 202 estimates the spatial location of the target sound signal from the received sound signals.
  • The sound source localization unit 202 uses, for example, a steered response power-phase transform for estimating the spatial location of the target sound signal from the target sound source.
  • The adaptive beamforming unit 203 steers the directivity pattern of the microphone array 201 in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals.
  • The adaptive beamforming unit 203 comprises a fixed beamformer 204, a blocking matrix 205, and an adaptive filter 206 as disclosed in the detailed description of FIG. 10.
  • The fixed beamformer 204 performs fixed beamforming by filtering and summing output sound signals from each of the sound sensors in the microphone array 201 as disclosed in the detailed description of FIG. 4.
  • The adaptive filter 206 is implemented as a set of sub-band adaptive filters.
  • The adaptive filter 206 comprises an analysis filter bank 206a, an adaptive filter matrix 206b, and a synthesis filter bank 206c as disclosed in the detailed description of FIG. 11.
  • The noise reduction unit 207 further suppresses the ambient noise signals for further enhancing the target sound signal.
  • The noise reduction unit 207 is, for example, a Wiener-filter based noise reduction unit, a spectral subtraction noise reduction unit, an auditory transform based noise reduction unit, or a model based noise reduction unit.
  • FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array 201 having N sound sensors 301 arbitrarily distributed on a circle 302 with a diameter “d”, where “N” refers to the number of sound sensors 301 in the microphone array 201 .
  • The sound sensor 301 M0 is positioned at an acute angle Φ0 from the Y-axis; the sound sensor 301 M1 is positioned at an acute angle Φ1 from the Y-axis; the sound sensor 301 M2 is positioned at an acute angle Φ2 from the Y-axis; and the sound sensor 301 M3 is positioned at an acute angle Φ3 from the Y-axis.
  • A filter-and-sum beamforming algorithm determines the output “y” of the microphone array 201 having N sound sensors 301 as disclosed in the detailed description of FIG. 4.
  • FIG. 4 exemplarily illustrates a graphical representation of the filter-and-sum beamforming algorithm for determining the output of the microphone array 201 having N sound sensors 301.
  • The microphone array configuration is arbitrary in a two dimensional plane, for example, a circular array configuration where the sound sensors 301 M0, M1, M2, . . . , MN−1 of the microphone array 201 are arbitrarily positioned on a circle 302.
  • The sound signals received by each of the sound sensors 301 in the microphone array 201 are inputs to the microphone array 201.
  • The adaptive beamforming unit 203 employs the filter-and-sum beamforming algorithm, which applies independent weights to each of the inputs to the microphone array 201 such that the directivity pattern of the microphone array 201 is steered to the spatial location of the target sound signal as determined by the sound source localization unit 202.
  • The spatial directivity pattern H(ω, θ) for the target sound signal from angle θ with normalized frequency ω is defined in the filter-and-sum form (the equation is reconstructed here from the surrounding definitions) as

        H(\omega, \theta) = \sum_{n=0}^{N-1} W_n(e^{j\omega}) \, e^{-j\omega \tau_n(\theta)}

    where X is the signal received at the origin of the circular microphone array 201 (the signal at sound sensor n being X delayed by τn(θ)), and W is the frequency response of the real-valued finite impulse response (FIR) filter w.
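  • As an illustration of how such a directivity pattern can be evaluated numerically, the sketch below computes |H(ω, θ)| for a circular array under a far-field delay model; the geometry, sampling rate, speed of sound, and single-tap weights are illustrative assumptions rather than values from the patent.

        import numpy as np

        C = 343.0   # speed of sound in m/s (assumed)
        FS = 8000   # sampling frequency in Hz (the rate used in the document's example)

        def sensor_delays(d, phi, theta):
            # Per-sensor delay in samples for a far-field source at azimuth theta:
            # d are sensor distances from the origin (m), phi the predefined sensor
            # angles from the reference axis (rad).
            return FS * d * np.cos(theta - phi) / C

        def directivity(weights, d, phi, theta, omega):
            # |H(omega, theta)| for a filter-and-sum beamformer, simplified to one
            # complex weight per sensor (a single-tap stand-in for the FIR filters w).
            tau = sensor_delays(d, phi, theta)
            return np.abs(np.sum(weights * np.exp(-1j * omega * tau)))

        # Example: four sensors evenly spaced on a circle of radius 5.08 cm (assumed),
        # steered toward 60 degrees at 1 kHz (omega is the normalized frequency).
        phi = np.deg2rad([0.0, 90.0, 180.0, 270.0])
        d = np.full(4, 0.0508)
        omega = 2 * np.pi * 1000 / FS
        w = np.exp(1j * omega * sensor_delays(d, phi, np.deg2rad(60)))
        print(directivity(w, d, phi, np.deg2rad(60), omega))   # peak response: 4.0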
  • FIG. 5 exemplarily illustrates the distances between an origin of the microphone array 201 and the sound sensors 301 M1 and M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis.
  • The microphone array system 200 disclosed herein can be used with an arbitrary directivity pattern for arbitrarily distributed sound sensors 301.
  • The parameter that must be defined to obtain the beamformer coefficients is the value of the delay τn for each sound sensor 301.
  • To determine τn, an origin or reference point of the microphone array 201 is first defined; the distance dn between each sound sensor 301 and the origin is then measured, followed by the angle Φn of each sound sensor 301 biased from a vertical axis.
  • The angles between the Y-axis and the lines joining the origin to the sound sensors 301 M0, M1, M2, and M3 are Φ0, Φ1, Φ2, and Φ3, respectively.
  • The distances between the origin O and the sound sensor 301 M1, and between the origin O and the sound sensor 301 M3, when the incoming target sound signal from the target sound source is at an angle θ from the Y-axis, are denoted as τ1 and τ3, respectively.
  • The detailed description refers to a circular microphone array configuration; however, the scope of the microphone array system 200 disclosed herein is not limited to the circular microphone array configuration but may be extended to include a linear array configuration, an arbitrarily distributed coplanar array configuration, or a microphone array configuration with any arbitrary geometry.
  • FIG. 6A exemplarily illustrates a table showing the distance of each sound sensor 301 in a circular microphone array configuration from the origin of the microphone array 201, when the target sound source is in the same plane as that of the microphone array 201.
  • The distance measured in meters and the corresponding delay (τ) measured in number of samples are exemplarily illustrated in FIG. 6A.
  • The delay (τ) between each of the sound sensors 301 and the origin of the microphone array 201 is determined as a function of the distance (d) between each of the sound sensors 301 and the origin, a predefined angle (Φ) between each of the sound sensors 301 and a reference axis (Y) as exemplarily illustrated in FIG. 5, and an azimuth angle (θ) between the reference axis (Y) and the target sound signal.
  • The determined delay (τ) is represented in terms of number of samples.
  • The time delay between the signal received by the (n+1)th sound sensor 301 “xn” and the origin of the microphone array 201 is herein denoted as “t”, measured in seconds.
  • The sound signals received by the microphone array 201, which are in analog form, are converted into digital sound signals by sampling the analog sound signals at a particular frequency, for example, 8000 Hz; that is, the number of samples in each second is 8000, so a time delay of t seconds corresponds to a delay of t × 8000 samples.
  • FIG. 6B exemplarily illustrates a table showing the relationship between the position of each sound sensor 301 in the circular microphone array configuration and its distance to the origin of the microphone array 201, when the target sound source is in the same plane as that of the microphone array 201.
  • The distance measured in meters and the corresponding delay (τ) measured in number of samples are exemplarily illustrated in FIG. 6B.
  • This method of determining the delay (τ) enables beamforming for an arbitrary number of sound sensors 301 and multiple arbitrary microphone array configurations. Once the delay (τ) is determined, the microphone array 201 can be aligned to enhance the target sound signal from a specific direction.
  • FIGS. 7A-7C exemplarily illustrate an embodiment of the microphone array 201 when the target sound source is in a three dimensional plane.
  • The delay (τ) between each of the sound sensors 301 and the origin of the microphone array 201 is determined as a function of the distance (d) between each of the sound sensors 301 and the origin, a predefined angle (Φ) between each of the sound sensors 301 and a first reference axis (Y), an elevation angle (φ) between a second reference axis (Z) and the target sound signal, and an azimuth angle (θ) between the first reference axis (Y) and the target sound signal.
  • The determined delay (τ) is represented in terms of number of samples. The determination of the delay enables beamforming for an arbitrary number of sound sensors 301 and multiple arbitrary configurations of the microphone array 201.
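  • Under the same far-field assumption (again a reconstruction rather than the patent's own equation), the three dimensional delay scales the planar term by the sine of the elevation angle,

        \tau_n = \frac{f_s \, d_n \sin(\varphi) \cos(\theta - \Phi_n)}{c}

    which reduces to the two dimensional case when the target sound source lies in the plane of the microphone array (\varphi = 90°).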
  • FIG. 7A exemplarily illustrates a graphical representation of the microphone array 201, when the target sound source is in a three dimensional plane.
  • The target sound signal from the target sound source is received from the direction (φ, θ) with reference to the origin of the microphone array 201, where φ is the elevation angle and θ is the azimuth.
  • FIG. 7B exemplarily illustrates a table showing the delay between each sound sensor 301 in a circular microphone array configuration and the origin of the microphone array 201, when the target sound source is in a three dimensional plane.
  • The target sound source in a three dimensional plane emits a target sound signal from a spatial location (φ, θ).
  • The distances between the origin O and the sound sensors 301 M0, M1, M2, and M3, when the incoming target sound signal from the target sound source is at an angle (φ, θ) from the Z-axis and the Y-axis respectively, are denoted as τ0, τ1, τ2, and τ3, respectively.
  • FIG. 7C exemplarily illustrates a three dimensional working space of the microphone array 201, where the target sound signal is incident at an elevation angle φ ± δφ, where φ is a specific angle and δφ is a variable representing the elevation angle.
  • All four sound sensors 301 M0, M1, M2, and M3 receive the same target sound signal for 0° ≤ θ ≤ 360°.
  • The value of δφ is determined by the sample delay between each of the sound sensors 301 and the origin of the microphone array 201.
  • The adaptive beamforming unit 203 enhances sound from this range and suppresses sound signals from other directions, for example, S1 and S2, treating them as ambient noise signals.
  • In an embodiment, the beamforming is performed by a delay-sum method, as sketched below. In another embodiment, the beamforming is performed by a filter-sum method.
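  • A minimal sketch of the delay-sum method (illustrative names; the integer-delay and wrap-around simplifications are assumptions, not the patent's implementation):

        import numpy as np

        def delay_sum(frames, delays_in_samples):
            # Delay-sum beamforming: advance each sensor signal by its integer
            # sample delay toward the target direction, then average. np.roll
            # wraps at the edges; a real implementation would zero-pad instead.
            # The filter-sum method would apply a per-sensor FIR filter in
            # place of the pure delay.
            aligned = [np.roll(x, -int(round(tau)))
                       for x, tau in zip(frames, delays_in_samples)]
            return np.mean(aligned, axis=0)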
  • FIG. 8 exemplarily illustrates a method for estimating a spatial location of the target sound signal from the target sound source by the sound source localization unit 202 using a steered response power-phase transform (SRP-PHAT).
  • SRP-PHAT combines the advantages of existing sound source localization methods, for example, the time difference of arrival (TDOA) method and the steered response power (SRP) method.
  • The TDOA method performs time delay estimation of the sound signals relative to a pair of spatially separated sound sensors 301.
  • The estimated time delay is a function of both the location of the target sound source and the position of each of the sound sensors 301 in the microphone array 201; from these time delays, the location of the target sound source can be determined.
  • In the SRP method, a filter-and-sum beamforming algorithm is applied to the microphone array 201 for sound signals in the direction of each of the disparate sound sources.
  • The location of the target sound source corresponds to the direction in which the output of the filter-and-sum beamforming has the largest response power.
  • TDOA based localization is suitable under low to moderate reverberation conditions.
  • The SRP method requires shorter analysis intervals and exhibits an elevated insensitivity to environmental conditions, but does not allow for use under excessive multi-path.
  • The SRP-PHAT method disclosed herein combines the advantages of the TDOA method and the SRP method, has a decreased sensitivity to noise and reverberation compared to the TDOA method, and provides more precise location estimates than existing localization methods.
  • The correlation value corr(Dit) between the t-th pair of the sound sensors 301 corresponding to the delay Dit is then calculated 802.
  • The correlation value is given 803 by the phase transform (PHAT) weighted cross-correlation, as sketched below.
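  • The equation itself is not reproduced in this extract. The standard PHAT-weighted cross-correlation on which SRP-PHAT is built (a reconstruction, not necessarily the patent's exact notation) has the form

        \mathrm{corr}(D_{it}) = \sum_{\omega} \frac{X_{1}(\omega)\, X_{2}^{*}(\omega)}{\left| X_{1}(\omega)\, X_{2}^{*}(\omega) \right|} \, e^{j\omega D_{it}}

    where X_1 and X_2 are the spectra of the two sound sensor signals in the pair and D_it is the candidate delay. A minimal sketch in code, with illustrative names:

        import numpy as np

        def phat_correlation(x1, x2):
            # PHAT-weighted cross-correlation for one sensor pair (standard
            # GCC-PHAT form): whiten the cross-spectrum so that only phase
            # (i.e., delay) information remains, then return the correlation
            # evaluated over all integer lags.
            n = len(x1) + len(x2)
            X1 = np.fft.rfft(x1, n)
            X2 = np.fft.rfft(x2, n)
            cross = X1 * np.conj(X2)
            cross /= np.abs(cross) + 1e-12
            return np.fft.irfft(cross, n)

        # SRP-PHAT sums, over all sensor pairs, the correlation values at the
        # pair delays D_it implied by each candidate direction (e.g., every
        # 10 degrees), and selects the direction of maximum response power.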
  • FIGS. 9A-9B exemplarily illustrate graphs showing the results of sound source localization performed using the steered response power-phase transform (SRP-PHAT).
  • FIG. 9A exemplarily illustrates a graph showing the value of the SRP-PHAT for every 10°. The maximum value corresponds to the location of the target sound signal from the target sound source.
  • FIG. 9B exemplarily illustrates a graph representing the estimated target sound signal from the target sound source and a ground truth.
  • FIG. 10 exemplarily illustrates a system for performing adaptive beamforming by the adaptive beamforming unit 203 .
  • The algorithm for fixed beamforming is disclosed with reference to equations (3) through (8) in the detailed description of FIG. 4, FIGS. 6A-6B, and FIGS. 7A-7C, and is extended herein to adaptive beamforming.
  • Adaptive beamforming refers to a beamforming process where the directivity pattern of the microphone array 201 is adaptively steered in the direction of a target sound signal emitted by a target sound source in motion.
  • Adaptive beamforming achieves better ambient noise suppression than fixed beamforming. This is because the target direction of arrival, which is assumed to be stable in fixed beamforming, changes with the movement of the target sound source.
  • The gains of the sound sensors 301, which are assumed uniform in fixed beamforming, exhibit significant variation in practice. All these factors reduce speech quality.
  • Adaptive beamforming adaptively performs beam steering and null steering; the adaptive beamforming method is therefore more robust against steering error caused by the array imperfections mentioned above.
  • The adaptive beamforming unit 203 disclosed herein comprises a fixed beamformer 204, a blocking matrix 205, an adaptation control unit 208, and an adaptive filter 206.
  • The fixed beamformer 204 adaptively steers the directivity pattern of the microphone array 201 in the direction of the spatial location of the target sound signal from the target sound source for enhancing the target sound signal, when the target sound source is in motion.
  • The sound sensors 301 in the microphone array 201 receive the sound signals S1, . . . , S4, which comprise both the target sound signal from the target sound source and the ambient noise signals.
  • The received sound signals are fed as input to the fixed beamformer 204 and the blocking matrix 205.
  • The fixed beamformer 204 outputs a signal “b”.
  • The fixed beamformer 204 performs fixed beamforming by filtering and summing the output sound signals from the sound sensors 301.
  • The blocking matrix 205 outputs a signal “z”, which primarily comprises the ambient noise signals.
  • The blocking matrix 205 blocks the target sound signal from the target sound source and feeds the ambient noise signals to the adaptive filter 206 to minimize the effect of the ambient noise signals on the enhanced target sound signal.
  • The output “z” of the blocking matrix 205 may contain some weak target sound signal due to signal leakage. If the adaptation is active when the target sound signal, for example, speech, is present, the speech is cancelled out along with the noise. Therefore, the adaptation control unit 208 determines when the adaptation should be applied.
  • The adaptation control unit 208 comprises a target sound signal detector 208a and a step size adjusting module 208b.
  • The target sound signal detector 208a of the adaptation control unit 208 detects the presence or absence of the target sound signal, for example, speech.
  • The step size adjusting module 208b adjusts the step size for the adaptation process such that when the target sound signal is present, the adaptation is slow, preserving the target sound signal, and when the target sound signal is absent, the adaptation is quick, for better cancellation of the ambient noise signals.
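  • A minimal sketch of this adaptation control logic (the detector, function names, and numeric values are illustrative assumptions, not the patent's implementation):

        import numpy as np

        def target_present(frame, noise_floor, threshold=3.0):
            # Crude energy-based stand-in for the target sound signal detector
            # 208a: flag the target as present when frame energy rises well
            # above a tracked noise floor (threshold value is an assumption).
            return np.mean(frame ** 2) > threshold * noise_floor

        def adaptation_step_size(speech_present, mu_slow=0.001, mu_fast=0.05):
            # Behavior of the step size adjusting module 208b: adapt slowly
            # while the target is present (to preserve it), quickly when it is
            # absent (to better cancel ambient noise). Step sizes are assumed.
            return mu_slow if speech_present else mu_fast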
  • FIG. 11 exemplarily illustrates a system for sub-band adaptive filtering.
  • Sub-band adaptive filtering involves separating a full-band signal into different frequency ranges called sub-bands prior to the filtering process.
  • Sub-band adaptive filtering using sub-band adaptive filters leads to a higher convergence speed compared to using a full-band adaptive filter.
  • The noise reduction unit 207 disclosed herein operates in the sub-band domain, whereby applying sub-band adaptive filtering provides the same sub-band framework for both beamforming and noise reduction, and thus saves on computational cost.
  • The adaptive filter 206 comprises an analysis filter bank 206a, an adaptive filter matrix 206b, and a synthesis filter bank 206c.
  • The analysis filter bank 206a splits the enhanced target sound signal (b) from the fixed beamformer 204 and the ambient noise signals (z) from the blocking matrix 205, exemplarily illustrated in FIG. 10, into multiple frequency sub-bands.
  • The analysis filter bank 206a performs an analysis step where the outputs of the fixed beamformer 204 and the blocking matrix 205 are split into frequency sub-bands.
  • The sub-band adaptive filter 206 typically has a shorter impulse response than its full-band counterpart.
  • The step size can be adjusted individually for each sub-band by the step size adjusting module 208b, which leads to a higher convergence speed compared to using a full-band adaptive filter.
  • The adaptive filter matrix 206b adaptively filters the ambient noise signals in each of the frequency sub-bands in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources.
  • The adaptive filter matrix 206b performs an adaptation step, where the adaptive filter 206 is adapted such that the filter output only contains the target sound signal, for example, speech.
  • The synthesis filter bank 206c synthesizes a full-band sound signal using the frequency sub-bands of the enhanced target sound signal.
  • The synthesis filter bank 206c performs a synthesis step where the sub-band sound signal is synthesized into a full-band sound signal. Since the noise reduction and the beamforming are performed in the same sub-band framework, the noise reduction by the noise reduction unit 207, as disclosed in the detailed description of FIG. 13, is performed prior to the synthesis step, thereby reducing computation.
  • The analysis filter bank 206a is implemented as a perfect-reconstruction filter bank, where the output of the synthesis filter bank 206c after the analysis and synthesis steps perfectly matches the input to the analysis filter bank 206a. The sub-band analysis filter banks 206a are factorized to operate on prototype filter coefficients, and a modulation matrix is used to take advantage of the fast Fourier transform (FFT). Both the analysis and synthesis steps require performing frequency shifts in each sub-band, which involves complex-value computations with cosines and sinusoids. The method disclosed herein employs the FFT to perform the frequency shifts required in each sub-band, thereby minimizing the number of multiply-accumulate operations.
  • The implementation of the sub-band analysis filter bank 206a as a perfect-reconstruction filter bank ensures the quality of the target sound signal by ensuring that the sub-band analysis filter banks 206a do not distort the target sound signal itself. A minimal sketch of the sub-band analysis-adaptation-synthesis loop follows.
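  • The sketch below uses an FFT filter bank with one complex adaptive tap per sub-band; the patent's bank is a perfect-reconstruction, prototype-filter design, so this square-root-Hann STFT stand-in (which reconstructs only approximately) and all names and parameters are assumptions.

        import numpy as np

        def subband_adaptive_filter(b, z, n_fft=256, hop=128, mu=0.1, eps=1e-8):
            # b: fixed beamformer output (target plus residual noise)
            # z: blocking matrix output (noise reference)
            win = np.sqrt(np.hanning(n_fft))
            w = np.zeros(n_fft // 2 + 1, dtype=complex)   # one tap per band
            out = np.zeros(len(b))
            for start in range(0, len(b) - n_fft + 1, hop):
                B = np.fft.rfft(win * b[start:start + n_fft])   # analysis step
                Z = np.fft.rfft(win * z[start:start + n_fft])
                E = B - w * Z                # error spectrum = enhanced target
                # Per-band NLMS update; mu could be set per band (and per the
                # adaptation control of module 208b).
                w += mu * E * np.conj(Z) / (np.abs(Z) ** 2 + eps)
                out[start:start + n_fft] += win * np.fft.irfft(E, n_fft)  # synthesis
            return out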
  • FIG. 12 exemplarily illustrates a graphical representation showing the performance of a perfect-reconstruction filter bank.
  • The solid line represents the input signal to the analysis filter bank 206a, and the circles represent the output of the synthesis filter bank 206c after analysis and synthesis.
  • The output of the synthesis filter bank 206c perfectly matches the input, which is why the filter bank is referred to as a perfect-reconstruction filter bank.
  • FIG. 13 exemplarily illustrates a block diagram of a noise reduction unit 207 for performing noise reduction using, for example, a Wiener-filter based noise reduction algorithm.
  • The noise reduction unit 207 performs noise reduction for further suppressing the ambient noise signals after adaptive beamforming, for example, using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm.
  • The noise reduction unit 207 performs noise reduction in the multiple frequency sub-bands employed by the analysis filter bank 206a of the adaptive beamforming unit 203 for sub-band adaptive beamforming.
  • In this example, the noise reduction is performed using the Wiener-filter based noise reduction algorithm.
  • The noise reduction unit 207 explores the short-term and long-term statistics of the target sound signal, for example, speech, and of the ambient noise signals, as well as the wide-band and narrow-band signal-to-noise ratio (SNR), to support Wiener gain filtering.
  • The noise reduction unit 207 comprises a target sound signal statistics analyzer 207a, a noise statistics analyzer 207b, a signal-to-noise ratio (SNR) analyzer 207c, and a Wiener filter 207d.
  • The target sound signal statistics analyzer 207a explores the short-term and long-term statistics of the target sound signal, for example, speech.
  • The noise statistics analyzer 207b explores the short-term and long-term statistics of the ambient noise signals.
  • The SNR analyzer 207c of the noise reduction unit 207 explores the wide-band and narrow-band signal-to-noise ratio (SNR). After the spectrum of the noisy speech passes through the Wiener filter 207d, an estimate of the clean-speech spectrum is generated.
  • The synthesis filter bank 206c, by an inverse process of the analysis filter bank 206a, reconstructs the clean speech into a full-band signal, given the estimated spectrum of the clean speech.
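  • A minimal per-frame sketch of the Wiener gain computation (the single-frame estimate and the floor value are assumptions; the patent's analyzers 207a-207c estimate these statistics over short- and long-term windows, which this version omits):

        import numpy as np

        def wiener_gain(noisy_psd, noise_psd, gain_floor=0.1):
            # Per-sub-band Wiener gain G = SNR / (1 + SNR), computed from
            # estimated power spectra; the floor limits musical noise.
            snr = np.maximum(noisy_psd - noise_psd, 0.0) / (noise_psd + 1e-12)
            gain = snr / (1.0 + snr)
            return np.maximum(gain, gain_floor)

        # Applied per sub-band before synthesis:
        # clean_spectrum = wiener_gain(noisy_psd, noise_psd) * noisy_spectrum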
  • FIG. 14 exemplarily illustrates a hardware implementation of the microphone array system 200 disclosed herein.
  • The hardware implementation of the microphone array system 200 disclosed in the detailed description of FIG. 2 comprises the microphone array 201 having an arbitrary number of sound sensors 301 positioned in an arbitrary configuration, multiple microphone amplifiers 1401, one or more audio codecs 1402, a digital signal processor (DSP) 1403, a flash memory 1404, one or more power regulators 1405 and 1406, a battery 1407, a loudspeaker or a headphone 1408, and a communication interface 1409.
  • The microphone array 201 comprises, for example, four or eight sound sensors 301 arranged in a linear or a circular microphone array configuration. The microphone array 201 receives the sound signals.
  • In this example, the microphone array 201 comprises four sound sensors 301 that pick up the sound signals.
  • Four microphone amplifiers 1401 receive the output sound signals from the four sound sensors 301.
  • The microphone amplifiers 1401, also referred to as preamplifiers, provide a gain to boost the power of the received sound signals for enhancing the sensitivity of the sound sensors 301.
  • In this example, the gain of the preamplifiers is 20 dB.
  • The DSP 1403 either stores the processed signal in a memory device for a recording application or transmits the processed signal to the communication interface 1409.
  • The recording application comprises, for example, storing the processed signal onto the memory device for the purpose of playing back the processed signal at a later time.
  • The communication interface 1409 transmits the processed signal, for example, to a computer, the internet, or a radio for communicating the processed signal.
  • The microphone array system 200 disclosed herein implements a two-way communication device, where the signal received from the communication interface 1409 is processed by the DSP 1403 and the processed signal is then played through the loudspeaker or the headphone 1408.
  • The flash memory 1404 stores the code for the DSP 1403 and compressed audio signals.
  • The DSP 1403 reads the code from the flash memory 1404 into an internal memory of the DSP 1403 and then starts executing the code.
  • The audio codec 1402 can be configured for encoding and decoding audio or sound signals during the start-up stage by writing to registers of the DSP 1403.
  • Two four-channel audio codec 1402 chips may be used.
  • The power regulators 1405 and 1406, for example, linear power regulators 1405 and switch power regulators 1406, provide appropriate voltage and current supply for all the components, for example, 201, 1401, 1402, 1403, etc., mechanically supported and electrically connected on a circuit board.
  • A universal serial bus (USB) control is built into the DSP 1403.
  • The battery 1407 is used for powering the microphone array system 200.
  • The microphone array system 200 disclosed herein is implemented on a mixed signal circuit board having a six-layer printed circuit board (PCB).
  • Noisy digital signals easily contaminate the low voltage analog sound signals from the sound sensors 301; therefore, the layout of the mixed signal circuit board is carefully partitioned to isolate the analog circuits from the digital circuits.
  • Although both the inputs and outputs of the microphone amplifiers 1401 are in analog form, the microphone amplifiers 1401 are placed in a digital region of the mixed signal circuit board because of their high power consumption and switching-amplifier nature.
  • The linear power regulators 1405 are deployed in an analog region of the mixed signal circuit board due to the low noise property exhibited by the linear power regulators 1405.
  • Five power regulators, for example, 1405 and 1406, are designed into the microphone array system 200 circuits to ensure quality.
  • The switch power regulators 1406 achieve an efficiency of about 95% of the input power and have high output current capacity; however, their outputs are too noisy for analog circuits.
  • The efficiency of the linear power regulators 1405 is determined by the ratio of the output voltage to the input voltage, which is lower than that of the switch power regulators 1406 in most cases.
  • The regulator outputs utilized in the microphone array system 200 circuits are stable, quiet, and suitable for the low power analog circuits.
  • The microphone array system 200 is designed with a microphone array 201 having dimensions of 10 cm × 2.5 cm × 1.5 cm, a USB interface, and an assembled PCB supporting the microphone array 201, a DSP 1403 with a low power consumption design devised for portable devices, a four-channel codec 1402, and a flash memory 1404.
  • The DSP 1403 chip is powerful enough to handle the signal processing computations in the microphone array system 200 disclosed herein.
  • The hardware configuration of this example can be used for any microphone array configuration, with suitable modifications to the software.
  • The adaptive beamforming unit 203 of the microphone array system 200 is implemented as hardware with software instructions programmed on the DSP 1403.
  • The DSP 1403 is programmed for beamforming, noise reduction, echo cancellation, and USB interfacing according to the method disclosed herein, and fine-tuned for optimal performance.
  • FIGS. 15A-15C exemplarily illustrate a conference phone 1500 comprising an eight-sensor microphone array 201 .
  • The eight-sensor microphone array 201 comprises eight sound sensors 301 arranged in a configuration as exemplarily illustrated in FIG. 15A.
  • A top view of the conference phone 1500 comprising the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15A.
  • A front view of the conference phone 1500 comprising the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15B.
  • A handset 1502 that can be placed in a base holder 1501 of the conference phone 1500 having the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15C.
  • The microphone array system 200 disclosed herein with broadband beamforming can be configured for a mobile phone, a tablet computer, etc., for speech enhancement and noise reduction.
  • FIG. 16A exemplarily illustrates a layout of an eight-sensor microphone array 201 for a conference phone 1500 .
  • A circular microphone array 201 is used, in which eight sound sensors 301 are mounted on the surface of the conference phone 1500 as exemplarily illustrated in FIG. 15A.
  • The conference phone 1500 has a removable handset 1502 on top, and hence the microphone array system 200 is configured to accommodate the handset 1502 as exemplarily illustrated in FIGS. 15A-15C.
  • The circular microphone array 201 has a diameter of about four inches.
  • Eight sound sensors 301, for example, microphones M0, M1, M2, M3, M4, M5, M6, and M7, are distributed along a circle 302 on the conference phone 1500.
  • Microphones M4-M7 are separated by 90 degrees from each other, and microphones M0-M3 are rotated counterclockwise by 60 degrees from microphones M4-M7, respectively.
  • FIG. 16B exemplarily illustrates a graphical representation of eight spatial regions to which the eight-sensor microphone array 201 of FIG. 16A responds.
  • The space is divided into eight spatial regions with equal spacing, centered at 15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330°, respectively.
  • The adaptive beamforming unit 203 configures the eight-sensor microphone array 201 to automatically point to one of these eight spatial regions according to the location of the target sound signal from the target sound source as estimated by the sound source localization unit 202, as sketched below.
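  • A simple illustration of this pointing logic, mapping an estimated azimuth to the nearest of the eight region centers (function and variable names are assumptions):

        import numpy as np

        REGION_CENTERS = np.array([15, 60, 105, 150, 195, 240, 285, 330])  # degrees

        def spatial_region(azimuth_deg):
            # Wrapped angular difference to each center, then pick the nearest.
            diff = (azimuth_deg - REGION_CENTERS + 180) % 360 - 180
            return REGION_CENTERS[np.argmin(np.abs(diff))]

        print(spatial_region(75))    # -> 60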
  • FIGS. 16C-16D exemplarily illustrate computer simulations showing the steering of the directivity patterns of the eight-sensor microphone array 201 of FIG. 16A , in the directions 15° and 60° respectively, in the frequency range 300 Hz to 5 kHz.
  • FIG. 16C exemplarily illustrates the computer simulation result showing the directivity pattern of the microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 15°.
  • the computer simulation for verifying the performance of the adaptive beamforming unit 203 when the target sound signal is received from the target sound source in the spatial region centered at 15° uses the following parameters:
  • Passband (Ωp, Θp) ∈ {300-5000 Hz, −5° to 35°}; the designed spatial directivity pattern is 1.
  • Stopband (Ωs, Θs) ∈ {300-5000 Hz, −180° to −15° and +45° to 180°}; the designed spatial directivity pattern is 0.
  • FIG. 16D exemplarily illustrates the computer simulation result showing the directivity pattern of the microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 60°.
  • the computer simulation for verifying the performance of the adaptive beamforming unit 203 when the target sound signal is received from the target sound source in the spatial region centered at 60° uses the following parameters:
  • Passband (Ωp, Θp) ∈ {300-5000 Hz, 40° to 80°}; the designed spatial directivity pattern is 1.
  • Stopband (Ωs, Θs) ∈ {300-5000 Hz, −180° to 30° and +90° to 180°}; the designed spatial directivity pattern is 0.
  • the directivity pattern of the microphone array 201 in the spatial region centered at 60° is enhanced while the sound signals from all other spatial regions are suppressed.
  • the other six spatial regions have similar parameters.
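  • As an illustrative aside, such a passband/stopband specification is typically discretized onto frequency and angle grids before the filter coefficients are solved for. The Python sketch below uses the 60° region's parameters from above; the grid resolutions and the sampling rate are assumptions for illustration.

```python
import numpy as np

# Hypothetical discretization of the design specification for the region
# centered at 60 degrees: pattern = 1 on the passband grid, 0 on the stopband.
fs = 16000.0                                            # assumed sampling rate
freqs = np.arange(300.0, 5000.0 + 1.0, 100.0)           # 300-5000 Hz band
omegas = 2.0 * np.pi * freqs / fs                       # normalized frequencies
pass_thetas = np.arange(40.0, 80.0 + 1.0, 5.0)          # 40 to 80 degrees
stop_thetas = np.concatenate([np.arange(-180.0, 30.0 + 1.0, 5.0),
                              np.arange(90.0, 180.0 + 1.0, 5.0)])

print(len(omegas), len(pass_thetas), len(stop_thetas))  # grid sizes
```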
  • the main lobe has the same level across the frequency range, which means the target sound signal undergoes little distortion across frequency.
  • FIGS. 16E-16L exemplarily illustrate graphical representations showing the directivity patterns of the eight-sensor microphone array 201 of FIG. 16A in each of the eight spatial regions, where each directivity pattern is an average response from 300 Hz to 5000 Hz.
  • the main lobe is about 10 dB higher than the side lobe, and therefore the ambient noise signals from other directions are highly suppressed compared to the target sound signal in the pass direction.
  • the microphone array system 200 calculates the filter coefficients for the target sound signal, for example, speech signals, from each sound sensor 301 and combines the filtered signals to enhance the speech from any specific direction. Since speech covers a large range of frequencies, the method and system 200 disclosed herein cover broadband signals from 300 Hz to 5000 Hz.
  • FIG. 16E exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 15°.
  • FIG. 16F exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 60°.
  • FIG. 16G exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 105°.
  • FIG. 16H exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 150°.
  • FIG. 16I exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 195°.
  • FIG. 16J exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 240°.
  • FIG. 16K exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 285°.
  • FIG. 16L exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 330°.
  • the microphone array system 200 disclosed herein enhances the target sound signal from each of the directions 15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330°, while suppressing the ambient noise signals from the other directions.
  • the microphone array system 200 disclosed herein can be implemented for a square microphone array configuration and a rectangular array configuration where a sound sensor 301 is positioned in each corner of the four-cornered array.
  • the microphone array system 200 disclosed herein implements beamforming from plane to three dimensional sound sources.
  • FIG. 17A exemplarily illustrates a graphical representation of four spatial regions to which a four-sensor microphone array 201 for a wireless handheld device responds.
  • the wireless handheld device is, for example, a mobile phone.
  • the microphone array 201 comprises four sound sensors 301, for example, microphones, uniformly distributed around a circle 302 having a diameter of about two inches. This configuration is equivalent to positioning the four sound sensors 301, or microphones, on the four corners of a square.
  • the space is divided into four spatial regions with equal spaces centered at −90°, 0°, 90°, and 180° respectively.
  • the adaptive beamforming unit 203 configures the four-sensor microphone array 201 to automatically point to one of these spatial regions according to the location of the target sound signal from the target sound source as estimated by the sound source localization unit 202 .
  • FIGS. 17B-17I exemplarily illustrate computer simulations showing the directivity patterns of the four-sensor microphone array 201 of FIG. 17A with respect to azimuth and frequency.
  • For the spatial region centered at 0°: Passband (Ωp, Θp) ∈ {300-4000 Hz, −20° to 20°}; the designed spatial directivity pattern is 1.
  • Stopband (Ωs, Θs) ∈ {300-4000 Hz, −180° to −30° and +30° to 180°}; the designed spatial directivity pattern is 0.
  • For the spatial region centered at 90°: Passband (Ωp, Θp) ∈ {300-4000 Hz, 70° to 110°}; the designed spatial directivity pattern is 1.
  • Stopband (Ωs, Θs) ∈ {300-4000 Hz, −180° to 60° and +120° to 180°}; the designed spatial directivity pattern is 0.
  • the directivity patterns for the spatial regions centered at ⁇ 90° and 180° are similarly obtained.
  • FIG. 17B exemplarily illustrates the computer simulation result representing a three dimensional (3D) display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at ⁇ 90°.
  • FIG. 17C exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at ⁇ 90°.
  • FIG. 17D exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 0°.
  • FIG. 17E exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 0°.
  • FIG. 17F exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 90°.
  • FIG. 17G exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 90°.
  • FIG. 17H exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 180°.
  • FIG. 17I exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 180°.
  • the 3D displays of the directivity patterns in FIG. 17B , FIG. 17D , FIG. 17F , and FIG. 17H demonstrate that the passbands have the same height.
  • the 2D displays of the directivity patterns in FIG. 17C, FIG. 17E, FIG. 17G, and FIG. 17I demonstrate that the passbands have the same width along the frequency axis, which demonstrates the broadband properties of the microphone array 201.
  • FIGS. 18A-18B exemplarily illustrate a microphone array configuration for a tablet computer.
  • four sound sensors 301 of the microphone array 201 are positioned on a frame 1801 of the tablet computer, for example, the iPad® of Apple Inc.
  • the sound sensors 301 are distributed on the circle 302 as exemplarily illustrated in FIG. 18B.
  • the radius of the circle 302 is equal to the width of the tablet computer.
  • the angle between the sound sensors 301 M2 and M3 is determined to avoid spatial aliasing up to 4000 Hz.
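  • As an illustrative aside, the spatial aliasing constraint follows from the half-wavelength rule: the spacing between adjacent sensors should not exceed half the wavelength of the highest frequency of interest. A minimal Python sketch, where the speed-of-sound value is an assumption:

```python
# Half-wavelength spacing rule to avoid spatial aliasing (grating lobes).
c = 343.0       # speed of sound in m/s at room temperature (assumed)
f_max = 4000.0  # highest frequency to protect, per the text

d_max = c / (2.0 * f_max)  # maximum sensor spacing in meters
print(f"maximum spacing: {100.0 * d_max:.1f} cm")  # ~4.3 cm
```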
  • This microphone array configuration enhances a front speaker's voice and suppresses background ambient noise.
  • the adaptive beamforming unit 203 configures the microphone array 201 to form an acoustic beam 1802 pointing frontwards using the method and system 200 disclosed herein.
  • the target sound signal, that is, the front speaker's voice within the range of ±30°, is enhanced compared to the sound signals from other directions.
  • FIG. 18C exemplarily illustrates an acoustic beam 1802 formed using the microphone array configuration of FIGS. 18A-18B according to the method and system 200 disclosed herein.
  • FIGS. 18D-18G exemplarily illustrate graphs showing processing results of the adaptive beamforming unit 203 and the noise reduction unit 207 for the microphone array configuration of FIG. 18B, in both a time domain and a spectral domain for the tablet computer.
  • FIG. 18D exemplarily illustrates a graph showing the performance of the microphone array 201 before performing beamforming and noise reduction with a signal-to-noise ratio (SNR) of 15 dB.
  • FIG. 18E exemplarily illustrates a graph showing the performance of the microphone array 201 after performing beamforming and noise reduction, according to the method disclosed herein, with an SNR of 15 dB.
  • FIG. 18F exemplarily illustrates a graph showing the performance of the microphone array 201 before performing beamforming and noise reduction with an SNR of 0 dB.
  • FIG. 18G exemplarily illustrates a graph showing the performance of the microphone array 201 after performing beamforming and noise reduction, according to the method disclosed herein, with an SNR of 0 dB.
  • the performance graph is noisier for the microphone array 201 before the beamforming and noise reduction are performed. Therefore, the adaptive beamforming unit 203 and the noise reduction unit 207 of the microphone array system 200 disclosed herein suppress ambient noise signals while maintaining the clarity of the target sound signal, for example, the speech signal.
  • FIGS. 19A-19F exemplarily illustrate tables showing different microphone array configurations and the corresponding values of delay τn for the sound sensors 301 in each of the microphone array configurations.
  • the broadband beamforming method disclosed herein can be used for microphone arrays 201 with arbitrary numbers of sound sensors 301 and arbitrary locations of the sound sensors 301 .
  • the sound sensors 301 can be mounted on surfaces or edges of any speech acquisition device.
  • the only parameter that needs to be defined to achieve the beamformer coefficients is the value of the delay τn for each sound sensor 301, as disclosed in the detailed description of FIG. 5, FIGS. 6A-6B, and FIGS. 7A-7C and as exemplarily illustrated in FIGS. 19A-19F.
  • the microphone array configuration exemplarily illustrated in FIG. 19F is implemented on a handheld device for hands-free speech acquisition.
  • a user may prefer to talk at a distance rather than speak close to the sound sensor 301, and may want to talk while watching the screen of the handheld device.
  • the microphone array system 200 disclosed herein allows the handheld device to pick up sound signals from the direction of the speaker's mouth and suppress noise from other directions.
  • the method and system 200 disclosed herein may be implemented on any device or equipment, for example, a voice recorder, where a target sound signal or speech needs to be enhanced.

Abstract

A method and system for enhancing a target sound signal from multiple sound signals is provided. An array of an arbitrary number of sound sensors positioned in an arbitrary configuration receives the sound signals from multiple disparate sources. The sound signals comprise the target sound signal from a target sound source, and ambient noise signals. A sound source localization unit, an adaptive beamforming unit, and a noise reduction unit are in operative communication with the array of sound sensors. The sound source localization unit estimates a spatial location of the target sound signal from the received sound signals. The adaptive beamforming unit performs adaptive beamforming by steering a directivity pattern of the array of sound sensors in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals, which are further suppressed by the noise reduction unit.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application is a continuation reissue application of patent application Ser. No. 15/293,626 titled “Microphone Array System”, filed on Oct. 14, 2016 in the United States Patent and Trademark Office, which is a re-issue application of U.S. patent application Ser. No. 13/049,877 titled “Microphone Array System”, filed on Mar. 16, 2011 in the United States Patent and Trademark Office (now U.S. Pat. No. 8,861,756), which claims the benefit of provisional patent application No. 61/403,952 titled “Microphone array design and implementation for telecommunications and handheld devices”, filed on Sep. 24, 2010 in the United States Patent and Trademark Office.
The specification of the above referenced patent application is incorporated herein by reference in its entirety.
BACKGROUND
Microphones constitute an important element in today's speech acquisition devices. Currently, most of the hands-free speech acquisition devices, for example, mobile devices, lapels, headsets, etc., convert sound into electrical signals by using a microphone embedded within the speech acquisition device. However, the paradigm of a single microphone often does not work effectively because the microphone picks up many ambient noise signals in addition to the desired sound, specifically when the distance between a user and the microphone is more than a few inches. Therefore, there is a need for a microphone system that operates under a variety of different ambient noise conditions and that places fewer constraints on the user with respect to the microphone, thereby eliminating the need to wear the microphone or be in close proximity to the microphone.
To mitigate the drawbacks of the single microphone system, there is a need for a microphone array that achieves directional gain in a preferred spatial direction while suppressing ambient noise from other directions. Conventional microphone arrays include arrays that are typically developed for applications such as radar and sonar, but are generally not suitable for hands-free or handheld speech acquisition devices. The main reason is that the desired sound signal has an extremely wide bandwidth relative to its center frequency, thereby rendering conventional narrowband techniques employed in the conventional microphone arrays unsuitable. In order to cater to such broadband speech applications, the array size needs to be vastly increased, making the conventional microphone arrays large and bulky, and precluding the conventional microphone arrays from having broader applications, for example, in mobile and handheld communication devices. There is a need for a microphone array system that provides an effective response over a wide spectrum of frequencies while being unobtrusive in terms of size.
Hence, there is a long felt but unresolved need for a broadband microphone array and broadband beamforming system that enhances acoustics of a desired sound signal while suppressing ambient noise signals.
SUMMARY OF THE INVENTION
This summary is provided to introduce a selection of concepts in a simplified form that are further described in the detailed description of the invention. This summary is not intended to identify key or essential inventive concepts of the claimed subject matter, nor is it intended for determining the scope of the claimed subject matter.
The method and system disclosed herein addresses the above stated need for enhancing acoustics of a target sound signal received from a target sound source, while suppressing ambient noise signals. As used herein, the term “target sound signal” refers to a sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced. A microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit, is provided. The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors. The array of sound sensors is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors. The array of sound sensors herein referred to as a “microphone array” receives sound signals from multiple disparate sound sources. The method disclosed herein can be applied on a microphone array with an arbitrary number of sound sensors having, for example, an arbitrary two dimensional (2D) configuration. The sound signals received by the sound sensors in the microphone array comprise the target sound signal from the target sound source among the disparate sound sources, and ambient noise signals.
The sound source localization unit estimates a spatial location of the target sound signal from the received sound signals, for example, using a steered response power-phase transform. The adaptive beamforming unit performs adaptive beamforming for steering a directivity pattern of the microphone array in a direction of the spatial location of the target sound signal. The adaptive beamforming unit thereby enhances the target sound signal from the target sound source and partially suppresses the ambient noise signals. The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal received from the target sound source.
In an embodiment where the target sound source that emits the target sound signal is in a two dimensional plane, a delay between each of the sound sensors and an origin of the microphone array is determined as a function of distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a reference axis, and an azimuth angle between the reference axis and the target sound signal. In another embodiment where the target sound source that emits the target sound signal is in a three dimensional plane, the delay between each of the sound sensors and the origin of the microphone array is determined as a function of distance between each of the sound sensors and the origin, a predefined angle between each of the sound sensors and a first reference axis, an elevation angle between a second reference axis and the target sound signal, and an azimuth angle between the first reference axis and the target sound signal. This method of determining the delay enables beamforming for arbitrary numbers of sound sensors and multiple arbitrary microphone array configurations. The delay is determined, for example, in terms of number of samples. Once the delay is determined, the microphone array can be aligned to enhance the target sound signal from a specific direction.
The adaptive beamforming unit comprises a fixed beamformer, a blocking matrix, and an adaptive filter. The fixed beamformer steers the directivity pattern of the microphone array in the direction of the spatial location of the target sound signal from the target sound source for enhancing the target sound signal, when the target sound source is in motion. The blocking matrix feeds the ambient noise signals to the adaptive filter by blocking the target sound signal from the target sound source. The adaptive filter adaptively filters the ambient noise signals in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The fixed beamformer performs fixed beamforming, for example, by filtering and summing output sound signals from the sound sensors.
In an embodiment, the adaptive filtering comprises sub-band adaptive filtering. The adaptive filter comprises an analysis filter bank, an adaptive filter matrix, and a synthesis filter bank. The analysis filter bank splits the enhanced target sound signal from the fixed beamformer and the ambient noise signals from the blocking matrix into multiple frequency sub-bands. The adaptive filter matrix adaptively filters the ambient noise signals in each of the frequency sub-bands in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The synthesis filter bank synthesizes a full-band sound signal using the frequency sub-bands of the enhanced target sound signal. In an embodiment, the adaptive beamforming unit further comprises an adaptation control unit for detecting the presence of the target sound signal and adjusting a step size for the adaptive filtering in response to detecting the presence or the absence of the target sound signal in the sound signals received from the disparate sound sources.
The noise reduction unit suppresses the ambient noise signals for further enhancing the target sound signal from the target sound source. The noise reduction unit performs noise reduction, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm. The noise reduction unit performs noise reduction in multiple frequency sub-bands employed for sub-band adaptive beamforming by the analysis filter bank of the adaptive beamforming unit.
The microphone array system disclosed herein comprising the microphone array with an arbitrary number of sound sensors positioned in arbitrary configurations can be implemented in handheld devices, for example, the iPad® of Apple Inc., the iPhone® of Apple Inc., smart phones, tablet computers, laptop computers, etc. The microphone array system disclosed herein can further be implemented in conference phones, video conferencing applications, or any device or equipment that needs better speech inputs.
BRIEF DESCRIPTION OF THE DRAWINGS
The foregoing summary, as well as the following detailed description of the invention, is better understood when read in conjunction with the appended drawings. For the purpose of illustrating the invention, exemplary constructions of the invention are shown in the drawings. However, the invention is not limited to the specific methods and instrumentalities disclosed herein.
FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals.
FIG. 2 illustrates a system for enhancing a target sound signal from multiple sound signals.
FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array having N sound sensors arbitrarily distributed on a circle.
FIG. 4 exemplarily illustrates a graphical representation of a filter-and-sum beamforming algorithm for determining output of the microphone array having N sound sensors.
FIG. 5 exemplarily illustrates distances between an origin of the microphone array and sound sensor M1 and sound sensor M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis.
FIG. 6A exemplarily illustrates a table showing the distance between each sound sensor in a circular microphone array configuration from the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.
FIG. 6B exemplarily illustrates a table showing the relationship of the position of each sound sensor in the circular microphone array configuration and its distance to the origin of the microphone array, when the target sound source is in the same plane as that of the microphone array.
FIG. 7A exemplarily illustrates a graphical representation of a microphone array, when the target sound source is in a three dimensional plane.
FIG. 7B exemplarily illustrates a table showing delay between each sound sensor in a circular microphone array configuration and the origin of the microphone array, when the target sound source is in a three dimensional plane.
FIG. 7C exemplarily illustrates a three dimensional working space of the microphone array, where the target sound signal is incident at an elevation angle Ψ<Ω.
FIG. 8 exemplarily illustrates a method for estimating a spatial location of the target sound signal from the target sound source by a sound source localization unit using a steered response power-phase transform.
FIG. 9A exemplarily illustrates a graph showing the value of the steered response power-phase transform for every 10°.
FIG. 9B exemplarily illustrates a graph representing the estimated target sound signal from the target sound source.
FIG. 10 exemplarily illustrates a system for performing adaptive beamforming by an adaptive beamforming unit.
FIG. 11 exemplarily illustrates a system for sub-band adaptive filtering.
FIG. 12 exemplarily illustrates a graphical representation showing the performance of a perfect reconstruction filter bank.
FIG. 13 exemplarily illustrates a block diagram of a noise reduction unit that performs noise reduction using a Wiener-filter based noise reduction algorithm.
FIG. 14 exemplarily illustrates a hardware implementation of the microphone array system.
FIGS. 15A-15C exemplarily illustrate a conference phone comprising an eight-sensor microphone array.
FIG. 16A exemplarily illustrates a layout of an eight-sensor microphone array for a conference phone.
FIG. 16B exemplarily illustrates a graphical representation of eight spatial regions to which the eight-sensor microphone array of FIG. 16A responds.
FIGS. 16C-16D exemplarily illustrate computer simulations showing the steering of the directivity patterns of the eight-sensor microphone array of FIG. 16A in the directions of 15° and 60° respectively, in the frequency range 300 Hz to 5 kHz.
FIGS. 16E-16L exemplarily illustrate graphical representations showing the directivity patterns of the eight-sensor microphone array of FIG. 16A in each of the eight spatial regions, where each directivity pattern is an average response from 300 Hz to 5000 Hz.
FIG. 17A exemplarily illustrates a graphical representation of four spatial regions to which a four-sensor microphone array for a wireless handheld device responds.
FIGS. 17B-17I exemplarily illustrate computer simulations showing the directivity patterns of the four-sensor microphone array of FIG. 17A with respect to azimuth and frequency.
FIGS. 18A-18B exemplarily illustrate a microphone array configuration for a tablet computer.
FIG. 18C exemplarily illustrates an acoustic beam formed using the microphone array configuration of FIGS. 18A-18B according to the method and system disclosed herein.
FIGS. 18D-18G exemplarily illustrate graphs showing processing results of the adaptive beamforming unit and the noise reduction unit for the microphone array configuration of FIG. 18B, in both a time domain and a spectral domain for the tablet computer.
FIGS. 19A-19F exemplarily illustrate tables showing different microphone array configurations and the corresponding values of delay τn for the sound sensors in each of the microphone array configurations.
DETAILED DESCRIPTION OF THE INVENTION
FIG. 1 illustrates a method for enhancing a target sound signal from multiple sound signals. As used herein, the term “target sound signal” refers to a desired sound signal from a desired or target sound source, for example, a person's speech that needs to be enhanced. The method disclosed herein provides 101 a microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit. The sound source localization unit, the adaptive beamforming unit, and the noise reduction unit are in operative communication with the array of sound sensors. The microphone array system disclosed herein employs the array of sound sensors positioned in an arbitrary configuration, the sound source localization unit, the adaptive beamforming unit, and the noise reduction unit for enhancing a target sound signal by acoustic beam forming in the direction of the target sound signal in the presence of ambient noise signals.
The array of sound sensors herein referred to as a “microphone array” comprises multiple or an arbitrary number of sound sensors, for example, microphones, operating in tandem. The microphone array refers to an array of an arbitrary number of sound sensors positioned in an arbitrary configuration. The sound sensors are transducers that detect sound and convert the sound into electrical signals. The sound sensors are, for example, condenser microphones, piezoelectric microphones, etc.
The sound sensors receive 102 sound signals from multiple disparate sound sources and directions. The target sound source that emits the target sound signal is one of the disparate sound sources. As used herein, the term “sound signals” refers to composite sound energy from multiple disparate sound sources in an environment of the microphone array. The sound signals comprise the target sound signal from the target sound source and the ambient noise signals. The sound sensors are positioned in an arbitrary planar configuration herein referred to as a “microphone array configuration”, for example, a linear configuration, a circular configuration, any arbitrarily distributed coplanar array configuration, etc. By employing beamforming according to the method disclosed herein, the microphone array provides a higher response to the target sound signal received from a particular direction than to the sound signals from other directions. A plot of the response of the microphone array versus frequency and direction of arrival of the sound signals is referred to as a directivity pattern of the microphone array.
The sound source localization unit estimates 103 a spatial location of the target sound signal from the received sound signals. In an embodiment, the sound source localization unit estimates the spatial location of the target sound signal from the target sound source, for example, using a steered response power-phase transform as disclosed in the detailed description of FIG. 8.
The adaptive beamforming unit performs adaptive beamforming 104 by steering the directivity pattern of the microphone array in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal, and partially suppressing the ambient noise signals. Beamforming refers to a signal processing technique used in the microphone array for directional signal reception, that is, spatial filtering. This spatial filtering is achieved by using adaptive or fixed methods. Spatial filtering refers to separating two signals with overlapping frequency content that originate from different spatial locations.
The noise reduction unit performs noise reduction by further suppressing 105 the ambient noise signals and thereby further enhancing the target sound signal. The noise reduction unit performs the noise reduction, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm.
FIG. 2 illustrates a system 200 for enhancing a target sound signal from multiple sound signals. The system 200, herein referred to as a “microphone array system”, comprises the array 201 of sound sensors positioned in an arbitrary configuration, the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207.
The array 201 of sound sensors, herein referred to as the “microphone array” is in operative communication with the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207. The microphone array 201 is, for example, a linear array of sound sensors, a circular array of sound sensors, or an arbitrarily distributed coplanar array of sound sensors. The microphone array 201 achieves directional gain in any preferred spatial direction and frequency band while suppressing signals from other spatial directions and frequency bands. The sound sensors receive the sound signals comprising the target sound signal and ambient noise signals from multiple disparate sound sources, where one of the disparate sound sources is the target sound source that emits the target sound signal.
The sound source localization unit 202 estimates the spatial location of the target sound signal from the received sound signals. In an embodiment, the sound source localization unit 202 uses, for example, a steered response power-phase transform, for estimating the spatial location of the target sound signal from the target sound source.
The adaptive beamforming unit 203 steers the directivity pattern of the microphone array 201 in a direction of the spatial location of the target sound signal, thereby enhancing the target sound signal and partially suppressing the ambient noise signals. The adaptive beamforming unit 203 comprises a fixed beamformer 204, a blocking matrix 205, and an adaptive filter 206 as disclosed in the detailed description of FIG. 10. The fixed beamformer 204 performs fixed beamforming by filtering and summing output sound signals from each of the sound sensors in the microphone array 201 as disclosed in the detailed description of FIG. 4. In an embodiment, the adaptive filter 206 is implemented as a set of sub-band adaptive filters. The adaptive filter 206 comprises an analysis filter bank 206a, an adaptive filter matrix 206b, and a synthesis filter bank 206c as disclosed in the detailed description of FIG. 11.
The noise reduction unit 207 further suppresses the ambient noise signals for further enhancing the target sound signal. The noise reduction unit 207 is, for example, a Wiener-filter based noise reduction unit, a spectral subtraction noise reduction unit, an auditory transform based noise reduction unit, or a model based noise reduction unit.
FIG. 3 exemplarily illustrates a microphone array configuration showing a microphone array 201 having N sound sensors 301 arbitrarily distributed on a circle 302 with a diameter “d”, where “N” refers to the number of sound sensors 301 in the microphone array 201. Consider an example where N=4, that is, there are four sound sensors 301 M0, M1, M2, and M3 in the microphone array 201. Each of the sound sensors 301 is positioned at an acute angle “Φn” from a Y-axis, where Φn≥0 and n=0, 1, 2, . . . N−1. In an example, the sound sensor 301 M0 is positioned at an acute angle Φ0 from the Y-axis; the sound sensor 301 M1 is positioned at an acute angle Φ1 from the Y-axis; the sound sensor 301 M2 is positioned at an acute angle Φ2 from the Y-axis; and the sound sensor 301 M3 is positioned at an acute angle Φ3 from the Y-axis. A filter-and-sum beamforming algorithm determines the output “y” of the microphone array 201 having N sound sensors 301 as disclosed in the detailed description of FIG. 4.
FIG. 4 exemplarily illustrates a graphical representation of the filter-and-sum beamforming algorithm for determining the output of the microphone array 201 having N sound sensors 301. Consider an example where the target sound signal from the target sound source is at an angle θ with a normalized frequency ω. The microphone array configuration is arbitrary in a two dimensional plane, for example, a circular array configuration where the sound sensors 301 M0, M1, M2, . . . , MN−1 of the microphone array 201 are arbitrarily positioned on a circle 302. The sound signals received by each of the sound sensors 301 in the microphone array 201 are inputs to the microphone array 201. The adaptive beamforming unit 203 employs the filter-and-sum beamforming algorithm that applies independent weights to each of the inputs to the microphone array 201 such that the directivity pattern of the microphone array 201 is steered to the spatial location of the target sound signal as determined by the sound source localization unit 202.
The output “y” of the microphone array 201 having N sound sensors 301 is the filter-and-sum of the outputs of the N sound sensors 301. That is, $y=\sum_{n=0}^{N-1}w_n^T x_n$, where $x_n$ is the output of the (n+1)th sound sensor 301, and $w_n^T$ denotes the transpose of a length-L filter applied to the (n+1)th sound sensor 301.
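As an illustrative aside, a direct time-domain implementation of this filter-and-sum operation might look as follows in Python; the array shapes and function name are assumptions for illustration:

```python
import numpy as np

# Filter-and-sum beamformer: y = sum_n (w_n * x_n), one FIR filter per sensor.
# w has shape (N, L): a length-L filter for each of the N sensors.
# signals has shape (N, T): T samples per sensor.
def filter_and_sum(w, signals):
    N, L = w.shape
    out = np.zeros(signals.shape[1] + L - 1)
    for n in range(N):
        out += np.convolve(signals[n], w[n])  # filter channel n, accumulate
    return out[:signals.shape[1]]             # trim the convolution tail
```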
The spatial directivity pattern H(ω, θ) for the target sound signal from angle θ with normalized frequency ω is defined as:
$$H(\omega,\theta)=\frac{Y(\omega,\theta)}{\bar{X}(\omega,\theta)}=\frac{\sum_{n=0}^{N-1}W_n(\omega)\,X_n(\omega,\theta)}{\bar{X}(\omega,\theta)}\qquad(1)$$
where $\bar{X}$ is the signal received at the origin of the circular microphone array 201 and $W_n$ is the frequency response of the real-valued finite impulse response (FIR) filter $w_n$. If the target sound source is far enough away from the microphone array 201, the difference between the signal received by the (n+1)th sound sensor 301 $x_n$ and the origin of the microphone array 201 is a delay $\tau_n$; that is, $X_n(\omega,\theta)=\bar{X}(\omega,\theta)\,e^{-j\omega\tau_n}$.
FIG. 5 exemplarily illustrates distances between an origin of the microphone array 201 and the sound sensor 301 M1 and the sound sensor 301 M3 in the circular microphone array configuration, when the target sound signal is at an angle θ from the Y-axis. The microphone array system 200 disclosed herein can be used with an arbitrary directivity pattern for arbitrarily distributed sound sensors 301. For any specific microphone array configuration, the parameter that is defined to achieve beamformer coefficients is the value of delay τn for each sound sensor 301. To define the value of τn, an origin or a reference point of the microphone array 201 is defined; and then the distance dn between each sound sensor 301 and the origin is measured, and then the angle Φn of each sound sensor 301 biased from a vertical axis is measured.
For example, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M0 is Φ0, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M1 is Φ1, the angle between the Y-axis and the line joining the origin and the sound sensor 301 M2 is Φ2, and the angle between the Y-axis and the line joining the origin and the sound sensor 301 M3 is Φ3. The distance between the origin O and the sound sensor 301 M1, and the origin O and the sound sensor 301 M3 when the incoming target sound signal from the target sound source is at an angle θ from the Y-axis is denoted as τ1 and τ3, respectively.
For purposes of illustration, the detailed description refers to a circular microphone array configuration; however, the scope of the microphone array system 200 disclosed herein is not limited to the circular microphone array configuration but may be extended to include a linear array configuration, an arbitrarily distributed coplanar array configuration, or a microphone array configuration with any arbitrary geometry.
FIG. 6A exemplarily illustrates a table showing the distance between each sound sensor 301 in a circular microphone array configuration from the origin of the microphone array 201, when the target sound source is in the same plane as that of the microphone array 201. The distance measured in meters and the corresponding delay (τ) measured in number of samples is exemplarily illustrated in FIG. 6A. In an embodiment where the target sound source that emits the target sound signal is in a two dimensional plane, the delay (τ) between each of the sound sensors 301 and the origin of the microphone array 201 is determined as a function of distance (d) between each of the sound sensors 301 and the origin, a predefined angle (Φ) between each of the sound sensors 301 and a reference axis (Y) as exemplarily illustrated in FIG. 5, and an azimuth angle (θ) between the reference axis (Y) and the target sound signal. The determined delay (τ) is represented in terms of number of samples.
If the target sound source is far enough from the microphone array 201, the time delay between the signal received by the (n+1)th sound sensor 301 “xn,” and the origin of the microphone array 201 is herein denoted as “t” measured in seconds. The sound signals received by the microphone array 201, which are in analog form are converted into digital sound signals by sampling the analog sound signals at a particular frequency, for example, 8000 Hz. That is, the number of samples in each second is 8000. The delay τ can be represented as the product of the sampling frequency (fs) and the time delay (t). That is, τ=fs*t. Therefore, the distance between the sound sensors 301 in the microphone array 201 corresponds to the time used for the target sound signal to travel the distance and is measured by the number of samples within that time period.
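As an illustrative aside, the far-field delay of one sensor relative to the origin can be computed directly from this geometry. In the Python sketch below, the projection formula $d_n\cos(\theta-\Phi_n)/c$ and its sign convention are assumptions consistent with the figure; the 8000 Hz sampling frequency follows the text.

```python
import numpy as np

# Far-field, in-plane delay of one sensor relative to the array origin,
# returned in samples: tau = fs * t, with t = d_n * cos(theta - phi_n) / c.
def delay_in_samples(d_n, phi_n_deg, theta_deg, fs=8000.0, c=343.0):
    t = d_n * np.cos(np.radians(theta_deg - phi_n_deg)) / c  # seconds
    return fs * t

# Example: a sensor 5 cm from the origin at 30 degrees, source at 90 degrees.
print(delay_in_samples(0.05, 30.0, 90.0))  # ~0.58 samples
```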
Consider an example where “d” is the radius of the circle 302 of the circular microphone array configuration, “fs” is the sampling frequency, and “c” is the speed of sound. FIG. 6B exemplarily illustrates a table showing the relationship of the position of each sound sensor 301 in the circular microphone array configuration and its distance to the origin of the microphone array 201, when the target sound source is in the same plane as that of the microphone array 201. The distance measured in meters and the corresponding delay (τ) measured in number of samples is exemplarily illustrated in FIG. 6B.
The method of determining the delay (τ) enables beamforming for arbitrary numbers of sound sensors 301 and multiple arbitrary microphone array configurations. Once the delay (τ) is determined, the microphone array 201 can be aligned to enhance the target sound signal from a specific direction.
Therefore, the spatial directivity pattern H can be re-written as:
$$H(\omega,\theta)=\sum_{n=0}^{N-1}W_n(\omega)\,e^{-j\omega\tau_n(\theta)}=w^T g(\omega,\theta)\qquad(2)$$

where $w^T=[w_0^T, w_1^T, w_2^T, \ldots, w_{N-1}^T]$ and $g(\omega,\theta)=\{g_i(\omega,\theta)\}_{i=1\ldots NL}=\{e^{-j\omega(k+\tau_n(\theta))}\}_{i=1\ldots NL}$ is the steering vector, with $k=\mathrm{mod}(i-1,L)$ and $n=\mathrm{floor}((i-1)/L)$.
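As an illustrative aside, this indexing translates directly into code. In the Python sketch below, tau holds the per-sensor delays τn(θ) in samples for the look direction; the function name is an assumption.

```python
import numpy as np

# Steering vector g(omega, theta) of length N*L:
# g_i = exp(-j * omega * (k + tau_n)), k = (i-1) mod L, n = floor((i-1)/L).
def steering_vector(omega, tau, L):
    i = np.arange(len(tau) * L)
    k = i % L                   # filter-tap index within each sensor's filter
    n = i // L                  # sensor index
    return np.exp(-1j * omega * (k + tau[n]))
```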
FIGS. 7A-7C exemplarily illustrate an embodiment of a microphone array 201 when the target sound source is in a three dimensional plane. In an embodiment where the target sound source that emits the target sound signal is in a three dimensional plane, the delay (τ) between each of the sound sensors 301 and the origin of the microphone array 201 is determined as a function of distance (d) between each of the sound sensors 301 and the origin, a predefined angle (Φ) between each of the sound sensors 301 and a first reference axis (Y), an elevation angle (Ψ) between a second reference axis (Z) and the target sound signal, and an azimuth angle (θ) between the first reference axis (Y) and the target sound signal. The determined delay (τ) is represented in terms of number of samples. The determination of the delay enables beamforming for arbitrary numbers of the sound sensors 301 and multiple arbitrary configurations of the microphone array 201.
Consider an example of a microphone array configuration with four sound sensors 301 M0, M1, M2, and M3. FIG. 7A exemplarily illustrates a graphical representation of a microphone array 201, when the target sound source is in a three dimensional plane. As exemplarily illustrated in FIG. 7A, the target sound signal from the target sound source is received from the direction (Ψ, θ) with reference to the origin of the microphone array 201, where Ψ is the elevation angle and θ is the azimuth.
FIG. 7B exemplarily illustrates a table showing the delay between each sound sensor 301 in a circular microphone array configuration and the origin of the microphone array 201, when the target sound source is in a three dimensional plane. The target sound source in a three dimensional plane emits a target sound signal from a spatial location (Ψ, θ). The delays between the origin O and the sound sensors 301 M0, M1, M2, and M3, when the incoming target sound signal from the target sound source is at an angle (Ψ, θ) from the Z-axis and the Y-axis respectively, are denoted as τ0, τ1, τ2, and τ3 respectively. When the spatial location of the target sound signal moves from the location Ψ=90° to a location Ψ=0°, sin(Ψ) changes from 1 to 0, and as a result, the delay difference between the sound sensors 301 in the microphone array 201 becomes smaller and smaller. When Ψ=0°, there is no difference between the sound sensors 301, which implies that the target sound signal reaches each sound sensor 301 at the same time. Taking into account that the sample delay between the sound sensors 301 can only be an integer, the range where the delays of all the sound sensors 301 are identical is determined.
FIG. 7C exemplarily illustrates a three dimensional working space of the microphone array 201, where the target sound signal is incident at an elevation angle Ψ<Ω, where Ω is a specific angle and Ψ is a variable representing the elevation angle. When the target sound signal is incident at an elevation angle Ψ<Ω, all four sound sensors 301 M0, M1, M2, and M3 receive the same target sound signal for 0°<θ<360°. The delay τ is a function of both the elevation angle Ψ and the azimuth angle θ. That is, τ=τ(θ, Ψ). As used herein, Ω refers to the elevation angle such that all τi (θ, Ω) are equal to each other, where i=0, 1, 2, 3, etc. The value of Ω is determined by the sample delay between each of the sound sensors 301 and the origin of the microphone array 201. The adaptive beamforming unit 203 enhances sound from this range and suppresses sound signals from other directions, for example, S1 and S2, treating them as ambient noise signals.
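As an illustrative aside, the three dimensional delay scales the in-plane projection by sin(Ψ), which is why the per-sensor differences vanish as Ψ approaches 0°, as described above. A minimal Python sketch, with geometry conventions assumed consistent with the figure:

```python
import numpy as np

# 3D far-field delay in samples: the 2D projection scaled by sin(psi).
# At psi = 0 (source on the Z-axis) every sensor receives the signal together.
def delay_in_samples_3d(d_n, phi_n_deg, theta_deg, psi_deg, fs=8000.0, c=343.0):
    proj = np.cos(np.radians(theta_deg - phi_n_deg)) * np.sin(np.radians(psi_deg))
    return fs * d_n * proj / c

print(delay_in_samples_3d(0.05, 30.0, 90.0, 0.0))   # 0.0: identical arrivals
print(delay_in_samples_3d(0.05, 30.0, 90.0, 90.0))  # in-plane case, ~0.58
```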
Consider a least mean square solution for beamforming according to the method disclosed herein. Let the spatial directivity pattern be 1 in the passband and 0 in the stopband. The least square cost function is defined as:
$$\begin{aligned}J(w)&=\int_{\Omega_p}\int_{\Theta_p}\left|H(\omega,\theta)-1\right|^2 d\omega\,d\theta+\alpha\int_{\Omega_s}\int_{\Theta_s}\left|H(\omega,\theta)\right|^2 d\omega\,d\theta\\&=\int_{\Omega_p}\int_{\Theta_p}\left|H(\omega,\theta)\right|^2 d\omega\,d\theta+\alpha\int_{\Omega_s}\int_{\Theta_s}\left|H(\omega,\theta)\right|^2 d\omega\,d\theta-2\int_{\Omega_p}\int_{\Theta_p}\mathrm{Re}\left(H(\omega,\theta)\right)d\omega\,d\theta+\int_{\Omega_p}\int_{\Theta_p}1\,d\omega\,d\theta\end{aligned}\qquad(3)$$
Replacing $|H(\omega,\theta)|^2=w^T g(\omega,\theta)g^H(\omega,\theta)w=w^T\left(G_R(\omega,\theta)+jG_I(\omega,\theta)\right)w=w^T G_R(\omega,\theta)w$ (the $G_I$ term drops out because $|H|^2$ is real-valued) and $\mathrm{Re}(H(\omega,\theta))=w^T g_R(\omega,\theta)$, $J(w)$ becomes

$$J(w)=w^T Q w-2w^T a+d,\quad\text{where}$$

$$Q=\int_{\Omega_P}\int_{\Theta_P}G_R(\omega,\theta)\,d\omega\,d\theta+\alpha\int_{\Omega_S}\int_{\Theta_S}G_R(\omega,\theta)\,d\omega\,d\theta$$

$$a=\int_{\Omega_P}\int_{\Theta_P}g_R(\omega,\theta)\,d\omega\,d\theta$$

$$d=\int_{\Omega_P}\int_{\Theta_P}1\,d\omega\,d\theta\qquad(4)$$

where $g_R(\omega,\theta)=\cos[\omega(k+\tau_n)]$ and $G_R(\omega,\theta)=\cos[\omega(k-l+\tau_n-\tau_m)]$.
When ∂J/∂w=0, the cost function J is minimized. The least-square estimate of w is obtained by:
$$w=Q^{-1}a\qquad(5)$$
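As an illustrative aside, equations (4) and (5) can be approximated numerically by summing over discrete (ω, θ) grids. In the Python sketch below, tau_of is an assumed helper that maps an angle to the per-sensor delays in samples (for example, built from the delay formulas above); everything else follows the equations directly.

```python
import numpy as np

def steering_vector(omega, tau, L):
    # g_i = exp(-j*omega*(k + tau_n)), k = (i-1) mod L, n = floor((i-1)/L)
    i = np.arange(len(tau) * L)
    return np.exp(-1j * omega * (i % L + tau[i // L]))

# Accumulate Q and a on passband/stopband grids, then solve w = Q^{-1} a.
def design_weights(omegas, pass_thetas, stop_thetas, tau_of, L, alpha=1.0):
    NL = len(tau_of(pass_thetas[0])) * L
    Q = np.zeros((NL, NL))
    a = np.zeros(NL)
    for om in omegas:
        for th in pass_thetas:
            g = steering_vector(om, tau_of(th), L)
            Q += np.outer(g, np.conj(g)).real          # G_R over the passband
            a += g.real                                 # g_R over the passband
        for th in stop_thetas:
            g = steering_vector(om, tau_of(th), L)
            Q += alpha * np.outer(g, np.conj(g)).real   # weighted stopband term
    return np.linalg.solve(Q, a)                        # equation (5)
```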
Applying linear constraints Cw=b, the spatial response is further constrained to a predefined value b at angle θf using the following equation:
$$\begin{bmatrix}g_R^T(\omega_{\mathrm{start}},\theta_f)\\ \vdots\\ g_R^T(\omega_{\mathrm{end}},\theta_f)\end{bmatrix}w=\begin{bmatrix}b_{\mathrm{start}}\\ \vdots\\ b_{\mathrm{end}}\end{bmatrix}\qquad(6)$$
Now, the design problem becomes:
$$\min_{w}\ w^T Q w-2w^T a+d\quad\text{subject to}\quad Cw=b\qquad(7)$$
and the solution of the constrained minimization problem is equal to:
$$w=Q^{-1}C^T\left(CQ^{-1}C^T\right)^{-1}\left(b-CQ^{-1}a\right)+Q^{-1}a\qquad(8)$$
where w is the filter parameter for the designed adaptive beamforming unit 203.
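As an illustrative aside, equation (8) can be evaluated with linear solves instead of explicit inverses, which is numerically preferable. A minimal Python sketch, with Q and a as in the earlier design sketch and C, b stacking the constraints of equation (6):

```python
import numpy as np

# Constrained least-squares weights, equation (8):
# w = Q^{-1} C^T (C Q^{-1} C^T)^{-1} (b - C Q^{-1} a) + Q^{-1} a
def constrained_weights(Q, a, C, b):
    Qi_a = np.linalg.solve(Q, a)      # Q^{-1} a
    Qi_Ct = np.linalg.solve(Q, C.T)   # Q^{-1} C^T
    lam = np.linalg.solve(C @ Qi_Ct, b - C @ Qi_a)
    return Qi_Ct @ lam + Qi_a
```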
In an embodiment, the beamforming is performed by a delay-sum method. In another embodiment, the beamforming is performed by a filter-sum method.
FIG. 8 exemplarily illustrates a method for estimating a spatial location of the target sound signal from the target sound source by the sound source localization unit 202 using a steered response power-phase transform (SRP-PHAT). The SRP-PHAT combines the advantages of sound source localization methods, for example, the time difference of arrival (TDOA) method and the steered response power (SRP) method. The TDOA method performs the time delay estimation of the sound signals relative to a pair of spatially separated sound sensors 301. The estimated time delay is a function of both the location of the target sound source and the position of each of the sound sensors 301 in the microphone array 201. Because the position of each of the sound sensors 301 in the microphone array 201 is predefined, once the time delay is estimated, the location of the target sound source can be determined. In the SRP method, a filter-and-sum beamforming algorithm is applied to the microphone array 201 for sound signals in the direction of each of the disparate sound sources. The location of the target sound source corresponds to the direction in which the output of the filter-and-sum beamforming has the largest response power. The TDOA based localization is suitable under low to moderate reverberation conditions. The SRP method requires shorter analysis intervals and exhibits an elevated insensitivity to environmental conditions while not allowing for use under excessive multi-path. The SRP-PHAT method disclosed herein combines the advantages of the TDOA method and the SRP method, has a decreased sensitivity to noise and reverberations compared to the TDOA method, and provides more precise location estimates than existing localization methods.
For direction i (0≤i≤360), the delay Dit is calculated 801 between the tth pair of the sound sensors 301 (t=1, . . . , all pairs). The correlation value corr(Dit) between the tth pair of the sound sensors 301 corresponding to the delay Dit is then calculated 802. For the direction i (0≤i≤360), the correlation value is given 803 by:
$$\mathrm{CORR}_i=\sum_{t=1}^{\text{all pairs}}\mathrm{corr}(D_{it})$$
Therefore, the spatial location of the target sound signal is given 804 by:
$$S=\arg\max_{0\le i\le 360}\mathrm{CORR}_i.$$
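As an illustrative aside, steps 801-804 map naturally onto a GCC-PHAT implementation. In the Python sketch below, the FFT size and the precomputed integer delay table D[i, t] (pair t, direction i) are assumptions for illustration:

```python
import numpy as np

# PHAT-weighted cross-correlation of one sensor pair as a function of lag.
def gcc_phat(x1, x2, nfft=1024):
    X1, X2 = np.fft.rfft(x1, nfft), np.fft.rfft(x2, nfft)
    S = X1 * np.conj(X2)
    S /= np.abs(S) + 1e-12          # phase transform: keep phase, drop magnitude
    return np.fft.irfft(S, nfft)    # negative lags wrap to the end of the array

# Steps 801-804: sum each pair's correlation at its direction-dependent delay,
# then pick the direction with the largest total correlation.
def srp_phat(frames, pairs, D):
    corr = [gcc_phat(frames[m], frames[n]) for (m, n) in pairs]
    scores = [sum(corr[t][D[i, t]] for t in range(len(pairs)))
              for i in range(D.shape[0])]
    return int(np.argmax(scores))   # S = argmax_i CORR_i
```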
FIGS. 9A-9B exemplarily illustrate graphs showing the results of sound source localization performed using the steered response power-phase transform (SRP-PHAT). FIG. 9A exemplarily illustrates a graph showing the value of the SRP-PHAT for every 10°. The maximum value corresponds to the location of the target sound signal from the target sound source. FIG. 9B exemplarily illustrates a graph representing the estimated target sound signal from the target sound source and a ground truth.
FIG. 10 exemplarily illustrates a system for performing adaptive beamforming by the adaptive beamforming unit 203. The algorithm for fixed beamforming is disclosed with reference to equations (3) through (8) in the detailed description of FIG. 4, FIGS. 6A-6B, and FIGS. 7A-7C, and is extended herein to adaptive beamforming. Adaptive beamforming refers to a beamforming process where the directivity pattern of the microphone array 201 is adaptively steered in the direction of a target sound signal emitted by a target sound source in motion. Adaptive beamforming achieves better ambient noise suppression than fixed beamforming. This is because the target direction of arrival, which is assumed to be stable in fixed beamforming, changes with the movement of the target sound source. Moreover, the gains of the sound sensors 301, which are assumed to be uniform in fixed beamforming, vary significantly in practice. All these factors reduce speech quality. On the other hand, adaptive beamforming adaptively performs beam steering and null steering; therefore, the adaptive beamforming method is more robust against steering errors caused by the array imperfections mentioned above.
As exemplarily illustrated in FIG. 10, the adaptive beamforming unit 203 disclosed herein comprises a fixed beamformer 204, a blocking matrix 205, an adaptation control unit 208, and an adaptive filter 206. The fixed beamformer 204 adaptively steers the directivity pattern of the microphone array 201 in the direction of the spatial location of the target sound signal from the target sound source for enhancing the target sound signal, when the target sound source is in motion. The sound sensors 301 in the microphone array 201 receive the sound signals S1, . . . , S4, which comprise both the target sound signal from the target sound source and the ambient noise signals. The received sound signals are fed as input to the fixed beamformer 204 and the blocking matrix 205. The fixed beamformer 204 outputs a signal “b”. In an embodiment, the fixed beamformer 204 performs fixed beamforming by filtering and summing output sound signals from the sound sensors 301. The blocking matrix 205 outputs a signal “z” which primarily comprises the ambient noise signals. The blocking matrix 205 blocks the target sound signal from the target sound source and feeds the ambient noise signals to the adaptive filter 206 to minimize the effect of the ambient noise signals on the enhanced target sound signal.
The output “z” of the blocking matrix 205 may contain some weak target sound signals due to signal leakage. If the adaptation is active when the target sound signal, for example, speech is present, the speech is cancelled out with the noise. Therefore, the adaptation control unit 208 determines when the adaptation should be applied. The adaptation control unit 208 comprises a target sound signal detector 208a and a step size adjusting module 208b. The target sound signal detector 208a of the adaptation control unit 208 detects the presence or absence of the target sound signal, for example, speech. The step size adjusting module 208b adjusts the step size for the adaptation process such that when the target sound signal is present, the adaptation is slow for preserving the target sound signal, and when the target sound signal is absent, adaptation is quick for better cancellation of the ambient noise signals.
The adaptive filter 206 is a filter that adaptively updates filter coefficients of the adaptive filter 206 so that the adaptive filter 206 can be operated in an unknown and changing environment. The adaptive filter 206 adaptively filters the ambient noise signals in response to detecting presence or absence of the target sound signal in the sound signals received from the disparate sound sources. The adaptive filter 206 adapts its filter coefficients with the changes in the ambient noise signals, thereby eliminating distortion in the target sound signal, when the target sound source and the ambient noise signals are in motion. In an embodiment, the adaptive filtering is performed by a set of sub-band adaptive filters using sub-band adaptive filtering as disclosed in the detailed description of FIG. 11.
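As an illustrative aside, the update in such an adaptive noise canceller is often an NLMS-style recursion. The Python sketch below is a generic stand-in, not the patent's exact filter; the step size mu is the one chosen by the adaptation control unit (small when the target sound signal is present, larger when it is absent).

```python
import numpy as np

# One NLMS update: filter the noise reference z and subtract it from the
# fixed-beamformer output b; adapt the filter toward canceling the noise.
def nlms_step(h, z_buf, b_sample, mu):
    y = float(h @ z_buf)                  # noise estimate from the reference
    e = b_sample - y                      # enhanced output sample
    h = h + mu * e * z_buf / (float(z_buf @ z_buf) + 1e-8)  # normalized step
    return e, h
```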
FIG. 11 exemplarily illustrates a system for sub-band adaptive filtering. Sub-band adaptive filtering involves separating a full-band signal into different frequency ranges called sub-bands prior to the filtering process. Sub-band adaptive filtering using sub-band adaptive filters leads to a higher convergence speed compared to using a full-band adaptive filter. Moreover, the noise reduction unit 207 disclosed herein also operates on sub-bands, so applying sub-band adaptive filtering provides the same sub-band framework for both beamforming and noise reduction, and thus saves computational cost.
As exemplarily illustrated in FIG. 11, the adaptive filter 206 comprises an analysis filter bank 206a, an adaptive filter matrix 206b, and a synthesis filter bank 206c. The analysis filter bank 206a splits the enhanced target sound signal (b) from the fixed beamformer 204 and the ambient noise signals (z) from the blocking matrix 205, exemplarily illustrated in FIG. 10, into multiple frequency sub-bands. That is, the analysis filter bank 206a performs an analysis step where the outputs of the fixed beamformer 204 and the blocking matrix 205 are split into frequency sub-bands. The sub-band adaptive filter 206 typically has a shorter impulse response than its full-band counterpart. The step size can be adjusted individually for each sub-band by the step size adjusting module 208b, which leads to a higher convergence speed compared to using a full-band adaptive filter.
The adaptive filter matrix 206b adaptively filters the ambient noise signals in each of the frequency sub-bands in response to detecting the presence or absence of the target sound signal in the sound signals received from the disparate sound sources. That is, the adaptive filter matrix 206b performs an adaptation step, where the adaptive filter 206 is adapted such that the filter output contains only the target sound signal, for example, speech. The synthesis filter bank 206c then synthesizes a full-band sound signal from the frequency sub-bands of the enhanced target sound signal in a synthesis step. Since the noise reduction and the beamforming are performed in the same sub-band framework, the noise reduction by the noise reduction unit 207, as disclosed in the detailed description of FIG. 13, is performed prior to the synthesis step, thereby reducing computation.
In an embodiment, the analysis filter bank 206a is implemented as a perfect-reconstruction filter bank, where the output of the synthesis filter bank 206c after the analysis and synthesis steps perfectly matches the input to the analysis filter bank 206a. That is, all the sub-band analysis filter banks 206a are factorized to operate on prototype filter coefficients, and a modulation matrix is used to take advantage of the fast Fourier transform (FFT). Both the analysis and synthesis steps require performing frequency shifts in each sub-band, which involves complex-valued computations with cosines and sines. The method disclosed herein employs the FFT to perform the frequency shifts required in each sub-band, thereby minimizing the number of multiply-accumulate operations. The implementation of the sub-band analysis filter bank 206a as a perfect-reconstruction filter bank preserves the quality of the target sound signal by ensuring that the filter bank does not distort the target sound signal itself.
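The patent's factorized prototype-filter implementation is defined by the earlier equations and is not reproduced here. As a rough, self-contained illustration of FFT-based analysis and synthesis achieving perfect reconstruction, the sketch below uses a square-root Hann window with 50% overlap, a standard weighted overlap-add construction; this choice of window and overlap is an assumption, not the disclosed design.

```python
import numpy as np

def analysis(x, N):
    """Split x into FFT sub-bands using a sqrt-Hann window, 50% overlap."""
    win = np.sqrt(np.hanning(N + 1)[:N])          # sqrt-Hann analysis window
    hop = N // 2
    frames = [x[i:i + N] * win for i in range(0, len(x) - N + 1, hop)]
    return np.fft.rfft(np.array(frames), axis=1)  # one row of sub-bands per frame

def synthesis(X, N, length):
    """Overlap-add the inverse FFT of each frame with the same sqrt-Hann window."""
    win = np.sqrt(np.hanning(N + 1)[:N])
    hop = N // 2
    y = np.zeros(length)
    for k, frame in enumerate(np.fft.irfft(X, n=N, axis=1)):
        y[k * hop:k * hop + N] += frame * win     # synthesis window + overlap-add
    return y

N = 64
x = np.random.default_rng(1).standard_normal(1024)
y = synthesis(analysis(x, N), N, len(x))
# Away from the edges the reconstruction matches the input (cf. FIG. 12)
assert np.allclose(x[N:-N], y[N:-N], atol=1e-10)
```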
FIG. 12 exemplarily illustrates a graphical representation showing the performance of a perfect-reconstruction filter bank. The solid line represents the input signal to the analysis filter bank 206a, and the circles represent the output of the synthesis filter bank 206c after analysis and synthesis. As exemplarily illustrated in FIG. 12, the output of the synthesis filter bank 206c perfectly matches the input, and is therefore referred to as the perfect-reconstruction filter bank.
FIG. 13 exemplarily illustrates a block diagram of a noise reduction unit 207 for performing noise reduction using, for example, a Wiener-filter based noise reduction algorithm. The noise reduction unit 207 performs noise reduction for further suppressing the ambient noise signals after adaptive beamforming, for example, by using a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, or a model based noise reduction algorithm. In an embodiment, the noise reduction unit 207 performs noise reduction in multiple frequency sub-bands employed by an analysis filter bank 206a of the adaptive beamforming unit 203 for sub-band adaptive beamforming.
In an embodiment, the noise reduction is performed using the Wiener-filter based noise reduction algorithm. The noise reduction unit 207 explores the short-term and long-term statistics of the target sound signal, for example, speech, and of the ambient noise signals, as well as the wide-band and narrow-band signal-to-noise ratio (SNR), to compute the Wiener filter gain. The noise reduction unit 207 comprises a target sound signal statistics analyzer 207a, a noise statistics analyzer 207b, a signal-to-noise ratio (SNR) analyzer 207c, and a Wiener filter 207d. The target sound signal statistics analyzer 207a explores the short-term and long-term statistics of the target sound signal, for example, speech. Similarly, the noise statistics analyzer 207b explores the short-term and long-term statistics of the ambient noise signals. The SNR analyzer 207c explores the wide-band and narrow-band SNR. After the spectrum of the noisy speech passes through the Wiener filter 207d, an estimate of the clean-speech spectrum is generated. Given the estimated spectrum of the clean speech, the synthesis filter bank 206c, by an inverse process of the analysis filter bank 206a, reconstructs the clean speech into a full-band signal.
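The exact short-term and long-term statistics used by the analyzers 207a-207c are not detailed here. The sketch below illustrates one common way such statistics can be combined, the decision-directed estimate of the a priori SNR, to form a per-sub-band Wiener gain; the function name, smoothing factor alpha, and gain floor are illustrative assumptions rather than the patent's algorithm.

```python
import numpy as np

def wiener_gain(noisy_psd, noise_psd, prev_clean_psd, alpha=0.98, floor=0.1):
    """Per-sub-band Wiener gain using a decision-directed SNR estimate.

    noisy_psd      : |noisy speech spectrum|^2 in each sub-band
    noise_psd      : ambient noise PSD estimate (noise statistics analyzer)
    prev_clean_psd : clean-speech PSD estimate from the previous frame
    """
    post_snr = noisy_psd / np.maximum(noise_psd, 1e-12)          # a posteriori SNR
    # Decision-directed a priori SNR: blend of long-term (previous frame)
    # and short-term (current frame) statistics
    prio_snr = alpha * prev_clean_psd / np.maximum(noise_psd, 1e-12) \
               + (1 - alpha) * np.maximum(post_snr - 1, 0)
    return np.maximum(prio_snr / (1 + prio_snr), floor)          # floored Wiener gain

# Usage per frame: clean_spec = wiener_gain(...) * noisy_spec, after which the
# synthesis filter bank reconstructs the full-band clean-speech signal.
```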
FIG. 14 exemplarily illustrates a hardware implementation of the microphone array system 200 disclosed herein. The hardware implementation of the microphone array system 200 disclosed in the detailed description of FIG. 2 comprises the microphone array 201 having an arbitrary number of sound sensors 301 positioned in an arbitrary configuration, multiple microphone amplifiers 1401, one or more audio codecs 1402, a digital signal processor (DSP) 1403, a flash memory 1404, one or more power regulators 1405 and 1406, a battery 1407, a loudspeaker or a headphone 1408, and a communication interface 1409. The microphone array 201 comprises, for example, four or eight sound sensors 301 arranged in a linear or a circular microphone array configuration. The microphone array 201 receives the sound signals.
Consider an example where the microphone array 201 comprises four sound sensors 301 that pick up the sound signals. Four microphone amplifiers 1401 receive the output sound signals from the four sound sensors 301. The microphone amplifiers 1401, also referred to as preamplifiers, provide a gain to boost the power of the received sound signals for enhancing the sensitivity of the sound sensors 301. In an example, the gain of the preamplifiers is 20 dB.
The audio codec 1402 receives the amplified output from the microphone amplifiers 1401. The audio codec 1402 provides an adjustable gain level, for example, from about −74 dB to about 6 dB. The received sound signals are in an analog form. The audio codec 1402 converts the four channels of the sound signals in the analog form into digital sound signals. The pre-amplifiers may not be required for some applications. The audio codec 1402 then transmits the digital sound signals to the DSP 1403 for processing of the digital sound signals. The DSP 1403 implements the sound source localization unit 202, the adaptive beamforming unit 203, and the noise reduction unit 207.
After the processing, the DSP 1403 either stores the processed signal from the DSP 1403 in a memory device for a recording application, or transmits the processed signal to the communication interface 1409. The recording application comprises, for example, storing the processed signal onto the memory device for the purposes of playing back the processed signal at a later time. The communication interface 1409 transmits the processed signal, for example, to a computer, the internet, or a radio for communicating the processed signal. In an embodiment, the microphone array system 200 disclosed herein implements a two-way communication device where the signal received from the communication interface 1409 is processed by the DSP 1403 and the processed signal is then played through the loudspeaker or the headphone 1408.
The flash memory 1404 stores the code for the DSP 1403 and compressed audio signals. When the microphone array system 200 boots up, the DSP 1403 reads the code from the flash memory 1404 into an internal memory of the DSP 1403 and then starts executing the code. In an embodiment, the audio codec 1402 can be configured for encoding and decoding audio or sound signals during the start-up stage by writing to registers of the DSP 1403. For an eight-sensor microphone array 201, two four-channel audio codec 1402 chips may be used. The power regulators 1405 and 1406, for example, linear power regulators 1405 and switch power regulators 1406, provide appropriate voltage and current supply for all the components, for example, 201, 1401, 1402, 1403, etc., mechanically supported and electrically connected on a circuit board. A universal serial bus (USB) control is built into the DSP 1403. The battery 1407 powers the microphone array system 200.
Consider an example where the microphone array system 200 disclosed herein is implemented on a mixed signal circuit board having a six-layer printed circuit board (PCB). Noisy digital signals easily contaminate the low voltage analog sound signals from the sound sensors 301. Therefore, the layout of the mixed signal circuit board is carefully partitioned to isolate the analog circuits from the digital circuits. Although both the inputs and outputs of the microphone amplifiers 1401 are in analog form, the microphone amplifiers 1401 are placed in a digital region of the mixed signal circuit board because of their high power consumption and switching-amplifier nature.
The linear power regulators 1405 are deployed in an analog region of the mixed signal circuit board due to their low noise. Five power regulators, for example, the linear power regulators 1405, are included in the microphone array system 200 circuits to ensure supply quality. The switch power regulators 1406 achieve an efficiency of about 95% of the input power and have a high output current capacity; however, their outputs are too noisy for analog circuits. The efficiency of the linear power regulators 1405 is determined by the ratio of the output voltage to the input voltage, which is lower than that of the switch power regulators 1406 in most cases. The regulator outputs utilized in the microphone array system 200 circuits are stable, quiet, and suitable for the low power analog circuits.
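A small worked example of the efficiency relationship stated above; the voltages are illustrative values, not taken from the patent.

```python
# Linear regulator efficiency ~ Vout / Vin (illustrative numbers only)
v_in, v_out = 5.0, 3.3
eta_linear = v_out / v_in          # = 0.66, i.e., about 66%; the rest is heat
eta_switch = 0.95                  # typical switch regulator efficiency cited above
print(f"linear: {eta_linear:.0%}, switch: {eta_switch:.0%}")
```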
In an example, the microphone array system 200 is designed with a microphone array 201 having dimensions of 10 cm×2.5 cm×1.5 cm, a USB interface, and an assembled PCB supporting the microphone array 201, a DSP 1403 with a low power consumption design devised for portable devices, a four-channel codec 1402, and a flash memory 1404. The DSP 1403 chip is powerful enough to handle the signal processing computations of the microphone array system 200 disclosed herein. The hardware configuration of this example can be used for any microphone array configuration, with suitable modifications to the software. In an embodiment, the adaptive beamforming unit 203 of the microphone array system 200 is implemented as hardware with software instructions programmed on the DSP 1403. The DSP 1403 is programmed for beamforming, noise reduction, echo cancellation, and USB interfacing according to the method disclosed herein, and is fine-tuned for optimal performance.
FIGS. 15A-15C exemplarily illustrate a conference phone 1500 comprising an eight-sensor microphone array 201. The eight-sensor microphone array 201 comprises eight sound sensors 301 arranged in a configuration as exemplarily illustrated in FIG. 15A. A top view of the conference phone 1500 comprising the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15A. A front view of the conference phone 1500 comprising the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15B. A handset 1502 that can be placed in a base holder 1501 of the conference phone 1500 having the eight-sensor microphone array 201 is exemplarily illustrated in FIG. 15C. In addition to a conference phone 1500, the microphone array system 200 disclosed herein with broadband beamforming can be configured for a mobile phone, a tablet computer, etc., for speech enhancement and noise reduction.
FIG. 16A exemplarily illustrates a layout of an eight-sensor microphone array 201 for a conference phone 1500. Consider an example of a circular microphone array 201 in which eight sound sensors 301 are mounted on the surface of the conference phone 1500 as exemplarily illustrated in FIG. 15A. The conference phone 1500 has a removable handset 1502 on top, and hence the microphone array system 200 is configured to accommodate the handset 1502 as exemplarily illustrated in FIGS. 15A-15C. In an example, the circular microphone array 201 has a diameter of about four inches. Eight sound sensors 301, for example, microphones M0, M1, M2, M3, M4, M5, M6, and M7, are distributed along a circle 302 on the conference phone 1500. Microphones M4-M7 are separated by 90 degrees from each other, and microphones M0-M3 are rotated counterclockwise by 60 degrees from microphones M4-M7, respectively.
FIG. 16B exemplarily illustrates a graphical representation of eight spatial regions to which the eight-sensor microphone array 201 of FIG. 16A responds. The space is divided into eight equal spatial regions centered at 15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330° respectively. The adaptive beamforming unit 203 configures the eight-sensor microphone array 201 to automatically point to one of these eight spatial regions according to the location of the target sound signal from the target sound source as estimated by the sound source localization unit 202.
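A minimal sketch of how the estimated azimuth could be mapped to one of the eight spatial regions; the function name and wrap-around handling are illustrative assumptions, not the disclosed selection logic.

```python
import numpy as np

REGION_CENTERS = np.array([15, 60, 105, 150, 195, 240, 285, 330])  # degrees

def select_region(estimated_azimuth_deg):
    """Pick the spatial region whose center is closest to the azimuth
    estimated by the sound source localization unit (angles wrap at 360)."""
    diff = (estimated_azimuth_deg - REGION_CENTERS + 180) % 360 - 180
    return REGION_CENTERS[np.argmin(np.abs(diff))]

assert select_region(350) == 330   # wraps correctly past 360 degrees
assert select_region(3) == 15
```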
FIGS. 16C-16D exemplarily illustrate computer simulations showing the steering of the directivity patterns of the eight-sensor microphone array 201 of FIG. 16A, in the directions 15° and 60° respectively, in the frequency range 300 Hz to 5 kHz. FIG. 16C exemplarily illustrates the computer simulation result showing the directivity pattern of the microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 15°.
The computer simulation for verifying the performance of the adaptive beamforming unit 203 when the target sound signal is received from the target sound source in the spatial region centered at 15° uses the following parameters:
Sampling frequency fs = 16 kHz;
FIR filter tap length L = 20;
Passband (Θp, Ωp) = {300-5000 Hz, −5° to 35°}, where the designed spatial directivity pattern is 1;
Stopband (Θs, Ωs) = {300-5000 Hz, −180° to −15° and 45° to 180°}, where the designed spatial directivity pattern is 0.
It can be seen that the directivity pattern of the microphone array 201 in the spatial region centered at 15° is enhanced while the sound signals from all other spatial regions are suppressed.
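Equations (3) through (8) referenced earlier define the disclosed beamformer design and are not reproduced in this section. As one standard realization of designing a pattern that is 1 over such a passband and 0 over such a stopband, the sketch below solves a least-squares fit of per-sensor FIR filters over a frequency-angle grid; the geometry, grid density, and solver are illustrative assumptions, not the patent's method.

```python
import numpy as np

def ls_broadband_design(mic_xy, fs, L, grid):
    """Least-squares broadband FIR beamformer design over a frequency-angle grid.

    mic_xy : (M, 2) sensor coordinates in meters
    L      : FIR filter tap length per sensor
    grid   : iterable of (freq_hz, theta_deg, desired) points covering the
             passband (desired=1) and stopband (desired=0)
    Returns real-valued taps w with shape (M, L).
    """
    c = 343.0                                   # speed of sound, m/s
    rows, d = [], []
    for f, theta, desired in grid:
        u = np.array([np.cos(np.radians(theta)), np.sin(np.radians(theta))])
        t = mic_xy @ u / c                      # far-field delay per sensor
        # response = sum_{m,l} w[m,l] * exp(-j*2*pi*f*(l/fs + t[m]))
        phase = np.exp(-2j * np.pi * f * (np.arange(L) / fs + t[:, None]))
        rows.append(phase.ravel())
        d.append(desired)
    A, d = np.array(rows), np.array(d)
    # Stack real/imaginary parts so the least-squares solution is real-valued
    w, *_ = np.linalg.lstsq(np.vstack([A.real, A.imag]),
                            np.concatenate([d, np.zeros_like(d)]), rcond=None)
    return w.reshape(len(mic_xy), L)

# Coarse grid mirroring the 15-degree-region specification above
mics = 0.05 * np.array([[np.cos(a), np.sin(a)]
                        for a in np.radians([0, 90, 180, 270])])
grid = ([(f, th, 1.0) for f in range(300, 5001, 470) for th in range(-5, 36, 10)]
        + [(f, th, 0.0) for f in range(300, 5001, 470)
           for th in list(range(-180, -14, 15)) + list(range(45, 181, 15))])
w = ls_broadband_design(mics, fs=16000, L=20, grid=grid)
```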
FIG. 16D exemplarily illustrates the computer simulation result showing the directivity pattern of the microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 60°. The computer simulation for verifying the performance of the adaptive beamforming unit 203 in this case uses the following parameters:
Sampling frequency fs = 16 kHz;
FIR filter tap length L = 20;
Passband (Θp, Ωp) = {300-5000 Hz, 40° to 80°}, where the designed spatial directivity pattern is 1;
Stopband (Θs, Ωs) = {300-5000 Hz, −180° to 30° and 90° to 180°}, where the designed spatial directivity pattern is 0.
It can be seen that the directivity pattern of the microphone array 201 in the spatial region centered at 60° is enhanced while the sound signals from all other spatial regions are suppressed. The other six spatial regions have similar parameters. Moreover, the main lobe has the same level at all frequencies, which means the target sound signal undergoes little spectral distortion.
FIGS. 16E-16L exemplarily illustrate graphical representations showing the directivity patterns of the eight-sensor microphone array 201 of FIG. 16A in each of the eight spatial regions, where each directivity pattern is an average response from 300 Hz to 5000 Hz. The main lobe is about 10 dB higher than the side lobes, and therefore the ambient noise signals from other directions are highly suppressed compared to the target sound signal in the pass direction. The microphone array system 200 calculates the filter coefficients for the target sound signal, for example, speech signals, from each sound sensor 301 and combines the filtered signals to enhance the speech from any specific direction. Since speech covers a large range of frequencies, the method and the system 200 disclosed herein cover broadband signals from 300 Hz to 5000 Hz.
FIG. 16E exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 15°. FIG. 16F exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 60°. FIG. 16G exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 105°. FIG. 16H exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 150°. FIG. 16I exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 195°. FIG. 16J exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 240°. FIG. 16K exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 285°. FIG. 16L exemplarily illustrates a graphical representation showing the directivity pattern of the eight-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 330°. The microphone array system 200 disclosed herein enhances the target sound signal from each of the directions 15°, 60°, 105°, 150°, 195°, 240°, 285°, and 330°, while suppressing the ambient noise signals from the other directions.
The microphone array system 200 disclosed herein can also be implemented for a square or rectangular microphone array configuration, where a sound sensor 301 is positioned in each corner of the four-cornered array. The microphone array system 200 disclosed herein extends beamforming from planar, that is, two dimensional, sound source configurations to three dimensional sound sources.
FIG. 17A exemplarily illustrates a graphical representation of four spatial regions to which a four-sensor microphone array 201 for a wireless handheld device responds. The wireless handheld device is, for example, a mobile phone. Consider an example where the microphone array 201 comprises four sound sensors 301, for example, microphones, uniformly distributed around a circle 302 having a diameter of about two inches. This configuration is identical to positioning four sound sensors 301 or microphones on the four corners of a square. The space is divided into four equal spatial regions centered at −90°, 0°, 90°, and 180° respectively. The adaptive beamforming unit 203 configures the four-sensor microphone array 201 to automatically point to one of these spatial regions according to the location of the target sound signal from the target sound source as estimated by the sound source localization unit 202.
FIGS. 17B-17I exemplarily illustrate computer simulations showing the directivity patterns of the four-sensor microphone array 201 of FIG. 17A with respect to azimuth and frequency. The results of the computer simulations performed for verifying the performance of the adaptive beamforming unit 203 of the microphone array system 200 disclosed herein, for a sampling frequency fs = 16 kHz and an FIR filter tap length L = 20, are as follows:
For the spatial region centered at 0°:
Passband (Θp, Ωp) = {300-4000 Hz, −20° to 20°}, where the designed spatial directivity pattern is 1.
Stopband (Θs, Ωs) = {300-4000 Hz, −180° to −30° and 30° to 180°}, where the designed spatial directivity pattern is 0.
For the spatial region centered at 90°:
Passband (Θp, Ωp) = {300-4000 Hz, 70° to 110°}, where the designed spatial directivity pattern is 1.
Stopband (Θs, Ωs) = {300-4000 Hz, −180° to 60° and 120° to 180°}, where the designed spatial directivity pattern is 0. The directivity patterns for the spatial regions centered at −90° and 180° are similarly obtained.
FIG. 17B exemplarily illustrates the computer simulation result representing a three dimensional (3D) display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at −90°. FIG. 17C exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at −90°.
FIG. 17D exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 0°. FIG. 17E exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 0°.
FIG. 17F exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 90°. FIG. 17G exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 90°.
FIG. 17H exemplarily illustrates the computer simulation result representing a 3D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 180°. FIG. 17I exemplarily illustrates the computer simulation result representing a 2D display of the directivity pattern of the four-sensor microphone array 201 when the target sound signal is received from the target sound source in the spatial region centered at 180°. The 3D displays of the directivity patterns in FIG. 17B, FIG. 17D, FIG. 17F, and FIG. 17H demonstrate that the passbands have the same height. The 2D displays of the directivity patterns in FIG. 17C, FIG. 17E, FIG. 17G, and FIG. 17I demonstrate that the passbands have the same width along the frequency axis, which demonstrates the broadband properties of the microphone array 201.
FIGS. 18A-18B exemplarily illustrate a microphone array configuration for a tablet computer. In this example, four sound sensors 301 of the microphone array 201 are positioned on a frame 1801 of the tablet computer, for example, the iPad® of Apple Inc. Geometrically, the sound sensors 301 are distributed on the circle 302 as exemplarily illustrated in FIG. 18B. The radius of the circle 302 is equal to the width of the tablet computer. The angle θ between the sound sensors 301 M2 and M3 is chosen to avoid spatial aliasing up to 4000 Hz. This microphone array configuration enhances a front speaker's voice and suppresses background ambient noise. The adaptive beamforming unit 203 configures the microphone array 201 to form an acoustic beam 1802 pointing frontwards using the method and system 200 disclosed herein. The target sound signal, that is, the front speaker's voice within the range of Φ<30°, is enhanced compared to the sound signals from other directions.
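The angle θ can be checked against the usual half-wavelength spatial-aliasing criterion; a brief sketch follows, in which the circle radius is illustrative rather than the tablet computer's actual dimension.

```python
import numpy as np

# Spatial aliasing rule of thumb: inter-sensor spacing d must satisfy
# d <= c / (2 * f_max), i.e., half the shortest wavelength of interest.
c, f_max = 343.0, 4000.0            # speed of sound (m/s), highest frequency (Hz)
d_max = c / (2 * f_max)             # about 4.3 cm maximum spacing

# For sensors M2, M3 on a circle of radius r separated by angle theta,
# the chord length is 2*r*sin(theta/2); pick theta so the chord <= d_max.
r = 0.10                            # illustrative radius, meters
theta_max = 2 * np.arcsin(min(d_max / (2 * r), 1.0))
print(f"d_max = {d_max*100:.1f} cm, theta_max = {np.degrees(theta_max):.1f} deg")
```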
FIG. 18C exemplarily illustrates an acoustic beam 1802 formed using the microphone array configuration of FIGS. 18A-18B according to the method and system 200 disclosed herein.
FIGS. 18D-18G exemplarily illustrate graphs showing processing results of the adaptive beamforming unit 203 and the noise reduction unit 207 for the microphone array configuration of FIG. 18B, in both a time domain and a spectral domain for the tablet computer. Consider an example where a speaker is talking in front of the tablet computer with ambient noise signals on the side. FIG. 18D exemplarily illustrates a graph showing the performance of the microphone array 201 before performing beamforming and noise reduction with a signal-to-noise ratio (SNR) of 15 dB. FIG. 18E exemplarily illustrates a graph showing the performance of the microphone array 201 after performing beamforming and noise reduction, according to the method disclosed herein, with an SNR of 15 dB. FIG. 18F exemplarily illustrates a graph showing the performance of the microphone array 201 before performing beamforming and noise reduction with an SNR of 0 dB. FIG. 18G exemplarily illustrates a graph showing the performance of the microphone array 201 after performing beamforming and noise reduction, according to the method disclosed herein, with an SNR of 0 dB.
It can be seen from FIGS. 18D-18G that the performance graphs are noisier for the microphone array 201 before the beamforming and noise reduction are performed. The adaptive beamforming unit 203 and the noise reduction unit 207 of the microphone array system 200 disclosed herein therefore suppress the ambient noise signals while maintaining the clarity of the target sound signal, for example, the speech signal.
FIGS. 19A-19F exemplarily illustrate tables showing different microphone array configurations and the corresponding values of delay τn for the sound sensors 301 in each of the microphone array configurations. The broadband beamforming method disclosed herein can be used for microphone arrays 201 with arbitrary numbers of sound sensors 301 and arbitrary locations of the sound sensors 301. The sound sensors 301 can be mounted on surfaces or edges of any speech acquisition device. For any specific microphone array configuration, the only parameter that needs to be defined to obtain the beamformer coefficients is the value of τn for each sound sensor 301, as disclosed in the detailed description of FIG. 5, FIGS. 6A-6B, and FIGS. 7A-7C and as exemplarily illustrated in FIGS. 19A-19F. In an example, the microphone array configuration exemplarily illustrated in FIG. 19F is implemented on a handheld device for hands-free speech acquisition. In a hands-free and non-close talking scenario, a user prefers to talk at a distance rather than speaking close to the sound sensor 301, and may want to talk while watching the screen of the handheld device. The microphone array system 200 disclosed herein allows the handheld device to pick up sound signals from the direction of the speaker's mouth and suppress noise from other directions. The method and system 200 disclosed herein may be implemented on any device or equipment, for example, a voice recorder, where a target sound signal or speech needs to be enhanced.
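For reference, a minimal sketch of the per-sensor delay computation in the two dimensional case, assuming the common far-field model and the sample-domain relation τ = fs·t stated in claim 19; the sign convention and the example geometry are illustrative assumptions, not the values tabulated in FIGS. 19A-19F.

```python
import numpy as np

def sample_delays(r, phi_deg, azimuth_deg, fs, c=343.0):
    """Delay tau_n (in samples) between each sound sensor and the array origin.

    r          : (N,) distance of each sensor from the origin, meters
    phi_deg    : (N,) predefined angle of each sensor from the reference axis
    azimuth_deg: azimuth of the target sound signal from the reference axis
    Far-field, two-dimensional model: t_n = -(r_n / c) * cos(azimuth - phi_n),
    and tau_n = fs * t_n (sign convention is illustrative).
    """
    t = -(r / c) * np.cos(np.radians(azimuth_deg - phi_deg))
    return fs * t

# Example: the four-sensor circular array of FIG. 17A, two-inch diameter
r = np.full(4, 0.0254)                       # one-inch radius, in meters
phi = np.array([0.0, 90.0, 180.0, 270.0])    # sensor angles on the circle
tau = sample_delays(r, phi, azimuth_deg=0.0, fs=16000)
```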
The foregoing examples have been provided merely for the purpose of explanation and are in no way to be construed as limiting of the present invention disclosed herein. While the invention has been described with reference to various embodiments, it is understood that the words, which have been used herein, are words of description and illustration, rather than words of limitation. Further, although the invention has been described herein with reference to particular means, materials and embodiments, the invention is not intended to be limited to the particulars disclosed herein; rather, the invention extends to all functionally equivalent structures, methods and uses, such as are within the scope of the appended claims. Those skilled in the art, having the benefit of the teachings of this specification, may effect numerous modifications thereto, and changes may be made without departing from the scope and spirit of the invention in its aspects.

Claims (41)

We claim:
1. A method for enhancing a target sound signal from a plurality of sound signals, comprising:
providing a microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit, wherein said sound source localization unit, said adaptive beamforming unit, and said noise reduction unit are in operative communication with said array of said sound sensors;
receiving said sound signals from a plurality of disparate sound sources by said sound sensors, wherein said received sound signals comprise said target sound signal from a target sound source among said disparate sound sources, and ambient noise signals;
determining a delay between each of said sound sensors and an origin of said array of said sound sensors as a function of distance between each of said sound sensors and said origin, a predefined angle between each of said sound sensors and a reference axis, and an azimuth angle between said reference axis and said target sound signal, when said target sound source that emits said target sound signal is in a two dimensional plane, wherein said delay is represented in terms of number of samples, and wherein said determination of said delay enables beamforming for arbitrary numbers of said sound sensors and a plurality of arbitrary configurations of said array of said sound sensors;
estimating a spatial location of said target sound signal from said received sound signals by said sound source localization unit;
performing adaptive beamforming for steering a directivity pattern of said array of said sound sensors in a direction of said spatial location of said target sound signal by said adaptive beamforming unit, wherein said adaptive beamforming unit enhances said target sound signal and partially suppresses said ambient noise signals; and
suppressing said ambient noise signals by said noise reduction unit for further enhancing said target sound signal.
2. The method of claim 1, wherein said spatial location of said target sound signal from said target sound source is estimated using a steered response power-phase transform by said sound source localization unit.
3. The method of claim 1, wherein said adaptive beamforming comprises:
providing a fixed beamformer, a blocking matrix, and an adaptive filter in said adaptive beamforming unit;
steering said directivity pattern of said array of said sound sensors in said direction of said spatial location of said target sound signal from said target sound source by said fixed beamformer for enhancing said target sound signal, when said target sound source is in motion;
feeding said ambient noise signals to said adaptive filter by blocking said target sound signal received from said target sound source using said blocking matrix; and
adaptively filtering said ambient noise signals by said adaptive filter in response to detecting one of presence and absence of said target sound signal in said sound signals received from said disparate sound sources.
4. The method of claim 3, wherein said fixed beamformer performs fixed beamforming by filtering and summing output sound signals from said sound sensors.
5. The method of claim 3, wherein said adaptive filtering comprises sub-band adaptive filtering performed by said adaptive filter, wherein said sub-band adaptive filtering comprises:
providing an analysis filter bank, an adaptive filter matrix, and a synthesis filter bank in said adaptive filter;
splitting said enhanced target sound signal from said fixed beamformer and said ambient noise signals from said blocking matrix into a plurality of frequency sub-bands by said analysis filter bank;
adaptively filtering said ambient noise signals in each of said frequency sub-bands by said adaptive filter matrix in response to detecting one of presence and absence of said target sound signal in said sound signals received from said disparate sound sources; and
synthesizing a full-band sound signal using said frequency sub-bands of said enhanced target sound signal by said synthesis filter bank.
6. The method of claim 3, wherein said adaptive beamforming further comprises detecting said presence of said target sound signal by an adaptation control unit provided in said adaptive beamforming unit and adjusting a step size for said adaptive filtering in response to detecting one of said presence and said absence of said target sound signal in said sound signals received from said disparate sound sources.
7. The method of claim 1, wherein said noise reduction unit performs noise reduction by using one of a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, and a model based noise reduction algorithm.
8. The method of claim 1, wherein said noise reduction unit performs noise reduction in a plurality of frequency sub-bands, wherein said frequency sub-bands are employed by an analysis filter bank of said adaptive beamforming unit for sub-band adaptive beamforming.
9. A system for enhancing a target sound signal from a plurality of sound signals, comprising:
an array of sound sensors positioned in an arbitrary configuration, wherein said sound sensors receive said sound signals from a plurality of disparate sound sources, wherein said received sound signals comprise said target sound signal from a target sound source among said disparate sound sources, and ambient noise signals;
a sound source localization unit that estimates a spatial location of said target sound signal from said received sound signals, by determining a delay between each of said sound sensors and an origin of said array of said sound sensors as a function of distance between each of said sound sensors and said origin, a predefined angle between each of said sound sensors and a reference axis, and an azimuth angle between said reference axis and said target sound signal, when said target sound source that emits said target sound signal is in a two dimensional plane, wherein said delay is represented in terms of number of samples, and wherein said determination of said delay enables beamforming for arbitrary numbers of said sound sensors and a plurality of arbitrary configurations of said array of said sound sensors;
an adaptive beamforming unit that steers directivity pattern of said array of said sound sensors in a direction of said spatial location of said target sound signal, wherein said adaptive beamforming unit enhances said target sound signal and partially suppresses said ambient noise signals; and
a noise reduction unit that suppresses said ambient noise signals for further enhancing said target sound signal.
10. The system of claim 9, wherein said sound source localization unit estimates said spatial location of said target sound signal from said target sound source using a steered response power-phase transform.
11. The system of claim 9, wherein said adaptive beamforming unit comprises:
a fixed beamformer that steers said directivity pattern of said array of said sound sensors in said direction of said spatial location of said target sound signal from said target sound source for enhancing said target sound signal, when said target sound source is in motion;
a blocking matrix that feeds said ambient noise signals to an adaptive filter by blocking said target sound signal received from said target sound source; and
said adaptive filter that adaptively filters said ambient noise signals in response to detecting one of presence and absence of said target sound signal in said sound signals received from said disparate sound sources.
12. The system of claim 11, wherein said fixed beamformer performs fixed beamforming by filtering and summing output sound signals from said sound sensors.
13. The system of claim 11, wherein said adaptive filter comprises a set of sub-band adaptive filters comprising:
an analysis filter bank that splits said enhanced target sound signal from said fixed beamformer and said ambient noise signals from said blocking matrix into a plurality of frequency sub-bands;
an adaptive filter matrix that adaptively filters said ambient noise signals in each of said frequency sub-bands in response to detecting one of presence and absence of said target sound signal in said sound signals received from said disparate sound sources; and
a synthesis filter bank that synthesizes a full-band sound signal using said frequency sub-bands of said enhanced target sound signal.
14. The system of claim 9, wherein said adaptive beamforming unit further comprises an adaptation control unit that detects said presence of said target sound signal and adjusts a step size for said adaptive filtering in response to detecting one of said presence and said absence of said target sound signal in said sound signals received from said disparate sound sources.
15. The system of claim 9, wherein said noise reduction unit is one of a Wiener-filter based noise reduction unit, a spectral subtraction noise reduction unit, an auditory transform based noise reduction unit, and a model based noise reduction unit.
16. The system of claim 9, further comprising one or more audio codecs that convert said sound signals in an analog form of said sound signals into digital sound signals and reconverts said digital sound signals into said analog form of said sound signals.
17. The system of claim 9, wherein said noise reduction unit performs noise reduction in a plurality of frequency sub-bands employed by an analysis filter bank of said adaptive beamforming unit for sub-band adaptive beamforming.
18. The system of claim 9, wherein said array of said sound sensors is one of a linear array of said sound sensors, a circular array of said sound sensors, and an arbitrarily distributed coplanar array of said sound sensors.
19. The method of claim 1, wherein said delay (τ) is determined by a formula τ=fs*t, wherein fs is a sampling frequency and t is a time delay.
20. A method for enhancing a target sound signal from a plurality of sound signals, comprising:
providing a microphone array system comprising an array of sound sensors positioned in an arbitrary configuration, a sound source localization unit, an adaptive beamforming unit, and a noise reduction unit, wherein said sound source localization unit, said adaptive beamforming unit, and said noise reduction unit are in operative communication with said array of said sound sensors;
receiving said sound signals from a plurality of disparate sound sources by said sound sensors, wherein said received sound signals comprise said target sound signal from a target sound source among said disparate sound sources, and ambient noise signals;
determining a delay between each of said sound sensors and an origin of said array of said sound sensors as a function of distance between each of said sound sensors and said origin, a predefined angle between each of said sound sensors and a first reference axis, an elevation angle between a second reference axis and said target sound signal, and an azimuth angle between said first reference axis and said target sound signal, when said target sound source that emits said target sound signal is in a three dimensional plane, wherein said delay is represented in terms of number of samples, and wherein said determination of said delay enables beamforming for arbitrary numbers of said sound sensors and a plurality of arbitrary configurations of said array of said sound sensors;
estimating a spatial location of said target sound signal from said received sound signals by said sound source localization unit;
performing adaptive beamforming for steering a directivity pattern of said array of said sound sensors in a direction of said spatial location of said target sound signal by said adaptive beamforming unit, wherein said adaptive beamforming unit enhances said target sound signal and partially suppresses said ambient noise signals; and
suppressing said ambient noise signals by said noise reduction unit for further enhancing said target sound signal.
21. A system for enhancing a target sound signal from a plurality of sound signals, comprising:
an array of sound sensors positioned in an arbitrary configuration, wherein said sound sensors receive said sound signals from a plurality of disparate sound sources, wherein said received sound signals comprise said target sound signal from a target sound source among said disparate sound sources, and ambient noise signals;
a sound source localization unit that estimates a spatial location of said target sound signal from said received sound signals, by determining a delay between each of said sound sensors and an origin of said array of said sound sensors as a function of distance between each of said sound sensors and said origin, a predefined angle between each of said sound sensors and a first reference axis, an elevation angle between a second reference axis and said target sound signal, and an azimuth angle between said first reference axis and said target sound signal, when said target sound source that emits said target sound signal is in a three dimensional plane, wherein said delay is represented in terms of number of samples, and wherein said determination of said delay enables beamforming for arbitrary numbers of said sound sensors and a plurality of arbitrary configurations of said array of said sound sensors;
an adaptive beamforming unit that steers directivity pattern of said array of said sound sensors in a direction of said spatial location of said target sound signal, wherein said adaptive beamforming unit enhances said target sound signal and partially suppresses said ambient noise signals; and
a noise reduction unit that suppresses said ambient noise signals for further enhancing said target sound signal.
22. A method for enhancing a target sound signal from a plurality of sound signals, comprising:
providing a microphone array system comprising an array of sound sensors positioned in a linear, circular, or other configuration, a sound source localization unit, an adaptive beamforming unit, a noise reduction unit, and an echo cancellation unit, wherein said sound source localization unit, said adaptive beamforming unit, said noise reduction unit, and said echo cancellation unit are implemented in a digital signal processor, and wherein said digital signal processor is in operative communication with said array of said sound sensors;
receiving said sound signals from a plurality of disparate sound sources by said sound sensors, wherein said received sound signals comprise said target sound signal from a target sound source among said disparate sound sources, and ambient noise signals;
determining a delay between each of said sound sensors and an origin of said array of said sound sensors as a function of distance between each of said sound sensors and said origin, a predefined angle between each of said sound sensors and a reference axis, and an azimuth angle between said reference axis and said target sound signal, when said target sound source that emits said target sound signal is in a two dimensional plane, wherein said delay is represented in terms of number of samples, and wherein said determination of said delay enables beamforming for said array of said sound sensors in a plurality of configurations;
estimating a location of said target sound signal from said received sound signals by said sound source localization unit;
performing adaptive beamforming for steering a directivity pattern of said array of said sound sensors in a direction of said location of said target sound signal by said adaptive beamforming unit, wherein said adaptive beamforming unit enhances said target sound signal and partially suppresses said ambient noise signals;
performing echo cancellation by said echo cancellation unit for further enhancing said target sound signal; and
suppressing said ambient noise signals by said noise reduction unit for further enhancing said target sound signal.
23. The method of claim 22, wherein said location of said target sound signal from said target sound source is estimated using a steered response power-phase transform by said sound source localization unit.
24. The method of claim 22, wherein said adaptive beamforming comprises:
providing a fixed beamformer, a blocking matrix, and an adaptive filter in said adaptive beamforming unit;
steering said directivity pattern of said array of said sound sensors in said direction of said location of said target sound signal from said target sound source by said fixed beamformer for enhancing said target sound signal, when said target sound source is in motion;
feeding said ambient noise signals to said adaptive filter by blocking said target sound signal received from said target sound source using said blocking matrix; and
adaptively filtering said ambient noise signals by said adaptive filter in response to voice activity detection, wherein said voice activity detection comprises detecting one of presence and absence of said target sound signal in said sound signals received from said disparate sound sources.
25. The method of claim 24, wherein said fixed beamformer performs fixed beamforming by one of filtering and summing output sound signals from said sound sensors, and delaying and summing output sound signals from said sound sensors.
26. The method of claim 24, wherein said adaptive filtering comprises sub-band adaptive filtering performed by said adaptive filter, and wherein said sub-band adaptive filtering comprises:
providing an analysis filter bank, an adaptive filter matrix, and a synthesis filter bank in said adaptive filter;
splitting said enhanced target sound signal from said fixed beamformer and said ambient noise signals from said blocking matrix into a plurality of frequency sub-bands by said analysis filter bank;
adaptively filtering said ambient noise signals in each of said frequency sub-bands by said adaptive filter matrix in response to said detection of one of said presence and said absence of said target sound signal in said sound signals received from said disparate sound sources; and
synthesizing a full-band sound signal using said frequency sub-bands of said enhanced target sound signal by said synthesis filter bank.
27. The method of claim 24, wherein said adaptive beamforming further comprises detecting said presence of said target sound signal by an adaptation control unit provided in said adaptive beamforming unit and adjusting a step size for said adaptive filtering in response to said detection of one of said presence and said absence of said target sound signal in said sound signals received from said disparate sound sources.
28. The method of claim 22, wherein said noise reduction unit performs noise reduction by using one of a Wiener-filter based noise reduction algorithm, a spectral subtraction noise reduction algorithm, an auditory transform based noise reduction algorithm, and a model based noise reduction algorithm.
29. The method of claim 22, wherein said noise reduction unit performs noise reduction in a plurality of frequency sub-bands, wherein said frequency sub-bands are employed by an analysis filter bank of said adaptive beamforming unit for sub-band adaptive beamforming, wherein said sound source localization unit calculates said delay (τ) based on said number of samples within a time period and a time delay for said target sound signal to travel said distance between each of said sound sensors in said microphone array and said origin of said array of said sound sensors, and wherein said distance between said each of said sound sensors in the microphone array and said origin of said array of said sound sensors is either a same distance or a different distance.
30. A microphone array system for enhancing a target sound signal from a plurality of sound signals, comprising:
an array of sound sensors positioned in a linear, circular, or other configuration, wherein said sound sensors receive said sound signals from a plurality of disparate sound sources, wherein said received sound signals comprise said target sound signal from a target sound source among said disparate sound sources, and ambient noise signals;
a digital signal processor, said digital signal processor comprising:
a sound source localization unit that estimates a location of said target sound signal from said received sound signals, by determining a delay between each of said sound sensors and an origin of said array of said sound sensors as a function of distance between each of said sound sensors and said origin, a predefined angle between each of said sound sensors and a reference axis, and an azimuth angle between said reference axis and said target sound signal, when said target sound source that emits said target sound signal is in a two dimensional plane, wherein said delay is represented in terms of number of samples, and wherein said determination of said delay enables beamforming for said array of sound sensors in a plurality of configurations;
an adaptive beamforming unit that steers directivity pattern of said array of said sound sensors in a direction of said location of said target sound signal, wherein said adaptive beamforming unit enhances said target sound signal and partially suppresses said ambient noise signals;
an echo cancellation unit that performs echo cancellation for further enhancing said target sound signal; and
a noise reduction unit that suppresses said ambient noise signals for further enhancing said target sound signal.
31. The system of claim 30, wherein said sound source localization unit estimates said location of said target sound signal from said target sound source using a steered response power-phase transform.
32. The system of claim 30, wherein said adaptive beamforming unit comprises:
a fixed beamformer that steers said directivity pattern of said array of said sound sensors in said direction of said location of said target sound signal from said target sound source for enhancing said target sound signal, when said target sound source is in motion;
a blocking matrix that feeds said ambient noise signals to an adaptive filter by blocking said target sound signal received from said target sound source; and
said adaptive filter adaptively filters said ambient noise signals in response to voice activity detection, wherein said voice activity detection comprises detecting one of presence and absence of said target sound signal in said sound signals received from said disparate sound sources.
33. The system of claim 32, wherein said fixed beamformer performs fixed beamforming by one of filtering and summing output sound signals from said sound sensors, and delaying and summing output sound signals from said sound sensors.
34. The system of claim 32, wherein said adaptive filter comprises a set of sub-band adaptive filters comprising:
an analysis filter bank that splits said enhanced target sound signal from said fixed beamformer and said ambient noise signals from said blocking matrix into a plurality of frequency sub-bands;
an adaptive filter matrix that adaptively filters said ambient noise signals in each of said frequency sub-bands in response to said detection of one of said presence and said absence of said target sound signal in said sound signals received from said disparate sound sources; and
a synthesis filter bank that synthesizes a full-band sound signal using said frequency sub-bands of said enhanced target sound signal.
35. The system of claim 32, wherein said adaptive beamforming unit further comprises an adaptation control unit that detects said presence of said target sound signal and adjusts a step size for said adaptive filtering in response to said detection of one of said presence and said absence of said target sound signal in said sound signals received from said disparate sound sources.
36. The system of claim 30, wherein said noise reduction unit is one of a Wiener-filter based noise reduction unit, a spectral subtraction noise reduction unit, an auditory transform based noise reduction unit, and a model based noise reduction unit, wherein said noise reduction unit performs noise reduction in a plurality of frequency sub-bands employed by an analysis filter bank of said adaptive beamforming unit for sub-band adaptive beamforming, wherein said sound source localization unit calculates said delay (τ) based on said number of samples within a time period and a time delay for said target sound signal to travel said distance between each of said sound sensors in said microphone array and said origin of said array of said sound sensors, and wherein said distance between said each of said sound sensors in the microphone array and said origin of said array of said sound sensors is either a same distance or a different distance.
37. The system of claim 30, further comprising one or more audio codecs that convert said sound signals in an analog form of said sound signals into digital sound signals and reconverts said digital sound signals into said analog form of said sound signals.
38. A microphone array system for enhancing a target sound signal from a plurality of sound signals, comprising:
an array of sound sensors, wherein said sound sensors receive said sound signals from a plurality of disparate sound sources, wherein said received sound signals comprise said target sound signal from a target sound source among said disparate sound sources, and ambient noise signals;
a digital signal processor, said digital signal processor comprising:
a sound source localization unit that estimates a location of said target sound signal from said received sound signals by determining a delay between each of said sound sensors and a reference point of said array of said sound sensors as a function of distance between each of said sound sensors and said reference point and an angle of each of said sound sensors biased from a reference axis;
a beamforming unit that enhances said target sound signal and partially suppresses said ambient noise signals;
an echo cancellation unit that performs echo cancellation and further enhances said target sound signal; and
a noise reduction unit that suppresses said ambient noise signals and further enhances said target sound signal.
39. The system of claim 38, wherein said microphone array system is implemented in one of devices with speech acquisition capability, hands-free devices, handheld devices, conference phones and video conferencing applications, wherein said handheld devices comprise smart phones, tablet computers and laptop computers, and wherein said array of said sound sensors is one of a linear array of said sound sensors, a circular array of said sound sensors, and other types of array of said sound sensors.
40. The method of claim 22, wherein said microphone array system is implemented in one of devices with speech acquisition capability, hands-free devices, handheld devices, conference phones and video conferencing applications, wherein said handheld devices comprise smart phones, tablet computers and laptop computers.
41. The system of claim 30, wherein said microphone array system is implemented in one of devices with speech acquisition capability, hands-free devices, handheld devices, conference phones and video conferencing applications, wherein said handheld devices comprise smart phones, tablet computers and laptop computers.
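Claims 38 through 41 together describe a localization estimate feeding a beamformer, followed by echo cancellation and noise reduction. A minimal delay-and-sum sketch of the beamforming stage follows, reusing the steering delays from the previous sketch: each channel is delayed so that target-direction components add coherently while off-axis noise adds incoherently. Integer-sample circular shifting is an assumed simplification; a practical system would use fractional-delay interpolation and discard the wrapped frame edges.

```python
# Hedged sketch of a delay-and-sum beamformer over one multichannel frame.
import numpy as np

def delay_and_sum(mics: np.ndarray, tau: np.ndarray, fs: int = 16000) -> np.ndarray:
    """mics: (n_sensors, n_samples); tau: per-sensor arrival leads in seconds."""
    shifts = np.round(tau * fs).astype(int)
    shifts -= shifts.min()              # delay each channel relative to the latest arrival
    out = np.zeros(mics.shape[1])
    for channel, s in zip(mics, shifts):
        out += np.roll(channel, s)      # circular delay by s samples; edge samples invalid
    return out / mics.shape[0]

# Toy usage with assumed per-sensor arrival leads for a 4-sensor frame.
rng = np.random.default_rng(2)
frame = rng.standard_normal((4, 1024))
tau = np.array([0.0, 0.05 / 343.0, 0.0, -0.05 / 343.0])
enhanced = delay_and_sum(frame, tau)
```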
US16/052,623 2010-09-24 2018-08-02 Microphone array system Active 2033-05-18 USRE48371E1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/052,623 USRE48371E1 (en) 2010-09-24 2018-08-02 Microphone array system

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US40395210P 2010-09-24 2010-09-24
US13/049,877 US8861756B2 (en) 2010-09-24 2011-03-16 Microphone array system
US15/293,626 USRE47049E1 (en) 2010-09-24 2016-10-14 Microphone array system
US16/052,623 USRE48371E1 (en) 2010-09-24 2018-08-02 Microphone array system

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US13/049,877 Reissue US8861756B2 (en) 2010-09-24 2011-03-16 Microphone array system

Publications (1)

Publication Number Publication Date
USRE48371E1 true USRE48371E1 (en) 2020-12-29

Family

ID=45870681

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/049,877 Ceased US8861756B2 (en) 2010-09-24 2011-03-16 Microphone array system
US15/293,626 Active 2033-05-18 USRE47049E1 (en) 2010-09-24 2016-10-14 Microphone array system
US16/052,623 Active 2033-05-18 USRE48371E1 (en) 2010-09-24 2018-08-02 Microphone array system

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/049,877 Ceased US8861756B2 (en) 2010-09-24 2011-03-16 Microphone array system
US15/293,626 Active 2033-05-18 USRE47049E1 (en) 2010-09-24 2016-10-14 Microphone array system

Country Status (1)

Country Link
US (3) US8861756B2 (en)

Families Citing this family (107)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102306496B (en) * 2011-09-05 2014-07-09 歌尔声学股份有限公司 Noise elimination method, device and system of multi-microphone array
US8983089B1 (en) 2011-11-28 2015-03-17 Rawles Llc Sound source localization using multiple microphone arrays
WO2013093565A1 (en) * 2011-12-22 2013-06-27 Nokia Corporation Spatial audio processing apparatus
US9437213B2 (en) * 2012-03-05 2016-09-06 Malaspina Labs (Barbados) Inc. Voice signal enhancement
US10107887B2 (en) 2012-04-13 2018-10-23 Qualcomm Incorporated Systems and methods for displaying a user interface
US20130343549A1 (en) * 2012-06-22 2013-12-26 Verisilicon Holdings Co., Ltd. Microphone arrays for generating stereo and surround channels, method of operation thereof and module incorporating the same
US9384737B2 (en) * 2012-06-29 2016-07-05 Microsoft Technology Licensing, Llc Method and device for adjusting sound levels of sources based on sound source priority
US9232310B2 (en) * 2012-10-15 2016-01-05 Nokia Technologies Oy Methods, apparatuses and computer program products for facilitating directional audio capture with multiple microphones
US9078057B2 (en) * 2012-11-01 2015-07-07 Csr Technology Inc. Adaptive microphone beamforming
US9595997B1 (en) * 2013-01-02 2017-03-14 Amazon Technologies, Inc. Adaption-based reduction of echo and noise
US9294839B2 (en) 2013-03-01 2016-03-22 Clearone, Inc. Augmentation of a beamforming microphone array with non-beamforming microphones
US10750132B2 (en) * 2013-03-14 2020-08-18 Pelco, Inc. System and method for audio source localization using multiple audio sensors
US20140270219A1 (en) * 2013-03-15 2014-09-18 CSR Technology, Inc. Method, apparatus, and manufacture for beamforming with fixed weights and adaptive selection or resynthesis
CN104065798B (en) * 2013-03-21 2016-08-03 华为技术有限公司 Audio signal processing method and equipment
US9294858B2 (en) * 2014-02-26 2016-03-22 Revo Labs, Inc. Controlling acoustic echo cancellation while handling a wireless microphone
US9716946B2 (en) * 2014-06-01 2017-07-25 Insoundz Ltd. System and method thereof for determining of an optimal deployment of microphones to achieve optimal coverage in a three-dimensional space
US10149047B2 (en) * 2014-06-18 2018-12-04 Cirrus Logic Inc. Multi-aural MMSE analysis techniques for clarifying audio signals
KR102208477B1 (en) 2014-06-30 2021-01-27 삼성전자주식회사 Operating Method For Microphones and Electronic Device supporting the same
WO2016004225A1 (en) 2014-07-03 2016-01-07 Dolby Laboratories Licensing Corporation Auxiliary augmentation of soundfields
TWI584657B (en) * 2014-08-20 2017-05-21 國立清華大學 A method for recording and rebuilding of a stereophonic sound field
KR102174850B1 (en) * 2014-10-31 2020-11-05 한화테크윈 주식회사 Environment adaptation type beam forming apparatus for audio
US9747367B2 (en) 2014-12-05 2017-08-29 Stages Llc Communication system for establishing and providing preferred audio
US10609475B2 (en) 2014-12-05 2020-03-31 Stages Llc Active noise control and customized audio system
US9654868B2 (en) 2014-12-05 2017-05-16 Stages Llc Multi-channel multi-domain source identification and tracking
US10924846B2 (en) * 2014-12-12 2021-02-16 Nuance Communications, Inc. System and method for generating a self-steering beamformer
US9565493B2 (en) 2015-04-30 2017-02-07 Shure Acquisition Holdings, Inc. Array microphone system and method of assembling the same
US9554207B2 (en) 2015-04-30 2017-01-24 Shure Acquisition Holdings, Inc. Offset cartridge microphones
JP6131989B2 (en) * 2015-07-07 2017-05-24 沖電気工業株式会社 Sound collecting apparatus, program and method
US9823893B2 (en) 2015-07-15 2017-11-21 International Business Machines Corporation Processing of voice conversations using network of computing devices
US10572073B2 (en) * 2015-08-24 2020-02-25 Sony Corporation Information processing device, information processing method, and program
US10425726B2 (en) * 2015-10-26 2019-09-24 Sony Corporation Signal processing device, signal processing method, and program
US10320964B2 (en) * 2015-10-30 2019-06-11 Mitsubishi Electric Corporation Hands-free control apparatus
KR102502601B1 (en) * 2015-11-27 2023-02-23 삼성전자주식회사 Electronic device and controlling voice signal method
US11064291B2 (en) 2015-12-04 2021-07-13 Sennheiser Electronic Gmbh & Co. Kg Microphone array system
US9894434B2 (en) 2015-12-04 2018-02-13 Sennheiser Electronic Gmbh & Co. Kg Conference system with a microphone array system and a method of speech acquisition in a conference system
JP2017102085A (en) * 2015-12-04 2017-06-08 キヤノン株式会社 Information processing apparatus, information processing method, and program
CN107290711A (en) * 2016-03-30 2017-10-24 芋头科技(杭州)有限公司 A kind of voice is sought to system and method
US9820042B1 (en) 2016-05-02 2017-11-14 Knowles Electronics, Llc Stereo separation and directional suppression with omni-directional microphones
US10657983B2 (en) 2016-06-15 2020-05-19 Intel Corporation Automatic gain control for speech recognition
TWI579833B (en) * 2016-06-22 2017-04-21 瑞昱半導體股份有限公司 Signal processing device and signal processing method
CN107889022B (en) * 2016-09-30 2021-03-23 松下电器产业株式会社 Noise suppression device and noise suppression method
US9980075B1 (en) 2016-11-18 2018-05-22 Stages Llc Audio source spatialization relative to orientation sensor and output
US9980042B1 (en) 2016-11-18 2018-05-22 Stages Llc Beamformer direction of arrival and orientation analysis system
US10945080B2 (en) 2016-11-18 2021-03-09 Stages Llc Audio analysis and processing system
US10367948B2 (en) 2017-01-13 2019-07-30 Shure Acquisition Holdings, Inc. Post-mixing acoustic echo cancellation systems and methods
WO2018140618A1 (en) 2017-01-27 2018-08-02 Shure Acquisition Holdings, Inc. Array microphone module and system
US10366700B2 (en) 2017-02-08 2019-07-30 Logitech Europe, S.A. Device for acquiring and processing audible input
US10229667B2 (en) 2017-02-08 2019-03-12 Logitech Europe S.A. Multi-directional beamforming device for acquiring and processing audible input
US10366702B2 (en) 2017-02-08 2019-07-30 Logitech Europe, S.A. Direction detection device for acquiring and processing audible input
US10362393B2 (en) 2017-02-08 2019-07-23 Logitech Europe, S.A. Direction detection device for acquiring and processing audible input
US20180317006A1 (en) 2017-04-28 2018-11-01 Qualcomm Incorporated Microphone configurations
US10334360B2 (en) * 2017-06-12 2019-06-25 Revolabs, Inc Method for accurately calculating the direction of arrival of sound at a microphone array
WO2018229464A1 (en) * 2017-06-13 2018-12-20 Sandeep Kumar Chintala Noise cancellation in voice communication systems
US10187721B1 (en) * 2017-06-22 2019-01-22 Amazon Technologies, Inc. Weighing fixed and adaptive beamformers
US11101022B2 (en) 2017-08-10 2021-08-24 Nuance Communications, Inc. Automated clinical documentation system and method
US11316865B2 (en) 2017-08-10 2022-04-26 Nuance Communications, Inc. Ambient cooperative intelligence system and method
US10412532B2 (en) * 2017-08-30 2019-09-10 Harman International Industries, Incorporated Environment discovery via time-synchronized networked loudspeakers
US20200333423A1 (en) * 2017-10-11 2020-10-22 Sony Corporation Sound source direction estimation device and method, and program
US11565365B2 (en) * 2017-11-13 2023-01-31 Taiwan Semiconductor Manufacturing Co., Ltd. System and method for monitoring chemical mechanical polishing
CN108109617B (en) * 2018-01-08 2020-12-15 深圳市声菲特科技技术有限公司 Remote pickup method
EP3762921A4 (en) 2018-03-05 2022-05-04 Nuance Communications, Inc. Automated clinical documentation system and method
US11250382B2 (en) * 2018-03-05 2022-02-15 Nuance Communications, Inc. Automated clinical documentation system and method
EP3762929A4 (en) 2018-03-05 2022-01-12 Nuance Communications, Inc. System and method for review of automated clinical documentation
DE102018107579B4 (en) * 2018-03-29 2020-07-02 Tdk Corporation Microphone array
CN108319155A (en) * 2018-04-24 2018-07-24 苏州宏云智能科技有限公司 Wireless intelligent house terminal control unit
US20190324117A1 (en) * 2018-04-24 2019-10-24 Mediatek Inc. Content aware audio source localization
CN110441738B (en) * 2018-05-03 2023-07-28 阿里巴巴集团控股有限公司 Method, system, vehicle and storage medium for vehicle-mounted voice positioning
DE102018110759A1 (en) * 2018-05-04 2019-11-07 Sennheiser Electronic Gmbh & Co. Kg microphone array
WO2019231632A1 (en) 2018-06-01 2019-12-05 Shure Acquisition Holdings, Inc. Pattern-forming microphone array
US11297423B2 (en) 2018-06-15 2022-04-05 Shure Acquisition Holdings, Inc. Endfire linear array microphone
US10939030B2 (en) * 2018-09-07 2021-03-02 Canon Kabushiki Kaisha Video audio processing system and method of controlling the video audio processing system
EP3854108A1 (en) 2018-09-20 2021-07-28 Shure Acquisition Holdings, Inc. Adjustable lobe shape for array microphones
US11109133B2 (en) 2018-09-21 2021-08-31 Shure Acquisition Holdings, Inc. Array microphone module and system
US20200184994A1 (en) * 2018-12-07 2020-06-11 Nuance Communications, Inc. System and method for acoustic localization of multiple sources using spatial pre-filtering
CN109803171B (en) * 2019-02-15 2023-10-24 深圳市锐明技术股份有限公司 Monitoring camera for displaying voice position and control method thereof
US11558693B2 (en) 2019-03-21 2023-01-17 Shure Acquisition Holdings, Inc. Auto focus, auto focus within regions, and auto placement of beamformed microphone lobes with inhibition and voice activity detection functionality
CN113841419A (en) 2019-03-21 2021-12-24 舒尔获得控股公司 Housing and associated design features for ceiling array microphone
CN113841421A (en) 2019-03-21 2021-12-24 舒尔获得控股公司 Auto-focus, in-region auto-focus, and auto-configuration of beamforming microphone lobes with suppression
WO2020237206A1 (en) 2019-05-23 2020-11-26 Shure Acquisition Holdings, Inc. Steerable speaker array, system, and method for the same
WO2020243471A1 (en) 2019-05-31 2020-12-03 Shure Acquisition Holdings, Inc. Low latency automixer integrated with voice and noise activity detection
US11216480B2 (en) 2019-06-14 2022-01-04 Nuance Communications, Inc. System and method for querying data points from graph data structures
US11227679B2 (en) 2019-06-14 2022-01-18 Nuance Communications, Inc. Ambient clinical intelligence system and method
US11043207B2 (en) 2019-06-14 2021-06-22 Nuance Communications, Inc. System and method for array data simulation and customized acoustic modeling for ambient ASR
US11226396B2 (en) 2019-06-27 2022-01-18 Gracenote, Inc. Methods and apparatus to improve detection of audio signatures
US11531807B2 (en) 2019-06-28 2022-12-20 Nuance Communications, Inc. System and method for customized text macros
CN110364161A (en) * 2019-08-22 2019-10-22 北京小米智能科技有限公司 Method, electronic equipment, medium and the system of voice responsive signal
JP2022545113A (en) 2019-08-23 2022-10-25 シュアー アクイジッション ホールディングス インコーポレイテッド One-dimensional array microphone with improved directivity
US10887709B1 (en) * 2019-09-25 2021-01-05 Amazon Technologies, Inc. Aligned beam merger
US11670408B2 (en) 2019-09-30 2023-06-06 Nuance Communications, Inc. System and method for review of automated clinical documentation
CN111025233B (en) * 2019-11-13 2023-09-15 阿里巴巴集团控股有限公司 Sound source direction positioning method and device, voice equipment and system
US11552611B2 (en) 2020-02-07 2023-01-10 Shure Acquisition Holdings, Inc. System and method for automatic adjustment of reference gain
US11277689B2 (en) 2020-02-24 2022-03-15 Logitech Europe S.A. Apparatus and method for optimizing sound quality of a generated audible signal
US11240621B2 (en) 2020-04-11 2022-02-01 LI Creative Technologies, Inc. Three-dimensional audio systems
US11025324B1 (en) * 2020-04-15 2021-06-01 Cirrus Logic, Inc. Initialization of adaptive blocking matrix filters in a beamforming array using a priori information
USD944776S1 (en) 2020-05-05 2022-03-01 Shure Acquisition Holdings, Inc. Audio device
WO2021243368A2 (en) 2020-05-29 2021-12-02 Shure Acquisition Holdings, Inc. Transducer steering and configuration systems and methods using a local positioning system
JP2022061673A (en) 2020-10-07 2022-04-19 ヤマハ株式会社 Microphone array system
US11222103B1 (en) 2020-10-29 2022-01-11 Nuance Communications, Inc. Ambient cooperative intelligence system and method
CN112767908A (en) * 2020-12-29 2021-05-07 安克创新科技股份有限公司 Active noise reduction method based on key sound recognition, electronic equipment and storage medium
CN112684412B (en) * 2021-01-12 2022-09-13 中北大学 Sound source positioning method and system based on pattern clustering
JP2024505068A (en) 2021-01-28 2024-02-02 シュアー アクイジッション ホールディングス インコーポレイテッド Hybrid audio beamforming system
US11636842B2 (en) * 2021-01-29 2023-04-25 Iyo Inc. Ear-mountable listening device having a microphone array disposed around a circuit board
CN115061087A (en) * 2022-05-27 2022-09-16 上海事凡物联网科技有限公司 Signal processing method, DOA estimation method and electronic equipment
CN116055869B (en) * 2022-05-30 2023-10-20 荣耀终端有限公司 Video processing method and terminal
CN114863943B (en) * 2022-07-04 2022-11-04 杭州兆华电子股份有限公司 Self-adaptive positioning method and device for environmental noise source based on beam forming
CN114858271B (en) * 2022-07-05 2022-09-23 杭州兆华电子股份有限公司 Array amplification method for sound detection
CN116953615B (en) * 2023-08-04 2024-04-12 中国水利水电科学研究院 Networking detection positioning technology for termite nest of dam

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101656908A (en) * 2008-08-19 2010-02-24 深圳华为通信技术有限公司 Method for controlling sound focusing, communication device and communication system
CN101510426B (en) * 2009-03-23 2013-03-27 北京中星微电子有限公司 Method and system for eliminating noise
US20110096915A1 (en) * 2009-10-23 2011-04-28 Broadcom Corporation Audio spatialization for conference calls with multiple and moving talkers
US20110317522A1 (en) * 2010-06-28 2011-12-29 Microsoft Corporation Sound source localization based on reflections and room estimation

Patent Citations (93)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5315562A (en) * 1992-10-23 1994-05-24 Rowe, Deines Instruments Inc. Correlation sonar system
US5825898A (en) 1996-06-27 1998-10-20 Lamar Signal Processing Ltd. System and method for adaptive interference cancelling
US6236862B1 (en) * 1996-12-16 2001-05-22 Intersignal Llc Continuously adaptive dynamic signal separation and recovery system
US6198693B1 (en) 1998-04-13 2001-03-06 Andrea Electronics Corporation System and method for finding the direction of a wave source using an array of sensors
US7068801B1 (en) 1998-12-18 2006-06-27 National Research Council Of Canada Microphone array diffracting structure
US20080112574A1 (en) 2001-08-08 2008-05-15 Ami Semiconductor, Inc. Directional audio signal processing using an oversampled filterbank
US20030204397A1 (en) * 2002-04-26 2003-10-30 Mitel Knowledge Corporation Method of compensating for beamformer steering delay during handsfree speech recognition
US20040071284A1 (en) 2002-08-16 2004-04-15 Abutalebi Hamid Reza Method and system for processing subband signals using adaptive filters
US7039199B2 (en) 2002-08-26 2006-05-02 Microsoft Corporation System and process for locating a speaker using 360 degree sound source localization
US20040161121A1 (en) * 2003-01-17 2004-08-19 Samsung Electronics Co., Ltd Adaptive beamforming method and apparatus using feedback structure
EP1538867A1 (en) 2003-06-30 2005-06-08 Harman Becker Automotive Systems GmbH Handsfree system for use in a vehicle
US20070055505A1 (en) 2003-07-11 2007-03-08 Cochlear Limited Method and device for noise reduction
US20070076898A1 (en) 2003-11-24 2007-04-05 Koninklijke Philips Electronics N.V. Adaptive beamformer with robustness against uncorrelated noise
US20060153360A1 (en) 2004-09-03 2006-07-13 Walter Kellermann Speech signal processing with combined noise reduction and echo compensation
US20060269080A1 (en) 2004-10-15 2006-11-30 Lifesize Communications, Inc. Hybrid beamforming
US7970151B2 (en) 2004-10-15 2011-06-28 Lifesize Communications, Inc. Hybrid beamforming
US20060245601A1 (en) 2005-04-27 2006-11-02 Francois Michaud Robust localization and tracking of simultaneously moving sound sources using beamforming and particle filtering
US20090073040A1 (en) 2006-04-20 2009-03-19 Nec Corporation Adaptive array control device, method and program, and adaptive array processing device, method and program
WO2008041878A2 (en) 2006-10-04 2008-04-10 Micronas Nit System and procedure of hands free speech communication using a microphone array
US20080181430A1 (en) 2007-01-26 2008-07-31 Microsoft Corporation Multi-sensor sound source localization
US20080232607A1 (en) 2007-03-22 2008-09-25 Microsoft Corporation Robust adaptive beamforming with enhanced noise suppression
US20090067642A1 (en) 2007-08-13 2009-03-12 Markus Buck Noise reduction through spatial selectivity and filtering
US20090141907A1 (en) * 2007-11-30 2009-06-04 Samsung Electronics Co., Ltd. Method and apparatus for canceling noise from sound input through microphone
US20090279714A1 (en) 2008-05-06 2009-11-12 Samsung Electronics Co., Ltd. Apparatus and method for localizing sound source in robot
US20090304200A1 (en) 2008-06-09 2009-12-10 Samsung Electronics Co., Ltd. Adaptive mode control apparatus and method for adaptive beamforming based on detection of user direction sound
KR20090128221A (en) 2008-06-10 2009-12-15 삼성전자주식회사 Method for sound source localization and system thereof
US20100150364A1 (en) 2008-12-12 2010-06-17 Nuance Communications, Inc. Method for Determining a Time Delay for Time Delay Compensation
US20120327115A1 (en) 2011-06-21 2012-12-27 Chhetri Amit S Signal-enhancing Beamforming in an Augmented Reality Environment
US9973848B2 (en) 2011-06-21 2018-05-15 Amazon Technologies, Inc. Signal-enhancing beamforming in an augmented reality environment
US9116962B1 (en) 2012-03-28 2015-08-25 Amazon Technologies, Inc. Context dependent recognition
US20130265276A1 (en) 2012-04-09 2013-10-10 Amazon Technologies, Inc. Multiple touch sensing modes
WO2013155098A1 (en) 2012-04-09 2013-10-17 Amazon Technologies, Inc. Multiple touch sensing modes
US9354731B1 (en) 2012-06-20 2016-05-31 Amazon Technologies, Inc. Multi-dimension touch input
US8855295B1 (en) 2012-06-25 2014-10-07 Rawles Llc Acoustic echo cancellation using blind source separation
US8885815B1 (en) 2012-06-25 2014-11-11 Rawles Llc Null-forming techniques to improve acoustic echo cancellation
US9373338B1 (en) 2012-06-25 2016-06-21 Amazon Technologies, Inc. Acoustic echo cancellation processing based on feedback from speech recognizer
US9767828B1 (en) 2012-06-27 2017-09-19 Amazon Technologies, Inc. Acoustic echo cancellation using visual cues
US10242695B1 (en) 2012-06-27 2019-03-26 Amazon Technologies, Inc. Acoustic echo cancellation using visual cues
US9229526B1 (en) 2012-09-10 2016-01-05 Amazon Technologies, Inc. Dedicated image processor
US9423886B1 (en) 2012-10-02 2016-08-23 Amazon Technologies, Inc. Sensor connectivity approaches
US9332167B1 (en) 2012-11-20 2016-05-03 Amazon Technologies, Inc. Multi-directional camera module for an electronic device
US9658738B1 (en) 2012-11-29 2017-05-23 Amazon Technologies, Inc. Representation management on an electronic device
US9689960B1 (en) 2013-04-04 2017-06-27 Amazon Technologies, Inc. Beam rejection in multi-beam microphone systems
US8953777B1 (en) 2013-05-30 2015-02-10 Amazon Technologies, Inc. Echo path change detector with robustness to double talk
US9521249B1 (en) 2013-05-30 2016-12-13 Amazon Technologies, Inc. Echo path change detector with robustness to double talk
US20150006176A1 (en) 2013-06-27 2015-01-01 Rawles Llc Detecting Self-Generated Wake Expressions
US20180130468A1 (en) 2013-06-27 2018-05-10 Amazon Technologies, Inc. Detecting Self-Generated Wake Expressions
US9747899B2 (en) 2013-06-27 2017-08-29 Amazon Technologies, Inc. Detecting self-generated wake expressions
US9978387B1 (en) 2013-08-05 2018-05-22 Amazon Technologies, Inc. Reference signal generation for acoustic echo cancellation
US9473646B1 (en) 2013-09-16 2016-10-18 Amazon Technologies, Inc. Robust acoustic echo cancellation
US8983057B1 (en) 2013-09-20 2015-03-17 Amazon Technologies, Inc. Step size control for acoustic echo cancellation
US9591404B1 (en) 2013-09-27 2017-03-07 Amazon Technologies, Inc. Beamformer design using constrained convex optimization in three-dimensional space
US9704478B1 (en) 2013-12-02 2017-07-11 Amazon Technologies, Inc. Audio output masking for improved automatic speech recognition
US10147441B1 (en) 2013-12-19 2018-12-04 Amazon Technologies, Inc. Voice controlled system
US9319782B1 (en) 2013-12-20 2016-04-19 Amazon Technologies, Inc. Distributed speaker synchronization
US9319783B1 (en) 2014-02-19 2016-04-19 Amazon Technologies, Inc. Attenuation of output audio based on residual echo
US10062372B1 (en) 2014-03-28 2018-08-28 Amazon Technologies, Inc. Detecting device proximities
US10244313B1 (en) 2014-03-28 2019-03-26 Amazon Technologies, Inc. Beamforming for a wearable computer
US9432768B1 (en) 2014-03-28 2016-08-30 Amazon Technologies, Inc. Beam forming for a wearable computer
US9363616B1 (en) 2014-04-18 2016-06-07 Amazon Technologies, Inc. Directional capability testing of audio devices
US9432769B1 (en) 2014-07-30 2016-08-30 Amazon Technologies, Inc. Method and system for beam selection in microphone array beamformers
US9837099B1 (en) 2014-07-30 2017-12-05 Amazon Technologies, Inc. Method and system for beam selection in microphone array beamformers
US9677986B1 (en) 2014-09-24 2017-06-13 Amazon Technologies, Inc. Airborne particle detection with user device
US9456276B1 (en) 2014-09-30 2016-09-27 Amazon Technologies, Inc. Parameter selection for audio beamforming
US9390723B1 (en) 2014-12-11 2016-07-12 Amazon Technologies, Inc. Efficient dereverberation in networked audio systems
US9940949B1 (en) 2014-12-19 2018-04-10 Amazon Technologies, Inc. Dynamic adjustment of expression detection criteria
US9661438B1 (en) 2015-03-26 2017-05-23 Amazon Technologies, Inc. Low latency limiter
US9431982B1 (en) 2015-03-30 2016-08-30 Amazon Technologies, Inc. Loudness learning and balancing system
US9734845B1 (en) 2015-06-26 2017-08-15 Amazon Technologies, Inc. Mitigating effects of electronic audio sources in expression detection
US9918163B1 (en) 2015-06-29 2018-03-13 Amazon Technologies, Inc. Asynchronous clock frequency domain acoustic echo canceller
US9516410B1 (en) 2015-06-29 2016-12-06 Amazon Technologies, Inc. Asynchronous clock frequency domain acoustic echo canceller
US9678559B1 (en) 2015-09-18 2017-06-13 Amazon Technologies, Inc. Determining a device state based on user presence detection
US9589575B1 (en) 2015-12-02 2017-03-07 Amazon Technologies, Inc. Asynchronous clock frequency domain acoustic echo canceller
US9747920B2 (en) 2015-12-17 2017-08-29 Amazon Technologies, Inc. Adaptive beamforming to create reference channels
WO2017105998A1 (en) 2015-12-17 2017-06-22 Amazon Technologies, Inc. Adaptive beamforming to create reference channels
US20170178662A1 (en) 2015-12-17 2017-06-22 Amazon Technologies, Inc. Adaptive beamforming to create reference channels
US9820036B1 (en) 2015-12-30 2017-11-14 Amazon Technologies, Inc. Speech processing of reflected sound
US9614486B1 (en) 2015-12-30 2017-04-04 Amazon Technologies, Inc. Adaptive gain control
US9997151B1 (en) 2016-01-20 2018-06-12 Amazon Technologies, Inc. Multichannel acoustic echo cancellation for wireless applications
US9653060B1 (en) 2016-02-09 2017-05-16 Amazon Technologies, Inc. Hybrid reference signal for acoustic echo cancellation
US9967661B1 (en) 2016-02-09 2018-05-08 Amazon Technologies, Inc. Multichannel acoustic echo cancellation
US9659555B1 (en) 2016-02-09 2017-05-23 Amazon Technologies, Inc. Multichannel acoustic echo cancellation
US10109294B1 (en) 2016-03-25 2018-10-23 Amazon Technologies, Inc. Adaptive echo cancellation
US9754605B1 (en) 2016-06-09 2017-09-05 Amazon Technologies, Inc. Step-size control for multi-channel acoustic echo canceller
US9818425B1 (en) 2016-06-17 2017-11-14 Amazon Technologies, Inc. Parallel output paths for acoustic echo cancellation
WO2018118895A2 (en) 2016-12-23 2018-06-28 Amazon Technologies, Inc. Voice activated modular controller
US20180182387A1 (en) 2016-12-23 2018-06-28 Amazon Technologies, Inc. Voice activated modular controller
US10237647B1 (en) 2017-03-01 2019-03-19 Amazon Technologies, Inc. Adaptive step-size control for beamformer
US10147439B1 (en) 2017-03-30 2018-12-04 Amazon Technologies, Inc. Volume adjustment for listening environment
US10229698B1 (en) 2017-06-21 2019-03-12 Amazon Technologies, Inc. Playback reference signal-assisted multi-microphone interference canceler
US10304475B1 (en) 2017-08-14 2019-05-28 Amazon Technologies, Inc. Trigger word based beam selection
US9966059B1 (en) 2017-09-06 2018-05-08 Amazon Technologies, Inc. Reconfigurale fixed beam former using given microphone array
US9973849B1 (en) 2017-09-20 2018-05-15 Amazon Technologies, Inc. Signal quality beam selection

Non-Patent Citations (44)

* Cited by examiner, † Cited by third party
Title
Afsaneh Asaei, Mohammad Javad Taghizadeh, Marjan Bahrololum, Mohammed Ghanbari, Verified speaker localization utilizing voicing level in split-bands, Signal Processing 89 (2009) 1038-1049, 12 pages.
Andrea DA-350 Microphone Performance, 1 page.
Andrea's Technologies Overview Oct. 21, 2001-Sep. 11, 2011, 4 pages.
Baruch Berdugo, Miriam A. Doron, Judith Rosenhouse, Haim Azhari, On direction finding of an emitting source from time delays, 33 pages.
Cha Zhang, Dinei Florencio, Demba E. Ba, and Zhengyou Zhang, Maximum Likelihood Sound Source Localization and Beamforming for Directional Microphone Arrays in Distributed Meetings, IEEE Transactions on Multimedia, vol. 10, No. 3, Apr. 2008, 11 pages.
Cha Zhang, Dinei Florencio, Demba E. Ba, Zhengyou Zhang, Maximum Likelihood Sound Source Localization and Beamforming for Directional Microphone Arrays in Distributed Meetings, Journal of Latex Class files, vol. 6, No. 1, Jan. 2007, 10 pages.
Charles H. Knapp and G. Clifford Carter, The Generalized Correlation Method for Estimation of Time Delay, IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. ASSP-24, No. 4, Aug. 1976, 8 pages.
CrispMic USB-Based Microphone Array for Laptops and PCs, LI Creative Technologies, Inc., 2 pages.
DA-350 Auto Array, Feb. 25, 2006-Jun. 29, 2016, 1 page.
DA-350 Hands Free Linear Array Microphone, 1 page.
Darpa 172 Phase I Selections from the 07.2 Solicitation, 69 pages.
Digital Super Directional Array (DSDA® 2.0) Far-Field Microphone Technology, 1 page.
Dmitry N. Zotkin, Ramani Duraiswami, Accelerated Speech Source Localization via a Hierarchical Search of Steered Response Power, University of Maryland, MD, USA, 20 pages.
Don H. Johnson and Dan E. Dudgeon, Array Signal Processing: Concepts and Techniques, Prentice Hall Signal Processing Series, 1993, 554 pages.
EchoStop, Digital Noise Reduction Technology, 1 page.
Group Videoconferencing Systems: Video Made Easy HD5000 Series, Multimedia Workgroup Conferencing System, Installation & Setup Guide, 70 pages.
Harry L. Van Trees, Arrays and Spatial Filters, Optimum Array Processing: Part IV of Detection, Estimation, and Modulation Theory, John Wiley & Sons, Inc., 73 pages.
Harry L. Van Trees, Optimum Array Processing, Part IV of Detection, Estimation, and Modulation Theory, A John Wiley & Sons, Inc., Publication, 192 pages.
Introducing First Low-cost, Light-weight, and Portable USB Array Microphone for Consumer Market, Li Creative Technologies, Inc., Feb. 2, 2010, 1 page.
Ivan J. Tashev, Sound Capture and Processing: Practical Approaches, Wiley, 2009, 196 pages.
Jacek Dmochowski, Jacob Benesty, Sofiane Affes, Direction of Arrival Estimation Using the Parameterized Spatial Correlation Matrix, IEEE Transactions on Audio, Speech, and Language Processing, vol. 15, No. 4, May 2007.
John McDonough, Kenichi Kumatani, Matthias Wolfel, Tobias Gehrig, Emilian Stoimenov, Uwe Mayer, Stefan Schacht, and Dietrich Klakow, To Separate Speech! A System for Recognizing Simultaneous Speech, Jun. 2007, 13 pages.
Joseph Hector DiBiase, A High-Accuracy, Low-Latency Technique for Talker Localization in Reverberant Environments Using Microphone Arrays, Thesis, Division of Engineering at Brown University, Providence, Rhode Island, May 2000, 122 pages.
Joseph Marash, DSDA, Andrea Electronics Corporation Technology, 4 pages.
Manli Zhu, Qi (Peter) Li, Joshua J. Hajicek, Circular and Linear Microphone Arrays for Robust Speech Recognition and Conference Phone, ICASSP 2009, Thursday, Apr. 23, 2009, 1 page.
Matthias Wolfel and John McDonough, Distant Speech Recognition, A John Wiley and Sons, Ltd. Publication, 2009, 592 pages.
MediaConnect 9000, A workgroup conferencing system for medium and large room environments, 1 page.
Michael Brandstein, Darren Ward, Microphone Arrays: Signal Processing Techniques and Applications, Springer-Verlag, Berlin, Heidelberg, New York, 2001, 401 pages.
Osamu Hoshuyama, Akihiko Sugiyama, and Akihiro Hirano, A Robust Adaptive Beamformer for Microphone Arrays with a Blocking Matrix Using Constrained Adaptive Filters, IEEE Transactions on Signal Processing, vol. 47, No. 10, Oct. 1999, 8 pages.
PureAudio 2.0 Noise Reduction Algorithm, 1 page.
Qi (Peter) Li, "A Portable USB-Based Microphone Array Device for Robust Speech Recognition", 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 19-24, 2009, 7 pages.
Qi (Peter) Li, Manli Zhu, and Wei Li, A Portable USB-Based Microphone Array Device for Robust Speech Recognition, IEEE International Conference on Acoustics, Speech and Signal Processing Proceedings, Apr. 19-24, 2009, 7 pages.
Qi (Peter) Li, Manli Zhu, and Wei Li, "A Portable USB-Based Microphone Array Device for Robust Speech Recognition", 2009 IEEE International Conference on Acoustics, Speech, and Signal Processing, Apr. 19-24, 2009, 7 pages.
Qi Li, Manli Zhu, Wei Li, A portable USB-based microphone array device for robust speech recognition, IEEE International Conference on Acoustics, Speech and Signal Processing, Apr. 19-24, 2009, 2 pages.
Scott Matthew Griebel, A Microphone Array System for Speech Source Localization, Denoising and Dereverberation, Thesis, The Division of Engineering and Applied Sciences, Harvard University, Cambridge, Massachusetts, Apr. 2002, 163 pages.
US 9,711,140 B2, 07/2017, Ayrapetian et al. (withdrawn)
VCON Group Videoconferencing Systems HD4000 Software-only Multimedia Videoconferencing Version 3.5, 50 pages.
VCON Group Videoconferencing Systems HD5000 Series Rollabout and Compact Systems Installation & Setup Guide, 74 pages.
VCON-Hardware Addons-Introducing VoiceFinder, Sep. 23, 2003-Feb. 25, 2004, 2 pages.
VCON-Hardware Addons-VoiceFinder, Sep. 23, 2003-Feb. 8, 2004, 1 page.
VCON-Solutions-Videoconferencing-Group Video Products MediaConnect 9000, Sep. 21, 2003-Feb. 8, 2004, 1 page.

Cited By (94)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11589329B1 (en) 2010-12-30 2023-02-21 Staton Techiya Llc Information processing using a population of data acquisition devices
US11726742B2 (en) 2016-02-22 2023-08-15 Sonos, Inc. Handling of loss of pairing between networked devices
US11832068B2 (en) 2016-02-22 2023-11-28 Sonos, Inc. Music service selection
US11556306B2 (en) 2016-02-22 2023-01-17 Sonos, Inc. Voice controlled media playback system
US11405430B2 (en) 2016-02-22 2022-08-02 Sonos, Inc. Networked microphone device control
US11212612B2 (en) 2016-02-22 2021-12-28 Sonos, Inc. Voice control of a media playback system
US11137979B2 (en) 2016-02-22 2021-10-05 Sonos, Inc. Metadata exchange involving a networked playback system and a networked microphone system
US11513763B2 (en) 2016-02-22 2022-11-29 Sonos, Inc. Audio response playback
US11514898B2 (en) 2016-02-22 2022-11-29 Sonos, Inc. Voice control of a media playback system
US11736860B2 (en) 2016-02-22 2023-08-22 Sonos, Inc. Voice control of a media playback system
US11863593B2 (en) 2016-02-22 2024-01-02 Sonos, Inc. Networked microphone device control
US11750969B2 (en) 2016-02-22 2023-09-05 Sonos, Inc. Default playback device designation
US11184704B2 (en) 2016-02-22 2021-11-23 Sonos, Inc. Music service selection
US11133018B2 (en) 2016-06-09 2021-09-28 Sonos, Inc. Dynamic player selection for audio signal processing
US11545169B2 (en) 2016-06-09 2023-01-03 Sonos, Inc. Dynamic player selection for audio signal processing
US11184969B2 (en) 2016-07-15 2021-11-23 Sonos, Inc. Contextualization of voice inputs
US11664023B2 (en) 2016-07-15 2023-05-30 Sonos, Inc. Voice detection by multiple devices
US11531520B2 (en) 2016-08-05 2022-12-20 Sonos, Inc. Playback device supporting concurrent voice assistants
US11641559B2 (en) 2016-09-27 2023-05-02 Sonos, Inc. Audio playback settings for voice interaction
US11516610B2 (en) 2016-09-30 2022-11-29 Sonos, Inc. Orientation-based playback device microphone selection
US11308961B2 (en) 2016-10-19 2022-04-19 Sonos, Inc. Arbitration-based voice recognition
US11727933B2 (en) 2016-10-19 2023-08-15 Sonos, Inc. Arbitration-based voice recognition
US11183181B2 (en) 2017-03-27 2021-11-23 Sonos, Inc. Systems and methods of multiple voice services
US11900937B2 (en) 2017-08-07 2024-02-13 Sonos, Inc. Wake-word detection suppression
US11380322B2 (en) 2017-08-07 2022-07-05 Sonos, Inc. Wake-word detection suppression
US11500611B2 (en) 2017-09-08 2022-11-15 Sonos, Inc. Dynamic computation of system response volume
US11646045B2 (en) 2017-09-27 2023-05-09 Sonos, Inc. Robust short-time fourier transform acoustic echo cancellation during audio playback
US11769505B2 (en) 2017-09-28 2023-09-26 Sonos, Inc. Echo of tone interferance cancellation using two acoustic echo cancellers
US11538451B2 (en) 2017-09-28 2022-12-27 Sonos, Inc. Multi-channel acoustic echo cancellation
US11302326B2 (en) 2017-09-28 2022-04-12 Sonos, Inc. Tone interference cancellation
US11288039B2 (en) 2017-09-29 2022-03-29 Sonos, Inc. Media playback system with concurrent voice assistance
US11175888B2 (en) 2017-09-29 2021-11-16 Sonos, Inc. Media playback system with concurrent voice assistance
US11893308B2 (en) 2017-09-29 2024-02-06 Sonos, Inc. Media playback system with concurrent voice assistance
US11451908B2 (en) 2017-12-10 2022-09-20 Sonos, Inc. Network microphone devices with automatic do not disturb actuation capabilities
US11676590B2 (en) 2017-12-11 2023-06-13 Sonos, Inc. Home graph
US11343614B2 (en) 2018-01-31 2022-05-24 Sonos, Inc. Device designation of playback and network microphone device arrangements
US11689858B2 (en) 2018-01-31 2023-06-27 Sonos, Inc. Device designation of playback and network microphone device arrangements
US11797263B2 (en) 2018-05-10 2023-10-24 Sonos, Inc. Systems and methods for voice-assisted media content selection
US11175880B2 (en) 2018-05-10 2021-11-16 Sonos, Inc. Systems and methods for voice-assisted media content selection
US11715489B2 (en) 2018-05-18 2023-08-01 Sonos, Inc. Linear filtering for noise-suppressed speech detection
US11792590B2 (en) 2018-05-25 2023-10-17 Sonos, Inc. Determining and adapting to changes in microphone performance of playback devices
US11197096B2 (en) 2018-06-28 2021-12-07 Sonos, Inc. Systems and methods for associating playback devices with voice assistant services
US11696074B2 (en) 2018-06-28 2023-07-04 Sonos, Inc. Systems and methods for associating playback devices with voice assistant services
US11482978B2 (en) 2018-08-28 2022-10-25 Sonos, Inc. Audio notifications
US11563842B2 (en) 2018-08-28 2023-01-24 Sonos, Inc. Do not disturb feature for audio notifications
US11432030B2 (en) 2018-09-14 2022-08-30 Sonos, Inc. Networked devices, systems, and methods for associating playback devices based on sound codes
US11778259B2 (en) 2018-09-14 2023-10-03 Sonos, Inc. Networked devices, systems and methods for associating playback devices based on sound codes
US11551690B2 (en) 2018-09-14 2023-01-10 Sonos, Inc. Networked devices, systems, and methods for intelligently deactivating wake-word engines
US11790937B2 (en) 2018-09-21 2023-10-17 Sonos, Inc. Voice detection optimization using sound metadata
US11727936B2 (en) 2018-09-25 2023-08-15 Sonos, Inc. Voice detection optimization based on selected voice assistant service
US11790911B2 (en) 2018-09-28 2023-10-17 Sonos, Inc. Systems and methods for selective wake word detection using neural network models
US11100923B2 (en) 2018-09-28 2021-08-24 Sonos, Inc. Systems and methods for selective wake word detection using neural network models
US11501795B2 (en) 2018-09-29 2022-11-15 Sonos, Inc. Linear filtering for noise-suppressed speech detection via multiple network microphone devices
US11899519B2 (en) 2018-10-23 2024-02-13 Sonos, Inc. Multiple stage network microphone device with reduced power consumption and processing load
US11741948B2 (en) 2018-11-15 2023-08-29 Sonos Vox France Sas Dilated convolutions and gating for efficient keyword spotting
US11200889B2 (en) 2018-11-15 2021-12-14 Sonos, Inc. Dilated convolutions and gating for efficient keyword spotting
US11557294B2 (en) 2018-12-07 2023-01-17 Sonos, Inc. Systems and methods of operating media playback systems having multiple voice assistant services
US11183183B2 (en) 2018-12-07 2021-11-23 Sonos, Inc. Systems and methods of operating media playback systems having multiple voice assistant services
US11132989B2 (en) 2018-12-13 2021-09-28 Sonos, Inc. Networked microphone devices, systems, and methods of localized arbitration
US11538460B2 (en) 2018-12-13 2022-12-27 Sonos, Inc. Networked microphone devices, systems, and methods of localized arbitration
US11540047B2 (en) 2018-12-20 2022-12-27 Sonos, Inc. Optimization of network microphone devices using noise classification
US11159880B2 (en) 2018-12-20 2021-10-26 Sonos, Inc. Optimization of network microphone devices using noise classification
US11315556B2 (en) 2019-02-08 2022-04-26 Sonos, Inc. Devices, systems, and methods for distributed voice processing by transmitting sound data associated with a wake word to an appropriate device for identification
US11646023B2 (en) 2019-02-08 2023-05-09 Sonos, Inc. Devices, systems, and methods for distributed voice processing
US11798553B2 (en) 2019-05-03 2023-10-24 Sonos, Inc. Voice assistant persistence across multiple network microphone devices
US20230169956A1 (en) * 2019-05-03 2023-06-01 Sonos, Inc. Locally distributed keyword detection
US11771866B2 (en) * 2019-05-03 2023-10-03 Sonos, Inc. Locally distributed keyword detection
US11120794B2 (en) 2019-05-03 2021-09-14 Sonos, Inc. Voice assistant persistence across multiple network microphone devices
US11200894B2 (en) 2019-06-12 2021-12-14 Sonos, Inc. Network microphone device with command keyword eventing
US11854547B2 (en) 2019-06-12 2023-12-26 Sonos, Inc. Network microphone device with command keyword eventing
US11501773B2 (en) 2019-06-12 2022-11-15 Sonos, Inc. Network microphone device with command keyword conditioning
US11361756B2 (en) 2019-06-12 2022-06-14 Sonos, Inc. Conditional wake word eventing based on environment
US11714600B2 (en) 2019-07-31 2023-08-01 Sonos, Inc. Noise classification for event detection
US11710487B2 (en) 2019-07-31 2023-07-25 Sonos, Inc. Locally distributed keyword detection
US11138975B2 (en) 2019-07-31 2021-10-05 Sonos, Inc. Locally distributed keyword detection
US11138969B2 (en) 2019-07-31 2021-10-05 Sonos, Inc. Locally distributed keyword detection
US20210358481A1 (en) * 2019-07-31 2021-11-18 Sonos, Inc. Locally distributed keyword detection
US11551669B2 (en) * 2019-07-31 2023-01-10 Sonos, Inc. Locally distributed keyword detection
US11354092B2 (en) 2019-07-31 2022-06-07 Sonos, Inc. Noise classification for event detection
US11189286B2 (en) 2019-10-22 2021-11-30 Sonos, Inc. VAS toggle based on device orientation
US11862161B2 (en) 2019-10-22 2024-01-02 Sonos, Inc. VAS toggle based on device orientation
US11200900B2 (en) 2019-12-20 2021-12-14 Sonos, Inc. Offline voice control
US11869503B2 (en) 2019-12-20 2024-01-09 Sonos, Inc. Offline voice control
US11562740B2 (en) 2020-01-07 2023-01-24 Sonos, Inc. Voice verification for media playback
US11556307B2 (en) 2020-01-31 2023-01-17 Sonos, Inc. Local voice data processing
US11308958B2 (en) 2020-02-07 2022-04-19 Sonos, Inc. Localized wakeword verification
US11961519B2 (en) 2020-02-07 2024-04-16 Sonos, Inc. Localized wakeword verification
US11308962B2 (en) 2020-05-20 2022-04-19 Sonos, Inc. Input detection windowing
US11482224B2 (en) 2020-05-20 2022-10-25 Sonos, Inc. Command keywords with input detection windowing
US11727919B2 (en) 2020-05-20 2023-08-15 Sonos, Inc. Memory allocation for keyword spotting engines
US11694689B2 (en) 2020-05-20 2023-07-04 Sonos, Inc. Input detection windowing
US11698771B2 (en) 2020-08-25 2023-07-11 Sonos, Inc. Vocal guidance engines for playback devices
US11551700B2 (en) 2021-01-25 2023-01-10 Sonos, Inc. Systems and methods for power-efficient keyword detection
US11890168B2 (en) 2022-03-21 2024-02-06 Li Creative Technologies Inc. Hearing protection and situational awareness system

Also Published As

Publication number Publication date
USRE47049E1 (en) 2018-09-18
US8861756B2 (en) 2014-10-14
US20120076316A1 (en) 2012-03-29

Similar Documents

Publication Title
USRE48371E1 (en) Microphone array system
KR101566649B1 (en) Near-field null and beamforming
US9966059B1 (en) Reconfigurable fixed beam former using given microphone array
US8098844B2 (en) Dual-microphone spatial noise suppression
US10229698B1 (en) Playback reference signal-assisted multi-microphone interference canceler
US9094496B2 (en) System and method for stereophonic acoustic echo cancellation
US6584203B2 (en) Second-order adaptive differential microphone array
US10269369B2 (en) System and method of noise reduction for a mobile device
KR101470262B1 (en) Systems, methods, apparatus, and computer-readable media for multi-microphone location-selective processing
US20130142355A1 (en) Near-field null and beamforming
US6084973A (en) Digital and analog directional microphone
US10341759B2 (en) System and method of wind and noise reduction for a headphone
US20140003635A1 (en) Audio signal processing device calibration
US20140093091A1 (en) System and method of detecting a user's voice activity using an accelerometer
KR20070073735A (en) Headset for separation of speech signals in a noisy environment
WO2014051969A1 (en) System and method of detecting a user's voice activity using an accelerometer
WO2007059255A1 (en) Dual-microphone spatial noise suppression
Priyanka A review on adaptive beamforming techniques for speech enhancement
Gaubitch et al. On near-field beamforming with smartphone-based ad-hoc microphone arrays
CN108650593A (en) A kind of three microphone array far field sound pick-up methods for videoconference
Liu et al. Simulation of fixed microphone arrays for directional hearing aids
Kowalczyk et al. On the extraction of early reflection signals for automatic speech recognition
Šarić et al. Performance analysis of MVDR beamformer applied on an end-fire microphone array composed of unidirectional microphones
CN216016922U (en) Selfie stick
Li et al. Noise reduction method based on generalized subtractive beamformer

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

AS Assignment

Owner name: VOCALIFE LLC, TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ZHU, MANLI;LI, QI;REEL/FRAME:049770/0423

Effective date: 20190131

IPR AIA trial proceeding filed before the Patent Trial and Appeal Board: inter partes review

Free format text: TRIAL NO: IPR2021-01331

Opponent name: AMAZON.COM, INC. AND AMAZON.COM SERVICES, INC.

Effective date: 20210730

IPR AIA trial proceeding filed before the Patent Trial and Appeal Board: inter partes review

Free format text: TRIAL NO: IPR2022-00005

Opponent name: GOOGLE LLC

Effective date: 20211007

IPR AIA trial proceeding filed before the Patent Trial and Appeal Board: inter partes review

Free format text: TRIAL NO: IPR2022-00382

Opponent name: SONOS, INC.

Effective date: 20211229