US20200116820A1 - Audio based motion detection - Google Patents

Audio based motion detection

Info

Publication number
US20200116820A1
US20200116820A1 (U.S. application Ser. No. 16/655,601)
Authority
US
United States
Prior art keywords
endpoint
signal
audio signal
primary
signals
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US16/655,601
Inventor
Oystein BIRKENES
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cisco Technology Inc
Original Assignee
Cisco Technology Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cisco Technology Inc
Priority to US 16/655,601
Publication of US20200116820A1
Legal status: Abandoned

Classifications

    • G01S 5/18: Position-fixing by co-ordinating two or more direction or position line determinations, or two or more distance determinations, using ultrasonic, sonic or infrasonic waves
    • G01S 1/74: Beacons or beacon systems using ultrasonic, sonic or infrasonic waves; details
    • G01S 1/753: Ultrasonic, sonic or infrasonic beacon transmitters; signal details
    • G01S 5/017: Determining conditions which influence positioning; detecting state or type of motion
    • G06F 1/3231: Power management; monitoring the presence, absence or movement of users
    • G06F 1/3296: Power saving characterised by lowering the supply or operating voltage
    • H04N 7/147: Systems for two-way working between two video terminals; communication arrangements
    • H04R 3/005: Circuits for combining the signals of two or more microphones
    • G01S 2201/02: Indoor positioning, e.g. in covered car-parks, mining facilities, warehouses
    • H04R 2430/23: Direction finding using a sum-delay beam-former

Definitions

  • the present disclosure relates to motion detection based on at least one source audio signal.
  • Endpoints are generally used for multimedia meetings. These endpoints may include cameras, displays, microphones, speakers, and other equipment configured to facilitate multimedia meetings. Endpoint equipment often remains activated for long periods of time even when the equipment is only used intermittently for multimedia meetings.
  • FIG. 1 is a block diagram of a conference room endpoint configured for motion detection, according to an example embodiment.
  • FIG. 2 is a signal processing diagram for motion detection that includes one integrated primary microphone, a plurality of external source audio speakers, and a plurality of integrated reference microphones, according to an example embodiment.
  • FIG. 3 is a signal processing diagram for motion detection that includes one integrated primary microphone, one external source audio speaker, and one integrated reference microphone, according to an example embodiment.
  • FIG. 4 is a signal processing diagram for motion detection that includes one integrated primary microphone, two external source audio speakers, and two integrated reference microphones, according to an example embodiment.
  • FIG. 5 is a signal processing diagram for motion detection that includes one integrated primary microphone, a plurality of external source audio speakers, one integrated source audio speaker, and a plurality of integrated reference microphones, according to an example embodiment.
  • FIG. 6 is a signal processing diagram for motion detection that includes one integrated primary microphone and one integrated source audio speaker, according to an example embodiment.
  • FIG. 7 is a signal processing diagram for motion detection that includes one integrated primary microphone, one external source audio speaker, one integrated source audio speaker, and one integrated reference microphone, according to an example embodiment.
  • FIG. 8 is a signal processing diagram for motion detection that includes one integrated primary source audio speaker, a plurality of external audio speakers, and a plurality of integrated reference microphones, according to an example embodiment.
  • FIG. 9 is a signal processing diagram for motion detection that includes one integrated primary source audio speaker and one integrated reference microphone, according to an example embodiment.
  • FIG. 10 is signal processing diagram for nonlinear motion detection that includes one integrated primary microphone and one integrated source audio speaker, according to an example embodiment.
  • FIG. 11 is a block diagram of a collaboration endpoint controller configured to execute motion detecting techniques, according to an example embodiment.
  • FIG. 12 is a flowchart of a generalized method in accordance with examples presented herein.
  • a controller of a collaboration endpoint generates a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker, a reference audio signal for the ultrasonic source audio signal, and, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal.
  • the controller produces a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal and determines whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
  • When endpoint equipment (e.g., cameras, displays, microphones, speakers, etc.) is used only intermittently, the endpoint equipment may enter a standby mode when the conference room in which the endpoint is located is empty.
  • When a user enters the room, the endpoint equipment may detect motion in the room and, in response, activate (enter an active mode from a standby mode) in anticipation of a multimedia meeting.
  • FIG. 1 is a block diagram of a conference room 102 that includes an endpoint 104 configured for motion detection in accordance with examples presented herein.
  • the endpoint 104 enables a user 106 to participate in a multimedia conference by communicating with users at other endpoints 104 ( a )- 104 ( n ) via network 108 .
  • the network interface unit 110 allows the endpoint 104 to connect with the network 108.
  • the endpoint 104 is equipped with camera 112, display 114, microphones 116(a) and 116(b), and speaker 118.
  • the conference room 102 may also include an external speaker 124 that is not connected to the endpoint controller 120 .
  • Camera 112 and microphones 116 ( a )/ 116 ( b ) capture video and audio, respectively, for transmission to one or more of endpoints 104 ( a )- 104 ( n ).
  • Display 114 and speaker 118 output video and audio, respectively, transmitted from one or more of endpoints 104(a)-104(n).
  • the endpoint controller 120 controls the components of the endpoint 104 to enter a standby mode in order to detect motion (e.g., the motion of one or more persons near the endpoint 104). As explained below, when motion is detected (e.g., when one or more persons are near the endpoint 104), the endpoint controller 120 controls the components of the endpoint 104 to activate (enter an active mode from the standby mode) in anticipation of a multimedia meeting.
  • speakers 118 and 124 emit ultrasonic source audio signals.
  • speaker 124 continuously emits its ultrasonic source audio signal, even when the components of the endpoint 104 are in standby mode, because the speaker 124 is not connected to the endpoint controller 120 .
  • microphones 116 ( a )/ 116 ( b ) continuously sample the ultrasonic source audio signals from speakers 118 and 124 .
  • the signals produced by microphones 116 ( a )/ 116 ( b ) in response to the ultrasonic source audio signals serve as reference audio signals.
  • the ultrasonic source audio signal produced by speaker 118 is known to the endpoint controller 120 and serves as a primary audio signal.
  • the endpoint controller executes the motion detection logic 122 to generate, based on the reference audio signals, a predicted signal that is predictive of the primary audio signal.
  • the motion detection logic 122 compares the primary audio signal with the predicted signal to determine whether there is motion of one or more persons near the endpoint 104 .
  • the endpoint 104 While in standby mode, the endpoint 104 may operate actively or passively. In active standby mode, the speaker 118 integrated with the endpoint 104 transmits an ultrasonic audio signal into an area/room/location (e.g., conference room). In passive standby mode, the endpoint 104 does not transmit an ultrasonic audio signal.
  • FIGS. 2-4 illustrate example passive mode embodiments and FIGS. 5-10 illustrate example active mode embodiments. In both passive and active standby modes, one or more microphones integrated with the endpoint capture an audio signal from a speaker.
  • the signal processing diagrams of FIGS. 2-10 may be implemented, for example, in a conference room.
  • the endpoint controller 120 performs the signal processing depicted by the signal processing diagrams shown in FIGS. 2-10 .
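  • As a rough illustration of the standby-mode behavior described above, the following sketch shows the basic detect-and-wake loop. It is a minimal sketch only: the Endpoint methods it calls (in_standby, capture_frames, predict_primary, wake) and the threshold value are hypothetical placeholders, not APIs from the patent or from any product.

```python
import numpy as np

MOTION_THRESHOLD = 1e-4        # assumed threshold on the mean-square prediction error

def standby_motion_loop(endpoint):
    """Run while the endpoint is in standby; wake it when the prediction error grows."""
    while endpoint.in_standby():
        primary, references = endpoint.capture_frames()     # primary + reference audio frames
        predicted = endpoint.predict_primary(references)    # adaptive-filter prediction
        error = np.asarray(primary) - np.asarray(predicted)
        if np.mean(np.abs(error) ** 2) >= MOTION_THRESHOLD:  # large error suggests motion
            endpoint.wake()                                  # enter active mode from standby
            return True
    return False
```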
  • FIG. 2 shows a signal processing diagram for passive standby mode motion detection in accordance with examples presented herein.
  • Speakers 224 ( 1 )-(I) are external to an endpoint 104 .
  • Endpoint 104 includes microphones 216 ( 1 )-(J), filter banks 226 ( 1 )-(J) that respectively map to microphones 216 ( 1 )-(J), and adaptive filters 228 ( 2 )-(J) that respectively map to microphones 216 ( 2 )-(J).
  • the detector function 232 generates an output indicative of detected motion.
  • Speakers 224(1)-(I) produce respective source audio signals s1(t), s2(t), . . . , sI(t).
  • Source audio signals s1(t), s2(t), . . . , sI(t) include ultrasonic frequencies (e.g., 20-24 kHz).
  • Each microphone 216(1)-(J) captures the source audio signals s1(t), s2(t), . . . , sI(t) after the source audio signals have traversed the room and, in response, produces a respective signal x1, x2, x3, . . . , xJ.
  • Each signal x1, x2, x3, . . . , xJ is a superposition of source audio signals convolved with impulse responses (e.g., h11, h12, etc.) and a noise signal (e.g., n1, n2, etc.). For purposes of viewability, only the impulse responses and the noise signals corresponding to the microphones 216(1) and 216(2) are shown.
  • Signal x1 may be referred to as a primary audio signal and signals x2, x3, . . . , xJ may be referred to as reference audio signals.
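  • The superposition just described can be illustrated with a short simulation; the sample rate, tone frequencies, impulse responses, and noise level below are made up for illustration and are not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 48_000                                   # assumed sample rate
t = np.arange(fs) / fs                        # one second of signal

# Two illustrative ultrasonic source signals (20 kHz and 22 kHz tones).
s = np.stack([np.sin(2 * np.pi * 20_000 * t),
              np.sin(2 * np.pi * 22_000 * t)])            # shape (I, N) with I = 2

# Made-up room impulse responses h[j, i] from speaker i to microphone j.
h = rng.normal(scale=0.05, size=(3, 2, 256))               # (J, I, taps) with J = 3

# Each microphone signal x_j = sum_i (h_ji * s_i) + n_j, as in equation (1) below.
x = np.stack([
    sum(np.convolve(s[i], h[j, i], mode="full")[:len(t)] for i in range(2))
    + 0.001 * rng.normal(size=len(t))
    for j in range(3)
])
print(x.shape)    # (3, 48000): x[0] is the primary signal, x[1] and x[2] are references
```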
  • the endpoint 104 detects motion by predicting the delayed time-frequency representation of primary signal x1 of microphone 216(1) based on the reference signals x2, x3, . . . , xJ of microphones 216(2)-(J).
  • An endpoint controller of endpoint 104 (not shown) feeds each signal x 1 , x 2 , x 3 , . . . , x J through a respective filter bank 226 ( 1 )-(J). These filter banks 226 ( 1 )-(J) separate the primary audio signal and reference audio signal into sub-bands.
  • the filter banks 226 ( 1 )-(J) respectively partition the signals x 1 , x 2 , x 3 , . . . , x J into time-frequency domain signals X 1 (m,k), X 2 (m,k), X 3 (m,k), . . . , X J (m,k), where m and k denote a frame index and a frequency bin index, respectively.
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in the time-frequency domain signal X 1 (m ⁇ D, k).
  • the endpoint controller of endpoint 104 respectively feeds the time-frequency domain representations of the reference signals, X 2 (m,k), X 3 (m,k), . . . , X J (m,k), through adaptive filters 228 ( 2 )-(J), respectively.
  • the adaptive filters 228 ( 2 )-(J) are normalized least mean square filters.
  • the endpoint controller of endpoint 104 sums the contributions of the filtered time-frequency domain representations of the reference signals, X2(m,k), X3(m,k), . . . , XJ(m,k), to generate a predicted signal X̂1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal X1(m−D,k).
  • the endpoint controller of endpoint 104 determines whether the prediction error is indicative of the motion of one or more persons near the endpoint 104 .
  • the endpoint controller of endpoint 104 updates the adaptive filter coefficients based on the prediction error and sends the prediction error to the detector 232 .
  • a relatively large prediction error may indicate motion more strongly than a relatively small prediction error.
  • the endpoint controller of endpoint 104 may activate the components of endpoint 104 in anticipation of a multimedia meeting.
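  • A minimal per-bin sketch of this prediction pipeline is given below. It assumes a single complex NLMS coefficient per reference channel and an illustrative step size; the patent's adaptive filters may use more taps and different parameters. The function would be run once per ultrasonic frequency bin after the filter banks and the D-frame delay, and the returned errors would be handed to the detector.

```python
import numpy as np

def nlms_predict_bin(X_refs, X1_delayed, mu=0.5, eps=1e-8):
    """Predict the delayed primary bin X1(m-D, k) from the reference bins with NLMS.

    X_refs     : complex array (frames, J-1) -- reference spectra X2..XJ for one bin k
    X1_delayed : complex array (frames,)     -- delayed primary spectrum X1(m-D, k)
    Returns the per-frame prediction error E(m-D, k).
    """
    frames, num_refs = X_refs.shape
    w = np.zeros(num_refs, dtype=complex)            # one complex coefficient per reference
    errors = np.empty(frames, dtype=complex)
    for m in range(frames):
        x = X_refs[m]
        pred = np.vdot(w, x)                         # predicted signal: w^H x
        e = X1_delayed[m] - pred                     # prediction error E(m-D, k)
        w += mu * x * np.conj(e) / (np.vdot(x, x).real + eps)   # NLMS coefficient update
        errors[m] = e
    return errors
```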
  • Signal x1 is the superposition of the contributions from the audio signals s1(t), s2(t), . . . , sI(t) and a noise signal n1. This can be written mathematically as
  • x1(t) = (h11*s1)(t) + (h12*s2)(t) + . . . + (h1I*sI)(t) + n1(t),   (1)
  • Hj(f) = [Hj1(f), Hj2(f), . . . , HjI(f)]^T for j ∈ {2, 3, . . . , J} are I-dimensional vectors. Equations (4)-(6) can be written even more compactly using matrix notation as
  • H(f) = [ H21(f)   H22(f)   . . .   H2I(f)
             H31(f)   H32(f)   . . .   H3I(f)
             . . .
             HJ1(f)   HJ2(f)   . . .   HJI(f) ]   (8)
  • x1(t) = (h̃12*x2)(t) + (h̃13*x3)(t) + . . . + (h̃1,J*xJ)(t) + ñ1(t),   (14)
  • (14) is a formulation of the signal x 1 of microphone 216 ( 1 ) as a function of the signals x 2 , x 3 , . . . , x J of microphones 216 ( 2 )-(J).
  • the impulse responses h̃12, h̃13, . . . , h̃1,J are not causal.
  • x1(t) = (h̃12*x2)(t) + ñ1(t).
  • microphone 1 receives the sound before microphone 2, and h̃12 is thus necessarily non-causal.
  • to obtain causal filters, the primary signal x1 may be delayed by T samples, where T may be equal to at least an amount of time between generating the primary audio signal and generating the reference audio signal.
  • the delay of D frames corresponds to at least T samples of delay in the time domain.
  • T may be at least as large as the time between microphone 1 receiving the source audio signal and microphone 2 receiving the source audio signal. This implies that the impulse responses will be causal if the T-sample delayed microphone signal x 1 (t ⁇ T) is predicted from non-delayed microphone signals x 2 (t), . . . , x J (t). This can be expressed mathematically as:
  • x1(t−T) = (g12*x2)(t) + (g13*x3)(t) + . . . + (g1,J*xJ)(t) + ñ1(t−T),   (15)
  • T-sample delayed microphone signal x 1 (t ⁇ T) may be predicted with multichannel adaptive filtering, as shown in (16) below:
  • w 2 , w 3 , . . . , w J denote the J ⁇ 1 vectors of adaptive filter coefficients.
  • the prediction error is expected to be small. This is typically the case when a person is not in the room.
  • the prediction error increases.
  • the endpoint 104 detects motion by predicting the T-sample delayed primary signal x 1 of microphone 216 ( 1 ) based on the reference signals x 2 , x 3 , . . . , x J of microphones 216 ( 2 )-(J).
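  • As a worked example of sizing this delay (with assumed numbers, not values from the patent): if microphone 1 is 1.7 m closer to the speaker than microphone 2, the sound reaches microphone 1 roughly 5 ms earlier, so at a 48 kHz sample rate T should be at least about 238 samples, which a filter bank with a 256-sample hop covers with D = 1 frame of delay.

```python
import math

c = 343.0                   # speed of sound in m/s
fs = 48_000                 # assumed sample rate
path_difference_m = 1.7     # assumed: mic 1 is 1.7 m closer to the speaker than mic 2

T = math.ceil(path_difference_m / c * fs)   # ≈ 238 samples of required delay
hop = 256                                   # assumed filter-bank hop size
D = math.ceil(T / hop)                      # D frames must cover at least T samples -> 1
print(T, D)
```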
  • the detector 232 detects motion based on an estimate P(m,k) of the mean square error.
  • the detector 232 may use exponential recursive weighting, for example P(m,k) = λ·P(m−1,k) + (1−λ)·|E(m,k)|^2, where E(m,k) is the prediction error and λ is a forgetting factor in the range [0,1).
  • the detector detects motion from an increase in P(m,k). If P(m,k) is less than a threshold P, the endpoint controller of endpoint 104 may conclude that there is no motion of one or more persons near the endpoint 104. If P(m,k) is equal to or greater than the threshold P, the endpoint controller of endpoint 104 may conclude that there is motion of one or more persons near the endpoint 104.
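  • A sketch of such a detector is shown below. The forgetting factor and threshold are illustrative, and the smoothed error is averaged over the ultrasonic bins for simplicity, which is an assumption rather than something the patent specifies.

```python
import numpy as np

def detect_motion(errors, lam=0.95, threshold=1e-4):
    """Recursively estimate the mean-square prediction error and threshold it.

    errors : complex array (frames, bins) of prediction errors E(m-D, k)
    Returns a boolean per frame: True when the smoothed error suggests motion.
    """
    P = np.zeros(errors.shape[1])
    motion = np.zeros(errors.shape[0], dtype=bool)
    for m, frame in enumerate(errors):
        P = lam * P + (1.0 - lam) * np.abs(frame) ** 2   # exponential recursive weighting
        motion[m] = P.mean() >= threshold                 # compare against the threshold
    return motion
```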
  • time-frequency domain adaptive filtering is computationally cheaper and provides faster convergence than time domain adaptive filtering.
  • time-frequency domain adaptive filtering allows the endpoint 104 to predict only the ultrasonic portion of the signal x 1 while ignoring other frequencies.
  • FIGS. 3 and 4 are specific examples of passive standby mode detection.
  • a signal processing diagram is shown for predicting signal x 1 based on another signal x 2 with one source audio signal s 1 in accordance with examples presented herein.
  • Speaker 224 ( 1 ) is external to endpoint 104 and emits audio signal s 1 .
  • Endpoint 104 includes microphones 216 ( 1 )-( 2 ) (which produce signals x 1 , x 2 ), filter banks 226 ( 1 )-( 2 ), adaptive filter 228 ( 2 ), delay operator 230 , and detector 232 .
  • signal x 1 is the primary signal and signal x 2 is the reference signal.
  • filter banks 226 ( 1 )-( 2 ) respectively partition the signals x 1 , x 2 into time-frequency domain signals X 1 (m,k), X 2 (m,k).
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller of endpoint 104 feeds the time-frequency domain representations of the reference signal, X 2 (m,k), through adaptive filter 228 ( 2 ) and generates a predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k) that is predictive of the delayed time-frequency representation of the primary signal, X 1 (m ⁇ D, k).
  • X1(f) = (H11(f)/H21(f))·X2(f) + N1(f) − (H11(f)/H21(f))·N2(f).   (23)
  • filtering X 2 (m,k) with linear adaptive filter 228 ( 2 ) may enable the endpoint controller of endpoint 104 to generate predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k), and determine whether the prediction error is indicative of the motion of one or more persons near the endpoint 104 .
  • In FIG. 4, a signal processing diagram is shown for predicting the delayed time-frequency representation of the primary signal x1(t) from the time-frequency representations of the reference signals x2(t) and x3(t) in accordance with examples presented herein.
  • X 1 (m ⁇ D,k) is predicted from X 2 (m,k) and X 3 (m,k).
  • Speakers 224 ( 1 )-( 2 ) produce source audio signals s 1 , s 2 and are external to endpoint 104 .
  • Endpoint 104 includes microphones 216 ( 1 )-( 3 ) (which produce signals x 1 , x 2 , x 3 ), filter banks 226 ( 1 )-( 3 ), adaptive filters 228 ( 2 )-( 3 ), delay operator 230 , and detector 232 .
  • signal x 1 is the primary signal and signals x 2 , x 3 are reference signals.
  • filter banks 226 ( 1 )-( 3 ) respectively partition the signals x 1 , x 2 , x 3 into time-frequency domain signals X 1 (m,k), X 2 (m,k), X 3 (m,k).
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller of endpoint 104 respectively feeds the time-frequency domain representations of the reference signals, X 2 (m,k), X 3 (m,k), through adaptive filters 228 ( 2 )-( 3 ) and generates a predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k) that is predictive of the delayed time-frequency representation of the primary signal, X 1 (m ⁇ D, k).
  • H(f) = [ H21(f)   H22(f)
             H31(f)   H32(f) ].   (24)
  • This matrix has linearly independent columns, which means (13) can be re-written as
  • X 2 (m,k), X 3 (m,k) with linear adaptive filters 228 ( 2 )-( 3 ) may enable the endpoint controller of endpoint 104 to generate predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k), and determine whether the prediction error is indicative of the motion of one or more persons near the endpoint 104 .
  • FIGS. 5-10 illustrate example signal processing diagrams for active standby mode, in which the endpoint itself produces at least one source ultrasonic signal to be used for motion detection.
  • FIGS. 5-7 illustrate a first category of active standby mode embodiments in which an integrated speaker serves as a reference audio device.
  • FIGS. 8-9 illustrate a second category of active standby mode embodiments in which an integrated speaker serves as a primary audio device.
  • FIG. 10 illustrates an example involving non-linear adaptive filtering according to the first category of active standby mode embodiments.
  • a generalized signal processing diagram is shown for predicting the delayed time-frequency representation of the primary signal x 1 (t) from the time-frequency representations of the reference audio signals x 2 (t), x 3 (t), . . . , x J (t) in accordance with examples presented herein.
  • X 1 (m ⁇ D,k) is predicted from X 2 (m,k), X 3 (m,k), . . . , X J (m,k).
  • Speakers 224 ( 2 )-(I) are external to endpoint 104 , which includes microphone 216 ( 1 ), speaker 224 ( 1 ), and microphones 216 ( 3 )-(J).
  • Endpoint 104 also includes filter banks 226 ( 1 )-(J), adaptive filters 228 ( 2 )-(J), delay operator 230 , and detector 232 .
  • signal x 1 is the primary signal and signals x 2 -x J are reference audio signals.
  • Speakers 224(1)-(I) produce respective source audio signals s1(t), s2(t), . . . , sI(t).
  • Each microphone 216(1), 216(3)-(J) captures the source audio signals s1(t), s2(t), . . . , sI(t) after the source audio signals have traversed the room and, in response, produces a respective signal x1, x3, . . . , xJ.
  • endpoint controller generates reference signal x 2 for source audio signal s 1 since the speaker 224 ( 1 ) is integrated into the endpoint 104 and, as such, the endpoint controller of endpoint 104 knows the source audio signal s 1 .
  • the speaker 224 ( 1 ) serves as both a source audio device and a reference audio device.
  • Each signal x 1 , x 3 , . . . , x J is a superposition of source audio signals convolved with impulse responses (e.g., h 11 , h 12 , etc.) and a noise signal (e.g., n 1 , etc.). For purposes of viewability, only the impulse responses and noise signal corresponding to the primary microphone 216 ( 1 ) are shown.
  • the endpoint 104 detects motion by predicting the delayed time-frequency representation of the primary signal x 1 of microphone 216 ( 1 ) based on the time-frequency representations of the reference signals x 2 , x 3 , . . . , x J .
  • X 1 (m ⁇ D,k) is predicted from X 2 (m,k), X 3 (m,k), . . . , X J (m,k).
  • An endpoint controller of endpoint 104 feeds each of the audio signals x 1 , x 2 , x 3 , . . . , x J through a respective filter bank 226 ( 1 )-(J).
  • These filter banks 226 ( 1 )-(J) respectively partition the audio signals x 1 , x 2 , x 3 , . . . , x J into time-frequency domain signals X 1 (m,k), X 2 (m,k), X 3 (m,k), . . . , X J (m,k), where m and k denote a frame index and a frequency bin index, respectively.
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller of endpoint 104 respectively feeds the time-frequency domain representations of the reference audio signals, X 2 (m,k), X 3 (m,k), . . . , X J (m,k), through adaptive filters 228 ( 2 )-(J).
  • the endpoint controller of endpoint 104 sums the contributions of the filtered time-frequency domain representations of the reference audio signals to generate a predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k) that is predictive of the delayed time-frequency representation of the primary signal, X 1 (m ⁇ D, k).
  • x 2 is a reference audio signal that is generated by the endpoint controller of an endpoint and corresponds to source audio signal s 1
  • x2 may be considered a fictitious microphone signal equal to the source audio signal s1, with no contributions from s2-sI.
  • the corresponding noise term n2 = 0, since x2 is a loudspeaker signal known to endpoint 104 and not a microphone signal.
  • the first row of H(f) is the unit vector H 2 (f), so that (8) becomes
  • H(f) = [ 1         0         . . .   0
             H31(f)    H32(f)    . . .   H3I(f)
             . . .
             HJ1(f)    HJ2(f)    . . .   HJI(f) ],   (26)
  • N(f) = [0  N3(f)  . . .  NJ(f)]^T.   (27)
  • FIG. 6 is an example of the first category of active standby mode motion detection in which an endpoint controller uses a reference audio signal from a source audio speaker to predict the delayed time-frequency representation of a primary audio signal.
  • endpoint 104 includes microphone 216 ( 1 ), speaker 224 ( 1 ), filter banks 226 ( 1 )-( 2 ), adaptive filter 228 ( 2 ), delay operator 230 , and detector function 232 . There are no external speakers in this example.
  • microphone signal x 1 is the primary audio signal and loudspeaker signal x 2 is the reference audio signal.
  • filter banks 226 ( 1 )-( 2 ) respectively partition the audio signals x 1 , x 2 into time-frequency domain signals X 1 (m,k), X 2 (m,k).
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller feeds the time-frequency domain representations of the reference signal, X 2 (m,k), through adaptive filter 228 ( 2 ) and generates a predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k) that is predictive of the delayed time-frequency representation of the primary signal, X 1 (m ⁇ D, k).
  • the endpoint controller of the endpoint 104 may generate the predicted signal X̂1(m−D,k) by filtering X2(m,k) through adaptive filter 228(2).
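  • The FIG. 6 configuration can be sketched by reusing the nlms_predict_bin function above, with the reference spectra taken directly from the loudspeaker signal the endpoint itself plays rather than from a microphone. The simulated echo path, sample rate, STFT parameters, and frame delay below are illustrative assumptions only.

```python
import numpy as np
from scipy.signal import stft

fs, D = 48_000, 1                                 # assumed sample rate and frame delay
rng = np.random.default_rng(1)
t = np.arange(fs) / fs

s1 = np.sin(2 * np.pi * 21_000 * t)               # known ultrasonic signal played by the endpoint
echo_path = rng.normal(scale=0.05, size=128)      # made-up room response to the integrated microphone
x1 = np.convolve(s1, echo_path, mode="full")[:len(t)] + 0.001 * rng.normal(size=len(t))

f, _, X2 = stft(s1, fs=fs, nperseg=512)           # reference spectra: the loudspeaker signal (x2 = s1)
_, _, X1 = stft(x1, fs=fs, nperseg=512)           # primary spectra: the microphone signal

k = np.argmin(np.abs(f - 21_000))                 # a single ultrasonic bin, for illustration
errors = nlms_predict_bin(X2[k, D:, None],        # X2(m, k), one reference channel
                          X1[k, :-D])             # delayed primary X1(m-D, k)
print(np.abs(errors[-10:]).mean())                # stays small while nothing moves in the simulated room
```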
  • FIG. 7 is another example of the first category of active standby mode motion detection.
  • the endpoint controller of the endpoint 104 predicts a delayed time-frequency representation of a primary audio signal from a reference audio signal from a source audio speaker, and a microphone signal.
  • Endpoint 104 includes microphone 216 ( 1 ), speaker 224 ( 1 ), microphone 216 ( 3 ), filter banks 226 ( 1 )-( 3 ), adaptive filters 228 ( 2 )-( 3 ), delay operator 230 , and detector 232 .
  • Speaker 224 ( 2 ) is external to the endpoint.
  • microphone signal x 1 is the primary audio signal and audio signals x 2 , x 3 are reference audio signals.
  • filter banks 226 ( 1 )-( 3 ) respectively partition the audio signals x 1 , x 2 , x 3 into time-frequency domain signals X 1 (m,k), X 2 (m,k), X 3 (m,k).
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller of the endpoint 104 respectively feeds the time-frequency domain representations of the reference audio signals, X2(m,k), X3(m,k), through adaptive filters 228(2)-(3) and generates a predicted signal X̂1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D,k).
  • H(f) = [ 1         0
             H31(f)    H32(f) ].   (32)
  • the endpoint controller of the endpoint 104 may generate the predicted signal X̂1(m−D,k) by filtering X2(m,k), X3(m,k) through adaptive filters 228(2)-(3).
  • FIGS. 8-9 illustrate a second category of active standby mode embodiments in which an integrated speaker serves as a primary audio device.
  • In FIG. 8, a generalized signal processing diagram is shown for predicting the delayed time-frequency representation of a primary audio signal x1 from a plurality of microphone signals x2 . . . xJ in accordance with examples presented herein.
  • Speakers 224 ( 2 )-(I) are external to endpoint 104 , which includes speaker 224 ( 1 ), and microphones 216 ( 2 )-(J).
  • Endpoint 104 also includes filter banks 226 ( 1 )-(J), adaptive filters 228 ( 2 )-(J), delay operator 230 , and detector function 232 .
  • audio signal x 1 is the primary audio signal and microphone signals x 2 -x J are reference audio signals.
  • Speakers 224 ( 1 )-(I) produce respective source audio signals s 1 (t), s 2 (t), . . . , s I (t).
  • Each microphone 216(2)-(J) captures the source audio signals s1(t), s2(t), . . . , sI(t) after the source audio signals have traversed the room and, in response, produces a respective signal x2, x3, . . . , xJ.
  • the endpoint controller of the endpoint generates primary audio signal x 1 for source audio signal s 1 since the speaker 224 ( 1 ) is integrated into the endpoint 104 and, as such, the endpoint controller knows the source audio signal s 1 .
  • the speaker 224(1) serves as both a source audio device and a primary audio device.
  • Each signal x2, x3, . . . , xJ is a superposition of source audio signals convolved with impulse responses (e.g., h21, h22, etc.) and a noise signal (e.g., n2, etc.). For purposes of viewability, only the impulse responses and noise signal corresponding to microphone 216(2) are shown.
  • the endpoint 104 detects motion by predicting the delayed time-frequency representation of the primary signal x 1 (t) from the time-frequency representations of the reference signals x 2 (t), x 3 (t), . . . , x J (t).
  • X1(m−D,k) is predicted from X2(m,k), X3(m,k), . . . , XJ(m,k).
  • the endpoint controller of the endpoint 104 feeds each audio signal x 1 , x 2 , x 3 , . . . , x J through a respective filter bank 226 ( 1 )-(J).
  • These filter banks 226 ( 1 )-(J) respectively partition the audio signals x 1 , x 2 , x 3 , . . . , x J into time-frequency domain signals X 1 (m,k), X 2 (m,k), X 3 (m,k), . . . , X J (m,k), where m and k denote a frame index and a frequency bin index, respectively.
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller of the endpoint 104 respectively feeds the time-frequency domain representations of the reference audio signals, X 2 (m,k), X 3 (m,k), . . . , X J (m,k), through adaptive filters 228 ( 2 )-(J).
  • the endpoint controller sums the contributions of the filtered time-frequency domain representations of the reference audio signals to generate a predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k) that is predictive of the delayed time-frequency representation of the primary signal X 1 (m ⁇ D, k).
  • FIG. 9 is an example of the second category of active standby mode motion detection in which an endpoint controller of the endpoint uses a reference signal from a microphone to predict the delayed time-frequency representation of a primary audio signal.
  • endpoint 104 includes speaker 224 ( 1 ), microphone 216 ( 1 ), filter banks 226 ( 1 )-( 2 ), adaptive filter 228 ( 2 ), delay operator 230 , and detector 232 . There are no external speakers in this example.
  • audio signal x 1 is the primary audio signal and audio signal x 2 is the reference audio signal.
  • filter banks 226 ( 1 )-( 2 ) respectively partition the audio signals x 1 , x 2 into time-frequency domain signals X 1 (m,k), X 2 (m,k).
  • the delay operator 230 delays the time-frequency representation of the primary audio signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller of the endpoint 104 feeds the time-frequency domain representation of the reference signal, X2(m,k), through adaptive filter 228(2) and generates a predicted signal X̂1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D,k).
  • H̃1(f) = 1/H21(f)
  • Ñ1(f) = −(1/H21(f))·N2(f).
  • the endpoint controller of the endpoint 104 may generate the predicted signal ⁇ circumflex over (X) ⁇ 1 (m ⁇ D,k) by filtering X 2 (m,k) through adaptive filter 228 ( 2 ).
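  • Under the same assumptions as the FIG. 6 sketch above, the FIG. 9 arrangement simply swaps the roles of the two spectra: the known loudspeaker spectrum becomes the (delayed) primary and the microphone spectrum becomes the reference.

```python
# Reusing X1, X2, k, D and nlms_predict_bin from the FIG. 6 sketch (all illustrative).
errors_fig9 = nlms_predict_bin(X1[k, D:, None],   # reference: the microphone spectrum
                               X2[k, :-D])        # delayed primary: the known loudspeaker spectrum
```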
  • FIGS. 2-9 provide examples of linear prediction (i.e., a model that is linear in its prediction parameters but not necessarily its regressors, which are in this case the time-frequency representations of the reference audio signals).
  • non-linear prediction may be desirable.
  • the impulse responses in active mode may not always be sufficiently linear and time-invariant (e.g., when a third party amplifier card is used, when a codec is playing the audio source signal through a television with a poorly designed amplifier/speaker, etc.).
  • FIG. 10 provides an example signal diagram of active motion detection using non-linear prediction.
  • FIG. 10 is an example signal diagram in which an endpoint controller of an endpoint uses a reference audio signal from a source audio speaker to predict the delayed time-frequency representation of a primary audio signal.
  • endpoint 104 includes microphone 216 ( 1 ), speaker 224 ( 1 ), filter banks 226 ( 1 )-( 2 ), non-linear adaptive filter 1028 ( 2 ), delay operator 230 , and detector 232 .
  • Endpoint 104 further includes a phase operator 1034 , absolute value operator 1036 , and a phase application operator 1038 . There are no external speakers in this example.
  • the microphone signal x 1 is the primary audio signal and audio signal x 2 is the reference audio signal.
  • filter banks 226 ( 1 )-( 2 ) respectively partition the audio signals x 1 , x 2 into time-frequency domain signals X 1 (m,k), X 2 (m,k).
  • the delay operator 230 delays the time-frequency representation of the primary signal, X 1 (m,k), by D frames, resulting in X 1 (m ⁇ D, k).
  • the endpoint controller of the endpoint 104 feeds the time-frequency domain representation of the reference signal, X2(m,k), through non-linear adaptive filter 1028(2) and generates a predicted signal X̂1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D,k).
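  • One plausible reading of the FIG. 10 processing, assumed here since the excerpt does not spell out the non-linear filter, is that the adaptive filter operates on spectral magnitudes (absolute value operator 1036) and that the phase of the delayed primary (phase operator 1034) is re-applied to the predicted magnitude (phase application operator 1038). A minimal sketch under that assumption:

```python
import numpy as np

def nonlinear_predict_bin(X2_bin, X1_delayed_bin, mu=0.5, eps=1e-8):
    """Magnitude-domain adaptive prediction for one bin, re-applying the delayed primary's phase."""
    w = 0.0                                                  # single real coefficient (illustrative)
    errors = np.empty(len(X2_bin), dtype=complex)
    for m, (x2, x1d) in enumerate(zip(X2_bin, X1_delayed_bin)):
        mag_ref = np.abs(x2)
        mag_pred = w * mag_ref                               # predicted magnitude of X1(m-D, k)
        e_mag = np.abs(x1d) - mag_pred                       # magnitude-domain error drives adaptation
        w += mu * mag_ref * e_mag / (mag_ref ** 2 + eps)     # NLMS-style update on magnitudes
        x1_hat = mag_pred * np.exp(1j * np.angle(x1d))       # re-apply the phase of X1(m-D, k)
        errors[m] = x1d - x1_hat                             # complex prediction error for the detector
    return errors
```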
  • FIG. 11 is a block diagram of endpoint controller 120 , which is configured to execute the operations described herein.
  • the endpoint controller 120 includes one or more application-specific integrated circuits (ASICs) 1140 , a memory 1142 , which stores or is encoded with instructions for the motion detection logic 122 , and one or more processors 1146 .
  • the one or more ASICs 1140 may be customized for one or more particular uses (e.g., audio/video signal processing).
  • the one or more processors 1146 are configured to execute instructions stored in the memory 1142 (e.g., motion detection logic 122 ). When executed by the one or more processors 1146 , the motion detection logic 122 enables the endpoint controller 120 to perform the operations described herein in connection with FIGS. 1-10 and 12 .
  • the memory 1142 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices.
  • the memory 1142 may include one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (by the processor 1146 ) it is operable to perform the operations described herein.
  • FIG. 12 is a flowchart depicting a generalized method 1200 in accordance with examples presented herein.
  • an endpoint controller of a collaboration endpoint generates a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker.
  • the endpoint controller generates a reference audio signal for the ultrasonic source audio signal.
  • the endpoint controller generates, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal.
  • the endpoint controller produces a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal.
  • the endpoint controller determines whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
  • collaboration endpoints may accurately detect motion in a conference room in which the endpoint is located. Upon detecting motion, a collaboration endpoint may enter active mode and, for example, display a friendly and informative greeting to the user. Endpoint motion detection may also be used to trigger burglar alarms, control lights/blinds, record and/or analyze the usage of the conference room, provide up-to-date information regarding the usage of the conference room, etc.
  • impulse responses may be non-linear and/or time-variant due to, for example, systems having poor quality amplifier cards, or when an ultrasonic pairing signal is played through a television with poor quality amplifiers and/or speakers.
  • the techniques described herein may also be desirable in situations involving multiple collaboration endpoints in the same room that each produce ultrasonic pairing signals. This is often the case in office spaces and classrooms where each user has a personal endpoint in the same room. These techniques may also be compatible with ultrasonic noise from other equipment, such as third party motion detectors in the ceiling. In addition, they remain operable in large rooms that lack an optimal number of microphones.
  • the number of signals used for motion detection by the endpoint is at least one more than the number of audio source signals.
  • the number of reference audio signals is equal to at least the number of audio source signals.
  • a method comprises: at a controller of a collaboration endpoint, generating a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker; generating a reference audio signal for the ultrasonic source audio signal; generating, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal; producing a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal; and determining whether the prediction error is indicative of motion of one or more persons near the collaboration endpoint.
  • an apparatus comprises: a source audio speaker; at least one microphone of a collaboration endpoint; and one or more processors coupled to a memory, wherein the one or more processors are configured to: generate a primary audio signal for an ultrasonic source audio signal produced by the source audio speaker; generate a reference audio signal for the ultrasonic source audio signal; generate, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal; produce a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal; and determine whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
  • one or more non-transitory computer readable storage media are provided.
  • the non-transitory computer readable storage media are encoded with instructions that, when executed by a processor of a collaboration endpoint, cause the processor to: generate a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker; generate a reference audio signal for the ultrasonic source audio signal; generate, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal; produce a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal; and determine whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Remote Sensing (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Acoustics & Sound (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Otolaryngology (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Circuit For Audible Band Transducer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Computational Linguistics (AREA)
  • Quality & Reliability (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)

Abstract

A controller of a collaboration endpoint generates a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker, a reference audio signal for the ultrasonic source audio signal, and, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal. The controller produces a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal and determines whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is a continuation of U.S. patent application Ser. No. 15/496,362, filed Apr. 25, 2017, the entirety of which is incorporated herein by reference.
  • TECHNICAL FIELD
  • The present disclosure relates to motion detection based on at least one source audio signal.
  • BACKGROUND
  • Collaboration endpoints are generally used for multimedia meetings. These endpoints may include cameras, displays, microphones, speakers, and other equipment configured to facilitate multimedia meetings. Endpoint equipment often remains activated for long periods of time even when the equipment is only used intermittently for multimedia meetings.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a block diagram of a conference room endpoint configured for motion detection, according to an example embodiment.
  • FIG. 2 is a signal processing diagram for motion detection that includes one integrated primary microphone, a plurality of external source audio speakers, and a plurality of integrated reference microphones, according to an example embodiment.
  • FIG. 3 is a signal processing diagram for motion detection that includes one integrated primary microphone, one external source audio speaker, and one integrated reference microphone, according to an example embodiment.
  • FIG. 4 is a signal processing diagram for motion detection that includes one integrated primary microphone, two external source audio speakers, and two integrated reference microphones, according to an example embodiment.
  • FIG. 5 is a signal processing diagram for motion detection that includes one integrated primary microphone, a plurality of external source audio speakers, one integrated source audio speaker, and a plurality of integrated reference microphones, according to an example embodiment.
  • FIG. 6 is a signal processing diagram for motion detection that includes one integrated primary microphone and one integrated source audio speaker, according to an example embodiment.
  • FIG. 7 is a signal processing diagram for motion detection that includes one integrated primary microphone, one external source audio speaker, one integrated source audio speaker, and one integrated reference microphone, according to an example embodiment.
  • FIG. 8 is a signal processing diagram for motion detection that includes one integrated primary source audio speaker, a plurality of external audio speakers, and a plurality of integrated reference microphones, according to an example embodiment.
  • FIG. 9 is a signal processing diagram for motion detection that includes one integrated primary source audio speaker and one integrated reference microphone, according to an example embodiment.
  • FIG. 10 is signal processing diagram for nonlinear motion detection that includes one integrated primary microphone and one integrated source audio speaker, according to an example embodiment.
  • FIG. 11 is a block diagram of a collaboration endpoint controller configured to execute motion detecting techniques, according to an example embodiment.
  • FIG. 12 is a flowchart of a generalized method in accordance with examples presented herein.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS
  • Overview
  • A controller of a collaboration endpoint generates a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker, a reference audio signal for the ultrasonic source audio signal, and, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal. The controller produces a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal and determines whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
  • Example Embodiments
  • When endpoint equipment (e.g., cameras, displays, microphones, speakers, etc.) remains activated during periods of nonuse during a multimedia conference session, the equipment wastes electricity and experiences decreases in life expectancy. Therefore, as described in greater detail below, endpoint equipment may enter a standby mode when the conference room in which the endpoint is located is empty. When a user enters the room, the endpoint equipment may detect motion in the room and, in response, activate (enter an active mode from a standby mode) in anticipation of a multimedia meeting.
  • With reference made to FIG. 1, shown is a block diagram of a conference room 102 that includes an endpoint 104 configured for motion detection in accordance with examples presented herein. The endpoint 104 enables a user 106 to participate in a multimedia conference by communicating with users at other endpoints 104(a)-104(n) via network 108. The network interface unit 110 allows endpoint 104 to connect with network 108. The endpoint 104 is equipped with camera 112, display 114, microphones 116(a) and 116(b), and speaker 118. These components are connected to an endpoint controller 120, which serves as a control unit for the endpoint 104, and executes, among other software instructions, software instructions for motion detection logic 122. The conference room 102 may also include an external speaker 124 that is not connected to the endpoint controller 120.
  • When user 106 is participating in a multimedia conference, this equipment is active and facilitates user 106 participation in the multimedia conference. Camera 112 and microphones 116(a)/116(b) capture video and audio, respectively, for transmission to one or more of endpoints 104(a)-104(n). Display 114 and speaker 118 output video and audio, respectively, transmitted from one or more of endpoints 104(a)-104(n).
  • When the multimedia conference ends, the user 106 exits conference room 102. At this point, there are no people in conference room 102, and thus no movement in the conference room 102. In accordance with the motion detection logic 122, the endpoint controller 120 controls the components of the endpoint 104 to enter a standby mode in order to detect motion (e.g., the motion of one or more persons near the endpoint 104). As explained below, when motion is detected (e.g., when one or more persons are near the endpoint 104), the endpoint controller 120 controls the components of the endpoint 104 to activate (enter an active mode from the standby mode) in anticipation of a multimedia meeting.
  • During standby mode, camera 112 and display 114 may simply shut off/deactivate/power down/etc. However, speakers 118, 124 emit ultrasonic source audio signals. In an example, speaker 124 continuously emits its ultrasonic source audio signal, even when the components of the endpoint 104 are in standby mode, because the speaker 124 is not connected to the endpoint controller 120. Also during standby mode, microphones 116(a)/ 116(b) continuously sample the ultrasonic source audio signals from speakers 118 and 124.
  • In this example, the signals produced by microphones 116(a)/116(b) in response to the ultrasonic source audio signals serve as reference audio signals. Meanwhile, the ultrasonic source audio signal produced by speaker 118 is known to the endpoint controller 120 and serves as a primary audio signal. As explained in greater detail below, the endpoint controller executes the motion detection logic 122 to generate, based on the reference audio signals, a predicted signal that is predictive of the primary audio signal. The motion detection logic 122 compares the primary audio signal with the predicted signal to determine whether there is motion of one or more persons near the endpoint 104.
  • While in standby mode, the endpoint 104 may operate actively or passively. In active standby mode, the speaker 118 integrated with the endpoint 104 transmits an ultrasonic audio signal into an area/room/location (e.g., conference room). In passive standby mode, the endpoint 104 does not transmit an ultrasonic audio signal. FIGS. 2-4 illustrate example passive mode embodiments and FIGS. 5-10 illustrate example active mode embodiments. In both passive and active standby modes, one or more microphones integrated with the endpoint capture an audio signal from a speaker. The signal processing diagrams of FIGS. 2-10 may be implemented, for example, in a conference room. The endpoint controller 120 performs the signal processing depicted by the signal processing diagrams shown in FIGS. 2-10.
  • FIG. 2 shows a signal processing diagram for passive standby mode motion detection in accordance with examples presented herein. Speakers 224(1)-(I) are external to an endpoint 104. Endpoint 104 includes microphones 216(1)-(J), filter banks 226(1)-(J) that respectively map to microphones 216(1)-(J), and adaptive filters 228(2)-(J) that respectively map to microphones 216(2)-(J). There is a delay operator 230 and a detector function 232. The detector function 232 generates an output indicative of detected motion.
  • Speakers 224(1)-(I) produce respective source audio signals s1(t), s2(t), . . . , sI(t). Source audio signals s1(t), s2(t), . . . , sI(t) include ultrasonic frequencies (e.g., 20-24 kHz). Each microphone 216(1)-(J) captures the source audio signals s1(t), s2(t), . . . , sI(t) after the source audio signals have traversed the room and, in response, produces a respective signal x1, x2, x3, . . . , xJ. Each signal x1, x2, x3, . . . , xJ is a superposition of source audio signals convolved with impulse responses (e.g., h11, h12, etc.) and a noise signal (e.g., n1, n2, etc.). For purposes of viewability, only the impulse responses and the noise signals corresponding to the microphones 216(1) and 216(2) are shown. Signal x1 may be referred to as a primary audio signal and signals x2, x3, . . . , xJ may be referred to as reference audio signals.
  • The endpoint 104 detects motion by predicting the delayed time frequency representation of primary signal x1 of microphone 216(1) based on the reference signals x2, x3, . . . , xJ of microphones 216(2)-(J). An endpoint controller of endpoint 104 (not shown) feeds each signal x1, x2, x3, . . . , xJ through a respective filter bank 226(1)-(J). These filter banks 226(1)-(J) separate the primary audio signal and reference audio signal into sub-bands. For instance, the filter banks 226(1)-(J) respectively partition the signals x1, x2, x3, . . . , xJ into time-frequency domain signals X1(m,k), X2(m,k), X3(m,k), . . . , XJ(m,k), where m and k denote a frame index and a frequency bin index, respectively. As explained in greater detail below, the delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in the time-frequency domain signal X1(m−D, k). The endpoint controller of endpoint 104 respectively feeds the time-frequency domain representations of the reference signals, X2(m,k), X3(m,k), . . . , XJ(m,k), through adaptive filters 228(2)-(J), respectively. In an example, the adaptive filters 228(2)-(J) are normalized least mean square filters. The endpoint controller of endpoint 104 sums the contributions of the filtered time-frequency domain representations of the reference signals, X2(m,k), X3(m,k), . . . , XJ(m,k), to generate a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal X1(m−D, k). The endpoint controller of endpoint 104 generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal.
  • The endpoint controller of endpoint 104 determines whether the prediction error is indicative of the motion of one or more persons near the endpoint 104. The endpoint controller of endpoint 104 updates the adaptive filter coefficients based on the prediction error and sends the prediction error to the detector 232. A relatively large prediction error may indicate motion more strongly than a relatively small prediction error. As discussed above, if the endpoint controller of endpoint 104 determines that there is motion of one or more persons near the collaboration endpoint 104, the endpoint controller of endpoint 104 may activate the components of endpoint 104 in anticipation of a multimedia meeting.
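  • For illustration only, the following Python sketch outlines a sub-band multichannel prediction of the kind described above: filter banks partition the signals, the primary signal is delayed by D frames, normalized least mean square adaptive filters are applied to the reference signals, and a per-bin prediction error is formed. It is not taken from the patent; the frame length, hop size, delay, filter length, and step size are assumed values chosen only so the sketch runs.

```python
# Hedged sketch (assumed parameters, not the patent's implementation) of
# sub-band multichannel NLMS prediction of a delayed primary signal from
# one or more reference signals, in the style of FIG. 2.
import numpy as np

FRAME = 512      # filter-bank frame length (assumed)
HOP = 256        # hop between frames (assumed)
D = 2            # delay of the primary signal, in frames (assumed)
L_TAPS = 4       # adaptive-filter taps per sub-band (assumed)
MU, EPS = 0.5, 1e-8

def stft(x):
    """Simple windowed-FFT filter bank: returns an (M, K) complex array."""
    win = np.hanning(FRAME)
    n_frames = 1 + (len(x) - FRAME) // HOP
    frames = np.stack([x[m * HOP:m * HOP + FRAME] * win for m in range(n_frames)])
    return np.fft.rfft(frames, axis=1)

def predict_error(x_primary, x_refs):
    """Return |E(m-D,k)|^2: squared error of predicting the delayed primary."""
    X1 = stft(np.asarray(x_primary, dtype=float))
    Xr = np.stack([stft(np.asarray(x, dtype=float)) for x in x_refs])
    n_ref, M, K = Xr.shape
    W = np.zeros((n_ref, L_TAPS, K), dtype=complex)   # adaptive coefficients
    err = np.zeros((M, K))
    for m in range(L_TAPS - 1 + D, M):
        U = Xr[:, m - L_TAPS + 1:m + 1, :]            # recent reference frames
        X1_hat = np.sum(np.conj(W) * U, axis=(0, 1))  # predicted X1(m-D,k)
        E = X1[m - D] - X1_hat                        # prediction error E(m-D,k)
        err[m - D] = np.abs(E) ** 2
        norm = np.sum(np.abs(U) ** 2, axis=(0, 1)) + EPS
        W += MU * U * np.conj(E) / norm               # NLMS update
    return err
```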
  • The following is a mathematical derivation of primary signal x1 expressed as a function of the reference signals x2, x3, . . . , xJ.
  • Signal x1 is the superposition of the contributions from the audio signals s1(t), s2(t), . . . , sI(t) and a noise signal n1. This can be written mathematically as

  • x 1(t)=(h 11 *s 1)(t)+(h 12 *s 2)(t)+ . . . +(h 1I *s I)(t)+n 1(t),   (1)
  • where * is the convolution operator and h1i, i ∈ {1, 2, . . . I}, denotes the impulse response from the ith audio speaker to the first microphone. Signal x1 is a digital signal (i.e., the sampled and quantized version of the analog signal captured at microphone 216(1)), and t denotes the sample index. The variables in (1) are functions of time. Assuming these signals are stationary, (1) may be re-written in the frequency domain as

  • X 1(f)=H 11(f)S 1(f)+H 12(f)S 2(f)+ . . . +H 1I(f)S I(f)+N 1(f),   (2)
  • where upper case variables are used to denote the discrete-time Fourier transform (DTFT) corresponding to the respective lower case variables, with f ∈ [0, 1] denoting digital frequency. Using vector notation, (2) can be written more compactly as

  • X 1(f)=H 1 T(f)S(f)+N 1(f),   (3)
  • where S(f)=[S1(f), S2(f), . . . , SI(f)]T and H1(f)=[H11(f), H12(f), . . . , H1I(f)]T are I-dimensional vectors. Similar equations can be written for the other microphone signals:
  • X 2(f)=H 2 T(f)S(f)+N 2(f),   (4)
  • X 3(f)=H 3 T(f)S(f)+N 3(f),   (5)
  • X J(f)=H J T(f)S(f)+N J(f).   (6)
  • where Hj(f)=[Hj1(f), Hj2(f), . . . , HjI(f)]T for j ∈ {2, 3, . . . , J} are I-dimensional vectors. (4)-(6) can be written even more compactly using matrix notation as

  • X(f)=H(f)S(f)+N(f),   (7)
  • where X(f)=[X2(f), X3(f), . . . , XJ(f)]T and N(f)=[N2(f), N3(f), . . . , NJ(f)]T are (J−1)-dimensional vectors and
  • H(f) = \begin{bmatrix} H_{21}(f) & H_{22}(f) & \cdots & H_{2I}(f) \\ H_{31}(f) & H_{32}(f) & \cdots & H_{3I}(f) \\ \vdots & \vdots & \ddots & \vdots \\ H_{J1}(f) & H_{J2}(f) & \cdots & H_{JI}(f) \end{bmatrix}   (8)
  • is a (J−1)×I-dimensional matrix. From (7), if the columns of H(f) are linearly independent (this requires that J > I),

  • S(f)=(H T(f)H(f))−1 H T(f)(X(f)−N(f)),   (9)
  • Substituting S(f) into (3) yields
  • X 1(f)=H 1 T(f)S(f)+N 1(f)   (10)
  • =H 1 T(f)(H T(f)H(f))−1 H T(f)(X(f)−N(f))+N 1(f)   (11)
  • ={tilde over (H)} 1 T(f)X(f)+Ñ 1(f),   (12)
  • where {tilde over (H)}1(f)=[{tilde over (H)}12(f), {tilde over (H)}13(f), . . . , {tilde over (H)}1J(f)]T with {tilde over (H)}1 T(f)=H1 T(f)(H T(f)H(f))−1 H T(f), and Ñ1(f)=N1(f)−{tilde over (H)}1 T(f)N(f).
    Returning to scalar notation yields

  • X 1(f)={tilde over (H)} 12(f)X 2(f)+{tilde over (H)} 13(f)X 3(f)+ . . . +{tilde over (H)}1J(f)X J(f)+Ñ 1(f),   (13)

  • and

  • x 1(t)=({tilde over (h)} 12 *x 2)(t)+({tilde over (h)} 13 *x 3)(t)+ . . . +({tilde over (h)} 1J *x J)(t)+ñ 1(t),   (14)
  • (14) is a formulation of the signal x1 of microphone 216(1) as a function of the signals x2, x3, . . . , xJ of microphones 216(2)-(J).
  • However, the impulse responses {tilde over (h)}12, {tilde over (h)}13, . . . , {tilde over (h)}1J are not necessarily causal. For example, in a case involving J=2 microphones and I=1 source, x1(t)=({tilde over (h)}12*x2)(t)+ñ1(t). If the single source is located closer to microphone 1 than to microphone 2, microphone 1 receives the sound before microphone 2 and {tilde over (h)}12 is thus necessarily non-causal. Because it is desired for the impulse responses to be causal, one or more of the impulse responses {tilde over (h)}12, {tilde over (h)}13, . . . , {tilde over (h)}1J may be delayed by T samples, where T may be equal to at least an amount of time between generating the primary audio signal and generating the reference audio signal. In the example of FIG. 2, the delay of D frames corresponds to at least T samples of delay in the time domain. In the above example involving J=2 microphones and I=1 source, T may be at least as large as the time between microphone 1 receiving the source audio signal and microphone 2 receiving the source audio signal. This implies that the impulse responses will be causal if the T-sample delayed microphone signal x1(t−T) is predicted from the non-delayed microphone signals x2(t), . . . , xJ(t). This can be expressed mathematically as:

  • x 1(t−T)=(g 12 *x 2)(t)+(g 13 *x 3)(t)+ . . . +(g 1J *x J)(t)+ñ 1(t−T),   (15)
  • where g1j, j ∈ {2, 3, . . . , J}, is a T-sample delayed version of {tilde over (h)}1j. When the impulse responses are causal, the T-sample delayed microphone signal x1(t−T) may be predicted with multichannel adaptive filtering, as shown in (16) below:

  • {circumflex over (x)} 1(t−T)=(w 2 *x 2)(t)+(w 3 *x 3)(t)+ . . . +(w J *x J)(t),   (16)
  • where w2, w3, . . . , wJ denote the J−1 vectors of adaptive filter coefficients. As long as the impulse responses g12, g13, . . . , g1J are approximately constant and the noise term ñ1(t) is approximately stationary and small compared to the reference signal terms in (15), the prediction error is expected to be small. This is typically the case when a person is not in the room. When one or more of the impulse responses changes or the noise term ñ1(t) suddenly increases (e.g., due to a person entering the conference room in which the endpoint 104 is located), the prediction error increases. Thus, as shown in (16), the endpoint 104 detects motion by predicting the T-sample delayed primary signal x1 of microphone 216(1) based on the reference signals x2, x3, . . . , xJ of microphones 216(2)-(J).
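  • As an illustration of (15) and (16), the following sketch performs time-domain NLMS prediction of the T-sample delayed primary signal from a single non-delayed reference signal (the J=2 case). The delay, filter length, and step size are assumed values, and the two signals are assumed to be equal-length arrays.

```python
# Hedged sketch of equation (16) for J=2: predict x1(t-T) from x2(t) with a
# single NLMS adaptive filter; the prediction error grows when the room
# impulse response changes (e.g., a person moves).
import numpy as np

def predict_delayed_primary(x1, x2, T=64, taps=128, mu=0.5, eps=1e-8):
    """Return e(t) = x1(t-T) - x1_hat(t-T) for t >= taps (taps > T assumed)."""
    x1, x2 = np.asarray(x1, dtype=float), np.asarray(x2, dtype=float)
    w = np.zeros(taps)                        # adaptive filter w2 in (16)
    e = np.zeros(len(x1))
    for t in range(taps, len(x1)):
        u = x2[t - taps + 1:t + 1][::-1]      # most recent reference samples
        d = x1[t - T]                         # desired: delayed primary sample
        y = w @ u                             # predicted sample
        e[t] = d - y                          # prediction error
        w += mu * e[t] * u / (u @ u + eps)    # NLMS coefficient update
    return e
```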
  • In an example, the detector 232 detects motion based on an estimate P(m,k) of the mean square error. The detector 232 may use exponential recursive weighting such as

  • P(m,k)=αP(m−1,k)+(1−α)|E(m,k)|2,   (17)
  • where α is a forgetting factor in the range [0,1). For example, for some threshold P,

  • P(m,k)<P⇒No motion,   (18)

  • P(m,k)≥P⇒Motion.   (19)
  • Thus, in this example, the detector detects motion from an increase in P(m,k). If P(m,k) is less than a threshold P, the endpoint controller of endpoint 104 may conclude that there is no motion of one or more persons near the endpoint 104. If P(m,k) is equal to or greater than the threshold P, the endpoint controller of endpoint 104 may conclude that there is motion of one or more persons near the endpoint 104.
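  • A minimal sketch of the detector described by (17)-(19) follows. The forgetting factor and threshold are illustrative values, and flagging motion when any frequency bin exceeds the threshold is an assumption rather than a detail taken from the text above.

```python
# Hedged sketch of (17)-(19): exponentially weighted estimate P(m,k) of the
# mean square prediction error, thresholded per frame. alpha and threshold
# are assumed values.
import numpy as np

def detect_motion(err_power, alpha=0.95, threshold=1e-3):
    """err_power: (M, K) array of |E(m,k)|^2. Returns a per-frame bool flag."""
    P = np.zeros_like(err_power[0])
    motion = np.zeros(len(err_power), dtype=bool)
    for m, e2 in enumerate(err_power):
        P = alpha * P + (1.0 - alpha) * e2            # equation (17)
        motion[m] = bool(np.any(P >= threshold))      # (18)/(19), any bin over P
    return motion
```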
  • The above derivation involves multichannel adaptive filtering in the digital time domain. Alternatively, the endpoint controller of endpoint 104 may perform multichannel adaptive filtering in the time-frequency domain. Time-frequency domain adaptive filtering is computationally cheaper and provides faster convergence than time domain adaptive filtering. In addition, time-frequency domain adaptive filtering allows the endpoint 104 to predict only the ultrasonic portion of the signal x1 while ignoring other frequencies.
  • FIGS. 3 and 4 are specific examples of passive standby mode detection. With reference first to FIG. 3, a signal processing diagram is shown for predicting signal x1 based on another signal x2 with one source audio signal s1 in accordance with examples presented herein. Speaker 224(1) is external to endpoint 104 and emits audio signal s1. Endpoint 104 includes microphones 216(1)-(2) (which produce signals x1, x2), filter banks 226(1)-(2), adaptive filter 228(2), delay operator 230, and detector 232.
  • In this example, signal x1 is the primary signal and signal x2 is the reference signal. As in FIG. 2, filter banks 226(1)-(2) respectively partition the signals x1, x2 into time-frequency domain signals X1(m,k), X2(m,k). The delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller of endpoint 104 feeds the time-frequency domain representations of the reference signal, X2(m,k), through adaptive filter 228(2) and generates a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D, k). The endpoint controller of endpoint 104 generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal, and determines whether the prediction error is indicative of the motion of one or more persons near the collaboration endpoint 104.
  • The following is a mathematical derivation of the discrete time Fourier transform of the primary signal X1(f) expressed as a function of the discrete time Fourier transform of the reference signal X2(f) in the example of FIG. 3. In this case, the matrix at (8) reduces to the scalar

  • H(f)=H 21(f),   (20)
  • the channel vector in (12) reduces to the scalar
  • {tilde over (H)} 1(f)=H 11(f)/H 21(f),   (21)
  • and
  • Ñ 1(f)=N 1(f)−(H 11(f)/H 21(f))N 2(f).   (22)
  • (13) thus simplifies to
  • X 1(f)=(H 11(f)/H 21(f))X 2(f)+N 1(f)−(H 11(f)/H 21(f))N 2(f).   (23)
  • Thus, filtering X2(m,k) with linear adaptive filter 228(2) may enable the endpoint controller of endpoint 104 to generate predicted signal {circumflex over (X)}1(m−D,k), and determine whether the prediction error is indicative of the motion of one or more persons near the endpoint 104.
  • Turning now to FIG. 4, a signal processing diagram is shown for predicting the delayed time-frequency representation of the primary signal x1(t) from the time-frequency representations of the reference signals x2(t) and x3(t) in accordance with examples presented herein. In other words, X1(m−D,k) is predicted from X2(m,k) and X3(m,k). Speakers 224(1)-(2) produce source audio signals s1, s2 and are external to endpoint 104. Endpoint 104 includes microphones 216(1)-(3) (which produce signals x1, x2, x3), filter banks 226(1)-(3), adaptive filters 228(2)-(3), delay operator 230, and detector 232.
  • In this example, signal x1 is the primary signal and signals x2, x3 are reference signals. As in FIG. 2, filter banks 226(1)-(3) respectively partition the signals x1, x2, x3 into time-frequency domain signals X1(m,k), X2(m,k), X3(m,k). The delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller of endpoint 104 respectively feeds the time-frequency domain representations of the reference signals, X2(m,k), X3(m,k), through adaptive filters 228(2)-(3) and generates a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D, k). The endpoint controller of endpoint 104 generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal, and determines whether the prediction error is indicative of the motion of one or more persons near the endpoint 104.
  • The following is a mathematical derivation of the discrete-time Fourier transform of the primary signal, X1(f), expressed as a function of the discrete-time Fourier transforms of the reference signals, X2(f), X3(f), in the example of FIG. 4. In this case, the matrix at (8) reduces to
  • H(f) = \begin{bmatrix} H_{21}(f) & H_{22}(f) \\ H_{31}(f) & H_{32}(f) \end{bmatrix}.   (24)
  • This matrix has linearly independent columns, which means (13) can be re-written as

  • X 1(f)={tilde over (H)} 12(f)X 2(f)+{tilde over (H)} 13(f)X 3(f)+Ñ 1(f).   (25)
  • Thus, respectively filtering X2(m,k), X3(m,k) with linear adaptive filters 228(2)-(3) may enable the endpoint controller of endpoint 104 to generate predicted signal {circumflex over (X)}1(m−D,k), and determine whether the prediction error is indicative of the motion of one or more persons near the endpoint 104.
  • As mentioned above, FIGS. 5-10 illustrate example signal processing diagrams for active standby mode, in which the endpoint itself produces at least one source ultrasonic signal to be used for motion detection. FIGS. 5-7 illustrate a first category of active standby mode embodiments in which an integrated speaker serves as a reference audio device. FIGS. 8-9 illustrate a second category of active standby mode embodiments in which an integrated speaker serves as a primary audio device. FIG. 10 illustrates an example involving non-linear adaptive filtering according to the first category of active standby mode embodiments.
  • With reference first to FIG. 5, a generalized signal processing diagram is shown for predicting the delayed time-frequency representation of the primary signal x1(t) from the time-frequency representations of the reference audio signals x2(t), x3(t), . . . , xJ(t) in accordance with examples presented herein. In other words, X1(m−D,k) is predicted from X2(m,k), X3(m,k), . . . , XJ(m,k). Speakers 224(2)-(I) are external to endpoint 104, which includes microphone 216(1), speaker 224(1), and microphones 216(3)-(J). Endpoint 104 also includes filter banks 226(1)-(J), adaptive filters 228(2)-(J), delay operator 230, and detector 232.
  • In this example, signal x1 is the primary signal and signals x2-xJ are reference audio signals. Speakers 224(1)-(I) produce respective source audio signals s1(t), s2(t), . . . , sI(t). Each microphone 216(1), 216(3)-(J) captures the source audio signals s1(t), s2(t), . . . , sI(t) after the source audio signals have traversed the room and, in response, produces a respective signal x1, x3, . . . , xJ. In addition, the endpoint controller generates reference signal x2 for source audio signal s1 since the speaker 224(1) is integrated into the endpoint 104 and, as such, the endpoint controller of endpoint 104 knows the source audio signal s1. In other words, the speaker 224(1) serves as both a source audio device and a reference audio device. Each signal x1, x3, . . . , xJ is a superposition of source audio signals convolved with impulse responses (e.g., h11, h12, etc.) and a noise signal (e.g., n1, etc.). For purposes of viewability, only the impulse responses and noise signal corresponding to the primary microphone 216(1) are shown.
  • The endpoint 104 detects motion by predicting the delayed time-frequency representation of the primary signal x1 of microphone 216(1) based on the time-frequency representations of the reference signals x2, x3, . . . , xJ. In other words, X1(m−D,k) is predicted from X2(m,k), X3(m,k), . . . , XJ(m,k). An endpoint controller of endpoint 104 feeds each of the audio signals x1, x2, x3, . . . , xJ through a respective filter bank 226(1)-(J). These filter banks 226(1)-(J) respectively partition the audio signals x1, x2, x3, . . . , xJ into time-frequency domain signals X1(m,k), X2(m,k), X3(m,k), . . . , XJ(m,k), where m and k denote a frame index and a frequency bin index, respectively. The delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller of endpoint 104 respectively feeds the time-frequency domain representations of the reference audio signals, X2(m,k), X3(m,k), . . . , XJ(m,k), through adaptive filters 228(2)-(J). The endpoint controller of endpoint 104 sums the contributions of the filtered time-frequency domain representations of the reference audio signals to generate a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D, k). The endpoint controller of endpoint 104 generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal. As mentioned above, the endpoint controller of endpoint 104 determines whether the prediction error is indicative of the motion of one or more persons near the endpoint 104 and, if the endpoint controller determines that there is motion of one or more persons near the collaboration endpoint 104, the endpoint controller of endpoint 104 may activate endpoint components in anticipation of a multimedia meeting.
  • Much of the mathematical framework for passive mode motion detection described above applies to active mode motion detection. This is because the active mode can be viewed as a special case of passive mode in which one of the source audio speakers has been moved very close to one of the microphones, such that the relative contributions from the other source audio signals and any noise are so small that they can be neglected. As such, in the example of FIG. 5, H2(f)=[H21(f), H22(f), . . . , H2I(f)]T=[1, 0, . . . 0]T. In other words, while in reality x2 is a reference audio signal that is generated by the endpoint controller of an endpoint and corresponds to source audio signal s1, conceptually x2 may be considered a fictitious microphone signal equal to the source audio signal s1, with no contributions from s2-sI. In addition, there is no noise added to x2 (i.e., n2=0) because x2 is a loudspeaker signal known to endpoint 104 and not a microphone signal. Thus, the first row of H(f) is the unit vector H2(f), so that (8) becomes
  • H(f) = \begin{bmatrix} 1 & 0 & \cdots & 0 \\ H_{31}(f) & H_{32}(f) & \cdots & H_{3I}(f) \\ \vdots & \vdots & \ddots & \vdots \\ H_{J1}(f) & H_{J2}(f) & \cdots & H_{JI}(f) \end{bmatrix},   (26)
  • and, since N2(f)=0,

  • N(f)=[0 N 3(f) . . . N J(f)]T.   (27)
  • FIG. 6 is an example of the first category of active standby mode motion detection in which an endpoint controller uses a reference audio signal from a source audio speaker to predict the delayed time-frequency representation of a primary audio signal. As shown, endpoint 104 includes microphone 216(1), speaker 224(1), filter banks 226(1)-(2), adaptive filter 228(2), delay operator 230, and detector function 232. There are no external speakers in this example.
  • In this example, microphone signal x1 is the primary audio signal and loudspeaker signal x2 is the reference audio signal. As in FIG. 5, filter banks 226(1)-(2) respectively partition the audio signals x1, x2 into time-frequency domain signals X1(m,k), X2(m,k). The delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller feeds the time-frequency domain representations of the reference signal, X2(m,k), through adaptive filter 228(2) and generates a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D, k). The endpoint controller generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal, and determines whether the prediction error is indicative of the motion of one or more persons near the endpoint 104.
  • In this case, the matrix in (8) reduces to the scalar

  • H(f)=1,   (28)
  • the channel vector in (12) reduces to the scalar

  • {tilde over (H)} 1(f)=H 11(f),   (29)

  • and

  • Ñ 1(f)=N 1(f),   (30)
  • (13) thus simplifies to

  • X 1(f)=H 11(f)S 1(f)+N 1(f).   (31)
  • Hence, the endpoint controller of the endpoint 104 may generate the predicted signal {circumflex over (X)}1(m−D,k) by filtering X2(m,k) through adaptive filter 228(2).
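  • A usage sketch of this FIG. 6 style case follows, reusing the predict_error and detect_motion helpers from the earlier sketches: the known ultrasonic loudspeaker signal serves directly as the reference x2, and a simulated microphone capture serves as the primary x1. The 22 kHz tone, the synthetic room response, and the noise level are assumptions made only so the example runs end to end; a real endpoint would use the actual microphone samples.

```python
# Hedged end-to-end usage sketch for a single integrated speaker and a single
# microphone; relies on predict_error() and detect_motion() defined earlier.
import numpy as np

FS = 48_000                                         # sample rate (assumed)
t = np.arange(5 * FS) / FS
s1 = 0.1 * np.sin(2 * np.pi * 22_000 * t)           # known ultrasonic source

# Simulated microphone capture: source convolved with a short synthetic room
# impulse response plus a small amount of noise (placeholder for real audio).
h11 = np.r_[np.zeros(40), 1.0, 0.4, 0.1]
x1 = np.convolve(s1, h11)[:len(s1)] + 1e-4 * np.random.randn(len(s1))

err = predict_error(x1, [s1])                       # reference x2 = s1
flags = detect_motion(err)
print("frames flagged as motion:", int(flags.sum()))
```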
  • FIG. 7 is another example of the first category of active standby mode motion detection. In this example, the endpoint controller of the endpoint 104 predicts a delayed time-frequency representation of a primary audio signal from a reference audio signal from a source audio speaker, and a microphone signal. Endpoint 104 includes microphone 216(1), speaker 224(1), microphone 216(3), filter banks 226(1)-(3), adaptive filters 228(2)-(3), delay operator 230, and detector 232. Speaker 224(2) is external to the endpoint.
  • In this example, microphone signal x1 is the primary audio signal and audio signals x2, x3 are reference audio signals. As in FIG. 5, filter banks 226(1)-(3) respectively partition the audio signals x1, x2, x3 into time-frequency domain signals X1(m,k), X2(m,k), X3(m,k). The delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller of the endpoint 104 respectively feeds the time-frequency domain representations of the reference audio signals, X2(m,k), X3(m,k), through adaptive filters 228(2)-(3) and generates a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D, k). The endpoint controller generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal, and determines whether the prediction error is indicative of the motion of one or more persons near the endpoint 104.
  • Mathematically, the matrix in (8) reduces to
  • H(f) = \begin{bmatrix} 1 & 0 \\ H_{31}(f) & H_{32}(f) \end{bmatrix}.   (32)
  • This matrix has linearly independent columns in general, which means that (13) can be rewritten as

  • X 1(f)={tilde over (H)} 12(f)X 2(f)+{tilde over (H)} 13(f)X 3(f)+Ñ 1(f),   (33)
  • where {tilde over (H)}12(f) and {tilde over (H)}13(f) are functions of H11(f), H12(f), H31(f), and H32(f). The noise Ñ1(f)=N1(f)−{tilde over (H)}13(f)N3(f). Hence, the endpoint controller of the endpoint 104 may generate the predicted signal {circumflex over (X)}1(m−D,k) by filtering X2(m,k), X3(m,k) through adaptive filters 228(2)-(3).
  • As mentioned above, FIGS. 8-9 illustrate a second category of active standby mode embodiments in which an integrated speaker serves as a primary audio device. Turning first to FIG. 8, a generalized signal processing diagram is shown for predicting the delayed time-frequency representation of a primary audio signal x1 from a plurality of microphone signals x2 . . . xJ in accordance with examples presented herein. Speakers 224(2)-(I) are external to endpoint 104, which includes speaker 224(1), and microphones 216(2)-(J). Endpoint 104 also includes filter banks 226(1)-(J), adaptive filters 228(2)-(J), delay operator 230, and detector function 232.
  • In this example, audio signal x1 is the primary audio signal and microphone signals x2-xJ are reference audio signals. Speakers 224(1)-(I) produce respective source audio signals s1(t), s2(t), . . . , sI(t). Each microphone 216(2)-(J) captures the source audio signals s1(t), s2(t), . . . , sI(t) after the source audio signals have traversed the room and, in response, produces a respective signal x2, x3, . . . , xJ. In addition, the endpoint controller of the endpoint generates primary audio signal x1 for source audio signal s1 since the speaker 224(1) is integrated into the endpoint 104 and, as such, the endpoint controller knows the source audio signal s1. In other words, the speaker 224(1) serves as both a source audio device and a primary audio device. Each signal x2, x3, . . . , xJ is a superposition of source audio signals convolved with impulse responses (e.g., h21, h22, etc.) and a noise signal (e.g., n2, etc.). For purposes of viewability, only the impulse responses and noise signal corresponding to microphone 216(2) are shown.
  • The endpoint 104 detects motion by predicting the delayed time-frequency representation of the primary signal x1(t) from the time-frequency representations of the reference signals x2(t), x3(t), . . . , xJ(t). In other words, X1(m−D,k) is predicted from X2(m,k), X3(m,k), . . . , XJ(m,k). The endpoint controller of the endpoint 104 feeds each audio signal x1, x2, x3, . . . , xJ through a respective filter bank 226(1)-(J). These filter banks 226(1)-(J) respectively partition the audio signals x1, x2, x3, . . . , xJ into time-frequency domain signals X1(m,k), X2(m,k), X3(m,k), . . . , XJ(m,k), where m and k denote a frame index and a frequency bin index, respectively. The delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller of the endpoint 104 respectively feeds the time-frequency domain representations of the reference audio signals, X2(m,k), X3(m,k), . . . , XJ(m,k), through adaptive filters 228(2)-(J). The endpoint controller sums the contributions of the filtered time-frequency domain representations of the reference audio signals to generate a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal X1(m−D, k). The endpoint controller generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal. As mentioned above, the endpoint controller determines whether the prediction error is indicative of the motion of one or more persons near the collaboration endpoint 104 and, if the endpoint controller determines that there is motion of one or more persons near the endpoint 104, the endpoint controller may activate endpoint components in anticipation of a multimedia meeting.
  • Like in the example of FIG. 5, much of the mathematical framework for passive mode motion detection described above applies here to active mode motion detection. In this case, H1(f)=[H11(f), H12(f), . . . , H1I(f)]T=[1, 0, . . . 0]T and N1(f)=0.
  • FIG. 9 is an example of the second category of active standby mode motion detection in which an endpoint controller of the endpoint uses a reference signal from a microphone to predict the delayed time-frequency representation of a primary audio signal. As shown, endpoint 104 includes speaker 224(1), microphone 216(1), filter banks 226(1)-(2), adaptive filter 228(2), delay operator 230, and detector 232. There are no external speakers in this example.
  • In this example, audio signal x1 is the primary audio signal and audio signal x2 is the reference audio signal. As in FIG. 8, filter banks 226(1)-(2) respectively partition the audio signals x1, x2 into time-frequency domain signals X1(m,k), X2(m,k). The delay operator 230 delays the time-frequency representation of the primary audio signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller of the endpoint 104 feeds the time-frequency domain representation of the reference signal, X2(m,k), through adaptive filter 228(2) and generates a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D, k). The endpoint controller generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal, and determines whether the prediction error is indicative of the motion of one or more persons near the endpoint 104.
  • In this case the matrix in (8) reduces to the scalar

  • H(f)=H 21(f),   (34)
  • the channel vector in (12) reduces to the scalar
  • {tilde over (H)} 1(f)=1/H 21(f),   (35)
  • and
  • Ñ 1(f)=−(1/H 21(f))N 2(f).   (36)
  • (13) then simplifies to
  • X 1(f)=(1/H 21(f))X 2(f)−(1/H 21(f))N 2(f).   (37)
  • Hence, the endpoint controller of the endpoint 104 may generate the predicted signal {circumflex over (X)}1(m−D,k) by filtering X2(m,k) through adaptive filter 228(2).
  • FIGS. 2-9 provide examples of linear prediction (i.e., a model that is linear in its prediction parameters but not necessarily its regressors, which are in this case the time-frequency representations of the reference audio signals). However, in certain situations non-linear prediction may be desirable. For example, the impulse responses in active mode may not always be sufficiently linear and time-invariant (e.g., when a third party amplifier card is used, when a codec is playing the audio source signal through a television with a poorly designed amplifier/speaker, etc.). As such, FIG. 10 provides an example signal diagram of active motion detection using non-linear prediction.
  • More specifically, FIG. 10 is an example signal diagram in which an endpoint controller of an endpoint uses a reference audio signal from a source audio speaker to predict the delayed time-frequency representation of a primary audio signal. As shown, endpoint 104 includes microphone 216(1), speaker 224(1), filter banks 226(1)-(2), non-linear adaptive filter 1028(2), delay operator 230, and detector 232. Endpoint 104 further includes a phase operator 1034, absolute value operator 1036, and a phase application operator 1038. There are no external speakers in this example.
  • In this example, the microphone signal x1 is the primary audio signal and audio signal x2 is the reference audio signal. As in FIG. 5, filter banks 226(1)-(2) respectively partition the audio signals x1, x2 into time-frequency domain signals X1(m,k), X2(m,k). The delay operator 230 delays the time-frequency representation of the primary signal, X1(m,k), by D frames, resulting in X1(m−D, k). The endpoint controller of the endpoint 104 feeds the time-frequency domain representation of the reference signal, X2(m,k), through non-linear adaptive filter 1028(2) and generates a predicted signal {circumflex over (X)}1(m−D,k) that is predictive of the delayed time-frequency representation of the primary signal, X1(m−D, k). The endpoint controller generates prediction error E(m−D,k)=X1(m−D,k)−{circumflex over (X)}1(m−D,k) by comparing the delayed time-frequency representation of the primary audio signal with the predicted signal, and determines whether the prediction error is indicative of the motion of one or more persons near the collaboration endpoint 104.
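  • The text above does not spell out the exact FIG. 10 signal flow, so the following sketch is only one plausible interpretation of the listed operators: the reference magnitude (absolute value operator 1036) drives an adaptive predictor of the delayed primary magnitude, and the primary's own phase (phase operator 1034 and phase application operator 1038) is reapplied before forming the prediction error. All parameter values and the exact update rule are assumptions.

```python
# Hedged, assumed interpretation of a magnitude-domain (non-linear overall)
# predictor with phase reapplication; not taken verbatim from the patent.
import numpy as np

def nonlinear_predict_error(X1, X2, D=2, taps=4, mu=0.5, eps=1e-8):
    """X1, X2: complex STFTs (M, K) of primary and reference signals."""
    M, K = X1.shape
    W = np.zeros((taps, K))                       # real coefficients on magnitudes
    err = np.zeros((M, K))
    for m in range(taps - 1 + D, M):
        U = np.abs(X2[m - taps + 1:m + 1])        # reference magnitudes
        mag_hat = np.sum(W * U, axis=0)           # predicted |X1(m-D,k)|
        phase = np.angle(X1[m - D])               # phase operator on the primary
        X1_hat = mag_hat * np.exp(1j * phase)     # phase application operator
        E = X1[m - D] - X1_hat                    # prediction error E(m-D,k)
        err[m - D] = np.abs(E) ** 2
        mag_err = np.abs(X1[m - D]) - mag_hat
        W += mu * U * mag_err / (np.sum(U ** 2, axis=0) + eps)  # NLMS-style update
    return err
```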
  • FIG. 11 is a block diagram of endpoint controller 120, which is configured to execute the operations described herein. In this example, the endpoint controller 120 includes one or more application-specific integrated circuits (ASICs) 1140, a memory 1142, which stores or is encoded with instructions for the motion detection logic 122, and one or more processors 1146. The one or more ASICs 1140 may be customized for one or more particular uses (e.g., audio/video signal processing). The one or more processors 1146 are configured to execute instructions stored in the memory 1142 (e.g., motion detection logic 122). When executed by the one or more processors 1146, the motion detection logic 122 enables the endpoint controller 120 to perform the operations described herein in connection with FIGS. 1-10 and 12.
  • The memory 1142 may include read only memory (ROM), random access memory (RAM), magnetic disk storage media devices, optical storage media devices, flash memory devices, electrical, optical, or other physical/tangible memory storage devices. Thus, in general, the memory 1142 may include one or more tangible (non-transitory) computer readable storage media (e.g., a memory device) encoded with software comprising computer executable instructions and when the software is executed (by the processor 1146) it is operable to perform the operations described herein.
  • FIG. 12 is a flowchart depicting a generalized method 1200 in accordance with examples presented herein. At 1210, an endpoint controller of a collaboration endpoint generates a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker. At 1220, the endpoint controller generates a reference audio signal for the ultrasonic source audio signal. At 1230, the endpoint controller generates, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal. At 1240, the endpoint controller produces a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal. At 1250, the endpoint controller determines whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
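  • For illustration, the steps of method 1200 can be mapped onto the helpers defined in the earlier sketches roughly as follows; the mapping assumes the active-mode case in which the known loudspeaker samples serve as the reference, and it is not part of the claimed method.

```python
# Hedged sketch mapping method 1200 onto the earlier helper functions.
def run_motion_detection(mic_samples, loudspeaker_samples):
    # 1210/1220: primary audio signal from the microphone; reference audio
    # signal from the known ultrasonic loudspeaker signal.
    x1, x2 = mic_samples, loudspeaker_samples
    # 1230/1240: generate the predicted signal and produce the prediction
    # error (predict_error() returns |E(m-D,k)|^2 per frame and bin).
    err = predict_error(x1, [x2])
    # 1250: determine whether the prediction error indicates motion of one
    # or more persons near the collaboration endpoint.
    return detect_motion(err)
```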
  • Using the techniques presented herein, collaboration endpoints may accurately detect motion in a conference room in which the endpoint is located. Upon detecting motion, a collaboration endpoint may enter active mode and, for example, display a friendly and informative greeting to the user. Endpoint motion detection may also be used to trigger burglar alarms, control lights/blinds, record and/or analyze the usage of the conference room, provide up-to-date information regarding the usage of the conference room, etc.
  • These techniques are power efficient because the endpoint may continuously monitor for motion while remaining in standby mode. Moreover, these techniques may be deployed on existing and/or future (to be developed) endpoints. In addition, extra/additional hardware may not be required, thereby minimizing cost and complexity. Further, these techniques may remain effective even when impulse responses are non-linear and/or time-variant due to, for example, systems having poor quality amplifier cards, or when an ultrasonic pairing signal is played through a television with poor quality amplifiers and/or speakers.
  • The techniques described herein may also be desirable in situations involving multiple collaboration endpoints in the same room that each produce ultrasonic pairing signals. This is often the case in office spaces and classrooms where each user has a personal endpoint in the same room. These techniques may also be compatible with ultrasonic noise from other equipment, such as third party motion detectors in the ceiling. In addition, they are operable in large rooms that are not properly covered with an optimal number of microphones.
  • The number of signals used for motion detection by the endpoint is at least one more than the number of audio source signals. Thus, since one signal is used as the primary signal, the number of reference audio signals is equal to at least the number of audio source signals. In addition, it may be desirable to locate microphones far from speakers. In an example, microphones may be located throughout a conference room so as to provide accurate motion detection coverage for the entire room.
  • In one form, a method is provided. The method comprises: at a controller of a collaboration endpoint, generating a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker; generating a reference audio signal for the ultrasonic source audio signal; generating, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal; producing a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal; and determining whether the prediction error is indicative of motion of one or more persons near the collaboration endpoint.
  • In another form, an apparatus is provided. The apparatus comprises: a source audio speaker; at least one microphone of a collaboration endpoint; and one or more processors coupled to a memory, wherein the one or more processors are configured to: generate a primary audio signal for an ultrasonic source audio signal produced by the source audio speaker; generate a reference audio signal for the ultrasonic source audio signal; generate, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal; produce a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal; and determine whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
  • In another form, one or more non-transitory computer readable storage media are provided. The non-transitory computer readable storage media are encoded with instructions that, when executed by a processor of a collaboration endpoint, cause the processor to: generate a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker; generate a reference audio signal for the ultrasonic source audio signal; generate, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal; produce a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal; and determine whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
  • The above description is intended by way of example only. Although the techniques are illustrated and described herein as embodied in one or more specific examples, it is nevertheless not intended to be limited to the details shown, since various modifications and structural changes may be made within the scope and range of equivalents of the claims.

Claims (1)

What is claimed is:
1. A method comprising:
at a controller of a collaboration endpoint, generating a primary audio signal for an ultrasonic source audio signal produced by a source audio speaker;
generating a reference audio signal for the ultrasonic source audio signal;
generating, based on the reference audio signal, a predicted signal that is predictive of the primary audio signal;
producing a prediction error of the predicted signal by comparing the primary audio signal with the predicted signal; and
determining whether the prediction error is indicative of a motion of one or more persons near the collaboration endpoint.
US16/655,601 2017-04-25 2019-10-17 Audio based motion detection Abandoned US20200116820A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/655,601 US20200116820A1 (en) 2017-04-25 2019-10-17 Audio based motion detection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US15/496,362 US10473751B2 (en) 2017-04-25 2017-04-25 Audio based motion detection
US16/655,601 US20200116820A1 (en) 2017-04-25 2019-10-17 Audio based motion detection

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/496,362 Continuation US10473751B2 (en) 2017-04-25 2017-04-25 Audio based motion detection

Publications (1)

Publication Number Publication Date
US20200116820A1 true US20200116820A1 (en) 2020-04-16

Family

ID=63853821

Family Applications (2)

Application Number Title Priority Date Filing Date
US15/496,362 Active 2038-04-01 US10473751B2 (en) 2017-04-25 2017-04-25 Audio based motion detection
US16/655,601 Abandoned US20200116820A1 (en) 2017-04-25 2019-10-17 Audio based motion detection

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US15/496,362 Active 2038-04-01 US10473751B2 (en) 2017-04-25 2017-04-25 Audio based motion detection

Country Status (1)

Country Link
US (2) US10473751B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
USRE49288E1 (en) 2017-06-23 2022-11-08 Cisco Technology, Inc. Endpoint proximity pairing using acoustic spread spectrum token exchange and ranging information
USRE49462E1 (en) 2018-06-15 2023-03-14 Cisco Technology, Inc. Adaptive noise cancellation for multiple audio endpoints in a shared space

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10473751B2 (en) * 2017-04-25 2019-11-12 Cisco Technology, Inc. Audio based motion detection
JP6633139B2 (en) * 2018-06-15 2020-01-22 レノボ・シンガポール・プライベート・リミテッド Information processing apparatus, program and information processing method
US11395091B2 (en) 2020-07-02 2022-07-19 Cisco Technology, Inc. Motion detection triggered wake-up for collaboration endpoints
US10992905B1 (en) 2020-07-02 2021-04-27 Cisco Technology, Inc. Motion detection triggered wake-up for collaboration endpoints

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8096937B2 (en) 2005-01-11 2012-01-17 Otologics, Llc Adaptive cancellation system for implantable hearing instruments
US8180068B2 (en) 2005-03-07 2012-05-15 Toa Corporation Noise eliminating apparatus
NO332437B1 (en) 2010-01-18 2012-09-17 Cisco Systems Int Sarl Apparatus and method for suppressing an acoustic echo
US9363386B2 (en) * 2011-11-23 2016-06-07 Qualcomm Incorporated Acoustic echo cancellation based on ultrasound motion detection
US9473865B2 (en) 2012-03-01 2016-10-18 Conexant Systems, Inc. Integrated motion detection using changes in acoustic echo path
US20160044394A1 (en) 2014-08-07 2016-02-11 Nxp B.V. Low-power environment monitoring and activation triggering for mobile devices through ultrasound echo analysis
US9800964B2 (en) 2014-12-29 2017-10-24 Sound Devices, LLC Motion detection for microphone gating
US9319633B1 (en) 2015-03-19 2016-04-19 Cisco Technology, Inc. Ultrasonic echo canceler-based technique to detect participant presence at a video conference endpoint
US10473751B2 (en) * 2017-04-25 2019-11-12 Cisco Technology, Inc. Audio based motion detection
US10267912B1 (en) * 2018-05-16 2019-04-23 Cisco Technology, Inc. Audio based motion detection in shared spaces using statistical prediction

Also Published As

Publication number Publication date
US10473751B2 (en) 2019-11-12
US20180306900A1 (en) 2018-10-25

Similar Documents

Publication Publication Date Title
US10473751B2 (en) Audio based motion detection
EP3271744B1 (en) Ultrasonic echo canceler-based technique to detect participant presence at a video conference endpoint
EP3857911B1 (en) Linear filtering for noise-suppressed speech detection via multiple network microphone devices
DiBiase A high-accuracy, low-latency technique for talker localization in reverberant environments using microphone arrays
US9462552B1 (en) Adaptive power control
US9385779B2 (en) Acoustic echo control for automated speaker tracking systems
Schmalenstroeer et al. Online diarization of streaming audio-visual data for smart environments
EP2005705B1 (en) System and method for enhanced teleconferencing security
KR102409536B1 (en) Event detection for playback management on audio devices
JP2017530396A (en) Method and apparatus for enhancing a sound source
US10267912B1 (en) Audio based motion detection in shared spaces using statistical prediction
US20140329511A1 (en) Audio conferencing
Guo et al. Evaluation of state-of-the-art acoustic feedback cancellation systems for hearing aids
US20190253796A1 (en) Audio feedback reduction utilizing adaptive filters and nonlinear processing
JP2022542962A (en) Acoustic Echo Cancellation Control for Distributed Audio Devices
JP2024507916A (en) Audio signal processing method, device, electronic device, and computer program
US20210294424A1 (en) Auto-framing through speech and video localizations
CN114143668A (en) Audio signal processing, reverberation detection and conference method, apparatus and storage medium
CN112447184A (en) Voice signal processing method and device, electronic equipment and storage medium
Cohen et al. An online algorithm for echo cancellation, dereverberation and noise reduction based on a Kalman-EM Method
Romoli et al. Multichannel acoustic echo cancellation exploiting effective fundamental frequency estimation
Favrot et al. Adaptive equalizer for acoustic feedback control
US20240160399A1 (en) Spatial Rediscovery Using On-Device Hardware
Küçük et al. Direction of arrival estimation using deep neural network for hearing aid applications using smartphone
Bian et al. Sound source localization in domestic environment

Legal Events

Date Code Title Description
STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION