WO2018164438A1 - Method and apparatus for in-room low-frequency sound power optimization - Google Patents

Method and apparatus for in-room low-frequency sound power optimization

Info

Publication number
WO2018164438A1
Authority
WO
WIPO (PCT)
Prior art keywords
sound power
speaker driver
velocity
sound pressure
room
Prior art date
Application number
PCT/KR2018/002608
Other languages
English (en)
French (fr)
Inventor
Adrian CELESTINOS ARROYO
Original Assignee
Samsung Electronics Co., Ltd.
Priority date
Filing date
Publication date
Priority claimed from US15/806,991 external-priority patent/US10469046B2/en
Application filed by Samsung Electronics Co., Ltd. filed Critical Samsung Electronics Co., Ltd.
Priority to EP18763636.0A priority Critical patent/EP3583783A4/en
Priority to CN201880017341.6A priority patent/CN110402585B/zh
Publication of WO2018164438A1 publication Critical patent/WO2018164438A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S7/30 Control circuits for electronic adaptation of the sound field
    • H04S7/301 Automatic calibration of stereophonic sound system, e.g. with test microphone
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03G CONTROL OF AMPLIFICATION
    • H03G3/00 Gain control in amplifiers or frequency changers without distortion of the input signal
    • H03G3/20 Automatic control
    • H03G3/30 Automatic control in amplifiers having semiconductor devices
    • H03G3/32 Automatic control in amplifiers having semiconductor devices the control being dependent upon ambient noise level or sound level
    • H ELECTRICITY
    • H03 ELECTRONIC CIRCUITRY
    • H03G CONTROL OF AMPLIFICATION
    • H03G5/00 Tone control or bandwidth control in amplifiers
    • H03G5/16 Automatic control
    • H03G5/165 Equalizers; Volume or gain control in limited frequency bands
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R29/00 Monitoring arrangements; Testing arrangements
    • H04R29/001 Monitoring arrangements; Testing arrangements for loudspeakers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/002 Damping circuit arrangements for transducers, e.g. motional feedback circuits
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00 Circuits for transducers, loudspeakers or microphones
    • H04R3/04 Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R7/00 Diaphragms for electromechanical transducers; Cones
    • H04R7/02 Diaphragms for electromechanical transducers; Cones characterised by the construction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S2400/13 Aspects of volume control, not necessarily automatic, in stereophonic sound systems

Definitions

  • the present disclosure relates generally to loudspeakers, and in particular, to a sound power optimization system.
  • a loudspeaker produces sound when connected to an integrated amplifier, a television (TV) set, a radio, a music player, an electronic sound producing device (e.g., a smartphone), a video player, etc.
  • An exemplary embodiment of the disclosure may provide a system and method for in-room sound field control.
  • One embodiment provides a device comprising a speaker driver, a microphone configured to obtain a measurement of a near-field sound pressure of the speaker driver, and a controller.
  • the controller is configured to determine a velocity of a diaphragm of the speaker driver, and automatically calibrate sound power levels of audio reproduced by the speaker driver based on the velocity and the measurement of the near-field sound pressure to automatically adjust the sound power levels to an acoustic environment of the device.
  • FIG. 1 illustrates an example sound power optimization system according to various embodiments of the present disclosure
  • FIG. 2 is a cross section of an example loudspeaker device according to various embodiments of the present disclosure
  • FIG. 3A illustrates a first example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 3B illustrates a second example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 3C illustrates a third example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 3D illustrates a fourth example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 3E illustrates a fifth example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 3F illustrates a sixth example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 3G illustrates a seventh example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 3H illustrates an eighth example microphone position for a microphone according to various embodiments of the present disclosure
  • FIG. 4 is an example graph illustrating errors in estimated in-room total sound power output for different microphone positions according to various embodiments of the present disclosure
  • FIG. 5 is an example graph illustrating an impedance curve for an example closed-box loudspeaker device according to various embodiments of the present disclosure
  • FIG. 6 is an example graph illustrating near-field sound pressure and velocity of a diaphragm of a speaker driver according to various embodiments of the present disclosure
  • FIG. 7 is an example graph illustrating phase difference between near-field sound pressure and velocity of a diaphragm of a speaker driver according to various embodiments of the present disclosure
  • FIG. 8 is an example flowchart of a sound power optimization system for estimating an in-room total sound power output according to various embodiments of the present disclosure
  • FIG. 9 is an example graph illustrating alignment of a phase of velocity of a diaphragm of a speaker driver with a phase of near-field sound pressure at about 20 Hertz (Hz) according to various embodiments of the present disclosure
  • FIG. 10 is an example graph illustrating fitting of a phase curve for near-field sound pressure according to various embodiments of the present disclosure
  • FIG. 11 is an example graph illustrating a phase curve for a product term over the frequency domain according to various embodiments of the present disclosure
  • FIG. 12 is an example graph illustrating a phase curve for a product term over the frequency domain according to various embodiments of the present disclosure
  • FIG. 13 is an example graph illustrating near-field sound pressure and complex conjugate of velocity of a diaphragm of a speaker driver according to various embodiments of the present disclosure
  • FIG. 14 is an example graph illustrating phase difference between near-field sound pressure and complex conjugate of velocity of a diaphragm of a speaker driver according to various embodiments of the present disclosure
  • FIG. 15 is an example graph illustrating estimated in-room total sound power output and actual in-room total sound power output according to various embodiments of the present disclosure
  • FIG. 16 is an example graph illustrating estimated in-room total sound power output, pre-determined target/desired sound power output, and equalized sound power output according to various embodiments of the present disclosure
  • FIG. 17 is an example graph illustrating measured sound power output radiated from the loudspeaker device before and after auto-equalization according to various embodiments of the present disclosure
  • FIG. 18 is an example flowchart of a process for a sound power optimization system according to various embodiments of the present disclosure.
  • FIG. 19 is a high-level block diagram showing an information processing system comprising a computer system useful for implementing the various embodiments of the present disclosure.
  • One or more embodiments relate generally to loudspeakers, and in particular, to a sound power optimization system.
  • One embodiment provides a device comprising a speaker driver, a microphone configured to obtain a measurement of a near-field sound pressure of the speaker driver, and a controller.
  • the controller is configured to determine a velocity of a diaphragm of the speaker driver, and automatically calibrate sound power levels of audio reproduced by the speaker driver based on the velocity and the measurement of the near-field sound pressure to automatically adjust the sound power levels to an acoustic environment of the device.
  • a loudspeaker device is positioned/placed in a room. For example, at low frequencies where sound wavelengths are similar to physical dimensions of the room, total sound power output of the loudspeaker device may be affected by resonances in the room, resulting in peaks and valleys that deteriorate spectral uniformity reproduced by the loudspeaker device. If no steps are taken to remedy the effects of the resonances, bass reproduced by the loudspeaker device may be perceived to be weak in some regions in the frequency domain and overpowering in other regions in the frequency domain where the resonances are excited, depending on a position/location of the loudspeaker device in the room.
  • One embodiment provides a system and method for in-room sound field control.
  • the system and method automatically enhance total sound power output of a loudspeaker device in a room based on a position/location of the loudspeaker device in the room.
  • One embodiment provides a system comprising a loudspeaker device, at least one microphone for measuring near-field sound pressure of the loudspeaker device, and at least one sensor device for sensing current of the loudspeaker device. Based on the current sensed, the system determines a velocity of a diaphragm of a speaker driver (e.g., a tweeter, a woofer, etc.) of the loudspeaker device. Based on the velocity determined and the near-field sound pressure measured, the system determines total sound power output radiated from the loudspeaker device and adjusts the total sound power output based on a pre-determined target sound power output. In one example implementation, the total sound power output is improved or optimized to the pre-determined target.
  • the system utilizes only one microphone positioned in front of the diaphragm of the speaker driver and only one sensor device, thereby removing the need for a mechanical moving device.
  • the system provides smooth bass response in a room without needing to obtain measurements at different listening positions in the room.
  • the system automatically adjusts total sound power output radiated from the loudspeaker device based on acoustic conditions of the room (e.g., physical dimensions such as ceiling height, moving the loudspeaker device 101 from one position to another position in the room, changes resulting from one or more physical structures in the room, such as opening all doors, closing a room divider, opening a car window, air-conditioning turned on, etc.) and a position/location of the loudspeaker device in the room, thereby improving overall listening experience by increasing clarity and spectral uniformity of sound/audio reproduced by the loudspeaker device.
  • the system only requires one measurement (i.e., the near-field sound pressure) to automatically equalize total sound power output radiated from the loudspeaker device in the room.
  • FIG. 1 illustrates an example sound power optimization system 100 according to various embodiments of the present disclosure.
  • the sound power optimization system 100 comprises a loudspeaker device 101 positioned/located in a room.
  • the loudspeaker device 101 is a closed-box loudspeaker comprising a speaker housing 210 (FIG. 2) including at least one speaker driver 220 (FIG. 2) for reproducing sound, such as a woofer, etc.
  • at least one speaker driver 220 is a forward-facing speaker driver.
  • at least one speaker driver 220 is an upward-facing driver.
  • at least one speaker driver 220 is a downward-facing driver.
  • the system 100 further comprises at least one microphone 102 for capturing audio signals. As described in detail later herein, the audio signals captured are used to measure near-field sound pressure of the loudspeaker device 101.
  • the microphone 102 may be positioned/placed in different positions relative to a speaker driver 220.
  • the system 100 comprises only one microphone 102 positioned/placed as close as possible to a diaphragm 230 (FIG. 2) of a speaker driver 220.
  • the microphone 102 is attached to the diaphragm 230 of the speaker driver 220.
  • the microphone 102 is positioned/placed substantially about 1 inch in front of the diaphragm 230 of the speaker driver 220.
  • the system 100 further comprises at least one microphone pre-amplifier 103 connected to at least one microphone 102 for amplifying audio signals captured by the microphone 102.
  • the system 100 further comprises a current and voltage sensor device 104 connected to the loudspeaker device 101 for sensing current and voltage of the loudspeaker device 101.
  • the sensor device 104 is connected to terminals of the speaker driver 220.
  • the system 100 further comprises an analog-to-digital (A/D) converter 105 comprising multiple input channels.
  • the A/D converter 105 is configured to: (1) receive a first input from the sensor device 104 via a first input channel (“I1”), (2) receive a second input from the sensor device 104 via a second input channel (“I2”), and (3) receive a third input via a third input channel (“I3”).
  • the A/D converter 105 converts each analog input received via an input channel to digital signals (e.g., analog audio from the media player 112).
  • the first input comprises information indicative of a current, sensed by the sensor device 104, of the loudspeaker device 101.
  • the second input comprises information indicative of a voltage, sensed by the sensor device 104, of the loudspeaker device 101.
  • the system 100 switches between the microphone pre-amplifier 103 and a media player 112 as a source for the third input. If the A/D converter 105 receives the third input from the microphone pre-amplifier 103, the third input comprises amplified audio signals captured by a microphone 102 and amplified by the microphone pre-amplifier 103. If the A/D converter 105 receives the third input from the media player 112, the third input comprises audio for reproduction by the loudspeaker device 101.
  • the media player 112 comprises, but is not limited to, a mobile electronic device (e.g., a smartphone, a laptop, a tablet, etc.), a content playback device (e.g., a television, a radio, a music player such as a CD player, a video player such as a DVD player, a turntable, etc.), an audio receiver, etc.
  • the system 100 further comprises a sound power estimation unit 110.
  • the sound power estimation unit 110 operates as a controller configured to initiate and perform automatic calibration of sound power levels of audio reproduced by the speaker driver 220 based on velocity of the diaphragm 230 of the speaker driver 220 and measurement of the near-field sound pressure to automatically adjust the sound power levels to an acoustic environment of the loudspeaker device 101.
  • the automatic calibration performed by the sound power estimation unit 110 comprises estimating in-room total sound power output radiated from the loudspeaker device 101 based on digital signals from the A/D converter 105.
  • the terms “sound power estimation unit” and “controller” are used interchangeably in this specification.
  • the system 100, in response to the sound power estimation unit 110 initiating and performing automatic calibration of sound power levels of audio reproduced by the speaker driver 220, switches to the microphone pre-amplifier 103 as a source for the third input (i.e., the A/D converter 105 receives the third input from the microphone pre-amplifier 103 during the calibration). After the calibration, the system 100 automatically switches back to the media player 112 as a source for the third input (i.e., the A/D converter 105 receives the third input from the media player 112 after the calibration).
  • the system 100 further comprises a digital filter 111.
  • the digital filter 111 is an infinite impulse response (IIR) filter or a minimum-phase finite impulse response (FIR) filter.
  • the digital filter 111 is configured to: (1) receive, from the sound power estimation unit 110, an estimated in-room total sound power output radiated from the loudspeaker device 101, and (2) adjust the estimated in-room total sound power output based on a pre-determined target sound power output. In one embodiment, the digital filter 111 improves or optimizes the estimated in-room total sound power output to the pre-determined target.
  • the system 100 further comprises an auto-equalization (auto-EQ) filter 106 configured to: (1) receive, from the digital filter 111, estimated in-room total sound power output, and (2) receive, from the A/D converter 105, digital signals.
  • auto-EQ filter 106 is configured to automatically equalize the estimated in-room total sound power output.
  • the system 100 further comprises a digital-to-analog (D/A) converter 108 configured to: (1) receive, from the auto-EQ filter 106, equalized in-room total sound power output, and (2) convert the equalized in-room total sound power output to analog signals.
  • the system 100 further comprises an amplifier 109 configured to: (1) receive analog signals from the D/A converter 108, (2) amplify the analog signals, and (3) forward the amplified analog signals to the loudspeaker device 101 for reproduction by at least one speaker driver 220.
  • the amplified signals may also be forwarded to the sensor device 104 to create a dynamic feedback loop.
  • the system 100 further comprises a signal generator 107 configured to acquire actual in-room total sound power output for the loudspeaker device 101 in response to the sound power estimation unit 110 initiating calibration.
  • the actual in-room total sound power output is based on measurements from multiple microphones placed at different locations of the room.
  • the system 100 may be integrated in, but not limited to, one or more of the following: a smart device (e.g., smart TV), a subwoofer, wireless and portable speakers, car speakers, etc.
  • FIG. 2 is a cross section of an example loudspeaker device 101 according to various embodiments of the present disclosure.
  • the loudspeaker device 101 is a closed-box loudspeaker comprising a speaker housing 210 including a speaker driver 220 (e.g., a woofer, etc.) for reproducing sound.
  • the speaker driver 220 is a forward-facing speaker driver with a diaphragm 230 disposed along a front face 210F of the speaker housing 210.
  • the system 100 utilizes only one microphone 102 positioned as close as possible to the diaphragm 230, thereby removing the need for a mechanical moving device.
  • the microphone 102 is positioned/placed substantially about 1 inch in front of the diaphragm 230.
  • Let W source denote an actual radiated sound power output from a compact sound source (e.g., a speaker driver 220).
  • the sound power output W source may be determined in accordance with equations (1)-(2) provided below:
  • Z rad is a complex radiation impedance for the compact sound source in the frequency domain
  • p source is a complex pressure for the compact sound source
  • U is a complex volume velocity for the compact sound source
  • Re{Z rad } is the real part of the complex radiation impedance of the compact sound source
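  • equations (1)-(2) do not appear in the extracted text (they were likely images). A standard acoustics reconstruction consistent with the definitions above, offered as an assumption rather than a verbatim copy of the patent's equations, is: $Z_{rad} = p_{source}/U$ (1) and $W_{source} = \tfrac{1}{2}\,\mathrm{Re}\{p_{source}\,U^{*}\} = \tfrac{1}{2}\,|U|^{2}\,\mathrm{Re}\{Z_{rad}\}$ (2).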
  • the system 100 is configured to estimate an in-room total sound power output W radiated from the closed-box loudspeaker device 101 (FIG. 2) in accordance with equation (3) provided below:
  • p is a near-field sound pressure in front of the diaphragm 230 (FIG. 2) of the speaker driver 220 (FIG. 2) of the loudspeaker device 101
  • u* is the complex conjugate of a velocity u of the diaphragm 230
  • real is the real part of the product term pu*.
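  • equation (3) likewise does not appear in the extracted text; based on the definitions above, it most plausibly takes the form $W \propto \mathrm{Re}\{p\,u^{*}\}$, i.e., the real part of the product term pu* (any constant scaling factor, such as one involving the effective diaphragm area, is an assumption not confirmed by the extracted text).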
  • the system 100 is configured to perform the following steps: (1) obtain a measurement of the near-field sound pressure p and an impedance Z of the speaker driver 220, (2) determine a velocity u of the diaphragm 230, (3) apply phase correction of the velocity u to obtain the complex conjugate u*, and (4) estimate the in-room total sound power output W based in part on the near-field sound pressure p and the complex conjugate u* (i.e., see equation (3) provided above).
  • the system 100 obtains a measurement of the near-field sound pressure p at discrete frequencies (e.g., frequencies within the 20 Hz to 400 Hz frequency range) in the frequency domain by applying a multisine algorithm for frequency response estimation based on audio signals captured by the microphone 102.
  • the microphone 102 is attached as close as possible to the diaphragm 230 of the speaker driver 220 (e.g., approximately 1 inch in front of the diaphragm 230). In other embodiments, the microphone 102 may be positioned/placed in different positions relative to the speaker driver 220.
  • FIGS. 3A-3H illustrate different microphone positions for a microphone according to various embodiments of the present disclosure.
  • FIG. 3A illustrates a first example microphone position 310A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to the front face 210F of the speaker housing 210 and positioned substantially about a corner end of the front face 210F and above the diaphragm 230, in accordance with an embodiment.
  • FIG. 3B illustrates a second example microphone position 320A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to a sidewall 210S of the speaker housing 210 and positioned substantially about a corner end of a proximate edge of the sidewall 210S, in accordance with an embodiment.
  • FIG. 3C illustrates a third example microphone position 330A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to the front face 210F of the speaker housing 210 and positioned to a side of the diaphragm 230, in accordance with an embodiment.
  • FIG. 3D illustrates a fourth example microphone position 340A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to a top face 210T of the speaker housing 210 and positioned substantially about a distal edge of the top face 210T, in accordance with an embodiment.
  • FIG. 3E illustrates a fifth example microphone position 350A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to the top face 210T of the speaker housing 210 and positioned substantially about a center of the top face 210T, in accordance with an embodiment.
  • FIG. 3F illustrates a sixth example microphone position 360A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to a sidewall 210S of the speaker housing 210 and positioned substantially about a center of a proximate edge of the sidewall 210S, in accordance with an embodiment.
  • FIG. 3G illustrates a seventh example microphone position 370A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to the top face 210T of the speaker housing 210 and positioned substantially about a center of a proximate edge of the top face 210T, in accordance with an embodiment.
  • FIG. 3H illustrates an eighth example microphone position 380A for the microphone 102 according to various embodiments of the present disclosure, wherein the microphone 102 is attached to a center of the diaphragm 230, in accordance with an embodiment.
  • FIG. 4 is an example graph 400 illustrating errors in estimated in-room total sound power output W for different microphone positions according to various embodiments of the present disclosure.
  • a vertical axis of the graph 400 represents sound power levels expressed in decibel (dB) units.
  • a horizontal axis of the graph 400 represents frequency values in the frequency domain expressed in Hertz (Hz) units.
  • the graph 400 comprises each of the following curves: (1) a first curve 310B representing an error (e.g., average of 2.58 dB) between an estimated in-room total sound power output W for the first microphone position 310A (FIG. 3A) and the actual in-room total sound power output,
  • a second curve 320B representing an error (e.g., average of 2.85 dB) between an estimated in-room total sound power output W for the second microphone position 320A (FIG. 3B) and the actual in-room total sound power output
  • a third curve 330B representing an error (e.g., average of 2.53 dB) between an estimated in-room total sound power output W for the third microphone position 330A (FIG. 3C) and the actual in-room total sound power output,
  • a fourth curve 340B representing an error (e.g., average of 3.84 dB) between an estimated in-room total sound power output W for the fourth microphone position 340A (FIG. 3D) and the actual in-room total sound power output
  • a fifth curve 350B representing an error (e.g., average of 2.53 dB) between an estimated in-room total sound power output W for the fifth microphone position 350A (FIG. 3E) and the actual in-room total sound power output
  • a sixth curve 360B representing an error (e.g., average of 2.62 dB) between an estimated in-room total sound power output W for the sixth microphone position 360A (FIG. 3F) and the actual in-room total sound power output,
  • a seventh curve 370B representing an error (e.g., average of 2.64 dB) between an estimated in-room total sound power output W for the seventh microphone position 370A (FIG. 3G) and the actual in-room total sound power output
  • an eighth curve 380B representing an error (e.g., average of 2.56 dB) between an estimated in-room total sound power output W for the eighth microphone position 380A (FIG. 3H) and the actual in-room total sound power output.
  • an optimal microphone position for the microphone 102 may be in front of the diaphragm 230 (e.g., microphone position 340A).
  • the multisine algorithm for frequency response estimation utilizes repeated frames of multisines as an excitation signal and includes a dual channel fast Fourier transform (FFT) analysis.
  • frequencies of the sines are all harmonics of an inverse of a frame duration.
  • Phases are randomized to obtain a Gaussian amplitude distribution.
  • Frame repetition allows averaging out noise that may be included in measurements of the near-field sound pressure p, and repetition of frames with different phase patterns allows averaging out nonlinear effects.
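  • as a rough illustration of the multisine frequency-response estimation described above, the following Python sketch builds one excitation frame (sines at harmonics of the inverse frame duration with randomized phases) and averages a dual-channel FFT estimate over repeated frames; it is a minimal sketch of the general technique, not the patent's implementation, and all function and variable names are hypothetical.

```python
import numpy as np

def multisine_frame(freqs, fs, frame_len, rng):
    """One multisine frame: sines at harmonics of fs/frame_len with randomized phases."""
    t = np.arange(frame_len) / fs
    phases = rng.uniform(0.0, 2.0 * np.pi, size=len(freqs))  # random phases -> ~Gaussian amplitude distribution
    return sum(np.cos(2.0 * np.pi * f * t + ph) for f, ph in zip(freqs, phases))

def estimate_frf(excitation, response, frame_len, n_frames):
    """Dual-channel FFT estimate H(f) = Sxy/Sxx, averaged over frames to suppress noise."""
    n_bins = frame_len // 2 + 1
    Sxy = np.zeros(n_bins, dtype=complex)
    Sxx = np.zeros(n_bins)
    for i in range(n_frames):
        x = excitation[i * frame_len:(i + 1) * frame_len]
        y = response[i * frame_len:(i + 1) * frame_len]
        X, Y = np.fft.rfft(x), np.fft.rfft(y)
        Sxy += np.conj(X) * Y   # averaged cross-spectrum
        Sxx += np.abs(X) ** 2   # averaged auto-spectrum
    return Sxy / Sxx
```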
  • the estimation of in-room total sound power output using the measurement of the near-field sound pressure p is performed at the sound power estimation unit 110.
  • the system 100 determines the velocity u of the diaphragm 230 based on current I sensed/acquired by the sensor device 104 connected to the terminals of the speaker driver 220. In one embodiment, the system 100 computes the impedance Z of the speaker driver 220 in accordance with equation (4) provided below:
  • V is an input voltage to the terminals of the speaker driver 220.
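  • the missing equation (4) is, given the definitions above, the frequency-domain ratio $Z(f) = V(f)/I(f)$, evaluated at each discrete frequency of interest (the notation is a reconstruction, not a verbatim copy of the patent's equation).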
  • the system 100 computes the impedance Z at discrete frequencies (e.g., frequencies within the 20 Hz to 400 Hz frequency range) in the frequency domain based on the input voltage V and the sensed/acquired current I. In one embodiment, a high resolution in frequency is required across a low frequency range to obtain an accurate impedance Z.
  • FIG. 5 is an example graph 500 illustrating an impedance curve 510 for an example closed-box loudspeaker device 101 according to various embodiments of the present disclosure.
  • the loudspeaker device 101 comprises a 12-inch subwoofer.
  • a vertical axis of the graph 500 represents impedance expressed in ohm units.
  • a horizontal axis of the graph 500 represents frequency values in the frequency domain expressed in Hz units.
  • the system 100 computes a resonant frequency f c for the loudspeaker device 101 based on a maximum value of the impedance curve 510.
  • Let f 1 and f 2 generally denote points on the impedance curve 510, wherein f 1 and f 2 satisfy expressions (5) and (6) provided below:
  • R c = R max /R e
  • R e is a direct current resistance for the loudspeaker device 101
  • R max is the maximum value of the impedance curve 510 (i.e., the impedance magnitude at the resonant frequency f c ) for the loudspeaker device 101.
  • the system 100 computes a mechanical Q factor Q mc , an electrical Q factor Q ec , and a total Q factor Q tc for the loudspeaker device 101 in accordance with equations (7)-(9) provided below:
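  • equations (5)-(9) are not present in the extracted text; the standard closed-box (Thiele-Small) impedance-method relations that match the surrounding definitions, given here as an assumption, are: $|Z(f_1)| = |Z(f_2)| = R_e\sqrt{R_c}$ with $f_1 < f_c < f_2$ (5)-(6), $Q_{mc} = \frac{f_c\sqrt{R_c}}{f_2 - f_1}$ (7), $Q_{ec} = \frac{Q_{mc}}{R_c - 1}$ (8), and $Q_{tc} = \frac{Q_{mc}\,Q_{ec}}{Q_{mc} + Q_{ec}}$ (9).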
  • a time constant T c of the system 100 is represented in accordance with equations (10)-(11) provided below:
  • a transfer function X(s) from voltage (i.e., the input voltage V to the terminals of the speaker driver 220) to displacement of the diaphragm 230 is represented in accordance with equation (12) provided below:
  • the proportionality constant in the transfer function is a real value.
  • the transfer function X(s) is proportional to a prototype low pass second-order filter function normalized to unity in a passband.
  • the system 100 computes the velocity u of the diaphragm 230 in accordance with equation (14) provided below:
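  • the missing equations (10)-(14) most plausibly correspond to the standard closed-box relations (again an assumption): $T_c = \frac{1}{\omega_c} = \frac{1}{2\pi f_c}$ (10)-(11), $X(s) \propto \frac{1}{s^{2}T_c^{2} + s\,T_c/Q_{tc} + 1}$ (12)-(13), and, velocity being the time derivative of displacement, $u(s) = s\,X(s)\,V(s)$ (14).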
  • the velocity u of the diaphragm 230 is computed at the sound power estimation unit 110.
  • the velocity u of the diaphragm 230 may be obtained by other known methods such as, but not limited to, an accelerometer, a laser vibrometer, etc.
  • the sound power estimation unit 110 is configured to identify one or more parameters of the loudspeaker device 101 (e.g., the total Q factor Q tc and the resonant frequency f c for the loudspeaker device 101, or the impedance Z of the speaker driver 220) using system identification in the frequency domain or in the time domain based on measurements of current, voltage, and/or near-field sound pressure.
  • the impedance Z of the speaker driver 220 may be obtained by time domain algorithms (e.g., Kalman filters, recursive least squares, etc.).
  • FIG. 6 is an example graph 600 illustrating the near-field sound pressure p and the velocity u according to various embodiments of the present disclosure.
  • a vertical axis of the graph 600 represents phase angles expressed in degree units.
  • a horizontal axis of the graph 600 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 600 comprises: (1) a first curve 610 representing a phase curve for the near-field sound pressure p over the frequency domain, and (2) a second curve 620 representing a phase curve for the velocity u over the frequency domain.
  • the system 100 applies phase correction of the velocity u of the diaphragm 230 to have an accurate estimation of the in-room total sound power output W.
  • FIG. 7 is an example graph 700 illustrating phase difference between the near-field sound pressure p and the velocity u of a diaphragm of a speaker driver according to various embodiments of the present disclosure.
  • a vertical axis of the graph 700 represents phase difference expressed in degree units.
  • a horizontal axis of the graph 700 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 700 comprises: (1) a first horizontal line 710 representing a phase angle at 90 degrees, and (2) a curve 720 representing phase difference between the near-field sound pressure p and the velocity u over the frequency domain.
  • a phase mismatch may result due to propagation delay between the microphone 102 and the diaphragm 230.
  • the near-field sound pressure p is 90 degrees ahead of the velocity u.
  • this relation holds when kr ≪ 1, wherein k is a wave number, and r is a distance to the sound source.
  • phase difference between the near-field sound pressure p and the velocity u may not be constant across frequencies, resulting in inaccurate sound power estimations.
  • FIG. 8 is an example flowchart of a sound power optimization system for estimating an in-room total sound power output according to various exemplary embodiments of the present disclosure.
  • in step 801, the system aligns a phase of a velocity u of a diaphragm with a phase of a near-field sound pressure p at a specific frequency.
  • the system first aligns a phase of the velocity u with a phase of the near-field sound pressure p at about 20 Hz to obtain an adjusted/modified complex velocity u x , as represented by equations (15)-(16) provided below:
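  • a plausible reconstruction of equations (15)-(16), stated as an assumption: $\varphi_0 = \angle p(f_0) - \angle u(f_0)$ with $f_0 \approx 20\,\mathrm{Hz}$ (15), and $u_x(f) = u(f)\,e^{j\varphi_0}$ (16), i.e., the entire velocity phase curve is shifted so that it coincides with the pressure phase at about 20 Hz.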
  • the system aligns a phase curve 930 for the velocity u over the frequency domain with a phase curve 920 for the near-field sound pressure p at about 20 Hz by moving the phase curve 930 for the velocity u over the frequency domain.
  • a vertical axis of the graph 900 of FIG. 9 represents phase angles expressed in degree units.
  • a horizontal axis of the graph 900 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 900 comprises: (1) a first curve 920 representing a phase curve for the near-field sound pressure p over the frequency domain, (2) a second curve 930 representing a phase curve for the velocity u over the frequency domain, and (3) a third curve 910 representing a phase curve for the adjusted/modified complex velocity u x over the frequency domain.
  • in step 803, the system corrects the phase of the velocity u of the diaphragm on a basis of a general trend of a phase curve for the near-field sound pressure p.
  • the system finds a general trend that fits a phase curve (e.g., curve 610 in FIG. 6) for the near-field sound pressure p. In one example implementation, this involves fitting of a polynomial using a least squares method.
  • the phase curve for the near-field sound pressure p at discrete frequencies is stored as b i (f) represented by equation (17) provided below:
  • the phase curve for the near-field sound pressure p is fitted to a polynomial with coefficients b 1 , b 2 , ..., b n+1 in accordance with equation (18) provided below:
  • the coefficients are evaluated and a final phase correction angle y for the complex velocity u x is obtained by subtracting the original phase angles of the adjusted/modified complex velocity u x from the phase angle B given by the polynomial fitted to the phase curve for the near-field sound pressure p, in accordance with equations (19)-(20) provided below:
  • the system applies phase correction to the adjusted/modified complex velocity u x in accordance with equation (21) provided below:
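  • the polynomial trend fit and phase correction of equations (17)-(21) can be sketched in Python as follows; this is an illustrative sketch only, and the polynomial order and variable names are assumptions rather than the patent's implementation.

```python
import numpy as np

def phase_trend_correction(freqs, p, u_x, order=3):
    """Fit a polynomial to the pressure phase curve and rotate u_x onto that trend."""
    b = np.unwrap(np.angle(p))            # phase curve of the near-field pressure, eq. (17)
    coeffs = np.polyfit(freqs, b, order)  # least-squares polynomial fit, eq. (18)
    B = np.polyval(coeffs, freqs)         # fitted phase trend B(f)
    y = B - np.unwrap(np.angle(u_x))      # final phase correction angle, eqs. (19)-(20)
    return u_x * np.exp(1j * y)           # phase-corrected complex velocity, eq. (21)
```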
  • the system matches, through the phase correction, a phase of the complex velocity u x adjusted/modified in step 801 to a phase curve 1020 representing the general trend of the phase curve 1010 for the near-field sound pressure p.
  • a vertical axis of the graph 1000 represents phase angles expressed in degree units.
  • a horizontal axis of the graph 1000 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 1000 comprises: (1) a first curve 1020 representing a phase curve for the near-field sound pressure p over the frequency domain, and (2) a second curve 1010 representing a general trend that fits the first curve 1020 (e.g., via fitting of a polynomial using a least squares method).
  • in step 805, the system corrects the phase of the velocity u of the diaphragm on a basis of a product term pu* of the near-field sound pressure p and a complex conjugate of the velocity u of the diaphragm.
  • based on a phase curve for the product term pu* (e.g., phase curve 1130 in FIG. 11) and one or more prominent peaks (e.g., from 25 Hz to 100 Hz) included in the phase curve, the system 100 computes a mean of the peaks and a standard deviation of the peaks in accordance with equations (22)-(23) provided below:
  • the system searches for/identifies the most prominent peaks in accordance with equations (24)-(25) provided below:
  • Peak Threshold is a threshold that the peaks identified must satisfy.
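  • the missing equations (22)-(25) plausibly reduce to (assumption): $\mu = \mathrm{mean}(\text{peaks})$ (22), $\sigma = \mathrm{std}(\text{peaks})$ (23), and a peak is kept as prominent only if it exceeds a threshold derived from these statistics, e.g. $\text{peak} > \mu + \sigma$ (24)-(25); the exact form of the threshold cannot be recovered from the extracted text.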
  • the system searches for/identifies one or more prominent peaks (e.g., in a frequency range of about 25 Hz to 100 Hz), such as a first peak A, a second peak B, and a prominent peak C, in a phase curve 1130 for the product term pu* over the frequency domain.
  • a vertical axis of the graph 1100 represents phase angles expressed in degree units.
  • a horizontal axis of the graph 1100 represents frequency values in the frequency domain expressed in Hz units.
  • the phase curve 1130 includes one or more prominent peaks (e.g., at about the 25 Hz to 100 Hz frequency range), such as a first peak A, a second peak B, and a prominent peak C.
  • the graph 1100 further comprises: (1) a first horizontal line 1110 representing the standard deviation of the peaks, and (2) a second horizontal line 1120 representing the mean of the peaks.
  • the system 100 then moves/shifts a phase curve for the complex velocity u x adjusted/modified in step 803 by a correction angle, as represented in accordance with equations (26)-(27) provided below:
  • K is a constant that can be adjusted as close as possible to 90 degrees, such that the near-field sound pressure p is ahead of the adjusted/modified complex velocity u x by about 90 degrees when the product term pu* is computed.
  • a stiffness parameter controls the limiting: the larger the stiffness parameter, the more the phase curve of the product term pu* is limited.
  • the system corrects the phase difference of the near-field sound pressure p and the velocity u of the diaphragm by limiting the phase curve of the product term pu* wherein the phase curve of the product term pu* does not exceed a constant K.
  • a vertical axis of the graph 1200 represents phase angles expressed in degree units.
  • a horizontal axis of the graph 1200 represents frequency values in the frequency domain expressed in Hz units.
  • a phase angle x (identified by reference numeral 1220) for the product term pu* exceeds the constant K.
  • the system is configured to apply a function in accordance with equation (28) provided above to adjust the phase angle x to a new phase angle x new (identified by reference numeral 1230) that does not exceed the constant K.
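  • equation (28) itself is not recoverable from the extracted text; one illustrative soft-limiting function with the described behavior (output never exceeding the constant K, with a larger stiffness parameter limiting the curve more strongly) would be $x_{new} = K\,\tanh(\lambda\,x/K)$, where $\lambda$ denotes the stiffness parameter; this particular form is an assumption, not the patent's equation.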
  • FIG. 13 is an exemplary graph illustrating near-field sound pressure p and complex conjugate of velocity of a diaphragm of a speaker driver according to various embodiments of the disclosure.
  • FIG. 13 is a graph 1300 exemplifying the near-field sound pressure p and the complex conjugate u* corrected in step 807.
  • a vertical axis of the graph 1300 represents phase angles expressed in degree units.
  • a horizontal axis of the graph 1300 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 1300 comprises: (1) a first curve 1310 representing a phase curve for the near-field sound pressure p over the frequency domain, and (2) a second curve 1320 representing a phase curve for the complex conjugate u* over the frequency domain.
  • FIG. 14 is an example graph 1400 illustrating phase difference between the near-field sound pressure p and complex conjugate of velocity u* of a diaphragm of a speaker driver according to various embodiments of the present disclosure.
  • a vertical axis of the graph 1400 represents a phase difference expressed in degree units.
  • a horizontal axis of the graph 1400 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 1400 comprises: (1) a first horizontal line 1410 representing a phase angle at 90 degrees, and (2) a curve 1420 representing phase difference between the near-field sound pressure p and the complex conjugate u* over the frequency domain.
  • phase difference between the near-field sound pressure p and the complex conjugate u* is relatively constant across frequencies.
  • in step 809, the system estimates a radiated in-room total sound power output W on the basis of the corrected product term pu* of the near-field sound pressure p and the complex conjugate u* of the diaphragm velocity.
  • the system estimates the in-room total sound power output W radiated from a closed-box loudspeaker by using equation (3). As illustrated in FIG. 14, the phase difference between the near-field sound pressure p and the complex conjugate u* is relatively constant across frequencies, and therefore the system can estimate the radiated in-room total sound power output comparatively accurately.
  • the sound power estimation unit 110 estimates the in-room total sound power output W using the complex conjugate u* and the near-field sound pressure p in accordance with equation (3) provided above.
  • Let W dB denote an expression of the estimated in-room total sound power output W in dB units.
  • the sound power estimation unit 110 provides the estimated in-room total sound power output W dB in accordance with equation (30) provided below:
  • W ref is a pre-determined target/desired sound power output.
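  • equation (30) is, in all likelihood, the decibel conversion against the target: $W_{dB} = 10\,\log_{10}(W / W_{ref})$; this reconstruction is inferred from the surrounding definitions rather than copied from the patent.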
  • FIG. 15 is an example graph 1500 illustrating estimated in-room total sound power output and actual in-room total sound power output according to various embodiments of the present disclosure.
  • a vertical axis of the graph 1500 represents sound power levels expressed in dB units.
  • a horizontal axis of the graph 1500 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 1500 comprises each of the following curves: (1) a first curve 1520 representing an estimated in-room total sound power output W dB , and (2) a second curve 1510 representing an actual in-room total sound power output (e.g., measured from about nine microphones placed at different locations of the room).
  • the curves 1510 and 1520 are substantially similar, with a slight deviation at around 25 Hz and 55 Hz.
  • the system 100 is configured to equalize sound power output radiated from the loudspeaker device 101 to reduce/attenuate peaks related to room resonances and a position/location of the loudspeaker device 101. In one embodiment, the system 100 performs auto-equalization using a number of biquads constructing an IIR filter in front of the loudspeaker device 101 (e.g., the auto-EQ filter 106 in FIG. 1).
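  • as an illustration of auto-equalization with a cascade of biquads, the Python sketch below (using scipy) places a peaking-cut biquad at each detected room-resonance peak; it is a hedged example of the general approach rather than the patent's auto-EQ filter 106, and the peak list, gains, and Q values are assumptions.

```python
import numpy as np
from scipy.signal import sosfilt

def peaking_biquad(f0, gain_db, Q, fs):
    """RBJ audio-EQ-cookbook peaking biquad, returned as one normalized second-order section."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2.0 * Q)
    b = [1.0 + alpha * A, -2.0 * np.cos(w0), 1.0 - alpha * A]
    a = [1.0 + alpha / A, -2.0 * np.cos(w0), 1.0 - alpha / A]
    return np.array(b + a) / a[0]   # normalize so that a0 == 1

def auto_eq(audio, resonance_peaks, fs, Q=4.0):
    """Attenuate each detected peak (frequency in Hz, excess level in dB) with a cut biquad."""
    sos = np.vstack([peaking_biquad(f, -excess_db, Q, fs) for f, excess_db in resonance_peaks])
    return sosfilt(sos, audio)
```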
  • FIG. 16 is an example graph 1600 illustrating estimated in-room total sound power output, pre-determined target/desired sound power output, and equalized sound power output according to various embodiments of the present disclosure.
  • a vertical axis of the graph 1600 represents sound power levels expressed in dB units.
  • a horizontal axis of the graph 1600 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 1600 comprises each of the following curves: (1) a first curve 1610 representing an estimated in-room total sound power output W dB , (2) a second curve 1620 representing a pre-determined target/desired sound power output, and (3) a third curve 1630 representing equalized sound power output.
  • FIG. 17 is an example graph 1700 illustrating measured sound power output radiated from the loudspeaker device 101 before and after auto-equalization according to various embodiments of the present disclosure.
  • a vertical axis of the graph 1700 represents sound power levels expressed in dB units.
  • a horizontal axis of the graph 1700 represents frequency values in the frequency domain expressed in Hz units.
  • the graph 1700 comprises each of the following curves: (1) a first curve 1710 representing measured sound power output radiated from the loudspeaker device 101 before auto-equalization (e.g., measured from about nine microphones placed at different locations of the room) and (2) a second curve 1720 representing measured sound power output radiated from the loudspeaker device 101 after the auto-equalization (e.g., measured from about nine microphones placed at different locations of the room).
  • the system 100 is configured to: (1) either continually measure or periodically measure (e.g., once every hour) the near-field sound pressure p, and (2) automatically initiate and perform calibration of sound power output radiated from the loudspeaker device 101 in real-time based on a new measurement of the near-field sound pressure p.
  • the system 100 via the sound power estimation unit 110, is configured to: (1) automatically detect one or more changes in acoustic conditions of the room (e.g., moving the loudspeaker device 101 from one position to another position in the room, changes resulting from one or more physical structures in the room, such as opening all doors, closing a room divider, opening a car window, air-conditioning turned on, etc.), and (2) automatically initiate and perform calibration of sound power output radiated from the loudspeaker device 101 in real-time with minimal user intervention based on the changes detected.
  • the system 100 is configured to automatically detect one or more changes in acoustic conditions of the room by: (1) reproducing, via the loudspeaker device 101, test signals or audio (e.g., music samples), and (2) receiving data indicative of measured sound power output radiated from the loudspeaker device 101 and associated with the test signals or audio reproduced.
  • the system 100 via the sound power estimation unit 110, is configured to identify a position of the loudspeaker device 101 in the room based in part on detected acoustic conditions of the room.
  • the system 100 is configured to optimize/enhance sound power output radiated from the loudspeaker device 101 based on the position identified.
  • the system 100 is able to identify a best position in the room to place the loudspeaker device 101 based in part on historical data (e.g., data indicative of different measured sound power outputs for different positions in the room that the loudspeaker device 101 may be positioned).
  • the system 100 is configured to exchange data with an external electronic device (e.g., a smartphone, an audio receiver, a tablet, a remote control device, etc.) over a wired or wireless connection.
  • the external electronic device may include one or more sensors, such as a microphone.
  • the external electronic device may be used to collect data (e.g., via its sensors), such as measured sound power output at a particular listening position in the room, user input, etc.
  • the system 100 may use the collected data to optimize user listening experience at a particular listening position in the room (e.g., equalizing and weighting sound power output radiated from the loudspeaker device 101 towards the particular listening position).
  • FIG. 18 is an example flowchart of a process for a sound power optimization system according to various embodiments of the present disclosure.
  • in step 1801, obtain a measurement of a near-field sound pressure of a speaker driver using a microphone.
  • in step 1803, determine a velocity of a diaphragm of the speaker driver.
  • in step 1805, automatically calibrate sound power levels of audio reproduced by the speaker driver based on the velocity and the measurement of the near-field sound pressure to automatically adjust the sound power levels to an acoustic environment of the speaker driver.
  • one or more components of the system 100 are configured to perform steps 1801-1805.
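  • a schematic Python summary of steps 1801-1805 is given below; the helper names are hypothetical, the phase-correction steps 801-807 are omitted for brevity, and the dB conversion follows the reconstructed form of equation (30) above, so this is a sketch of the general flow rather than the patent's implementation.

```python
import numpy as np

def estimate_sound_power_db(p, u_conj, w_ref):
    """Estimate in-room sound power (dB) from near-field pressure and conjugate velocity spectra."""
    w = np.real(p * u_conj)                     # real part of the product term p * u*
    return 10.0 * np.log10(np.abs(w) / w_ref + 1e-12)

def calibrate(p, u, w_ref, target_db):
    """Steps 1801-1805 at a glance: measure, take the conjugate velocity, estimate power,
    and derive the per-frequency gains that the auto-EQ filter should realize."""
    u_conj = np.conj(u)                          # step 1803 (phase correction omitted here)
    w_db = estimate_sound_power_db(p, u_conj, w_ref)   # estimated in-room sound power
    return target_db - w_db                      # step 1805: EQ gains toward the target curve
```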
  • FIG. 19 is a high-level block diagram showing an information processing system comprising a computer system 1900 useful for implementing the various embodiments of the present disclosure.
  • the computer system 1900 includes one or more processors 1910, and can further include an electronic display device 1920 (for displaying video, graphics, text, and other data), a main memory 1930 (e.g., random access memory (RAM)), storage device 1940 (e.g., hard disk drive), removable storage device 1950 (e.g., removable storage drive, removable memory module, a magnetic tape drive, optical disk drive, computer readable medium having stored therein computer software and/or data), user interface device 1960 (e.g., keyboard, touch screen, keypad, pointing device), and a communication interface 1970 (e.g., modem, a network interface (such as an Ethernet card), a communications port, or a PCMCIA slot and card).
  • the main memory 1930 may store instructions that when executed by the one or more processors 1910 cause the one or more processors 1910 to perform steps 1801-1805.
  • the communication interface 1970 allows software and data to be transferred between the computer system and external devices.
  • the system 1900 further includes a communications infrastructure 1980 (e.g., a communications bus, cross-over bar, or network) to which the aforementioned devices/modules 1910 through 1970 are connected.
  • Information transferred via communications interface 1970 may be in the form of signals such as electronic, electromagnetic, optical, or other signals capable of being received by communications interface 1970, via a communication link that carries signals and may be implemented using wire or cable, fiber optics, a phone line, a cellular phone link, a radio frequency (RF) link, and/or other communication channels.
  • Computer program instructions representing the block diagram and/or flowcharts herein may be loaded onto a computer, programmable data processing apparatus, or processing devices to cause a series of operations performed thereon to produce a computer implemented process.
  • processing instructions for steps 1801-1805 (FIG. 18) may be stored as program instructions on the memory 1930, storage device 1940 and the removable storage device 1950 for execution by the processor 1910.
  • Embodiments have been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products. Each block of such illustrations/diagrams, or combinations thereof, can be implemented by computer program instructions.
  • the computer program instructions when provided to a processor produce a machine, such that the instructions, which execute via the processor create means for implementing the functions/operations specified in the flowchart and/or block diagram.
  • Each block in the flowchart/block diagrams may represent a hardware and/or software module or logic. In alternative implementations, the functions noted in the blocks may occur out of the order noted in the figures, concurrently, etc.
  • the terms “computer program medium,” “computer usable medium,” “computer readable medium,” and “computer program product” are used to generally refer to media such as main memory, secondary memory, removable storage drive, a hard disk installed in a hard disk drive, and signals. These computer program products are means for providing software to the computer system.
  • the computer readable medium allows the computer system to read data, instructions, messages or message packets, and other computer readable information from the computer readable medium.
  • the computer readable medium may include non-volatile memory, such as a floppy disk, ROM, flash memory, disk drive memory, a CD-ROM, and other permanent storage. It is useful, for example, for transporting information, such as data and computer instructions, between computer systems.
  • Computer program instructions may be stored in a computer readable medium that can direct a computer, other programmable data processing apparatus, or other devices to function in a particular manner, such that the instructions stored in the computer readable medium produce an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
  • aspects of the embodiments may be embodied as a system, method or computer program product. Accordingly, aspects of the embodiments may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc.) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit,” “module” or “system.” Furthermore, aspects of the embodiments may take the form of a computer program product embodied in one or more computer readable medium(s) having computer readable program code embodied thereon.
  • the computer readable medium may be a computer readable storage medium.
  • a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
  • a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Computer program code for carrying out operations for aspects of one or more embodiments may be written in any combination of one or more programming languages, including an object oriented programming language such as Java, Smalltalk, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer (as a stand-alone software package), partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet Service Provider).
  • the computer program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions which execute on the computer or other programmable apparatus provide processes for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
  • the functions noted in the block may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
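As a purely illustrative sketch of how such processing instructions might be organized for execution by a processor, the short Python example below estimates in-room low-frequency sound power from one block of measured sound pressure and speaker driver velocity samples and derives per-frequency correction gains. It is not the implementation of steps 1801-1805 or FIG. 18; the function names, the assumed cone area, the 20-200 Hz band limits, the flat-power target, and the gain limits are assumptions introduced only for this example.

```python
# Hypothetical sketch only; not the implementation of steps 1801-1805 of FIG. 18.
# Assumes one block of time-aligned samples of sound pressure p(t) near the driver
# and speaker driver velocity v(t), e.g. from a microphone and a velocity sensor.
import numpy as np


def estimate_sound_power(p_fft, v_fft, cone_area):
    """Approximate radiated sound power per bin: W(f) = 0.5 * Re{p(f) * conj(Sd * v(f))}."""
    volume_velocity = cone_area * v_fft            # volume velocity Q = Sd * v
    return 0.5 * np.real(p_fft * np.conj(volume_velocity))


def correction_gains(measured_power, target_power, floor=1e-12):
    """Per-bin linear gains that move the measured sound power toward the target."""
    gains = np.sqrt(target_power / np.maximum(measured_power, floor))
    return np.clip(gains, 0.1, 10.0)               # limit boost/cut to +/- 20 dB


def run_block(pressure, velocity, fs, cone_area=0.05, f_lo=20.0, f_hi=200.0):
    """One pass of the hypothetical processing chain on a block of samples."""
    n = len(pressure)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    p_fft = np.fft.rfft(pressure) / n
    v_fft = np.fft.rfft(velocity) / n

    power = estimate_sound_power(p_fft, v_fft, cone_area)
    band = (freqs >= f_lo) & (freqs <= f_hi)       # low-frequency band of interest
    target = max(np.median(power[band]), 1e-12)    # flat target level (assumption)

    gains = np.ones_like(power)
    gains[band] = correction_gains(power[band], target)
    return freqs, power, gains


if __name__ == "__main__":
    fs = 8000
    t = np.arange(fs) / fs
    pressure = np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)
    velocity = 0.01 * np.sin(2 * np.pi * 50 * t) + 0.002 * np.sin(2 * np.pi * 120 * t)
    freqs, power, gains = run_block(pressure, velocity, fs)
    print(gains[(freqs >= 20) & (freqs <= 200)][:5])
```

In such a sketch, run_block would play the role of the stored program instructions: written in Python here for brevity, it could equally be written in C, C++, or Java as noted above, stored in the memory 1930 or the storage device 1940, and invoked by the processor 1910 on each captured block of samples.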
PCT/KR2018/002608 2017-03-10 2018-03-06 Method and apparatus for in-room low-frequency sound power optimization WO2018164438A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP18763636.0A EP3583783A4 (en) 2017-03-10 2018-03-06 METHOD AND DEVICE FOR LOW-FREQUENCY SOUND POWER OPTIMIZATION IN A ROOM
CN201880017341.6A CN110402585B (zh) 2017-03-10 2018-03-06 Method and apparatus for in-room low-frequency sound power optimization

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201762470049P 2017-03-10 2017-03-10
US62/470,049 2017-03-10
US15/806,991 US10469046B2 (en) 2017-03-10 2017-11-08 Auto-equalization, in-room low-frequency sound power optimization
US15/806,991 2017-11-08
KR10-2018-0015036 2018-02-07
KR1020180015036A KR102452256B1 (ko) 2017-03-10 2018-02-07 Method and apparatus for in-room low-frequency sound power optimization

Publications (1)

Publication Number Publication Date
WO2018164438A1 true WO2018164438A1 (en) 2018-09-13

Family

ID=63447901

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2018/002608 WO2018164438A1 (en) 2017-03-10 2018-03-06 Method and apparatus for in-room low-frequency sound power optimization

Country Status (2)

Country Link
CN (1) CN110402585B (zh)
WO (1) WO2018164438A1 (zh)

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5795794A (en) * 1980-12-05 1982-06-14 Sony Corp Microphone
EP0113370A1 (en) * 1982-06-30 1984-07-18 B & W LOUDSPEAKERS LIMITED Environment-adaptive loudspeaker systems
GB9506725D0 (en) * 1995-03-31 1995-05-24 Hooley Anthony Improvements in or relating to loudspeakers
GB9513894D0 (en) * 1995-07-07 1995-09-06 Univ Salford The Loudspeaker circuit
US5850453A (en) * 1995-07-28 1998-12-15 Srs Labs, Inc. Acoustic correction apparatus
JP4392513B2 (ja) * 1995-11-02 2010-01-06 Bang & Olufsen A/S Method and apparatus for controlling a loudspeaker system in a room
EP0772374B1 (en) * 1995-11-02 2008-10-08 Bang & Olufsen A/S Method and apparatus for controlling the performance of a loudspeaker in a room
US5771300A (en) * 1996-09-25 1998-06-23 Carrier Corporation Loudspeaker phase distortion control using velocity feedback
CN2319986Y (zh) * 1997-12-19 1999-05-19 Li Hewen Low-distortion loudspeaker vibration negative-feedback system assembly
AUPQ938000A0 (en) * 2000-08-14 2000-09-07 Moorthy, Surya Method and system for recording and reproduction of binaural sound
WO2007028094A1 (en) * 2005-09-02 2007-03-08 Harman International Industries, Incorporated Self-calibrating loudspeaker
EP2901711B1 (en) * 2012-09-24 2021-04-07 Cirrus Logic International Semiconductor Limited Control and protection of loudspeakers
CN104185125A (zh) * 2014-08-14 2014-12-03 AAC Acoustic Technologies (Shenzhen) Co., Ltd. Loudspeaker system and driving method thereof
GB2532796A (en) * 2014-11-28 2016-06-01 Relec Sa Low frequency active acoustic absorber by acoustic velocity control through porous resistive layers

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07114392A (ja) * 1993-10-20 1995-05-02 Nissan Motor Co Ltd Active noise control device and active vibration control device
US5666427A (en) * 1995-09-30 1997-09-09 Samsung Heavy Industries Co. Ltd. Method of and apparatus for controlling noise generated in confined spaces
US20120121098A1 (en) * 2010-11-16 2012-05-17 Nxp B.V. Control of a loudspeaker output
US20120195447A1 (en) * 2011-01-27 2012-08-02 Takahiro Hiruma Sound field control apparatus and method
US20160309276A1 (en) * 2014-06-30 2016-10-20 Microsoft Technology Licensing, Llc Audio calibration and adjustment

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3583783A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111998934A (zh) * 2020-08-28 2020-11-27 State Grid Hunan Electric Power Co., Ltd. Sound source sound power testing method
CN111998934B (zh) * 2020-08-28 2022-07-01 State Grid Hunan Electric Power Co., Ltd. Sound source sound power testing method

Also Published As

Publication number Publication date
CN110402585B (zh) 2021-12-24
CN110402585A (zh) 2019-11-01

Similar Documents

Publication Publication Date Title
EP3583783A1 (en) Method and apparatus for in-room low-frequency sound power optimization
WO2019143150A1 (en) Method and system for nonlinear control of motion of a speaker driver
WO2019172715A1 (en) Energy limiter for loudspeaker protection
WO2020032333A1 (en) Nonlinear control of loudspeaker systems with current source amplifier
WO2020057227A1 (zh) Television sound adjustment method, television, and storage medium
WO2019045474A1 (ko) Method and apparatus for processing audio signal by using audio filter having nonlinear characteristics
WO2018128342A1 (en) Displacement limiter for loudspeaker mechanical protection
WO2020017806A1 (en) Method and apparatus for processing audio signal
WO2022092741A1 (en) Nonlinear control of a loudspeaker with a neural network
WO2017188648A1 (ko) Earset and control method therefor
WO2018217059A1 (en) Method and electronic device for managing loudness of audio signal
WO2020252886A1 (zh) Directional sound pickup method, recording device, and storage medium
WO2020050699A1 (en) Port velocity limiter for vented box loudspeakers
WO2018164438A1 (en) Method and apparatus for in-room low-frequency sound power optimization
WO2020138843A1 (en) Home appliance and method for voice recognition thereof
EP3991452A1 (en) Personalized headphone equalization
WO2016182184A1 (ko) Method and apparatus for reproducing stereophonic sound
WO2023008749A1 (en) Method and apparatus for calibration of a loudspeaker system
WO2021040201A1 (ko) Electronic device and control method therefor
WO2021010562A1 (en) Electronic apparatus and controlling method thereof
WO2019083125A1 (en) Audio signal processing method and electronic device for supporting it
WO2020040541A1 (ko) Electronic device, control method therefor, and recording medium
WO2016117833A1 (ko) Noise control method
WO2020032363A1 (ko) Method and electronic device for adjusting speaker output level on the basis of distance from an external electronic device
WO2022085953A1 (ko) Electronic device and control method therefor

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18763636

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

ENP Entry into the national phase

Ref document number: 2018763636

Country of ref document: EP

Effective date: 20190918