EP2878137A1 - Portable electronic device with audio rendering means and audio rendering method - Google Patents

Portable electronic device with audio rendering means and audio rendering method

Info

Publication number
EP2878137A1
Authority
EP
European Patent Office
Prior art keywords
electronic device
portable electronic
sensor
positioning
audio
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP12788148.0A
Other languages
English (en)
French (fr)
Inventor
Christof Faller
Alexis Favrot
David Virette
Yue Lang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Publication of EP2878137A1 publication Critical patent/EP2878137A1/de
Withdrawn legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R3/00Circuits for transducers, loudspeakers or microphones
    • H04R3/04Circuits for transducers, loudspeakers or microphones for correcting frequency response
    • HELECTRICITY
    • H03ELECTRONIC CIRCUITRY
    • H03GCONTROL OF AMPLIFICATION
    • H03G5/00Tone control or bandwidth control in amplifiers
    • H03G5/16Automatic control
    • H03G5/165Equalizers; Volume or gain control in limited frequency bands
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04MTELEPHONIC COMMUNICATION
    • H04M9/00Arrangements for interconnection not involving centralised switching
    • H04M9/08Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
    • H04M9/082Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic using echo cancellers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R5/00Stereophonic arrangements
    • H04R5/02Spatial or constructional arrangements of loudspeakers
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04RLOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R2499/00Aspects covered by H04R or H04S not otherwise provided for in their subgroups
    • H04R2499/10General applications
    • H04R2499/11Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's

Definitions

  • the present invention relates to a portable electronic device comprising one or more loudspeakers and audio rendering means and relates to a method for audio rendering an audio signal.
  • Audio manipulation algorithms are often used to calibrate and tune audio devices for better audio rendering, including equalization as proposed in “Zoelzer, U. (2002), DAFX - Digital Audio Effects, Wiley, New York” and bandwidth extension as described in “Larsen, E. and Aarts, R. M. (2004), Audio Bandwidth Extension: Application of Psychoacoustics, Signal Processing and Loudspeaker Design, Wiley, New York”. These algorithms can be implemented in various ways but are fixed.
  • Figure 1a shows a schematic diagram 100 of the frequency response differences between two distinct usages of a mobile device 101 having a loudspeaker 107 only at the back.
  • the left side of Fig. 1a shows the front side listening perspective 103 while the right side of Fig. 1a shows the back side listening perspective 105.
  • the frequency response related to the front side listening perspective 103 is illustrated by a first curve 109 and the frequency response related to the back side listening perspective 105 is illustrated by a second curve 111.
  • Fig. 1b shows a schematic diagram 120 of the frequency response differences between two distinct supports for a mobile device 121.
  • the first support of the mobile device carried in a hand 123 is illustrated by a first curve 129 and the second support of the mobile device put on a table 125 is illustrated by a second curve 131.
  • the invention is based on the finding that exploiting position and/or orientation data of a mobile device relative to a listener position improves audio rendering.
  • the positioning of an audio device is defined as its position, including orientation and proximity to a surface or a support, relative to the listener position.
  • by using an optimization criterion, e.g. making the audio rendering constant and consistent over all possible positionings, the audio rendering can be improved.
  • Equalization and bandwidth extension can be adaptively implemented to follow device position with respect to the listener.
  • a positioning detection algorithm can be implemented to figure out how the device lies with respect to the listener. Then, once the positioning has been estimated, the audio rendering can be modified accordingly, using a digital signal processing (DSP) unit or other kinds of processing circuits.
  • the implementation can be realized adaptively such that audio rendering stays constant towards positioning changes.
  • a processing circuit e.g. a digital signal processing (DSP) unit can be controlled by a positioning detection algorithm.
  • the DSP unit may be composed of three blocks, which are an adaptive gain compensation block, a block comprising an adaptive equalization curve and a block comprising an adaptive bandwidth extension bass algorithm, e.g. for devices with small loudspeakers.
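
For illustration only (this code is not part of the patent disclosure), the following Python sketch shows one possible structure of such a DSP unit with the three blocks named above; all names, parameter values and the crude harmonic generator used as a bass-extension placeholder are assumptions.

```python
import numpy as np

class PositionAwareDsp:
    """Hypothetical three-block DSP unit: gain compensation, equalization, bass extension."""

    def __init__(self, n_bins=257):
        self.gain_db = 0.0               # adaptive gain compensation
        self.eq_db = np.zeros(n_bins)    # adaptive equalization curve (per FFT bin, in dB)
        self.bass_strength = 0.0         # adaptive bandwidth-extension strength

    def update(self, gain_db, eq_db, bass_strength):
        # Called by the positioning detection / adaptation logic whenever the positioning changes.
        self.gain_db = gain_db
        self.eq_db = np.asarray(eq_db, dtype=float)
        self.bass_strength = bass_strength

    def process(self, frame):
        # 'frame' is one block of samples, at most 2 * (n_bins - 1) samples long.
        n_fft = 2 * (len(self.eq_db) - 1)
        out = np.asarray(frame, dtype=float) * 10.0 ** (self.gain_db / 20.0)   # gain block
        spec = np.fft.rfft(out, n_fft) * 10.0 ** (self.eq_db / 20.0)           # equalization block
        out = np.fft.irfft(spec, n_fft)[: len(frame)]
        # Placeholder bass-extension block: a memoryless non-linearity adds harmonics.
        out = out + self.bass_strength * (np.sign(out) * out ** 2)
        return out
```
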
  • audio rendering is significantly improved as will be presented in the following.
  • the following abbreviations and notations will be used: audio rendering: a reproduction technique capable of creating spatial sound fields; DSP: digital signal processing.
  • the invention relates to a portable electronic device, comprising: at least one loudspeaker; and audio rendering means configured for adapting an audio signal before submitting the audio signal to the at least one loudspeaker, wherein the adapting is according to a function of a positioning of the portable electronic device relative to a listener using the portable electronic device.
  • the portable electronic device comprises at least one sensor configured for sensing the positioning of the portable electronic device.
  • the sensor may be used for sensing the positioning of the portable electronic device.
  • An already implemented sensor, e.g. a camera, a gyroscope or a proximity sensor, may be used to provide this information to the audio rendering means. That means no hardware changes are necessary; the provided information can be efficiently used for improving the audio rendering.
  • the at least one sensor comprises face detection means configured for providing orientation information of the portable electronic device with respect to the listener.
  • the device can detect its orientation towards the listener. Exploiting orientation information by the audio rendering means improves the audio rendering.
  • the at least one sensor is configured for providing proximity information of the portable electronic device with respect to a support.
  • the device can detect its proximity towards the listener.
  • Exploiting proximity information by the audio rendering means improves the sensitivity of the audio rendering.
  • the at least one sensor is configured for providing orientation information of the portable electronic device with respect to an environment of the portable electronic device.
  • the device can detect its orientation towards a table, a chair or another object in the environment of the listener. Exploiting such orientation information by the audio rendering means improves the sensitivity of the audio rendering.
  • the at least one sensor comprises at least one of the following: a gyroscope, a camera, a proximity sensor, a microphone, a gravity sensor, an accelerometer, a temperature sensor, a light sensor, a magnetic field sensor, a pressure sensor, a humidity sensor, a position sensor, in particular a global positioning system.
  • One sensor is sufficient to apply audio rendering using adaptation according to a function of a positioning with respect to the listener for improving audio rendering in situations where the mobile phone is moved. By using more sensors, however, audio rendering is additionally improved.
  • the portable electronic device comprises detection means configured for detecting the positioning of the portable electronic device based on information of the at least one sensor.
  • the portable electronic device comprises filtering means configured for filtering the audio signal and providing a filtered audio signal to the at least one loudspeaker.
  • the filtering means is used for filtering the audio signal according to the desired characteristic.
  • the filtering means is configured to modify an audio rendering of the audio signal by using digital signal processing.
  • by using digital signal processing, the filtering means can be flexibly implemented on the mobile device. The software for digital signal processing can be changed if required; e.g. filtering can be implemented in the time domain or in the frequency domain.
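
As a sketch of the two realizations mentioned above (not taken from the patent), the following snippet applies a hypothetical equalization once as a time-domain FIR filter and once as a frequency-domain weighting of a single block; a practical frequency-domain implementation would additionally use overlap-add or overlap-save.

```python
import numpy as np

def apply_eq_time_domain(x, fir_taps):
    # Time-domain realization: the equalization curve is approximated by an FIR filter.
    return np.convolve(np.asarray(x, dtype=float), fir_taps, mode="same")

def apply_eq_freq_domain(x, eq_gain_db):
    # Frequency-domain realization for a single block; eq_gain_db has n_fft // 2 + 1 bins.
    n_fft = 2 * (len(eq_gain_db) - 1)
    spectrum = np.fft.rfft(np.asarray(x, dtype=float), n_fft)
    spectrum *= 10.0 ** (np.asarray(eq_gain_db, dtype=float) / 20.0)
    return np.fft.irfft(spectrum, n_fft)[: len(x)]
```
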
  • the filtering means comprises at least one or a combination of the following: equalization means, virtual bass adaptation means, loudness adjustment means, steering stereo rendering means, acoustic echo cancellation means, acoustic noise cancellation means, de-reverberation means.
  • the audio signal can be filtered depending on the environment where the mobile device is operating.
  • the portable electronic device can thus be adapted to different room characteristics, e.g. conference, office, theater etc. for providing optimal performance.
  • the portable electronic device comprises adaptation means configured for adjusting the filtering means based on the positioning of the portable electronic device.
  • Adaptation means can flexibly control the filtering means depending on the environment. Different adaptation algorithms can be applied for providing fast convergence and good tracking properties of the filtering means.
  • the adaptation means are configured for adjusting the filtering means for matching a desired audio rendering irrespective of a movement of the portable electronic device.
  • the adaptation means are configured for adjusting the filtering means adaptively such that the audio rendering stays constant towards positioning changes of the portable electronic device.
  • Positioning changes of the portable electronic device can be tracked such that audio quality stays constant with movements of the device.
  • the invention relates to a method for audio rendering an audio signal of a portable electronic device comprising at least one loudspeaker, the method comprising: processing the audio signal before submitting the audio signal to the at least one loudspeaker, wherein the processing is adapted according to a function of a positioning of the portable electronic device relative to a listener using the portable electronic device.
  • the method further comprises: estimating the positioning of the portable electronic device by using at least one of a gyroscope, a camera and a proximity sensor; and filtering the audio signal by using at least one of a gain adjustment, an equalization and a virtual bass adaptation.
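
A minimal sketch of these two method steps, assuming boolean sensor readings, a simple decision rule and a placeholder gain table (none of the names or values are from the patent; the filtering step is reduced to a gain adjustment for brevity):

```python
import numpy as np

# Placeholder gain compensation per positioning, in dB (illustrative values only).
GAIN_DB = {("table", "front"): 0.0, ("table", "back"): 3.0,
           ("hand", "front"): 1.0, ("hand", "back"): 4.0}

def estimate_positioning(gyro_horizontal, gyro_stable, face_detected, proximity_at_back):
    # Assumed decision logic combining the three sensors named above.
    support = "table" if (gyro_horizontal and gyro_stable and proximity_at_back) else "hand"
    side = "front" if face_detected else "back"
    return support, side

def process_audio(frame, gyro_horizontal, gyro_stable, face_detected, proximity_at_back):
    # Step 1: estimate the positioning; step 2: filter the signal (here: gain adjustment only).
    positioning = estimate_positioning(gyro_horizontal, gyro_stable,
                                       face_detected, proximity_at_back)
    return np.asarray(frame, dtype=float) * 10.0 ** (GAIN_DB[positioning] / 20.0)
```
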
  • the sensor may be used for sensing the positioning of the portable electronic device.
  • An already implemented sensor, e.g. a camera, a gyroscope or a proximity sensor, may be used to provide this information for the processing of the audio signal. That means the method can run on conventional mobile devices without requiring hardware changes; the provided information can be efficiently used for improving the audio rendering.
  • the methods described herein may be implemented as software in a Digital Signal Processor (DSP), in a micro-controller or in any other side-processor or as hardware circuit within an application specific integrated circuit (ASIC).
  • the invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof, e.g. in available hardware of conventional mobile devices or in new hardware dedicated for processing the methods described herein.
  • Fig. 1a shows a schematic diagram 100 of the frequency response differences between two distinct usages of a mobile device 101;
  • Fig. 1b shows a schematic diagram 120 of the frequency response differences between two distinct supports for a mobile device 121;
  • Fig. 2 shows a block diagram of a portable electronic device 200 comprising a loudspeaker 217 and audio rendering means 201 according to an implementation form;
  • Fig. 3 shows a schematic diagram of a mobile device 300 comprising different sensors 301, 303, 305 according to an implementation form;
  • Fig. 4a, b and c show diagrams of frequency responses of the mobile device 300 depicted in Fig. 3 wherein the frequency responses depend on different positions of the mobile device 300;
  • Fig. 5 shows a schematic diagram of a method for audio rendering according to an implementation form.
  • Fig. 2 shows a block diagram of a portable electronic device 200 comprising a loudspeaker 217 and audio rendering means 201 according to an implementation form.
  • the portable electronic device 200, e.g. a Smartphone or a tablet PC or a mobile phone, comprises one loudspeaker 217 but can also have more than one loudspeaker and comprises audio rendering means 201.
  • the audio rendering means 201 comprise a filtering means 203, adaptation means 211, detection means 209 and one or more audio device sensors, e.g. gyroscope, proximity sensor, camera, etc.
  • An audio signal 215 is processed by the audio rendering means 201 that adapts the audio signal 215 for providing a filtered audio signal 219 that drives the loudspeaker 217.
  • the audio signal 215 is filtered by filtering means 203 comprising virtual bass processing means 205 and equalizing means 207.
  • the equalizing means 207 comprise adjustment of loudness, i.e. gain.
  • the filtering means 203 are controlled by adaptation means 211 which control one or more of the parameters gain, equalization and virtual bass adaptation.
  • the filtering means comprise one or more means configured for equalization, gain adjustment and virtual bass processing.
  • the audio rendering means 201 may be implemented on a digital signal processing (DSP) unit, e.g. an embedded DSP unit in software or as a hardware circuit.
  • the detection means 209, the filtering means 203, the adaptation means 211, the virtual bass processing means 205 and the equalizing means 207 may be implemented on the same or on a different digital signal processing (DSP) unit in software or in hardware on the same hardware circuit or as a different hardware circuit.
  • regarding the audio rendering means 201, first of all a frequency analysis is run on the portable electronic device 200 in order to collect information on how the frequency response changes as a function of its positioning. In an implementation form, this is done off-line, e.g. in the development phase of the device 200, to roughly get the influence of the positioning of the device 200 on its audio rendering.
  • the collected information is used to implement the control of the DSP algorithms implemented by the filtering means 203 from the detection algorithm implemented by the detection means 209.
  • the frequency response of a device 200 for different positions, e.g. front and back, different orientations, e.g. the angle between the support and the axis of the device, and various supports, e.g. free air, hand, table, etc., is analyzed and reported.
  • the positioning of the device 200 is estimated and the audio rendering is modified accordingly.
  • a gain compensation, an equalization and a bandwidth extension are adapted to the situation of the device 200 with respect to the listener position, as shown in Fig. 2.
  • the positioning detection is performed by the detection means 209.
  • the audio device 200 is configured for embedding various sensors, e.g. accelerometer, gyroscope, proximity, cameras, etc.
  • a positioning detection algorithm is performed by the detection means 209 to figure out how the device 200 lies with respect to the listener and the environment, e.g. placed on a table or held in hand.
  • three control parameters are derived by the detection means 209, which give information on whether the listener is standing at the front or at the back of the device, what angle there is between the device and a possible support, and whether a support is close to the device and how close. Based on these control parameters, the detection means 209 gives information on the audio signal 215, in particular on how the audio signal 215 has to be modified to be as constant as possible. These parameters are derived with a given granularity, leading to a gradual adaptation of the audio rendering for the audio signal 215.
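
Purely as an illustration, the three control parameters could be derived along the following lines; the sensor encodings (gravity vector, face-detection flag, proximity distance) and the tilt computation are assumptions, not part of the disclosure.

```python
import numpy as np

def derive_control_parameters(gravity_vec, face_detected, proximity_m):
    """Derive the three control parameters: listener side, support angle, support distance.

    gravity_vec   : 3-axis gravity/accelerometer reading in device coordinates (z = screen normal)
    face_detected : camera face-detection result on the screen side
    proximity_m   : distance reported by the proximity sensor, in metres
    """
    # 1) Is the listener at the front or at the back of the device?
    listener_side = "front" if face_detected else "back"

    # 2) Angle between the device and a possible (horizontal) support:
    #    0 degrees means the device lies flat, 90 degrees means it stands upright.
    g = np.asarray(gravity_vec, dtype=float)
    support_angle_deg = np.degrees(np.arccos(abs(g[2]) / (np.linalg.norm(g) + 1e-9)))

    # 3) Is a support close to the device, and how close?
    support_distance_m = float(proximity_m)

    return listener_side, support_angle_deg, support_distance_m
```
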
  • the low bandwidth extension is performed by the adaptation means 211.
  • the low bandwidth extension technology enhances perception of low frequency audio signals by generating dependent signal components outside of this low frequency bandwidth.
  • the purpose of the low bandwidth extension is to get an acceptable level of bass in devices where loudspeakers have not been designed to reach down to low frequencies, e.g. small headphones, small loudspeakers, speakerphones, etc.
  • the tuning of the low bandwidth extension algorithm is gradually adapted whenever the positioning changes. Knowing from the frequency analysis which positionings require a bass boost, the low bandwidth extension is tuned more aggressively in the cases where the control parameters reflect these positionings, and vice versa.
  • the tuning is performed by the filtering means 203 while the adaptation of the tuning is performed by the adaptation means 211.
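
The sketch below illustrates the principle of such a low bandwidth extension with a strength parameter that the adaptation means could drive from the control parameters; the FFT brick-wall low-pass and the memoryless non-linearity are simplifications chosen for brevity, not the patented algorithm.

```python
import numpy as np

def virtual_bass(frame, strength, fs=48000, cutoff_hz=200.0):
    """Crude low bandwidth extension: add harmonics of the low band, scaled by 'strength'.

    'strength' would be tuned more aggressively for positionings whose frequency analysis
    shows a bass deficit, and relaxed otherwise, giving the gradual adaptation described above.
    """
    x = np.asarray(frame, dtype=float)
    n = len(x)
    # Isolate the low band with a simple FFT brick-wall filter (for illustration only).
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    low = np.fft.irfft(np.where(freqs <= cutoff_hz, spectrum, 0.0), n)
    # A memoryless non-linearity generates harmonics of the low band that the ear
    # associates with the (physically missing) fundamental.
    harmonics = np.sign(low) * low ** 2
    harmonics -= np.mean(harmonics)
    return x + strength * harmonics
```
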
  • given the different positionings, which are directly linked to the control parameters, and the desired frequency response target, the adaptation means 211 derives the gain and equalization curves which are used for adjusting the filtering means 203.
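
A hedged sketch of this derivation, using placeholder measured responses from an assumed off-line frequency analysis and the 12 dB gain limit that is mentioned below for Fig. 4b; the numbers are invented for illustration.

```python
import numpy as np

N_BINS = 257                    # frequency resolution of the (assumed) off-line analysis
TARGET_DB = np.zeros(N_BINS)    # desired, nearly constant frequency response target
MAX_EQ_GAIN_DB = 12.0           # maximum allowed equalization gain, cf. Fig. 4b

# Placeholder measured magnitude responses per positioning, in dB (not real data).
MEASURED_DB = {
    ("hand", "front"):  np.zeros(N_BINS),
    ("hand", "back"):   np.full(N_BINS, -4.0),
    ("table", "front"): np.full(N_BINS, 2.0),
    ("table", "back"):  np.full(N_BINS, -6.0),
}

def derive_gain_and_eq(positioning):
    # Equalization curve = target minus measured response, limited to the allowed gain;
    # broadband gain compensation = average level difference.
    diff_db = TARGET_DB - MEASURED_DB[positioning]
    eq_db = np.clip(diff_db, -MAX_EQ_GAIN_DB, MAX_EQ_GAIN_DB)
    gain_db = float(np.mean(diff_db))
    return gain_db, eq_db
```
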
  • Fig. 3 shows a schematic diagram of a mobile device 300 comprising different sensors 301, 303, 305 according to an implementation form.
  • the mobile device 300, e.g. a Smartphone, comprises a gyroscope 301, a camera 305 and a proximity sensor 303, as can be seen from Fig. 3. Further sensors may be implemented on the mobile device 300.
  • the mobile device 300 may be structured according to the block diagram depicted and described with respect to Fig. 2.
  • the smartphone 300 depicted in Fig. 3 embeds a loudspeaker on its back face. The audio rendering depends significantly on the positioning of the phone with respect to the listener, for example looking from front or back, being in the air or on a support etc.
  • the positioning can be detected by using a gyroscope to derive the orientation, a proximity sensor and/or a back/front camera to decide on the proximity to a support, and a camera using a face detection algorithm to decide whether the listener is in front of the screen or not.
  • the detection gives the positioning of the device with respect to the listener and the environment, i.e. if the phone is placed on the table or held in hand, etc.
  • the virtual bass processing and the equalization are adapted such that they match the desired audio rendering, even when the phone 300 is moving.
  • the portable electronic device 300 comprises the following components: a loudspeaker on the back of the terminal, signal processing algorithms for virtual bass, equalization and loudness, sensors comprising camera, proximity sensor and gyroscope.
  • the perception of low frequencies depends on the orientation of the loudspeaker, i.e. if the listener is in front of the loudspeaker or not.
  • a sensor comprises a camera with face detection.
  • adaptation of the virtual bass strength is performed.
  • the frequency response is sensitive to positioning, potential reflections and the distance to the listener.
  • a proximity sensor, a camera and a gyroscope are used. Depending on the positioning, orientation and/or proximity to the listener, the best adapted equalization curve is selected.
  • information from several sensors is combined for a better adaptation of the rendering.
  • in a first example, the gyroscope 301 provides horizontal and stable position data, the camera 305 cannot detect a face, and the proximity sensor 303 detects proximity of an object to the back of the terminal 300. In this case, the detection decides that the portable electronic device 300 is most likely positioned on a table.
  • in a second example, the gyroscope 301 provides vertical position data and information on a small movement, the camera 305 detects a face on the screen side, and the proximity sensor 303 detects proximity of an object to the back of the terminal. In this case, the detection decides that the portable electronic device 300 is most likely positioned in the hand of the user.
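
Combining the two examples above into a single decision rule could look as follows; the parameter encodings and the fallback branch are assumptions made for this sketch only.

```python
def classify_support(orientation, movement, face_detected, proximity_at_back):
    """Decision rule combining the two examples above.

    orientation       : "horizontal" or "vertical", from the gyroscope
    movement          : "none" or "small", from the gyroscope
    face_detected     : camera face detection on the screen side
    proximity_at_back : proximity sensor reports an object close to the back of the terminal
    """
    if (orientation == "horizontal" and movement == "none"
            and not face_detected and proximity_at_back):
        return "table"      # most likely lying on a table
    if (orientation == "vertical" and movement == "small"
            and face_detected and proximity_at_back):
        return "hand"       # most likely held in the user's hand
    return "unknown"        # fall back to a neutral rendering
```
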
  • the portable electronic device 300 detects sensor or environment parameters delivered by one or more of the following sensors: gyroscope 301, camera 305, microphone, proximity sensor 303, gravity sensor, accelerometer, temperature sensor, light sensor, magnetic field sensor, pressure sensor, humidity sensor, position sensor, e.g. GPS, etc.
  • the audio signal is processed by one or more of the following entities: equalization, virtual bass, bass enhancement, loudness adjustment, steering stereo rendering, acoustic echo cancellation, acoustic noise cancellation, de-reverberation etc.
  • the audio rendering of the portable electronic device 300 is adapted as a function of its detected position, orientation and/or environment.
  • Fig. 4a, b and c show diagrams of frequency responses of the mobile device 300 depicted in Fig. 3 wherein the frequency responses depend on different positions of the mobile device 300.
  • in Fig. 4a, frequency responses for six different positionings are depicted, which are: mobile device 300 positioned in a hand looking forwards 402, mobile device 300 positioned in a hand looking backwards 404, mobile device 300 positioned on a table looking forwards 407, mobile device 300 positioned on a table looking backwards 405, mobile device 300 positioned in free air looking forwards 406 and mobile device 300 positioned in free air looking backwards 403.
  • the desired frequency response target 401, which is nearly constant, is also depicted in Fig. 4a.
  • the mobile device 300 derives equalization curves for these positionings, such that the frequency response stays constant as a function of the phone situation.
  • Fig. 4b shows the equalization curves applied for the various situations with a maximum allowed gain of 12 dB.
  • the modified responses of the device 300 as a function of the positionings are shown in Fig. 4c.
  • An audio device 300 embedding a DSP unit for performing the audio rendering, which DSP unit is controlled as a function of the positioning, sounds more constant and consistent in various testing and real-life positionings.
  • Fig. 5 shows a schematic diagram of a method 500 for audio rendering according to an implementation form.
  • the method 500 is used for audio rendering an audio signal of a portable electronic device, e.g. a device 200 described with respect to Fig. 2 or a device 300 described with respect to Fig. 3, the device comprising at least one loudspeaker.
  • the method 500 comprises processing 501 the audio signal before submitting the audio signal to the at least one loudspeaker, wherein the processing 501 is according to a function of a positioning of the portable electronic device 200, 300 relative to a listener using the portable electronic device 200, 300.
  • the method 500 further comprises estimating the positioning of the portable electronic device 200, 300 by using at least one of a gyroscope 301, a camera 305 and a proximity sensor 303 and filtering the audio signal by using at least one of a gain adjustment, equalization and a virtual bass adaptation.
  • the presented method 500 modifies the audio rendering, e.g. mono or stereo rendering, of the portable electronic device.
  • the positioning, orientation and/or proximity of the device is estimated and the audio rendering is modified accordingly, using a digital signal processing unit.
  • the rendering of the device is then made according to the positioning, orientation and/or proximity information and the perceived quality is constant over all possible utilizations and independent of the positioning, orientation and/or proximity of the device.
  • a proximity detection algorithm is implemented to figure out how the device lies with respect to the listener and the environment, e.g. placed on a table or held in a hand.
  • a DSP unit modifies the audio rendering such that it sounds as desired.
  • the implementation is realized adaptively such that audio rendering stays constant towards positioning changes.
  • the adaptation is performed automatically without any user input.
  • the present disclosure also supports a computer program product including computer executable code or computer executable instructions that, when executed, causes at least one computer to execute the performing and computing steps described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • Acoustics & Sound (AREA)
  • Telephone Function (AREA)
EP12788148.0A 2012-10-26 2012-10-26 Tragbare elektronische vorrichtung mit audiowiedergabevorrichtung und audiowiedergabeverfahren Withdrawn EP2878137A1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2012/071306 WO2014063755A1 (en) 2012-10-26 2012-10-26 Portable electronic device with audio rendering means and audio rendering method

Publications (1)

Publication Number Publication Date
EP2878137A1 true EP2878137A1 (de) 2015-06-03

Family

ID=47215508

Family Applications (1)

Application Number Title Priority Date Filing Date
EP12788148.0A Withdrawn EP2878137A1 (de) 2012-10-26 2012-10-26 Tragbare elektronische vorrichtung mit audiowiedergabevorrichtung und audiowiedergabeverfahren

Country Status (2)

Country Link
EP (1) EP2878137A1 (de)
WO (1) WO2014063755A1 (de)

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9300266B2 (en) 2013-02-12 2016-03-29 Qualcomm Incorporated Speaker equalization for mobile devices
EP2830327A1 (de) * 2013-07-22 2015-01-28 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Audioprozessor zur ausrichtungsabhängigen Verarbeitung
US9521497B2 (en) * 2014-08-21 2016-12-13 Google Technology Holdings LLC Systems and methods for equalizing audio for playback on an electronic device
CN105895112A (zh) 2014-10-17 2016-08-24 杜比实验室特许公司 面向用户体验的音频信号处理
US10255927B2 (en) 2015-03-19 2019-04-09 Microsoft Technology Licensing, Llc Use case dependent audio processing
US11620976B2 (en) 2020-06-09 2023-04-04 Meta Platforms Technologies, Llc Systems, devices, and methods of acoustic echo cancellation based on display orientation
US11340861B2 (en) 2020-06-09 2022-05-24 Facebook Technologies, Llc Systems, devices, and methods of manipulating audio data based on microphone orientation
US11586407B2 (en) * 2020-06-09 2023-02-21 Meta Platforms Technologies, Llc Systems, devices, and methods of manipulating audio data based on display orientation

Family Cites Families (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6639987B2 (en) * 2001-12-11 2003-10-28 Motorola, Inc. Communication device with active equalization and method therefor
EP1696702B1 (de) * 2005-02-28 2015-08-26 Sony Ericsson Mobile Communications AB Tragbares Gerät mit verbessertem Stereoton
EP1865745A4 (de) * 2005-04-01 2011-03-30 Panasonic Corp Handapparat, elektronisches gerät und kommunikationsgerät
WO2007004147A2 (en) * 2005-07-04 2007-01-11 Koninklijke Philips Electronics N.V. Stereo dipole reproduction system with tilt compensation.
US20090103744A1 (en) * 2007-10-23 2009-04-23 Gunnar Klinghult Noise cancellation circuit for electronic device
US8144897B2 (en) * 2007-11-02 2012-03-27 Research In Motion Limited Adjusting acoustic speaker output based on an estimated degree of seal of an ear about a speaker port
US9131060B2 (en) * 2010-12-16 2015-09-08 Google Technology Holdings LLC System and method for adapting an attribute magnification for a mobile communication device
US20130266148A1 (en) * 2011-05-13 2013-10-10 Peter Isberg Electronic Devices for Reducing Acoustic Leakage Effects and Related Methods and Computer Program Products

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2014063755A1 *

Also Published As

Publication number Publication date
WO2014063755A1 (en) 2014-05-01

Similar Documents

Publication Publication Date Title
EP2878137A1 (de) Tragbare elektronische vorrichtung mit audiowiedergabevorrichtung und audiowiedergabeverfahren
US9609418B2 (en) Signal processing circuit
CN104424953B (zh) 语音信号处理方法与装置
CN111128210B (zh) 具有声学回声消除的音频信号处理的方法和系统
KR102470962B1 (ko) 사운드 소스들을 향상시키기 위한 방법 및 장치
EP3471442B1 (de) Audiolinse
EP3304548B1 (de) Elektronische vorrichtung und verfahren zur tonverarbeitung davon
US7889872B2 (en) Device and method for integrating sound effect processing and active noise control
JP2016509429A (ja) オーディオ装置及びそのための方法
JP7325445B2 (ja) ギャップ信頼度を用いた背景雑音推定
US8971542B2 (en) Systems and methods for speaker bar sound enhancement
US11395087B2 (en) Level-based audio-object interactions
WO2018234625A1 (en) DETERMINATION OF TARGETED SPACE AUDIOS PARAMETERS AND SPACE AUDIO READING
JP2018516497A (ja) 動的音響環境におけるマルチチャネル音のための音響エコー消去の較正
CN107017000B (zh) 用于编码和解码音频信号的装置、方法和计算机程序
CA2908794A1 (en) Apparatus and method for center signal scaling and stereophonic enhancement based on a signal-to-downmix ratio
EP3934274B1 (de) Verfahren und vorrichtung zur asymmetrischen lautsprecherverarbeitung
WO2015011026A1 (en) Audio processor for object-dependent processing
WO2016042410A1 (en) Techniques for acoustic reverberance control and related systems and methods
CN116367050A (zh) 处理音频信号的方法、存储介质、电子设备和音频设备
CN109076302B (zh) 信号处理装置
EP3201910B1 (de) Kombinierte aktive rauschunterdrückung und rauschkompensierung in kopfhörer
US8929557B2 (en) Sound image control device and sound image control method
US11671752B2 (en) Audio zoom
EP3643083A1 (de) Räumliche audioverarbeitung

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20150227

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20150825