EP2878137A1 - Portable electronic device with audio rendering means and audio rendering method - Google Patents
Info
- Publication number
- EP2878137A1 (application EP12788148.0A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- electronic device
- portable electronic
- sensor
- positioning
- audio
- Prior art date
- Legal status
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R3/00—Circuits for transducers, loudspeakers or microphones
- H04R3/04—Circuits for transducers, loudspeakers or microphones for correcting frequency response
-
- H—ELECTRICITY
- H03—ELECTRONIC CIRCUITRY
- H03G—CONTROL OF AMPLIFICATION
- H03G5/00—Tone control or bandwidth control in amplifiers
- H03G5/16—Automatic control
- H03G5/165—Equalizers; Volume or gain control in limited frequency bands
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M9/00—Arrangements for interconnection not involving centralised switching
- H04M9/08—Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic
- H04M9/082—Two-way loud-speaking telephone systems with means for conditioning the signal, e.g. for suppressing echoes for one or both directions of traffic using echo cancellers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R5/00—Stereophonic arrangements
- H04R5/02—Spatial or constructional arrangements of loudspeakers
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2499/00—Aspects covered by H04R or H04S not otherwise provided for in their subgroups
- H04R2499/10—General applications
- H04R2499/11—Transducers incorporated or for use in hand-held devices, e.g. mobile phones, PDA's, camera's
Abstract
The application relates to a portable electronic device (200), comprising: at least one loudspeaker (217); and audio rendering means (201) configured for adapting an audio signal (215) before submitting the audio signal (215) to the at least one loudspeaker (217), wherein the adapting is according to a function of a positioning of the portable electronic device (200) relative to a listener using the portable electronic device (200).
Description
DESCRIPTION
PORTABLE ELECTRONIC DEVICE WITH AUDIO RENDERING MEANS AND AUDIO RENDERING METHOD
BACKGROUND OF THE INVENTION
The present invention relates to a portable electronic device comprising one or more loudspeakers and audio rendering means and relates to a method for audio rendering an audio signal.
Audio manipulation algorithms are often used to calibrate and tune audio devices for better audio rendering, including equalization as proposed in "Zoelzer, U. (2002), DAFX - Digital Audio Effects, Wiley, New York" and bandwidth extension as described in "Larsen, E. and Aarts, R. M. (2004), Audio Bandwidth Extension: Application of Psychoacoustics, Signal Processing and Loudspeaker Design, Wiley, New York". These algorithms can be implemented in various ways but are fixed.
Furthermore, listener-head position adaptive audio algorithms with head tracking techniques for better binaural experience have been developed according to "Tikander, M., Harma, A. and Karjalainen, M. (2004), Acoustic positioning and head tracking based on binaural signals, in Audio Engineering Society Convention 116".
Most audio devices are tuned for optimal audio rendering under the assumption that their position with respect to the listener is fixed. If a listener remains still while moving the device he is listening to, his audio experience will not stay constant and consistent, because the audio rendering characteristics of the device change.
The most annoying effects are related to the possible gain variations and/or frequency response modifications. Figure 1a shows a schematic diagram 100 of the frequency response differences between two distinct usages of a mobile device 101 having a loudspeaker 107 only at the back. The left side of Fig. 1a shows the front side listening perspective 103 while the right side of Fig. 1a shows the back side listening perspective 105. The frequency response related to the front side listening perspective 103 is illustrated by a first curve 109 and the frequency response related to the back side listening perspective 105 is illustrated by a second curve 111.
The overall environment can also influence the audio rendering of a device. The proximity effect of the device being close to a given surface or support can also modify the gain and/or frequency response. Figure 1b shows a schematic diagram 120 of the frequency response differences between two distinct supports for a mobile device 121. The first support of the mobile device carried in a hand 123 is illustrated by a first curve 129 and the second support of the mobile device put on a table 125 is illustrated by a second curve 131.
SUMMARY OF THE INVENTION
It is the object of the invention to provide a concept for improving audio rendering of a mobile device when the listener of the mobile device moves the mobile device.
This object is achieved by the features of the independent claims. Further implementation forms are apparent from the dependent claims, the description and the figures.
The invention is based on the finding that exploiting position and/or orientation data of a mobile device relative to a listener position improves audio rendering. The positioning of an audio device is defined as its position, including orientation and proximity to a surface or a support, relative to the listener position. By modifying the audio rendering of an audio device as a function of the positioning of the device with respect to an optimization criterion, e.g. making the audio rendering constant and consistent over all possible positionings, the audio rendering can be improved. Equalization and bandwidth extension can be adaptively implemented to follow the device position with respect to the listener. For an audio device embedding various sensors, e.g. an accelerometer, a gyroscope or cameras, a positioning detection algorithm can be implemented to figure out how the device lies with respect to the listener. Then, once the positioning has been estimated, the audio rendering can be modified accordingly, using a digital signal processing (DSP) unit or other kinds of processing circuits.
The implementation can be realized adaptively such that audio rendering stays constant towards positioning changes.
In order to adapt the three main aspects of the audio rendering, which are the overall gain, the frequency response and the bass, i.e. the low frequency rendering, a processing circuit, e.g. a digital signal processing (DSP) unit, can be controlled by a positioning detection algorithm. The DSP unit may be composed of three blocks: an adaptive gain compensation block, a block comprising an adaptive equalization curve and a block comprising an adaptive bandwidth extension bass algorithm, e.g. for devices with small loudspeakers.
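The following minimal Python sketch illustrates such a three-block chain. The function and parameter names, the FIR-based equalizer and the tanh-based harmonic generator are illustrative assumptions, not the implementation disclosed by the patent:

```python
import numpy as np

def render_block(x, gain_db, eq_fir, bass_drive):
    """Process one block of samples with the three adaptive stages.

    gain_db    -- adaptive gain compensation in dB
    eq_fir     -- FIR coefficients realising the adaptive equalization curve
    bass_drive -- 0..1 strength of a placeholder bandwidth-extension stage
    """
    y = x * 10.0 ** (gain_db / 20.0)            # adaptive gain compensation
    y = np.convolve(y, eq_fir, mode="same")     # adaptive equalization
    # Crude stand-in for a psychoacoustic bass/bandwidth-extension algorithm:
    # add harmonics generated by a soft nonlinearity, scaled by bass_drive.
    y = y + bass_drive * 0.1 * np.tanh(3.0 * y)
    return y

# Neutral settings leave the block essentially unchanged.
block = np.random.randn(512).astype(np.float32)
out = render_block(block, gain_db=0.0, eq_fir=np.array([1.0]), bass_drive=0.0)
```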
By exploiting position and/or orientation data of the mobile device relative to a listener's position, audio rendering is significantly improved, as will be presented in the following. In order to describe the invention in detail, the following terms, abbreviations and notations will be used:
audio rendering: a reproduction technique capable of creating spatial sound fields in an extended area by means of loudspeakers or loudspeaker arrays,
DSP: digital signal processing,
EQ: equalization.
According to a first aspect, the invention relates to a portable electronic device, comprising: at least one loudspeaker; and audio rendering means configured for adapting an audio signal before submitting the audio signal to the at least one loudspeaker, wherein the adapting is according to a function of a positioning of the portable electronic device relative to a listener using the portable electronic device.
When the adapting is according to a function of a positioning of the portable electronic device relative to a listener, the audio rendering of the mobile device is significantly improved.
In a first possible implementation form of the portable electronic device according to the first aspect, the portable electronic device comprises at least one sensor configured for sensing the positioning of the portable electronic device. The sensor may be used for sensing the positioning of the portable electronic device. An already implemented sensor, e.g. a camera, a gyroscope or a proximity sensor, may be used to provide this information to the audio rendering means. That means no hardware changes are necessary; the provided information can be efficiently used for improving the audio rendering.
In a second possible implementation form of the portable electronic device according to the first implementation form of the first aspect, the at least one sensor comprises face detection means configured for providing orientation information of the portable electronic device with respect to the listener.
By face detection, the device can detect its orientation towards the listener. Exploiting orientation information by the audio rendering means improves the audio rendering.
In a third possible implementation form of the portable electronic device according to any of the preceding implementation forms of the first aspect, the at least one sensor is configured for providing proximity information of the portable electronic device with respect to a support.
By proximity information, the device can detect its proximity to a support. Exploiting proximity information by the audio rendering means improves the sensitivity of the audio rendering.
In a fourth possible implementation form of the portable electronic device according to any of the preceding implementation forms of the first aspect, the at least one sensor is configured for providing orientation information of the portable electronic device with respect to an environment of the portable electronic device.
By orientation information with respect to an environment, the device can detect its orientation towards a table, a chair or another object in the environment of the listener.
Exploiting such orientation information by the audio rendering means improves the sensitivity of the audio rendering.
In a fifth possible implementation form of the portable electronic device according to any of the preceding implementation forms of the first aspect, the at least one sensor comprises at least one of the following: a gyroscope, a camera, a proximity sensor, a microphone, a gravity sensor, an accelerometer, a temperature sensor, a light sensor, a magnetic field sensor, a pressure sensor, a humidity sensor, a position sensor, in particular a global positioning system.
One sensor is sufficient to apply audio rendering using adaptation according to a function of a positioning with respect to the listener for improving audio rendering in situations where the mobile phone is moved. By using more sensors, however, audio rendering is additionally improved.
In a sixth possible implementation form of the portable electronic device according to any of the preceding implementation forms of the first aspect, the portable electronic device comprises detection means configured for detecting the positioning of the portable electronic device based on information of the at least one sensor.
By using detection means, raw data of the sensor can be interpreted such that different use cases of the mobile device can be detected for which different adaptation is required in order to provide optimum performance.
In a seventh possible implementation form of the portable electronic device according to the first aspect as such or according to any of the preceding implementation forms of the first aspect, the portable electronic device comprises filtering means configured for filtering the audio signal and providing a filtered audio signal to the at least one loudspeaker. The filtering means is used for filtering the audio signal according to the desired characteristic. By the filtering means the audio signal can be kept constant irrespective of a movement of the mobile device.
In an eighth possible implementation form of the portable electronic device according to the seventh implementation form of the first aspect, the filtering means is configured to modify an audio rendering of the audio signal by using digital signal processing. By using digital signal processing, the filtering means can be flexibly implemented on the mobile device. The software for digital signal processing can be changed if required, e.g. filtering can be implemented in the time domain or in the frequency domain.
In a ninth possible implementation form of the portable electronic device according to the seventh implementation form or according to the eighth implementation form of the first aspect, the filtering means comprises at least one or a combination of the following:
equalization means, virtual bass adaptation means, loudness adjustment means, steering stereo rendering means, acoustic echo cancellation means, acoustic noise cancellation means, de-reverberation means.
The audio signal can be filtered depending on the environment where the mobile device is operating. The portable electronic device can thus be adapted to different room characteristics, e.g. conference, office, theater etc. for providing optimal performance.
In a tenth possible implementation form of the portable electronic device according to any of the seventh to the ninth implementation forms of the first aspect, the portable electronic device comprises adaptation means configured for adjusting the filtering means based on the positioning of the portable electronic device. Adaptation means can flexibly control the filtering means depending on the environment. Different adaptation algorithms can be applied for providing fast convergence and good tracking properties of the filtering means.
In an eleventh possible implementation form of the portable electronic device according to the tenth implementation form of the first aspect, the adaptation means are configured for adjusting the filtering means for matching a desired audio rendering irrespective of a movement of the portable electronic device.
By adjusting the filtering means for matching a desired audio rendering irrespective of a movement of the portable electronic device, the device can be moved without degrading speech or audio quality.
In a twelfth possible implementation form of the portable electronic device according to the tenth implementation form or according to the eleventh implementation form of the first aspect, the adaptation means are configured for adjusting the filtering means adaptively such that the audio rendering stays constant towards positioning changes of the portable electronic device.
Positioning changes of the portable electronic device can be tracked such that audio quality stays constant with movements of the device.
According to a second aspect, the invention relates to a method for audio rendering an audio signal of a portable electronic device comprising at least one loudspeaker, the method comprising: processing the audio signal before submitting the audio signal to the at least one loudspeaker, wherein the processing is adapted according to a function of a positioning of the portable electronic device relative to a listener using the portable electronic device.
When the processing is according to a function of a positioning of the portable electronic device relative to a listener, the audio rendering of the mobile device is significantly improved.
In a first possible implementation form of the method according to the second aspect, the method further comprises: estimating the positioning of the portable electronic device by using at least one of a gyroscope, a camera and a proximity sensor; and filtering the audio signal by using at least one of a gain adjustment, an equalization and a virtual bass adaptation.
The sensor may be used for sensing the positioning of the portable electronic device. An already implemented sensor, e.g. a camera, a gyroscope or a proximity sensor, may be used to provide this information for the processing of the audio signal. That means the method can run on conventional mobile devices without requiring hardware changes; the provided information can be efficiently used for improving the audio rendering.
The methods described herein may be implemented as software in a Digital Signal Processor (DSP), in a micro-controller or in any other side-processor or as hardware circuit within an application specific integrated circuit (ASIC).
The invention can be implemented in digital electronic circuitry, or in computer hardware, firmware, software, or in combinations thereof, e.g. in available hardware of conventional mobile devices or in new hardware dedicated for processing the methods described herein.
BRIEF DESCRIPTION OF THE DRAWINGS
Further embodiments of the invention will be described with respect to the following figures, in which:
Fig. 1a shows a schematic diagram 100 of the frequency response differences between two distinct usages of a mobile device 101;
Fig. 1b shows a schematic diagram 120 of the frequency response differences between two distinct supports for a mobile device 121;
Fig. 2 shows a block diagram of a portable electronic device 200 comprising a loudspeaker 217 and audio rendering means 201 according to an implementation form;
Fig. 3 shows a schematic diagram of a mobile device 300 comprising different sensors 301, 303, 305 according to an implementation form;
Fig. 4a, b and c show diagrams of frequency responses of the mobile device 300 depicted in Fig. 3 wherein the frequency responses depend on different positions of the mobile device 300; and
Fig. 5 shows a schematic diagram of a method for audio rendering according to an implementation form.
DETAILED DESCRIPTION OF EMBODIMENTS OF THE INVENTION
Fig. 2 shows a block diagram of a portable electronic device 200 comprising a loudspeaker 217 and audio rendering means 201 according to an implementation form. The portable electronic device 200, e.g. a Smartphone or a tablet PC or a mobile phone, comprises one loudspeaker 217 but can also have more than one loudspeaker and comprises audio rendering means 201. The audio rendering means 201 comprise a filtering means 203, adaptation means 211, detection means 209 and one or more audio device sensors, e.g. gyroscope, proximity sensor, camera, etc. An audio signal 215 is processed by the audio rendering means 201 that adapts the audio signal 215 for providing a filtered audio signal 219 that drives the loudspeaker 217. The audio signal 215 is filtered by filtering means 203 comprising virtual bass processing means 205 and equalizing means 207. In an implementation form, the equalizing means 207 comprise adjustment of loudness, i.e. gain. The filtering means 203 are controlled by adaptation means 211 which control one or more of the parameters gain, equalization and virtual bass adaptation. In an implementation form, the filtering means comprise one or more means configured for equalization, gain adjustment and virtual bass processing. The audio rendering means 201 may be implemented on a digital signal processing (DSP) unit, e.g. an embedded DSP unit in software or as a hardware circuit. Similarly, the detection means 209, the filtering means 203, the adaptation means 211, the virtual bass processing means 205 and the equalizing means 207 may be implemented on the same or on a different digital signal processing (DSP) unit in software or in hardware on the same hardware circuit or as a different hardware circuit.
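A structural sketch of this signal and control path, with assumed class names mirroring the blocks of Fig. 2 and placeholder internals, could look as follows:

```python
from dataclasses import dataclass

@dataclass
class Positioning:
    """Positioning estimate: listener side, angle to a possible support,
    and proximity to that support (field names are assumptions)."""
    listener_in_front: bool
    support_angle_deg: float
    support_proximity: float        # 0.0 = far away, 1.0 = resting on a support

class DetectionMeans:
    def estimate(self, sensors: dict) -> Positioning:
        # Would fuse gyroscope, camera and proximity readings
        # (see the detection sketches further below).
        return Positioning(sensors["face_detected"], sensors["tilt_deg"],
                           sensors["proximity"])

class AdaptationMeans:
    def derive_settings(self, positioning: Positioning) -> dict:
        # Would look up gain / EQ / virtual-bass settings for this positioning.
        return {"gain_db": 0.0, "eq_fir": [1.0], "bass_drive": 0.0}

class FilteringMeans:
    def filter(self, audio_block, settings):
        # Would apply gain, equalization and virtual bass
        # (cf. the render_block sketch above); only the gain is applied here.
        g = 10.0 ** (settings["gain_db"] / 20.0)
        return [s * g for s in audio_block]

# Fig. 2 flow: sensors -> detection -> adaptation -> filtering -> loudspeaker.
sensors = {"face_detected": True, "tilt_deg": 75.0, "proximity": 0.1}
settings = AdaptationMeans().derive_settings(DetectionMeans().estimate(sensors))
filtered = FilteringMeans().filter([0.0, 0.5, -0.5], settings)
```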
In the following, an operation mode of the audio rendering means 201 is described. First of all, a frequency analysis is run on the portable electronic device 200 in order to collect information on how the frequency response changes as a function of its positioning. In an implementation form, this is done off-line, e.g. in the development phase of the device 200, to roughly get the influence of the device 200 positioning on its audio rendering. The collected information is used to implement the control of the DSP algorithms implemented by the filtering means 203 from the detection algorithm implemented by the detection means 209.
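One way to hold the result of such an off-line analysis is a lookup table keyed by positioning. The band grid, the positioning labels and the helper function below are assumptions, and the zero entries are placeholders, not measurement data from the patent:

```python
import numpy as np

# Analysis-band grid used for the off-line characterisation (assumed values).
ANALYSIS_BANDS_HZ = np.array([200, 500, 1000, 2000, 4000, 8000])

# Hypothetical off-line measurement table: per-band magnitude response in dB,
# relative to the target, for each (support, listener side) positioning as in
# Fig. 4a. The entries would be filled from lab measurements during the
# development phase of the device.
POSITIONINGS = [(support, side)
                for support in ("hand", "table", "air")
                for side in ("front", "back")]

measured_response_db = {pos: np.zeros(len(ANALYSIS_BANDS_HZ))
                        for pos in POSITIONINGS}

def store_measurement(positioning, response_db):
    """Record one off-line measurement for later use by the adaptation means."""
    measured_response_db[positioning] = np.asarray(response_db, dtype=float)
```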
In an implementation form, the frequency response of a device 200 for different positions, e.g. front and back, different orientations, e.g. the angle between the support and the axis of the device, and various supports, e.g. free air, hand, table, etc., is analyzed and reported. Based on a detection algorithm implemented by the detection means, the positioning of the device 200 is estimated and the audio rendering is modified accordingly. In an implementation form, a gain compensation, equalization and a bandwidth extension is adapted to the situation of the device 200 with respect to the listener position as shown in Figure 2.
The positioning detection is performed by the detection means 209. The audio device 200 is configured for embedding various sensors, e.g. accelerometer, gyroscope, proximity, cameras, etc. A positioning detection algorithm is performed by the detection means 209 to figure out how the device 200 lies with respect to the listener and the environment, e.g. placed on a table or held in hand.
In an implementation form, three control parameters are derived by the detection means, which indicate whether the listener is standing at the front or at the back of the device, what the angle between the device and a possible support is, and whether a support is close to the device and how close. Based on these control parameters, the detection means 209 gives information on the audio signal 215, in particular how the audio signal 215 has to be modified to be as constant as possible. These parameters are derived with a given granularity, leading to a gradual adaptation of the audio rendering for the audio signal 215. The low bandwidth extension is performed by the adaptation means 211. The low bandwidth extension technology enhances the perception of low frequency audio signals by generating dependent signal components outside of this low frequency bandwidth. The purpose of the low bandwidth extension is to get an acceptable level of bass in devices where loudspeakers have not been designed to reach down to low frequencies, e.g. small headphones, small loudspeakers, speakerphones, etc.
Based on the previous control parameters, the tuning of the low bandwidth extension algorithm is gradually adapted whenever the positioning changes. Knowing from the frequency analysis which positioning requires bass boost, the low bandwidth extension is more aggressively tuned in the cases where the control parameters reflect these positionings and vice versa. The tuning is performed by the filtering means 203 while the adaptation of the tuning is performed by the adaptation means 211.
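A hedged sketch of this control path follows: the first function derives the three control parameters from raw sensor readings, the second maps them to a virtual-bass drive. The thresholds, labels and the sign and size of each adjustment are assumptions; in the device they would follow from the off-line frequency analysis:

```python
def derive_control_parameters(face_detected, tilt_deg, proximity_cm):
    """Return (listener_side, support_angle_deg, support_distance).

    Illustrative thresholds; the patent does not specify concrete values.
    """
    listener_side = "front" if face_detected else "back"
    if proximity_cm < 2.0:
        support_distance = "on_support"
    elif proximity_cm < 20.0:
        support_distance = "near_support"
    else:
        support_distance = "free_air"
    return listener_side, tilt_deg, support_distance

def virtual_bass_drive(listener_side, support_distance):
    """Map control parameters to a virtual-bass drive in 0..1 (assumed mapping)."""
    drive = 0.3                       # moderate default
    if listener_side == "back":
        drive -= 0.2                  # assumption: listener faces the back loudspeaker
    if support_distance == "on_support":
        drive -= 0.1                  # assumption: support reinforces low frequencies
    elif support_distance == "free_air":
        drive += 0.3                  # assumption: weakest bass, tune more aggressively
    return min(max(drive, 0.0), 1.0)

side, angle, distance = derive_control_parameters(face_detected=False,
                                                  tilt_deg=5.0, proximity_cm=1.0)
print(side, angle, distance, virtual_bass_drive(side, distance))
# back 5.0 on_support 0.0
```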
Given the different positionings, which are directly linked to the control parameters, and the desired frequency response target, the adaptation means 211 derives the gain and equalization curves which are used for adjusting the filtering means 203.
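A minimal sketch of this derivation, assuming per-band magnitude responses in dB: the broadband gain absorbs the average deviation from the target and the remaining per-band correction is capped, here at the 12 dB maximum used for the curves of Fig. 4b. The function name and the example input are illustrative:

```python
import numpy as np

def derive_gain_and_eq(measured_db, target_db, max_eq_db=12.0):
    """Return (gain_db, eq_db) so that measured + gain + eq approximates target.

    measured_db, target_db -- per-band magnitude responses in dB
    max_eq_db              -- cap on the per-band EQ correction (cf. Fig. 4b)
    """
    diff_db = np.asarray(target_db, float) - np.asarray(measured_db, float)
    gain_db = float(np.mean(diff_db))                  # broadband gain compensation
    eq_db = np.clip(diff_db - gain_db, -max_eq_db, max_eq_db)
    return gain_db, eq_db

# Dummy response for one positioning against a flat target (cf. Fig. 4a).
gain, eq = derive_gain_and_eq([-8, -5, -2, -1, -4, -6], [0, 0, 0, 0, 0, 0])
```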
Fig. 3 shows a schematic diagram of a mobile device 300 comprising different sensors 301, 303, 305 according to an implementation form. The mobile device 300, e.g. a Smartphone, comprises a gyroscope 301, a camera 305 and a proximity sensor 303 as can be seen from Fig. 3. Further sensors may be implemented on the mobile device 300. The mobile device 300 may be structured according to the block diagram depicted and described with respect to Fig. 2. The smartphone 300 depicted in Fig. 3 embeds a loudspeaker on its back face. The audio rendering depends significantly on the positioning of the phone with respect to the listener, for example looking from front or back, being in the air or on a support etc.
From the usual sensors present in a smartphone the positioning can be detected by using a gyroscope to derive the orientation, a proximity sensor and/or a back/front camera to decide on the proximity with a support and a camera using a face detection algorithm to decide whether the listener is in front of the screen or not. The detection gives the positioning of the device with respect to the listener and the environment, i.e. if the phone is placed on the table or held in hand, etc.
Based on the positioning, orientation and/or proximity detection algorithm results, the virtual bass processing and the equalization are adapted such that they match the desired audio rendering, even when the phone 300 is moving. In an implementation form, the portable electronic device 300 comprises the following components: a loudspeaker on the back of the terminal; signal processing algorithms for virtual bass, equalization and loudness; and sensors comprising a camera, a proximity sensor and a gyroscope.
The perception of low frequencies depends on the orientation of the loudspeaker, i.e. if the listener is in front of the loudspeaker or not. A sensor comprises a camera with face detection. Depending on the position of the listener with respect to the loudspeaker, adaptation of the virtual bass strength is performed.
The frequency response is sensitive to positioning, potential reflections and the distance to the listener. A proximity sensor, a camera and a gyroscope are used. Depending on the positioning, orientation and/or proximity to the listener, the best adapted equalization curve is selected.
In an implementation form, information from several sensors is combined for a better adaptation of the rendering. In a first scenario, the gyroscope 301 provides horizontal and stable position data, the camera 305 cannot detect a face and the proximity sensor 303 detects proximity of an object to the back of the terminal 300. In this first scenario, the detection decides that the portable electronic device 300 is most likely positioned on a table.
In a second scenario, the gyroscope 301 provides vertical position data and information on a small movement, the camera 305 detects a face on the screen side and the proximity sensor 303 detects proximity of an object to the back of the terminal. In this second scenario, the detection decides that the portable electronic device 300 is most likely positioned in the hand of the user.
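The two scenarios amount to a simple rule-based fusion of the three sensors; the sketch below merely restates them in code, with assumed argument encodings and an "unknown" fallback that the text does not mention:

```python
def classify_positioning(gyro_orientation, gyro_motion, face_on_screen_side, back_proximity):
    """Combine sensor information into a positioning decision.

    gyro_orientation    -- 'horizontal' or 'vertical'
    gyro_motion         -- 'stable' or 'small_movement'
    face_on_screen_side -- True if the camera detects a face on the screen side
    back_proximity      -- True if an object is close to the back of the terminal
    """
    if (gyro_orientation == "horizontal" and gyro_motion == "stable"
            and not face_on_screen_side and back_proximity):
        return "on_table"        # first scenario
    if (gyro_orientation == "vertical" and gyro_motion == "small_movement"
            and face_on_screen_side and back_proximity):
        return "in_hand"         # second scenario
    return "unknown"             # assumption: fall back to a neutral tuning

print(classify_positioning("horizontal", "stable", False, True))          # on_table
print(classify_positioning("vertical", "small_movement", True, True))     # in_hand
```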
In an implementation form, the portable electronic device 300 detects sensor or environment parameters delivered by one or more of the following sensors: gyroscope 301, camera 305, microphone, proximity sensor 303, gravity sensor, accelerometer, temperature sensor, light sensor, magnetic field sensor, pressure sensor, humidity sensor, position sensor, e.g. GPS etc. In an implementation form, the audio signal is processed by one or more of the following entities: equalization, virtual bass, bass enhancement, loudness adjustment, steering stereo rendering, acoustic echo cancellation, acoustic noise cancellation, de-reverberation etc.
The audio rendering of the portable electronic device 300 is adapted as a function of its detected position, orientation and/or environment.
Fig. 4a, b and c show diagrams of frequency responses of the mobile device 300 depicted in Fig. 3 wherein the frequency responses depend on different positions of the mobile device 300.
In Fig. 4a, frequency responses for six different positionings are depicted, which are: mobile device 300 positioned in a hand looking forwards 402, mobile device 300 positioned in a hand looking backwards 404, mobile device 300 positioned on a table looking forwards 407, mobile device 300 positioned on a table looking backwards 405, mobile device 300 positioned in free air looking forwards 406 and mobile device 300 positioned in free air looking backwards 403. The desired frequency response target 401, which is nearly constant, is also depicted in Figure 4a. The mobile device 300 derives equalization curves for these positionings, such that the frequency response stays constant as a function of the phone situation. Fig. 4b shows the equalization curves applied for the various situations with a maximum allowed gain of 12 dB. The modified responses of the device 300 as a function of the positionings are shown in Fig. 4c. An addition of a respective curve depicted in Fig. 4b to a respective curve depicted in Fig. 4c results in the nearly constant frequency response of the desired frequency response target 401 depicted in Fig. 4a.
An audio device 300 embedding a DSP unit that performs the audio rendering and is controlled as a function of the positioning sounds more constant and consistent in various testing and real-life positionings.
Fig. 5 shows a schematic diagram of a method 500 for audio rendering according to an implementation form. The method 500 is used for audio rendering an audio signal of a portable electronic device, e.g. a device 200 described with respect to Fig. 2 or a device 300 described with respect to Fig. 3, the device comprising at least one loudspeaker. The method 500 comprises processing 501 the audio signal before submitting the audio signal to the at least one loudspeaker, wherein the processing 501 is according to a function of a positioning of the portable electronic device 200, 300 relative to a listener using the portable electronic device 200, 300.
In an implementation form, the method 500 further comprises estimating the positioning of the portable electronic device 200, 300 by using at least one of a gyroscope 301, a camera 305 and a proximity sensor 303, and filtering the audio signal by using at least one of a gain adjustment, equalization and a virtual bass adaptation.
The presented method 500 modifies the audio rendering, e.g. mono, stereo or multichannel, of a mobile audio device. Based on a detection algorithm, the positioning, orientation and/or proximity of the device is estimated and the audio rendering is modified accordingly, using a digital signal processing unit. The rendering of the device is then made according to the positioning, orientation and/or proximity information and the perceived quality is constant over all possible utilizations and independent of the positioning, orientation and/or proximity of the device.
For a mobile audio device, e.g. a device 200 as depicted in Fig. 2 or a device 300 as depicted in Fig. 3, embedding various sensors, e.g. an accelerometer, a gyroscope, a proximity sensor, one or more cameras, a positioning sensor and an orientation sensor, a proximity detection algorithm is implemented to figure out how the device lies with respect to the listener and the environment, e.g. placed on a table or held in a hand. Once the positioning has been estimated, a DSP unit modifies the audio rendering such that it sounds as desired. The implementation is realized adaptively such that audio rendering stays constant towards positioning changes.
In an implementation form of the method 500, the adaptation is performed automatically without any user input.
From the foregoing, it will be apparent to those skilled in the art that a variety of methods, systems, computer programs on recording media, and the like, are provided.
The present disclosure also supports a computer program product including computer executable code or computer executable instructions that, when executed, causes at least one computer to execute the performing and computing steps described herein.
The present disclosure also supports a system configured to execute the performing and computing steps described herein.
Many alternatives, modifications, and variations will be apparent to those skilled in the art in light of the above teachings. Of course, those skilled in the art readily recognize that there are numerous applications of the invention beyond those described herein. While the present invention has been described with reference to one or more particular embodiments, those skilled in the art recognize that many changes may be made thereto without departing from the spirit and scope of the present invention. It is therefore to be understood that within the scope of the appended claims and their equivalents, the invention may be practiced otherwise than as specifically described herein.
Claims
1. Portable electronic device (200), comprising: at least one loudspeaker (217); and audio rendering means (201) configured for adapting an audio signal (215) before submitting the audio signal (215) to the at least one loudspeaker (217), wherein the adapting is according to a function of a positioning of the portable electronic device (200) relative to a listener using the portable electronic device (200).
2. The portable electronic device (200) of claim 1, comprising at least one sensor (213) configured for sensing the positioning of the portable electronic device (200).
3. The portable electronic device (200) of claim 2, wherein the at least one sensor (213) comprises face detection means configured for providing orientation information of the portable electronic device (200) with respect to the listener.
4. The portable electronic device (200) of claim 2 or claim 3, wherein the at least one sensor (213) is configured for providing proximity information of the portable electronic device (200) with respect to a support.
5. The portable electronic device (200) of one of claims 2 to 4, wherein the at least one sensor (213) is configured for providing orientation information of the portable electronic device (200) with respect to an environment of the portable electronic device (200).
6. The portable electronic device (200) of one of claims 2 to 5, wherein the at least one sensor (213) comprises at least one of the following: a gyroscope (301), a camera (305), a proximity sensor (303), a microphone, a gravity sensor, an accelerometer, a temperature sensor, a light sensor, a magnetic field sensor, a pressure sensor, a humidity sensor, a position sensor, in particular a global positioning system.
7. The portable electronic device (200) of one of claims 2 to 6, comprising detection means (209) configured for detecting the positioning of the portable electronic device (200) based on information of the at least one sensor (213).
8. The portable electronic device (200) of one of the preceding claims, wherein the audio rendering means comprises filtering means (203) configured for filtering the audio signal (215) and providing a filtered audio signal (219) to the at least one loudspeaker (217).
9. The portable electronic device (200) of claim 8, wherein the filtering means (203) is configured to modify an audio rendering of the audio signal (215) by using digital signal processing.
10. The portable electronic device (200) of claim 8 or claim 9, wherein the filtering means (203) comprises at least one or a combination of the following: equalizer (207), virtual bass adapter (205), loudness adjuster (207), steering stereo renderer, acoustic echo canceller, acoustic noise canceller, de-reverberation means.
11. The portable electronic device (200) of one of claims 8 to 10, wherein the audio rendering means comprises adaptation means (211) configured for adjusting the filtering of the audio signal performed by the filtering means (203) based on the positioning of the portable electronic device (200).
12. The portable electronic device (200) of claim 11, wherein the adaptation means (211) is configured for adjusting the filtering means (203) for matching a desired audio rendering irrespective of a movement of the portable electronic device (200).
13. The portable electronic device (200) of claim 11 or claim 12, wherein the adaptation means (211) is configured for adjusting the filtering means (203) adaptively such that the audio rendering stays constant towards positioning changes of the portable electronic device (200).
14. Method (500) for audio rendering an audio signal of a portable electronic device (300) comprising at least one loudspeaker, the method (500) comprising: processing (501 ) the audio signal before submitting the audio signal to the at least one loudspeaker, wherein the processing (501 ) is adapted according to a function of a positioning of the portable electronic device (300) relative to a listener using the portable electronic device (300).
15. The method (500) of claim 14, further comprising: estimating the positioning of the portable electronic device (300) by using at least one of a gyroscope (301), a camera (305) and a proximity sensor (303); and
filtering the audio signal by using at least one of a gain adjustment, an equalization and a virtual bass adaptation.
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/EP2012/071306 WO2014063755A1 (en) | 2012-10-26 | 2012-10-26 | Portable electronic device with audio rendering means and audio rendering method |
Publications (1)
Publication Number | Publication Date |
---|---|
EP2878137A1 (en) | 2015-06-03 |
Family
ID=47215508
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12788148.0A Withdrawn EP2878137A1 (en) | 2012-10-26 | 2012-10-26 | Portable electronic device with audio rendering means and audio rendering method |
Country Status (2)
Country | Link |
---|---|
EP (1) | EP2878137A1 (en) |
WO (1) | WO2014063755A1 (en) |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9300266B2 (en) | 2013-02-12 | 2016-03-29 | Qualcomm Incorporated | Speaker equalization for mobile devices |
EP2830327A1 (en) * | 2013-07-22 | 2015-01-28 | Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. | Audio processor for orientation-dependent processing |
US9521497B2 (en) * | 2014-08-21 | 2016-12-13 | Google Technology Holdings LLC | Systems and methods for equalizing audio for playback on an electronic device |
CN105895112A (en) | 2014-10-17 | 2016-08-24 | 杜比实验室特许公司 | Audio signal processing oriented to user experience |
US10255927B2 (en) | 2015-03-19 | 2019-04-09 | Microsoft Technology Licensing, Llc | Use case dependent audio processing |
US11620976B2 (en) | 2020-06-09 | 2023-04-04 | Meta Platforms Technologies, Llc | Systems, devices, and methods of acoustic echo cancellation based on display orientation |
US11340861B2 (en) | 2020-06-09 | 2022-05-24 | Facebook Technologies, Llc | Systems, devices, and methods of manipulating audio data based on microphone orientation |
US11586407B2 (en) * | 2020-06-09 | 2023-02-21 | Meta Platforms Technologies, Llc | Systems, devices, and methods of manipulating audio data based on display orientation |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6639987B2 (en) * | 2001-12-11 | 2003-10-28 | Motorola, Inc. | Communication device with active equalization and method therefor |
EP1696702B1 (en) * | 2005-02-28 | 2015-08-26 | Sony Ericsson Mobile Communications AB | Portable device with enhanced stereo image |
EP1865745A4 (en) * | 2005-04-01 | 2011-03-30 | Panasonic Corp | Handset, electronic device, and communication device |
WO2007004147A2 (en) * | 2005-07-04 | 2007-01-11 | Koninklijke Philips Electronics N.V. | Stereo dipole reproduction system with tilt compensation. |
US20090103744A1 (en) * | 2007-10-23 | 2009-04-23 | Gunnar Klinghult | Noise cancellation circuit for electronic device |
US8144897B2 (en) * | 2007-11-02 | 2012-03-27 | Research In Motion Limited | Adjusting acoustic speaker output based on an estimated degree of seal of an ear about a speaker port |
US9131060B2 (en) * | 2010-12-16 | 2015-09-08 | Google Technology Holdings LLC | System and method for adapting an attribute magnification for a mobile communication device |
US20130266148A1 (en) * | 2011-05-13 | 2013-10-10 | Peter Isberg | Electronic Devices for Reducing Acoustic Leakage Effects and Related Methods and Computer Program Products |
2012
- 2012-10-26: EP application EP12788148.0A (EP2878137A1), not active, withdrawn
- 2012-10-26: WO application PCT/EP2012/071306 (WO2014063755A1), active, application filing
Non-Patent Citations (1)
Title |
---|
See references of WO2014063755A1 * |
Also Published As
Publication number | Publication date |
---|---|
WO2014063755A1 (en) | 2014-05-01 |
Similar Documents
Publication | Title |
---|---|
EP2878137A1 (en) | Portable electronic device with audio rendering means and audio rendering method |
US9609418B2 (en) | Signal processing circuit |
CN104424953B (en) | Audio signal processing method and device |
CN111128210B (en) | Method and system for audio signal processing with acoustic echo cancellation |
KR102470962B1 (en) | Method and apparatus for enhancing sound sources |
EP3471442B1 (en) | An audio lens |
EP3304548B1 (en) | Electronic device and method of audio processing thereof |
US7889872B2 (en) | Device and method for integrating sound effect processing and active noise control |
JP2016509429A (en) | Audio apparatus and method therefor |
JP7325445B2 (en) | Background Noise Estimation Using Gap Confidence |
US8971542B2 (en) | Systems and methods for speaker bar sound enhancement |
US11395087B2 (en) | Level-based audio-object interactions |
WO2018234628A1 (en) | Audio distance estimation for spatial audio processing |
WO2018234625A1 (en) | Determination of targeted spatial audio parameters and associated spatial audio playback |
JP2018516497A (en) | Calibration of acoustic echo cancellation for multichannel sounds in dynamic acoustic environments |
CN107017000B (en) | Apparatus, method and computer program for encoding and decoding an audio signal |
CA2908794A1 (en) | Apparatus and method for center signal scaling and stereophonic enhancement based on a signal-to-downmix ratio |
EP3934274B1 (en) | Methods and apparatus for asymmetric speaker processing |
WO2015011026A1 (en) | Audio processor for object-dependent processing |
WO2016042410A1 (en) | Techniques for acoustic reverberance control and related systems and methods |
CN116367050A (en) | Method for processing audio signal, storage medium, electronic device, and audio device |
CN109076302B (en) | Signal processing device |
US8929557B2 (en) | Sound image control device and sound image control method |
US11671752B2 (en) | Audio zoom |
EP3643083A1 (en) | Spatial audio processing |
Legal Events
Code | Title | Description |
---|---|---|
PUAI | Public reference made under Article 153(3) EPC to a published international application that has entered the European phase | Free format text: ORIGINAL CODE: 0009012 |
17P | Request for examination filed | Effective date: 20150227 |
AK | Designated contracting states | Kind code of ref document: A1; Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
AX | Request for extension of the European patent | Extension state: BA ME |
STAA | Information on the status of an EP patent application or granted EP patent | Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN |
18W | Application withdrawn | Effective date: 20150825 |