WO2022151336A1 - Techniques for Around-the-Ear transducers - Google Patents
Techniques for Around-the-Ear transducers
- Publication number
- WO2022151336A1 (PCT application PCT/CN2021/072095)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- navigation
- audio
- orientation
- user
- head
- Prior art date
Classifications
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C21/00—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
- G01C21/26—Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00 specially adapted for navigation in a road network
- G01C21/34—Route searching; Route guidance
- G01C21/36—Input/output arrangements for on-board computers
- G01C21/3626—Details of the output of route guidance instructions
- G01C21/3629—Guidance using speech or audio output, e.g. text-to-speech
Definitions
- the present disclosure relates to techniques for Around-the-Ear transducers, and specifically relates to systems and methods for providing audio output for a user using the techniques for Around-the-Ear transducers.
- Headphone products, the most popular types of which are in-ear, on-ear, and over-the-ear, are widely used. They can deliver decent sound quality with small speakers and can be worn conveniently for listening to music or for communication.
- the typical in-ear type is inserted into the ear canal, which becomes almost completely obstructed. People then tend to listen at high loudness, which can cause hearing impairment.
- most headphones are designed to isolate environmental sound, which raises safety issues. In addition, comfort is always a challenge in headphone design.
- a well-designed ear cup with sound-absorbing foam can isolate most of the high-frequency noise.
- when the headphone is equipped with an active noise cancellation (ANC) system, it can significantly reduce the low-frequency noise as well, which creates a quiet environment for the user but causes a safety issue at the same time.
- a method for providing audio outputs for a user may comprise obtaining a head orientation relative to a spatial reference coordinate system by tracking a head of the user; extracting a navigation orientation relative to the spatial reference coordinate system from a navigation source; computing a desired output orientation for a navigation audio from the navigation source based on the head orientation and the navigation orientation; rendering the navigation audio from the navigation source with a head related transfer function (HRTF), based on the computed desired output orientation; and providing the rendered navigation audio to the user.
- a system for providing audio outputs for a user may comprise a pair of bone conducting headphone worn by the user, wherein at least one sensor is positioned on the pair of bone conducting headphone for tracking a head of the user.
- the system further comprises a processing device.
- the processing device is configured to obtain a head orientation relative to a spatial reference coordinate system based on sensor data from the at least one sensor; extract a navigation orientation relative to the spatial reference coordinate system from a navigation source; compute a desired output orientation for a navigation audio from the navigation source based on the head orientation and the navigation orientation; render the navigation audio from the navigation source with a head related transfer function (HRTF), based on the computed desired output orientation; and provide the rendered navigation audio to the user via the pair of bone conducting headphone.
- a system for providing audio signal for a user may comprise a pair of bone conducting headphone worn by the user, and a processing device.
- the processing device is configured to receive a microphone input; receive an audio playback input; mix the microphone input and the audio playback input with low latency and generate a mixed audio; and provide the mixed audio to the user via the pair of bone conducting headphone.
- FIG. 1 illustrates a schematic block diagram of the operations for providing navigation audio outputs using bone conducting headphone in accordance with one or more embodiments of the present disclosure.
- FIG. 2 illustrates a schematic block diagram of head tracking in accordance with one or more embodiments of the present disclosure.
- FIG. 3 illustrates a schematic block diagram of wireless transmission mechanism in accordance with one or more embodiments of the present disclosure.
- FIG. 4 illustrates a schematic block diagram of sound rendering using head related transfer function in accordance with one or more embodiments of the present disclosure.
- FIG. 5 illustrates a schematic method for providing navigation audio outputs for a user in accordance with one or more embodiments of the present disclosure.
- FIG. 6 illustrates a schematic diagram of operations for providing audio outputs for the user in accordance with another one or more embodiments of the present disclosure.
- FIG. 7 illustrates a schematic diagram for illustrating signal flow of audio processing in accordance with another one or more embodiments of the present disclosure.
- FIG. 8 illustrates a schematic diagram of low latency wireless audio transmission in accordance with another one or more embodiments of the present disclosure.
- FIG. 9 illustrates a schematic diagram of forward and feedback audio control with a USB interface in accordance with another one or more embodiments of the present disclosure.
- the disclosure provides a method and system combining spatial audio technology with a bone conducting form factor to give the user accurate audio cues without jeopardizing traveling safety, since high-fidelity ambient awareness is preserved.
- FIG. 1 illustrates a schematic block diagram of the operations for providing navigation audio outputs using bone conducting headphone in accordance with one or more embodiments of the present disclosure.
- the system may comprise a navigation source 101, a pair of bone conducting headphone 102 and an audio processing device 103.
- the navigation source 101 may be any form of devices that can provide navigation information, including without limitation, a mobile device, a smart device, a laptop computer, a tablet computer, an in-vehicle navigation system and so on.
- the bone conducting headphone 102 may be equipped with at least one sensor, or a sensor array including a plurality of sensors, for sensing a head movement of a user.
- the at least one sensor or sensor array could include without limitation, a magnetometer, a Hall effect sensor, a magneto-diode, a magneto-transistor, a magneto-optical sensor, a microelectromechanical (MEMS) compass, one or more accelerometers (e.g., six-axis accelerometers) and one or more gyroscopes and so forth.
- the audio processing device 103 may be any form of device that can perform audio processing, including without limitation, a mobile device, a smart device, a laptop computer, a tablet computer, a dongle and so forth.
- the audio processing device 103 may be a device separate from the navigation source 101 and the bone conducting headphone 102. Alternatively, the audio processing device 103 may be integrated with the navigation source or the bone conducting headphone.
- the audio processing device 103 may be implemented by a processor.
- the processor may be any technically feasible hardware unit configured to process data and execute software applications, including, for example, and without limitation, a central processing unit (CPU) , a microcontroller unit (MCU) , an application specific integrated circuit (ASIC) , a digital signal processor (DSP) chip and so forth.
- the system may further comprise a navigation orientation extraction module 104 and a head tracking module 105.
- the navigation orientation extraction module 104 is configured to obtain navigation orientation from the navigation source 101.
- the navigation orientation extraction module 104 may be further configured to obtain navigation orientations from navigation audio signals or navigation image/data signals from the navigation source 101.
- the navigation orientation extraction module 104 may be implemented by a processing device (e.g., processor) .
- the processor may be any technically feasible hardware unit configured to process data and execute software applications, including, for example, and without limitation, a central processing unit (CPU) , a microcontroller unit (MCU) , an application specific integrated circuit (ASIC) , and so forth.
- the head tracking module 105 is configured to perform a head tracking algorithm to obtain the head orientation output based on the sensor data from the at least one sensor or sensor array mentioned above.
- the head tracking module 105 may be implemented in a processing device (e.g., processor) .
- the processor may be any technically feasible hardware unit configured to process data and execute software applications, including, for example, and without limitation, a central processing unit (CPU) , a microcontroller unit (MCU) , an application specific integrated circuit (ASIC) , and so forth.
- the blocks 106A, 106B and 107 represent the wireless audio transmission mechanism. Both wireless audio transmissions 106A and 106B use Bluetooth Classic technology for transmitting the audio stream. The wireless transmission 107 uses Bluetooth Low Energy technology for transmitting the data stream associated with the orientation outputs.
- the navigation source 101 outputs a source signal, e.g., navigation audio, to the audio processing device 103 via the wireless audio transmission 106A, which uses Bluetooth Classic technology.
- the head orientation output from the head tracking module 105 and the navigation orientation output from the navigation orientation extraction module 104 are transmitted to the audio processing device 103 via the wireless transmission 107 using Bluetooth Low Energy technology.
- the audio processing device 103 is configured to compute a desired output orientation of navigation audio from the navigation source, based on the head orientation and the navigation orientation.
- the audio processing device 103 is further configured to render the navigation audio from the navigation source using head related transfer function (HRTF) , based on the computed desired orientation output.
- the rendered navigation audio may be output to the pair of bone conducting headphone 102 via the wireless audio transmission 106B, which uses Bluetooth Classic technology.
- the user can experience the sensation of a consistent source direction; for example, the navigation audio heard by the user seems to come from the navigation direction which the user will follow.
- the audio processing device 103 may be further configured to perform equalization and a virtual surround algorithm.
- FIG. 2 illustrates a schematic block diagram of head tracking in accordance with one or more embodiments of the present disclosure.
- the operation of head tracking shown in FIG. 2 can be performed by head tracking module 105 as described referring to FIG. 1.
- the head tracking may comprise receiving sensor data associated with head orientation from at least one sensor or sensor array that may be positioned on/in one side or both sides of the bone conducting headphone.
- the at least one sensor or sensor array may include without limitation, a magnetometer, a Hall effect sensor, a magneto-diode, a magneto-transistor, a magneto-optical sensor, a MEMS compass, one or more accelerometers (e.g., six-axis accelerometers) and one or more gyroscopes and so forth.
- the head tracking further comprises performing sensor fusion based on the received sensor data.
- Sensor fusion is a term that covers a number of methods and algorithms, including, for example, the Central Limit Theorem, Kalman filters, Bayesian networks, Dempster-Shafer theory and so on. Sensor fusion may combine sensor data from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually.
- the head orientation output is obtained, which may include orientation information such as, without limitation, a yaw angle, a pitch angle and a roll angle.
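As an illustrative sketch of such sensor fusion (not a method prescribed by the disclosure), a complementary filter blends gyroscope rates with accelerometer gravity readings to estimate pitch and roll; the gain `alpha`, the axis conventions and the function name are assumptions:

```python
import math

def complementary_filter(pitch, roll, gyro, accel, dt, alpha=0.98):
    """One update step fusing gyroscope rates (rad/s) with
    accelerometer readings (m/s^2). Yaw would additionally need a
    magnetometer or MEMS compass, as the disclosure lists."""
    # Short-term estimate: integrate the gyroscope rates.
    pitch_gyro = pitch + gyro[1] * dt
    roll_gyro = roll + gyro[0] * dt
    # Long-term estimate: derive pitch/roll from the gravity vector.
    ax, ay, az = accel
    pitch_acc = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    roll_acc = math.atan2(ay, az)
    # Blend: trust the gyro short-term, the accelerometer long-term.
    pitch = alpha * pitch_gyro + (1 - alpha) * pitch_acc
    roll = alpha * roll_gyro + (1 - alpha) * roll_acc
    return pitch, roll
```

Calling this at each sensor sample (e.g., every 10 ms) keeps a drift-corrected head orientation estimate.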
- FIG. 3 illustrates a schematic block diagram of the wireless transmission mechanism in accordance with one or more embodiments of the present disclosure.
- the disclosure uses two kinds of wireless audio transmission mechanisms: Bluetooth Classic technology for transmitting the source input associated with the audio stream from the navigation source, and Bluetooth Low Energy technology for transmitting the orientation data associated with the navigation orientation output and the head orientation output.
- FIG. 3 only shows how the two different wireless transmission technologies are used. From the context, those skilled in the art can understand that the Bluetooth Low Energy technology shown in FIG. 3 can also be applied to the transmission of navigation orientation data.
- FIG. 4 illustrates a schematic block diagram showing the process of sound rendering using a head related transfer function (HRTF) in accordance with one or more embodiments of the present disclosure.
- an HRTF is an advanced way of rendering 3-D audio so that sounds appear to come from a specific point in 3-D space, synthesizing binaural audio.
- HRTFs are often used as filters describing the sound transmission from a sound source to the listener's eardrums. They can improve sound localization cues, such as the interaural time difference (ITD), the interaural level difference (ILD) and the spectral cues derived from the shape of one's ears, head, and torso.
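The ITD cue mentioned above can be approximated for a rigid spherical head with Woodworth's classical formula; the head radius, speed of sound and sign convention below are illustrative assumptions, not values from the disclosure:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius
SPEED_OF_SOUND = 343.0   # m/s in air at roughly 20 degrees C

def itd_woodworth(azimuth_deg):
    """Interaural time difference (seconds) for a spherical-head
    model (Woodworth's formula). Azimuth 0 degrees is straight
    ahead; positive azimuths are toward the right ear."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (math.sin(theta) + theta)
```

For a source at 90 degrees this yields roughly 0.65 ms, on the order of the maximum ITD typically reported for human listeners.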
- the present disclosure illustrates sound rendering process by rendering the incoming audio input with the corresponding head related transfer function (HRTF) .
- the HRTFs may be precomputed for different sound orientations and stored in memory as an HRTF database.
- the rendered audio stream may be output to create the sensation of consistent source direction.
- the navigation orientation output and the head orientation output are obtained, for example, respectively from the navigation orientation extraction module 104 and the head tracking module 105 via wireless transmission using Bluetooth Low Energy technology, as shown in FIG. 1. Then, based on the head orientation and the navigation orientation, a desired output orientation of the navigation source audio may be computed. For example, the orientation difference between the head orientation and the navigation orientation is computed to obtain the desired output orientation. According to the orientation difference, the corresponding HRTF may be smoothly adjusted.
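A minimal sketch of this orientation-difference computation, assuming yaw angles in degrees (the function names and the wrap convention are illustrative):

```python
def wrap_degrees(angle):
    """Wrap an angle to the range (-180, 180] degrees."""
    angle = angle % 360.0
    return angle - 360.0 if angle > 180.0 else angle

def desired_output_orientation(navigation_yaw, head_yaw):
    """Direction the navigation cue should appear to come from,
    expressed relative to the listener's current head orientation."""
    return wrap_degrees(navigation_yaw - head_yaw)
```

For example, a navigation direction of 10 degrees with the head turned to 350 degrees yields a cue 20 degrees to the right, not 340 degrees to the left.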
- the multi-channel or stereo audio input from the navigation source is processed, such as with short-time Fourier transform and windowing process.
- HRTF filtering in frequency domain per channel is performed, wherein the HRTF filtering is performed with the adjusted HRTF according to the orientation difference.
- the rendered audio input may be further processed with an optimized downmix processing, then an inverse short-time Fourier transform and overlap-add processing.
- the stereo output is played from the bone conducting headphone to the user, which makes the user feel that the navigation audio comes from the direction in which he/she should go.
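The STFT, per-channel frequency-domain HRTF filtering, downmix and inverse-STFT/overlap-add chain described above can be sketched as follows; this is an illustrative outline using SciPy, with the HRTF frequency responses passed in as placeholders for a measured HRTF database:

```python
import numpy as np
from scipy.signal import stft, istft

def render_binaural(channels, hrtfs_freq, fs=48000, nperseg=512):
    """channels: list of mono input signals. hrtfs_freq: per channel a
    (left, right) pair of complex frequency responses sampled at the
    STFT bins (nperseg // 2 + 1 values each). Returns a stereo pair."""
    left = right = None
    for sig, (h_l, h_r) in zip(channels, hrtfs_freq):
        # Windowed short-time Fourier transform of the channel.
        _, _, spec = stft(sig, fs=fs, nperseg=nperseg)
        # HRTF filtering in the frequency domain, per channel/ear.
        spec_l = spec * h_l[:, None]
        spec_r = spec * h_r[:, None]
        # Inverse STFT with overlap-add reconstruction.
        _, sig_l = istft(spec_l, fs=fs, nperseg=nperseg)
        _, sig_r = istft(spec_r, fs=fs, nperseg=nperseg)
        # Downmix: sum the rendered channels into the stereo output.
        left = sig_l if left is None else left + sig_l
        right = sig_r if right is None else right + sig_r
    return left, right
```

With flat (all-ones) HRTFs the pipeline reduces to an identity pass-through, which is a convenient sanity check before plugging in real HRTF data.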
- FIG. 5 illustrates a schematic method for providing navigation audio for a user in accordance with one or more embodiments of the present disclosure. The method can be performed with reference to FIGS. 1-4.
- a head orientation of a user relative to a spatial reference coordinate system may be obtained.
- a navigation orientation relative to the spatial reference coordinate system may be extracted.
- a desired output orientation of the navigation audio from the navigation source may be computed.
- the navigation audio (i.e., the source input) from the navigation source may be rendered using a head related transfer function (HRTF), based on the computed desired output orientation.
- the method may further comprise computing the desired output orientation of the navigation audio by computing a difference between the head orientation and the navigation orientation.
- the method may further comprise adjusting the HRTF based on the computed difference.
- the method may further comprise rendering the navigation audio from the navigation source with the adjusted HRTF, based on the computed desired output orientation of the navigation audio.
- the present disclosure further provides a technique that applies to a portable recording studio. For example, in applications such as live concerts and studio recording, singers and producers need to hear both the recording captured by the capture device (such as a microphone) and their surroundings, so as to interact and notice any differences between the intent and the recordings. This technique will be described with reference to FIGS. 6-9.
- FIG. 6 illustrates a schematic block diagram of a system for providing audio signal for the user in accordance with another one or more embodiments of the present disclosure.
- the system may comprise a processing device 610 and bone conducting transducers, such as a pair of bone conducting headphone 630.
- the processing device 610 may receive a microphone input signal and an audio playback input signal.
- the processing device 610 may mix, with low latency, the microphone input signal and the audio playback input signal and generate a mixed audio, and then output the mixed audio to the user via the pair of bone conducting headphone 630.
- the processing device 610 is further configured to output the mixed audio to the pair of bone conducting headphone 630 via a low latency wireless audio transmission 620.
- the microphone input signal is captured by a microphone device.
- the audio playback input signal may be provided from an audio playback device.
- the processing device 610 may comprise a USB interface device 611 and a mixer 612. Using the USB interface 611, the processing device may implement forward and feedback control of audio signals.
- the mixer 612 is a low latency mixer for mixing (with low latency) the microphone input and the audio playback input and generating a mixed audio signal.
- the processing device may be configured to include a limiter (not shown in FIG. 6) for limiting the mixed audio before outputting it.
- FIG. 7 illustrates a schematic diagram for illustrating signal flow of audio processing in accordance with another one or more embodiments of the present disclosure.
- the audio playback input and the microphone input are input to a processing device for performing audio processing.
- the audio processing may include mixing, equalization and gain processing performed by the mixer.
- the audio processing may further include limiting processing performed by a limiter, by which the audio quality may be preserved. The processed output is then generated.
- the mixer may be a low latency mixer that may be implemented in an MCU, a processor, or a chip integrated in the processing device, for example, without limitation, in a dongle.
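A toy sketch of such a monitoring mix followed by a limiter. The gain values and the hard-clip limiter are illustrative assumptions, not from the disclosure; a production limiter would use attack/release smoothing rather than clipping:

```python
import numpy as np

def mix_with_limiter(mic, playback, mic_gain=1.0, playback_gain=0.8,
                     threshold=0.9):
    """Apply per-input gain, sum the microphone and playback signals,
    then hard-limit the result so the mixed audio cannot clip."""
    mixed = mic_gain * np.asarray(mic) + playback_gain * np.asarray(playback)
    # Limiter: clamp any sample whose magnitude exceeds the threshold.
    return np.clip(mixed, -threshold, threshold)
```

Because the processing is a single gain-and-clamp pass per block, it adds essentially no algorithmic latency on top of the buffer size.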
- FIG. 8 illustrates a schematic diagram of low latency wireless audio transmission in accordance with one or more embodiments of the present disclosure.
- the low latency proprietary module used in this disclosure may have a transmission latency time (such as about 15 ms) which is short compared to the typical Bluetooth (BT) transmission latency time (such as about 180 ms).
- FIG. 8 shows the bidirectional signal flow for low latency wireless audio transmission.
- in the forward direction, the sound signal output (e.g., music) may be encoded by a low latency encoder on the processing device side and then transmitted via radio frequency emission.
- on the headphone side, the encoded output is received and then decoded by a corresponding low latency decoder.
- the decoded output is written to an optimized buffer and is then output as a sound signal input.
- in the backward direction, the sound signal output (e.g., data representative of a state feedback signal or other indication signals from the headphone) may be encoded and transmitted in the same way.
- on the processing device side, the encoded output is received and then decoded by a corresponding low latency decoder.
- the decoded output is written to an optimized buffer and is then output as a signal input.
- with the proprietary connection provided by a proprietary module, the problems caused by device pairing, such as privacy and user experience issues, may be minimized.
- FIG. 9 illustrates a schematic diagram of forward and feedback audio control with a USB interface in accordance with another one or more embodiments of the present disclosure.
- the USB interface device may include a first soundcard and a second soundcard which are used for different audio inputs.
- the audio playback input may undergo digital pre-processing and be input into the first soundcard, and the microphone input may undergo digital pre-processing and then be input into the second soundcard.
- the audio playback input and the microphone input may be passed through an analog to digital converter.
- the output from the first soundcard and the output from the second soundcard are both processed by digital post-processing, and the processed signal output may be transmitted to the pair of bone conducting headphone via the low latency wireless audio transmission.
- the digital post-processing may include audio processing such as, without limitation, mixing, equalization and gain processing, and limiting processing by a limiter.
- the user may plug a microphone directly into the processing device (such as the dongle) and thereby have a direct monitor while listening to the surroundings without obstruction.
- the disclosure further includes a non-transitory computer-readable medium storing program instructions that, when executed by a processor, cause the processor to provide an audio signal for a user by performing the steps of: obtaining a head orientation relative to a spatial reference coordinate system by tracking a head of the user; extracting a navigation orientation relative to the spatial reference coordinate system from a navigation source; computing a desired output orientation for a navigation audio signal from the navigation source based on the head orientation and the navigation orientation; rendering the navigation audio signal from the navigation source with a head related transfer function (HRTF), based on the computed desired output orientation; and outputting the rendered navigation audio signal to the user.
- the system and method described in the present disclosure combine spatial audio technology with a bone conducting form factor, thus providing accurate audio cues without jeopardizing traveling safety, since high-fidelity ambient awareness is preserved, and allowing the user to hear both the recordings made by the capture device (such as a microphone) and the sounds from the surroundings, so as to interact and notice any differences between the intent and the recordings.
- the techniques provided in the present disclosure give the user a better usage experience.
- aspects of the present disclosure may take the form of an entirely hardware embodiment, an entirely software embodiment (including firmware, resident software, micro-code, etc. ) or an embodiment combining software and hardware aspects that may all generally be referred to herein as a “circuit” , “module” or “system” .
- the computer readable medium may be a computer readable signal medium or a computer readable storage medium.
- a computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing.
- a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device.
Abstract
The disclosure describes a method and system for providing audio output for a user. The method may comprise obtaining a head orientation relative to a spatial reference coordinate system by tracking a head of the user; extracting a navigation orientation relative to the spatial reference coordinate system from a navigation source; computing a desired output orientation for a navigation audio from the navigation source based on the head orientation and the navigation orientation; rendering the navigation audio from the navigation source with a head related transfer function (HRTF), based on the computed desired output orientation; and providing the rendered navigation audio to the user.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/072095 WO2022151336A1 (fr) | 2021-01-15 | 2021-01-15 | Techniques for Around-the-Ear transducers
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/CN2021/072095 WO2022151336A1 (fr) | 2021-01-15 | 2021-01-15 | Techniques for Around-the-Ear transducers
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022151336A1 true WO2022151336A1 (fr) | 2022-07-21 |
Family
ID=82446328
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/072095 WO2022151336A1 (fr) | 2021-01-15 | 2021-01-15 | Techniques pour des transducteurs autour de l'oreille |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2022151336A1 (fr) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140019037A1 (en) * | 2012-07-12 | 2014-01-16 | GN Store Nord A/S | Hearing device providing spoken information on the surroundings |
US9464912B1 (en) * | 2015-05-06 | 2016-10-11 | Google Inc. | Binaural navigation cues |
CN107478239A (zh) * | 2017-08-15 | 2017-12-15 | 上海摩软通讯技术有限公司 | 基于音频再现装置的导航方法、导航系统及音频再现装置 |
WO2019067443A1 (fr) * | 2017-09-27 | 2019-04-04 | Zermatt Technologies Llc | Navigation spatiale audio |
WO2020123090A1 (fr) * | 2018-12-13 | 2020-06-18 | Google Llc | Mélangeurs de microphones pour casques sans fil |
- 2021-01-15: WO PCT/CN2021/072095 patent/WO2022151336A1/fr, active Application Filing
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108141696B (zh) | 用于空间音频调节的系统和方法 | |
US10257637B2 (en) | Shoulder-mounted robotic speakers | |
US11356797B2 (en) | Display a graphical representation to indicate sound will externally localize as binaural sound | |
US20150326963A1 (en) | Real-time Control Of An Acoustic Environment | |
JP6193844B2 (ja) | 選択可能な知覚空間的な音源の位置決めを備える聴覚装置 | |
CN111327980B (zh) | 提供虚拟声音的听力设备 | |
US20190278552A1 (en) | Emoji to Select How or Where Sound Will Localize to a Listener | |
US11221820B2 (en) | System and method for processing audio between multiple audio spaces | |
CN111492342B (zh) | 音频场景处理 | |
US20230276188A1 (en) | Surround Sound Location Virtualization | |
US20240031759A1 (en) | Information processing device, information processing method, and information processing system | |
WO2022151336A1 (fr) | Techniques pour des transducteurs autour de l'oreille | |
EP2887695B1 (fr) | Dispositif d'audition à positionnement spatial perçu sélectionnable de sources acoustiques | |
CN112740326A (zh) | 用于控制带限音频对象的装置、方法和计算机程序 | |
CN114339582A (zh) | 双通道音频处理、方向感滤波器生成方法、装置以及介质 | |
US10764707B1 (en) | Systems, methods, and devices for producing evancescent audio waves | |
Cohen et al. | From whereware to whence-and whitherware: Augmented audio reality for position-aware services | |
EP4404185A1 (fr) | Annulation audio | |
US20230254656A1 (en) | Information processing apparatus, information processing method, and terminal device | |
WO2023061130A1 (fr) | Écouteur, dispositif utilisateur et procédé de traitement de signal | |
WO2023197646A1 (fr) | Procédé de traitement de signal audio et dispositif électronique | |
KR20160073879A (ko) | 3차원 오디오 효과를 이용한 실시간 내비게이션 시스템 | |
WO2024186771A1 (fr) | Systèmes et procédés pour audio spatial hybride | |
JP2007318188A (ja) | 音像提示方法および音像提示装置 | |
KR20170133233A (ko) | 휴대용 음향기기 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21918543 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21918543 Country of ref document: EP Kind code of ref document: A1 |