US9271102B2 - Multi-dimensional parametric audio system and method - Google Patents
- Publication number
- US9271102B2 (Application No. US 14/457,588)
- Authority
- US
- United States
- Prior art keywords
- listener
- adjusted
- sound
- hrtf
- audio
- Prior art date
- Legal status
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L19/00—Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
- G10L19/008—Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S1/00—Two-channel systems
- H04S1/002—Non-adaptive circuits, e.g. manually adjustable or static, for enhancing the sound image or the spatial distribution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S3/00—Systems employing more than two channels, e.g. quadraphonic
- H04S3/006—Systems employing more than two channels, e.g. quadraphonic in which a plurality of audio signals are transformed in a combination of audio signals and modulated signals, e.g. CD-4 systems
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S5/00—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation
- H04S5/005—Pseudo-stereo systems, e.g. in which additional channel signals are derived from monophonic signals by means of phase shifting, time delay or reverberation of the pseudo five- or more-channel type, e.g. virtual surround
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2205/00—Details of stereophonic arrangements covered by H04R5/00 but not provided for in any of its subgroups
- H04R2205/041—Adaptation of stereophonic signal reproduction for the hearing impaired
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04R—LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
- H04R2217/00—Details of magnetostrictive, piezoelectric, or electrostrictive transducers covered by H04R15/00 or H04R17/00 but not provided for in any of their subgroups
- H04R2217/03—Parametric transducers where sound is generated or captured by the acoustic demodulation of amplitude modulated ultrasonic waves
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/01—Multi-channel, i.e. more than two input channels, sound reproduction with two speakers wherein the multi-channel information is substantially preserved
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2400/00—Details of stereophonic systems covered by H04S but not provided for in its groups
- H04S2400/11—Positioning of individual sound objects, e.g. moving airplane, within a sound field
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S2420/00—Techniques used stereophonic systems covered by H04S but not provided for in its groups
- H04S2420/01—Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04S—STEREOPHONIC SYSTEMS
- H04S7/00—Indicating arrangements; Control arrangements, e.g. balance control
- H04S7/30—Control circuits for electronic adaptation of the sound field
- H04S7/302—Electronic adaptation of stereophonic sound system to listener position or orientation
- H04S7/303—Tracking of listener position or orientation
Definitions
- the present invention relates generally to audio systems, and more particularly, some embodiments relate to multi-dimensional audio processing for ultrasonic audio systems.
- Surround sound or audio reproduction from various positions about a listener can be provided using several different methodologies.
- One technique uses multiple speakers encircling the listener to play audio from different directions.
- An example of this is Dolby® Surround Sound, which uses multiple speakers to surround the listener.
- the Dolby 5.1 process digitally encodes five channels of information onto a digital bitstream: Left Front, Center Front, Right Front, Surround Left, and Surround Right. Additionally, a Subwoofer output is included (designated by the “0.1”).
- a stereo amplifier with Dolby processing receives the encoded audio information and decodes the signal to derive the 5 separate channels. The separate channels are then used to drive five separate speakers (plus a subwoofer) placed around the listening position.
- Dolby 6.1 and Dolby 7.1 are extensions of Dolby 5.1.
- Dolby 6.1 includes a Surround Back Center channel.
- Dolby 7.1 adds left and right back speakers that are preferably placed behind the listening position and the surround speakers are set to the sides of the listening position. An example of this is provided in FIG. 1 .
- the conventional Dolby 7.1 system includes Left Front (LF), Center, Right Front (RF), Left Surround (LS), Right Surround (RS), Back Surround Left (BSL), and Back Surround Right (BSR) channels. Additionally, a Subwoofer, or Low Frequency Effects (LFE), channel is shown.
- decoders at the audio amplifier decode the encoded information in the audio stream and break up the signal into its constituent channels—e.g., 7 channels plus a subwoofer output for Dolby 7.1.
- the separate channels are amplified and sent to their respective speakers.
- Dolby 7.1 and other multi-speaker surround sound systems require more than two speakers.
- multi-speaker surround sound systems require placement of the speakers around the listening environment. These requirements can lead to increased cost, additional wiring and practical difficulties with speaker placement.
- the sound created by the conventional speakers is always produced on the face of the speaker (i.e., at the speaker cone).
- the sound wave created at the surface propagates through the air in the direction at which the speaker is pointed.
- the sound will appear to be closer or farther away from the listener depending on how far away from the listener the speaker is positioned. The closer the listener is to the speaker, the closer the sound will appear.
- the sound can be made to appear closer by increasing the volume, but this effect is limited.
- speakers may be placed to ‘surround’ the listener, but it is apparent that the sound is produced at discrete points along the perimeter corresponding to the position of the speakers. This is apparent when listening to content in a surround-sound environment. In such environments, the sound can appear to move from one speaker to another, but it always sounds like its source is the speaker itself—which it is. Phasing can have the effect of blending sound between speakers, but conventional surround sound systems cannot achieve placement or apparent placement of sound in the environment at determined distances from a listener or listening location.
- a parametric audio encoder in an audio system is configured to process a sound channel into left input and right input channel signals; apply HRTF filters to the left and right input channel signals to generate adjusted left and adjusted right channel signals; apply acoustic crosstalk cancellation filters to the adjusted left and adjusted right channel signals; and modulate the left and right output channel signal frequencies onto an ultrasonic carrier to generate modulated left output and right output channel signals for playback by a left ultrasonic emitter and a right ultrasonic emitter.
- HRTF filters for the left and right ears of a listener are determined by scanning the listener with an optical imaging system to determine a profile of the listener.
- the profile of the listener comprises the head, pinna, and torso measurements of the listener.
- the HRTF filters are determined by comparing the scanned profile of the listener with a predetermined set of HRTF profiles, each profile including a predetermined range of head, pinna, and torso measurements; and automatically selecting one of the predetermined set of HRTF profiles.
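The automatic selection step described above might be sketched as follows; the measurement keys, units, and profile layout are illustrative assumptions, not taken from the patent:

```python
def select_hrtf_profile(scan, profiles):
    """Return the name of the first predetermined HRTF profile whose
    measurement ranges contain every scanned measurement, or None if
    no profile matches. Data layout is hypothetical."""
    for name, ranges in profiles.items():
        if all(lo <= scan[key] <= hi for key, (lo, hi) in ranges.items()):
            return name
    return None
```

A caller would pass the optically scanned head, pinna, and torso measurements as `scan` and a table of predetermined range sets as `profiles`.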
- determining HRTF filters for the left and right ears of the listener includes playing a plurality of sound samples at a predetermined frequency; recording the sound samples at a plurality of microphones, placed in the listener's left and right ears during recording; and recording the listener's position relative to the left and right ultrasonic emitters using the optical imaging system when each sound sample is recorded.
- applying acoustic crosstalk cancellation filters to the adjusted left and adjusted right channel signals to generate left and right output channel signals includes: phase inverting the adjusted right channel signal and the adjusted left channel signal; adding a delay to the phase inverted right channel signal and to the phase inverted left channel signal; combining the adjusted left channel signal with the delayed phase inverted adjusted right channel signal to generate the left output channel signal; and combining the adjusted right channel signal with the delayed phase inverted adjusted left channel signal to generate the right output channel signal.
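The phase-inversion, delay, and combination steps above can be sketched numerically. This is a minimal single-tap approximation with a fixed integer delay; practical crosstalk cancellers use frequency-dependent filters, and the function name is an illustration:

```python
import numpy as np

def crosstalk_cancel(left, right, delay_samples):
    """Single-tap crosstalk cancellation: each output channel combines
    the adjusted signal with a delayed, phase-inverted copy of the
    opposite channel, per the steps described above."""
    inv_left = -left                      # phase-invert the adjusted left channel
    inv_right = -right                    # phase-invert the adjusted right channel
    pad = np.zeros(delay_samples)
    delayed_inv_left = np.concatenate([pad, inv_left])[:len(inv_left)]
    delayed_inv_right = np.concatenate([pad, inv_right])[:len(inv_right)]
    out_left = left + delayed_inv_right   # left out = left + delayed inverted right
    out_right = right + delayed_inv_left  # right out = right + delayed inverted left
    return out_left, out_right
```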
- FIG. 1 illustrates the conventional Dolby® Surround Sound configuration, with components for Dolby 5.1, 6.1, or 7.1 configurations.
- FIG. 2 illustrates an example encoding and decoding process in accordance with various embodiments of the technology described herein.
- FIG. 3 is a flow diagram of the method of creating a parametric audio signal from a signal previously encoded for use in a conventional surround sound system in accordance with various embodiments of the technology described herein.
- FIG. 4 is a flow diagram of the method of encoding an audio component to produce a parametric audio signal in accordance with various embodiments of the technology described herein.
- FIG. 5A is a diagram illustrating example circuitry of a parametric encoder that may be implemented to encode a sound channel into left and right ultrasonic frequency modulated output channel signals in accordance with various embodiments of the technology described herein.
- FIG. 5B is an operational flow diagram illustrating an example method of encoding a sound channel that may be implemented with the parametric encoder circuitry of FIG. 5A .
- FIG. 6A illustrates an example embodiment of the invention where ultrasonic emitters direct the parametric audio signal directly towards either the left or right sides of a particular listening position.
- FIG. 6B illustrates an example embodiment of the invention where ultrasonic emitters reflect the parametric audio signal off a wall, ceiling, and/or floor.
- FIG. 7 illustrates an example hybrid embodiment in which parametric audio production using ultrasonic emitters in accordance with embodiments of the invention is combined with a conventional surround sound configuration.
- FIG. 8 illustrates an example computing module that may be used in implementing various features of embodiments of the technology described herein.
- Embodiments of the systems and methods described herein provide multidimensional audio or a surround sound listening experience using as few as two emitters.
- Non-linear transduction, such as via a parametric array in air, results from the introduction of audio-modulated ultrasonic signals into an air column.
- Self-demodulation, or down-conversion, occurs along the air column, resulting in the production of an audible acoustic signal.
- This process occurs because of the known physical principle that when two sound waves of sufficient intensity with different frequencies are radiated simultaneously in the same medium, a modulated waveform including the sum and difference of the two frequencies is produced by the non-linear (parametric) interaction of the two sound waves.
- when the two original sound waves are ultrasonic waves and the difference between them is selected to be an audio frequency, an audible sound can be generated by the parametric interaction.
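The sum-and-difference behavior described above can be illustrated with a simple square-law stand-in for the medium's non-linearity. The squaring step is a deliberate simplification of the actual parametric interaction, and the frequencies are illustrative:

```python
import numpy as np

fs = 400_000                       # sampling rate high enough for ultrasonic tones
t = np.arange(fs) / fs             # one second of samples (1 Hz FFT bin resolution)
f1, f2 = 40_000, 41_000            # two ultrasonic frequencies, 1 kHz apart
mixed = np.sin(2 * np.pi * f1 * t) + np.sin(2 * np.pi * f2 * t)

# Square-law non-linearity: squaring produces components at the
# difference (f2 - f1), the sum (f1 + f2), and the doubled frequencies.
demod = mixed ** 2
spectrum = np.abs(np.fft.rfft(demod))

# The strongest component below 20 kHz (skipping the DC bin) is the
# 1 kHz difference tone -- the audible result of the interaction.
audible_hz = int(np.argmax(spectrum[1:20_000])) + 1
```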
- various components of the audio signal can be processed such that the signal played through ultrasonic emitters creates a multi-dimensional sound effect.
- a three-dimensional effect can be created using only two channels of audio, thereby allowing as few as two emitters to achieve the effect.
- in other embodiments, other quantities of channels and emitters are used.
- the ultrasonic transducers, or emitters, that emit the ultrasonic signal can be configured to be highly directional. Accordingly, a pair of properly spaced emitters can be positioned such that one of the pair of emitters targets one ear of the listener or a group of listeners, and the other of the pair of emitters targets the other ear of the listener or group of listeners.
- the targeting can but need not be exclusive. In other words, sound created from an emitter directed at one ear of the listener or group of listeners can ‘bleed’ over into the other ear of the listener or group of listeners.
- the audio can be generated by demodulation of the ultrasonic carrier in the air between the ultrasonic emitter and the listener (sometimes referred to as the air column).
- the actual sound is created at what is effectively an infinite number of points in the air between the emitter and the listener and beyond the listener. Therefore, in various embodiments these parameters are adjusted to emphasize an apparent sound generated at a chosen location in space. For example, the sound created (e.g., for a component of the audio signal) at a desired location can be made to appear to be emphasized over the sound created at other locations. Accordingly, with just one pair of emitters (e.g., a left and right channel), the sound can be made to appear to be generated at a point along one of the paths from the emitter to the listener at a point closer to or farther from the listener, whether in front of or behind the listener.
- the parameters can also be adjusted so that sound appears to come from the left or right directions at a predetermined distance from the listener. Accordingly, two channels can provide a full 360 degree placement of a source of sound around a listener, and at a chosen distance from the listener. As also described herein, different audio components or elements can be processed differently, to allow controlled placement of these audio components at their respective desired locations within the channel.
- Adjusting the audio on two or more channels relative to each other allows the audio reproduction of that signal or signal component to appear to be positioned in space about the listener(s).
- Such adjustments can be made on a component or group of components (e.g., Dolby or other like channel, audio component, etc.) or on a frequency-specific basis.
- adjusting phase, gain, delay, reverb, and echo, or other audio processing of a single signal component can also allow the audio reproduction of that signal component to appear to be positioned in a predetermined location in space about the listener(s). This can include apparent placement in front of or behind the listener.
- Additional auditory characteristics such as, for example, sounds captured from auditorium microphones placed in the recording environment (e.g., to capture hall or ambient effects), may be processed and included in the audio signal (e.g., blending with one or more components) to provide more realism to the three-dimensional sound.
- the parameters can be adjusted based on frequency components.
- various audio components are created with a relative phase, delay, gain, echo and reverb or other effects built into the audio component such that they can be placed in spatial relation to the listening position upon playback.
- computer-synthesized or computer-generated audio components can be created with or modified to have signal characteristics that allow placement of various audio components at their desired respective positions in the listening environment.
- the Dolby (or other like) components can be modified to have signal characteristics that allow apparent placement of various audio components at their desired respective positions in the listening environment.
- consider a computer-generated audio/video experience such as a videogame, in which the user is typically immersed into a world with the gaming action occurring around the user in three dimensions.
- the gamer may be in a battlefield environment that includes aircraft flying overhead, vehicles approaching from or departing to locations around the user, other characters sneaking up on the gamer from behind or from the side, gunfire at various locations around the player, and so on.
- the gamer is in the cockpit of the vehicle. He or she may hear engine noise from the front, exhaust noise from the rear, tires squealing from the front or rear, the sounds of other vehicles behind, to the side and front of the gamer's vehicle, and so on.
- volume alone is not the only factor used to judge distance.
- the character of a given sound beyond its volume changes as the source of the given sound moves farther away.
- the effects of the environment are more pronounced, for example.
- the user can be immersed in a three-dimensional audio experience using only two “speakers” or emitters. For example, increasing the gain of an audio component on the left channel relative to the right, and at the same time adding a phase delay on that audio component for the right channel relative to the left, will make that audio component appear to be positioned to the left of the user. Increasing the gain or phase differential (or both) will cause the audio component to appear as if it is coming from a position farther to the left of the user.
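The gain-plus-delay positioning just described can be sketched as follows. The function name and parameter values are illustrative assumptions, not the patent's implementation:

```python
import numpy as np

def position_left(component, fs, gain_db=6.0, itd_ms=0.5):
    """Pan an audio component toward the listener's left by boosting
    the left channel (interaural level difference) and delaying the
    right channel (interaural time difference)."""
    gain = 10 ** (gain_db / 20)                 # dB to linear gain
    delay = int(itd_ms * 1e-3 * fs)             # delay in samples
    left = gain * component
    right = np.concatenate([np.zeros(delay), component])[:len(component)]
    return left, right
```

Increasing `gain_db` or `itd_ms` pushes the apparent source farther to the left, mirroring the effect described in the text.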
- each footstep of that character may be encoded differently to reflect that footstep's position relative to the prior or subsequent footsteps of that character.
- the footsteps can be made to sound like they are moving toward the gamer from a predetermined location or moving away from the gamer to a predetermined position.
- the volume of the footstep sound components can be likewise adjusted to reflect the relative distance of the footsteps as they approach or move away from the user.
- a sequence of audio components that make up an event can be created with the appropriate phase, gain, or other difference to reflect relative movement.
- the audio characteristics of a given audio component can be altered to reflect the changing position of the audio component.
- the engine sound of the overtaking vehicle can be modified as the vehicle overtakes the gamer to position sound properly in the 3-D environment of the game. This can be in addition to any other alteration of the sound such as, for example, to add Doppler effects for additional realism.
- additional echo can be added for sounds that are farther away, because as an object gets closer, its sound tends to drown out its echo.
- stereo separation can be used to simulate the perception of distance by mixing an audio component between two audio channels so that the audio component is heard by both ears of the listener.
- a two-channel audio signal that has been encoded with surround sound components can be decoded into its constituent parts; the constituent parts can then be re-encoded according to the systems and methods described herein to provide correct spatial placement of the audio components, and recombined into a two-channel audio signal for playback using two ultrasonic emitters.
- FIG. 2 is a diagram illustrating an example of a system for generating two-channel, multidimensional audio from a surround-sound encoded signal in accordance with one embodiment of the systems and methods described herein.
- the example audio system includes an audio encoding system 111 and an example audio playback system 113 .
- the example audio encoding system 111 includes a plurality of microphones 112 , an audio encoder 132 and a storage medium 124 .
- the plurality of microphones 112 can be used to capture audio content as it is occurring.
- a plurality of microphones can be placed about a sound environment to be recorded.
- Audio encoder or surround sound encoder 132 processes the audio received from the different microphone input channels to create a two channel audio stream such as, for example, a left and right audio stream.
- This two-channel audio stream encoded with information for each of the tracks or microphone input channels can be stored on any of a number of different storage media 124 such as, for example, flash or other memory, magnetic or optical discs, or other suitable storage media.
- signal encoding from each microphone is performed on a track-by-track basis. That is, the location or position information of each microphone is preserved during the encoding process such that during subsequent decoding and re-encoding (described below) that location or position information affects the apparent position of the audio playback signal components.
- encoding performed by audio encoder 132 separates the audio information into tracks that are not necessarily tied to, or that do not necessarily correspond on a one-to-one basis with each of the individual microphones 112 .
- audio components can be separated into various channels such as center front, left front, right front, left surround, right surround, left back surround, right back surround, and so on based on content rather than based on which microphone was used to record the audio.
- An example of an audio encoder used to create multiple tracks of audio information encoded onto a two-track audio stream is a Dolby Digital or Dolby Surround sound processor.
- the audio recording generated by audio encoder 132 and stored on storage medium 124 can be, for example, a Dolby 5.1 or 7.1 audio recording.
- the content can be synthesized and assembled using purely synthesized sound or a combination of synthesized and recorded sounds.
- a decoder 134 and parametric encoder 136 are provided in the reproduction system 113 .
- the encoded audio content, in this case stored on media 124, is provided to decoder 134, which decodes the encoded two-channel audio stream into the multiple different surround sound channels 141 that make up the audio content.
- decoder 134 can re-create an audio channel 141 for each microphone channel 112.
- decoder 134 can be implemented as a Dolby decoder and the surround sound channels 141 are the re-created surround sound speaker channels (e.g., left front, center, right front, and so on).
- Parametric encoder 136 can be implemented as described above to split each surround sound channel 141 into a left and right channel, and to apply audio processing (in the digital or analog domain) to position the sound for each channel at the appropriate position in the listening environment. As described above, such positioning can be accomplished by adjusting the phase, delay, gain, echo, reverb and other parameters of the left channel relative to the right channel or of both channels simultaneously for a given surround sound effect.
- This parametric encoding for each channel can be performed on each of the surround sound channels 141 , and the left and right components of each of the surround sound channels 141 combined into a composite left and right channel for reproduction by ultrasonic emitters 144 . With such processing, the surround sound experience can be produced in a listening environment using only two emitters (i.e., speakers), rather than requiring 5-7 (or more) speakers placed about the listening environment.
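The per-channel encode-and-sum step can be sketched as follows, assuming a hypothetical gain-and-delay parameterization for each surround channel (the function names and parameter layout are illustrations, not the patent's circuitry):

```python
import numpy as np

def encode_channel(signal, gain_left, gain_right, right_delay_s, fs=48_000):
    # Hypothetical per-channel placement: relative gains plus a delay
    # applied to the right channel of the pair.
    d = int(right_delay_s * fs)
    left = gain_left * signal
    right = np.concatenate([np.zeros(d), gain_right * signal])[:len(signal)]
    return left, right

def combine_channels(channels, params):
    """Parametrically encode each surround channel into a left/right
    pair, then sum the pairs into one composite left channel and one
    composite right channel for the two ultrasonic emitters."""
    n = len(next(iter(channels.values())))
    comp_left, comp_right = np.zeros(n), np.zeros(n)
    for name, sig in channels.items():
        l, r = encode_channel(sig, *params[name])
        comp_left += l
        comp_right += r
    return comp_left, comp_right
```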
- FIG. 3 is a diagram illustrating an example process for generating multi-dimensional audio content in accordance with one embodiment of the systems and methods described herein.
- surround sound encoded audio content is received in the form of an audio bitstream.
- a two-channel Dolby encoded audio stream can be received from a program source such as, for example, a DVD, Blu-ray Disc, or other program source.
- the surround-sound encoded audio stream is decoded, and the separate channels are available for processing. In various embodiments, this can be done using conventional Dolby decoding that separates an encoded audio stream into the various individual surround channels.
- the resulting audio streams for each channel can include digital or analog audio content.
- the desired location of these channels is identified or determined.
- the desired position for the audio for each of the left front, center front, right front, left surround, right surround, back left surround and back right surround channels is determined.
- a digitally encoded Dolby bitstream can be received, for example, from a program source such as a DVD, Blu-ray, or other audio program source.
- the channels are processed to “place” each audio channel at the desired location in the listening field.
- each channel is divided into two channels (for example, a left and a right channel) and the appropriate processing is applied to provide spatial context for the channel.
- this can involve adding a differential phase shift, gain, echo, reverb, or other audio parameters to each channel relative to the other for each of the surround channels to effectively place the audio content for that channel at the desired location in the listening field.
- no phase or gain differentials are applied to the left and right channels so that the audio appears to be coming from between the two emitters.
- the audio content is modulated to ultrasonic frequencies and played through the pair of parametric emitters.
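The modulation step can be sketched with double-sideband amplitude modulation, a common choice for parametric emitters; the patent does not specify the scheme here, so the carrier frequency, modulation depth, and function name are assumptions:

```python
import numpy as np

def am_modulate(audio, fs, carrier_hz=40_000, depth=0.8):
    """Amplitude-modulate a baseband audio signal onto an ultrasonic
    carrier for playback through a parametric emitter (DSB-AM sketch)."""
    t = np.arange(len(audio)) / fs
    audio = audio / max(np.max(np.abs(audio)), 1e-12)   # normalize to [-1, 1]
    return (1.0 + depth * audio) * np.sin(2 * np.pi * carrier_hz * t)
```

In the air column, self-demodulation of the modulated carrier would recover an audible approximation of the baseband signal, as described earlier.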
- parametric processing is performed with the assumption that the pair of parametric emitters will be placed like conventional stereo speakers—i.e., in front of the listener and separated by a distance to the left and right of the center line from the listener.
- processing can be performed to account for placement of the parametric emitters at various other predetermined locations in the listening environment. By adjusting parameters such as the phase and gain of the signal being sent to one emitter relative to the signal being sent to the other emitter, placement of the audio content can be achieved at desired locations given the actual emitter placement.
- FIG. 4 is a diagram illustrating an example process for generating and reproducing multidimensional audio content using parametric emitters in accordance with one embodiment of the systems and methods described herein.
- An example application for the process shown in the embodiment of FIG. 4 is an application in the video game environment.
- various audio objects are created with their positional or location information already built in or embedded such that when played through a pair of parametric emitters, the sound of each audio object appears to be originating from the predetermined desired location.
- an audio object is created.
- an audio object can be any of a number of audio sounds or sound clips such as, for example, a footstep, a gunshot, a vehicle engine, or a voice or sound of another character, just to name a few.
- the developer determines the location of the audio object source relative to the listener position. For example, at any given point in a war game, the game may generate the sound of gunfire (or other action) emanating from a particular location. Consider, for instance, the case of gunfire originating from behind and to the left of the gamer's current position.
- the audio object (gunfire in this example) is encoded with the location information such that when it is played to the gamer using the parametric emitters, the sound appears to emanate from behind and to the left of the gamer. Accordingly, when the audio object is created, it can be created as an audio object having two channels (e.g., left and right channels) with the appropriate phase and gain differentials, and other audio characteristics, to cause the sound to appear to be emanating from the desired location.
- the sound can be prestored as library objects with the location information or characteristics already embedded or encoded therein such that they can be called from the library and used as is.
- generic library objects are stored for use, and when called for application in a particular scenario are processed to apply the position information to the generic object.
- gunfire sounds from a particular weapon can be stored in a library and, when called, processed to add the location information to the sound based on where the gunfire is to occur relative to the gamer's position.
- the audio components with the location information are combined to create the composite audio content, and at step 333 the composite audio content is played to the user using the pair of parametric emitters.
- FIG. 5A is a diagram illustrating an example processing module of a parametric encoder that may be implemented to encode a sound channel 410 baseband audio signal into left and right ultrasonic frequency modulated output channel signals for processing and transmission by ultrasonic processors/emitters 450 A and 450 B.
- the system may receive left and right channels for processing such as, for example, in a stereo sound environment.
- a sound channel 410 can be divided into two component channels (left and right) for processing.
- circuitry 400 comprises channel processors 420 A and 420 B for processing the left and right channels relative to each other to effectively place the audio content of the sound channels at the desired location in the listening field.
- channel processors 420 A and 420 B comprise head-related transfer function (HRTF) filters for encoding the sound channel in three dimensional space based on the expected response of a listener who is listening to the sound emitted from a plurality of ultrasonic emitters.
- Circuitry 400 may also include combiners 430 A, 430 B and ultrasonic modulators 440 A, 440 B.
- the combiners may be included to cancel some or all of the acoustic crosstalk that may occur between ultrasonic emitters 450 A and 450 B.
- the ultrasonic modulators modulate each output left and output right channel audio signal 405 A and 405 B onto an ultrasonic carrier at ultrasonic frequencies.
- Prior to encoding, a head-related transfer function (HRTF) is calibrated for the left and right ears of the listener of the audio content to more accurately synthesize the 3D sound source. Because different individual listeners have different geometries (e.g. torso, head, and pinnae) with different sound reflection and diffraction properties, their ears will respond differently to sound received from the same point in space.
- the calibrated HRTF estimates the response of a listener's ears relative to a sound source's (e.g. ultrasonic emitter) frequency, position in space, and delay.
- the HRTF is a function of the sound source's frequency, delay, distance from the listener, azimuth angle relative to the listener, and elevation angle relative to the listener.
- the HRTF in some embodiments can be implemented to specify a plurality of finite impulse response (FIR) filter pairs, one for the left ear and one for the right ear, each filter pair placing a sound source at a specific position.
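Applying such an FIR filter pair amounts to convolving the source signal with a left-ear and a right-ear impulse response. The sketch below is a minimal illustration (function names are assumptions, and the filter taps in practice would come from a measured, calibrated HRTF profile for a particular azimuth, elevation, and distance):

```python
def convolve(x, h):
    """Direct-form FIR convolution (full length: len(x) + len(h) - 1)."""
    y = [0.0] * (len(x) + len(h) - 1)
    for i, xi in enumerate(x):
        for j, hj in enumerate(h):
            y[i + j] += xi * hj
    return y

def apply_hrtf_pair(mono, h_left, h_right):
    """Sketch: binauralize a mono source with one HRTF FIR filter pair.
    h_left/h_right are the left-ear and right-ear impulse responses for
    the desired virtual source position."""
    return convolve(mono, h_left), convolve(mono, h_right)
```

Real HRTF filters run to hundreds of taps; a practical implementation would use FFT-based convolution rather than this direct form.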
- the HRTF is calibrated for the listener by selecting a HRTF profile from a predetermined set of HRTF profiles stored on a computer readable medium.
- each predetermined HRTF profile may be based on a model listener's geometry, for example, the model listener's head, pinnae, and torso measurements.
- the listener's geometry may be compared against the geometry of each of the HRTF profiles.
- a HRTF profile may be automatically selected from the predetermined set as the profile whose model listener's geometry most closely resembles the listener's own geometry.
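The automatic selection step can be sketched as a nearest-neighbor match over the measured dimensions. In this illustrative Python sketch, the function name, measurement keys, and Euclidean metric are all assumptions rather than details from the patent:

```python
def select_hrtf_profile(listener, profiles):
    """Sketch: pick the stored HRTF profile whose model-listener
    measurements (keys are illustrative, e.g. in cm) are nearest to the
    scanned listener's measurements, by Euclidean distance."""
    keys = ("head_width", "head_depth", "pinna_height", "shoulder_width")
    def dist(p):
        return sum((listener[k] - p["geometry"][k]) ** 2 for k in keys) ** 0.5
    return min(profiles, key=dist)
```

A refinement would weight the features (pinna dimensions influence the HRTF more strongly than torso dimensions) rather than treating them equally.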
- the HRTF profile may be manually selected from the predetermined set of HRTF profiles.
- the listener may store a custom HRTF profile on the computer readable medium.
- an optical imaging system is used to determine the geometry (e.g. head, pinnae, and torso) of the listener for comparison against the predetermined set of HRTF profiles.
- the optical imaging system may include an optical profilometer with a digital camera and scanning light source. The scanning light source scans the listener's head, pinnae, and torso at a predetermined frequency for a predetermined amount of time, thereby generating approximate measurements of the listener's geometry (e.g. head, pinnae, and torso).
- the optical imaging system may be based on other known dynamic 3D body scanning technologies.
- the optical imaging system may include a depth sensor such as a stereoscopic vision-based or structured light-based sensor. The depth sensor measures the listener's position relative to the ultrasonic emitters.
- the selected HRTF profile may be further refined by using the ultrasonic emitters to play a plurality of sound samples.
- the optical imaging system may record the listener's position relative to the left and right ultrasonic emitters.
- the listener is asked to select the perceived location (relative to the listener) of the sound sample. Based on the listener's selections, and the listener's corresponding recorded positions, the parameters of the selected HRTF profile may be refined.
- the listener wears headphones connected to the parametric audio system.
- the left and right earpieces of the headphones each include one or more microphones.
- the ultrasonic emitters play a plurality of sound samples at a plurality of different virtual locations.
- the headphones record the sound samples at the listener's ears.
- the recorded sound samples are compared with the original sounds. Based on this comparison and the listener's recorded positions, the HRTF may be calibrated.
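One crude version of this comparison step can be sketched as follows. This illustrative Python sketch (the function names and the RMS-level-only comparison are assumptions; an actual calibration would compare full frequency responses at each ear) derives per-ear gain corrections from the recorded versus reference samples:

```python
def rms(x):
    """Root-mean-square level of a sample buffer."""
    return (sum(s * s for s in x) / len(x)) ** 0.5

def gain_corrections(reference, recorded_left, recorded_right):
    """Sketch: compare the level of the sample recorded at each ear
    against the reference and return per-ear gain corrections that
    could be folded back into the selected HRTF profile."""
    return (rms(reference) / rms(recorded_left),
            rms(reference) / rms(recorded_right))
```
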
- the parametric audio system may save the listener's HRTF profile for subsequent uses. For example, when the listener subsequently initiates the system, use of the system may only require the listener's selection of the saved HRTF profile.
- the parametric audio system may comprise a biometric sensor, an imaging sensor (e.g. a camera of an optical imaging system), or other sensing apparatus, that automatically detects the identity of the listener and loads the saved HRTF for that listener.
- Parametric encoder circuitry 400 will now be described with respect to FIG. 5B , which is an operational flow diagram illustrating an example method of encoding a sound channel 410 .
- the example encoding process may be applied to a plurality of surround sound channels 410 that make up an original audio content. For example, where multiple microphones were used to record multiple channels of audio content, coder 400 re-creates a sound channel 410 for each microphone channel and encodes it into a left and right channel for producing a three dimensional sound effect for the listener of the audio content.
- parametric encoder 400 is included in this example to divide the sound channel into left and right input channel signal components 401 A and 401 B.
- the right and left channel signals are encoded with location information that specifies a desired location (azimuthal, elevation, and distance) in the listening field environment of the listener. This can be done, for example, using the techniques described above with reference to FIG. 2 .
- the audio content may be provided as a stereo or other 2-channel signal, in which case there is no need to split a sound channel into left and right channels. Accordingly, block 410 may be omitted in various embodiments. Although one advantage that may be obtained in various two-channel embodiments is the ability to achieve a multi-dimensional sound effect with only two audio channels, other embodiments can be implemented for audio content having more than two channels.
- channel processors 420 A and 420 B apply the calibrated HRTF filters to channel signals 401 A and 401 B, respectively, based on the desired 3D sound location (azimuthal, elevation, and distance) in the listening field environment of the listener, thereby generating adjusted left channel signal 403 A and adjusted right channel signal 403 B.
- left channel processor 420 A and right channel processor 420 B may apply additional filters to the left and right channel signals to further enhance the 3D sound effect.
- the system can be configured to adjust parameters such as the phase, delay, gain, reverb, echo, or other audio parameters, as described above, to enhance the 3D sound effect.
- additional filters may be applied based on characteristics of the listening environment such as the listening environment's physical configuration, the listening environment's background noise, etc.
- acoustic crosstalk cancellation filters are applied to the adjusted left and right channel signals to generate left and right output channel signals.
- FIG. 5A illustrates one specific example implementation of these filters for audio modulated ultrasonic signals.
- the phase, frequency, and amplitude of the output beams can be assumed approximately constant.
- the audio signal for one of the two channels is inverted and the delay adjusted for one of the channels relative to the other.
- left combiner 430 A performs a phase inversion of signal 403 B, delays the inverted signal 403 B, and combines it with signal 403 A, resulting in output left channel signal 405 A.
- phase inversion and delay can be performed by processing blocks other than the combiner. For example, phase delay and inversion can be performed by left or right channel processors 420 A, 420 B.
- the left channel audio is cancelled out via destructive interference and does not become audible when the beams intersect.
- right channel audio may be cancelled out if right channel combiner 430 B phase inverts signal 403 A, delays the inverted signal 403 A, and combines it with signal 403 B, thereby generating output signal 405 B.
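The invert-delay-combine operation of each combiner can be sketched in simplified form. In this illustrative Python sketch (the function name and the leak-gain parameter are assumptions), one combiner adds a phase-inverted, delayed copy of the opposite channel to its own channel so that the opposite channel's leakage cancels where the beams intersect:

```python
def crosstalk_combine(same, opposite, delay_samples, leak_gain=1.0):
    """Sketch of one combiner (e.g. 430A): phase-invert the opposite
    channel, delay it by the inter-beam travel-time difference, and add
    it to this channel. leak_gain models how much of the opposite beam
    reaches the cancellation point."""
    n = max(len(same), len(opposite) + delay_samples)
    out = [0.0] * n
    for i, s in enumerate(same):
        out[i] += s
    for i, s in enumerate(opposite):
        out[i + delay_samples] += -leak_gain * s   # inverted and delayed
    return out
```

The other combiner (430B in this description) performs the mirror-image operation with the channel roles swapped.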
- the reflection and filtering properties of the listening environment may also be considered as filter parameters for combiners 430 A and 430 B.
- the left and right output channel audio signal frequencies are modulated or upconverted to ultrasonic frequencies using left ultrasonic modulator 440 A and right ultrasonic modulator 440 B.
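The upconversion step can be illustrated with simple double-sideband amplitude modulation. This Python sketch is an assumption-laden simplification (the 40 kHz carrier, sample rate, and modulation depth are illustrative; practical parametric emitters apply additional preprocessing to reduce distortion when the signal self-demodulates in air):

```python
import math

def am_modulate(audio, fs=192000, fc=40000.0, depth=0.8):
    """Sketch: amplitude-modulate baseband audio samples (in [-1, 1])
    onto an ultrasonic carrier at frequency fc."""
    return [(1.0 + depth * s) * math.sin(2 * math.pi * fc * i / fs)
            for i, s in enumerate(audio)]
```
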
- the ultrasonic-frequency modulated output signals may subsequently be played by ultrasonic emitters.
- the modulated left output channel signal is received by left ultrasonic processor/emitter 450 A and the modulated right output channel signal is received by right ultrasonic processor/emitter 450 B.
- the ultrasonic processors respectively convert the received signals to ultrasonic beams for output by the emitters, thereby generating a realistic and substantially noise-free 3D sound effect in the listening field environment of the listener.
- ultrasonic processors/emitters 450 A, 450 B can comprise an amplifier and an ultrasonic emitter such as, for example, a conventional piezo or electrostatic emitter. Examples of filtering, modulation and amplification, as well as example emitter configurations are described in U.S. Pat. No. 8,718,297, titled Parametric Transducer and Related Methods, which is incorporated herein by reference in its entirety.
- the combination of 1) HRTF filters for 3D sound production and 2) acoustic crosstalk cancellation filters is made effective by the disclosed ultrasonic emitters, which emit focused sound beams (e.g., audio modulated ultrasonic signals) with approximately constant amplitude, phase, and frequency components as the beams propagate through and demodulate in the listening environment.
- This provides two key benefits over conventional speaker systems.
- one of ordinary skill in the art would not apply HRTF filters with conventional audio speakers for producing 3D sound effects.
- One reason for this is that the sound pressure waves generated by conventional acoustic audio speakers rapidly change as they propagate through the free space of a listening environment toward the listener's ear.
- the disclosed ultrasonic emitter system in various embodiments provides the benefit of employing a headphone-type HRTF function in tandem with the acoustic crosstalk cancellation filters used in speakers.
- FIGS. 6A and 6B are diagrams illustrating example implementations of the multidimensional audio system in accordance with embodiments of the systems and methods described herein.
- two parametric emitters are illustrated as being included in the system, left front and right front ultrasonic emitters, LF and RF, respectively.
- other quantities of emitters or channels can be used.
- the left and right emitters are placed such that the sound is directed toward the left and right ears, respectively, of the listener or listeners of the video game or other program content.
- Alternative emitter positions can be used, but positions that direct the sound from each ultrasonic emitter LF, RF, to the respective ear of the listener(s) allow spatial imagery as described herein.
- the ultrasonic emitters LF, RF are placed such that the ultrasonic frequency emissions are directed at the walls (or other reflective structure including the ceiling or floor) of the listening environment.
- when the parametric sound column is reflected from the wall or other surface, a virtual speaker or sound source is created.
- the resultant audio waves are directed toward the ears of the listener(s) at the determined seating position.
- the ultrasonic emitters can be combined with conventional speakers in stereo, surround sound or other configurations.
- FIG. 7 is a diagram illustrating an example implementation of the multidimensional audio system in accordance with another embodiment of the systems and methods described herein. Referring now to FIG. 7, in this example, the ultrasonic emitter configuration of FIG. 5B is combined with a conventional 7.1 surround sound system. As would be apparent to one of ordinary skill in the art after reading this description, the configuration of FIG. 5A can also be combined with a conventional 7.1 surround sound system. Although not illustrated, in another example, an additional pair of ultrasonic emitters can be placed to reflect an ultrasonic carrier audio signal from the back wall of the environment, replacing the conventional rear speakers.
- the emitters can be aimed to be targeted to a given individual listener's ears in a specific listening position in the room. This can be useful to enhance the effects of the system. Also, consider an application where one individual listener of a group of listeners is hard of hearing. Implementing hybrid embodiments (such as the example of FIG. 6 ) can allow the emitters to be targeted to the hearing impaired listener. As such, the volume of the audio from the ultrasonic emitters can be adjusted to that listener's elevated needs without needing to alter the volume of the conventional audio system. Where a highly directional audio beam is used from the ultrasonic emitters and targeted at the hearing impaired listener's ears, the increased volume from the ultrasonic emitters is not heard (or is only detected at low levels) by listeners who are not in the targeted listening position.
- the ultrasonic emitters can be combined with conventional surround sound configurations to replace some of the conventional speakers normally used.
- the ultrasonic emitters in FIG. 6 can be used as the LS, RS speaker pair in a Dolby 5.1, 6.1, or 7.1 surround sound system, while conventional speakers are used for the remaining channels.
- the ultrasonic emitters may also be used as the back speakers BSC, BSL, BSR in a Dolby 6.1 or 7.1 configuration.
- computing module 500 may represent, for example, computing or processing capabilities found within desktop, laptop and notebook computers; hand-held computing devices (PDA's, smart phones, cell phones, palmtops, etc.); mainframes, supercomputers, workstations or servers; or any other type of special-purpose or general-purpose computing devices as may be desirable or appropriate for a given application or environment.
- Computing module 500 might also represent computing capabilities embedded within or otherwise available to a given device.
- a computing module might be found in other electronic devices such as, for example, digital cameras, navigation systems, cellular telephones, portable computing devices, modems, routers, WAPs, terminals and other electronic devices that might include some form of processing capability.
- Computing module 500 might include, for example, one or more processors, controllers, control modules, or other processing devices, such as a processor 504 .
- Processor 504 might be implemented using a general-purpose or special-purpose processing engine such as, for example, a microprocessor, controller, or other control logic.
- processor 504 is connected to a bus 502 , although any communication medium can be used to facilitate interaction with other components of computing module 500 or to communicate externally.
- Computing module 500 might also include one or more memory modules, simply referred to herein as main memory 508 .
- main memory 508, preferably random access memory (RAM) or other dynamic memory, might be used for storing information and instructions to be executed by processor 504.
- Main memory 508 might also be used for storing temporary variables or other intermediate information during execution of instructions to be executed by processor 504 .
- Computing module 500 might likewise include a read only memory (“ROM”) or other static storage device coupled to bus 502 for storing static information and instructions for processor 504 .
- the computing module 500 might also include one or more various forms of information storage mechanism 510 , which might include, for example, a media drive 512 and a storage unit interface 520 .
- the media drive 512 might include a drive or other mechanism to support fixed or removable storage media 514 .
- a hard disk drive, a floppy disk drive, a magnetic tape drive, an optical disk drive, a CD or DVD drive (R or RW), or other removable or fixed media drive might be provided.
- storage media 514 might include, for example, a hard disk, a floppy disk, magnetic tape, cartridge, optical disk, a CD or DVD, or other fixed or removable medium that is read by, written to or accessed by media drive 512 .
- the storage media 514 can include a computer usable storage medium having stored therein computer software or data.
- information storage mechanism 510 might include other similar instrumentalities for allowing computer programs or other instructions or data to be loaded into computing module 500 .
- Such instrumentalities might include, for example, a fixed or removable storage unit 522 and an interface 520 .
- Examples of such storage units 522 and interfaces 520 can include a program cartridge and cartridge interface, a removable memory (for example, a flash memory or other removable memory module) and memory slot, a PCMCIA slot and card, and other fixed or removable storage units 522 and interfaces 520 that allow software and data to be transferred from the storage unit 522 to computing module 500 .
- Computing module 500 might also include a communications interface 524 .
- Communications interface 524 might be used to allow software and data to be transferred between computing module 500 and external devices.
- Examples of communications interface 524 might include a modem or softmodem, a network interface (such as an Ethernet, network interface card, WiMedia, IEEE 802.XX or other interface), a communications port (such as, for example, a USB port, IR port, RS232 port, Bluetooth® interface, or other port), or other communications interface.
- Software and data transferred via communications interface 524 might typically be carried on signals, which can be electronic, electromagnetic (which includes optical) or other signals capable of being exchanged by a given communications interface 524 . These signals might be provided to communications interface 524 via a channel 528 .
- This channel 528 might carry signals and might be implemented using a wired or wireless communication medium.
- Some examples of a channel might include a phone line, a cellular link, an RF link, an optical link, a network interface, a local or wide area network, and other wired or wireless communications channels.
- the terms "computer program medium" and "computer usable medium" are used to generally refer to media such as, for example, memory 508, and storage devices such as storage unit 520, and media 514. These and other various forms of computer program media or computer usable media may be involved in carrying one or more sequences of one or more instructions to a processing device for execution. Such instructions embodied on the medium, are generally referred to as "computer program code" or a "computer program product" (which may be grouped in the form of computer programs or other groupings). When executed, such instructions might enable the computing module 500 to perform features or functions of the present invention as discussed herein.
- the term "module" does not imply that the components or functionality described or claimed as part of the module are all configured in a common package. Indeed, any or all of the various components of a module, whether control logic or other components, can be combined in a single package or separately maintained and can further be distributed in multiple groupings or packages or across multiple locations.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/US2014/050759 WO2015023685A1 (en) | 2013-08-12 | 2014-08-12 | Multi-dimensional parametric audio system and method |
US14/457,588 US9271102B2 (en) | 2012-08-16 | 2014-08-12 | Multi-dimensional parametric audio system and method |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261684028P | 2012-08-16 | 2012-08-16 | |
US201361864757P | 2013-08-12 | 2013-08-12 | |
US13/969,292 US20140050325A1 (en) | 2012-08-16 | 2013-08-16 | Multi-dimensional parametric audio system and method |
US14/457,588 US9271102B2 (en) | 2012-08-16 | 2014-08-12 | Multi-dimensional parametric audio system and method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/969,292 Continuation-In-Part US20140050325A1 (en) | 2012-08-16 | 2013-08-16 | Multi-dimensional parametric audio system and method |
Publications (2)
Publication Number | Publication Date |
---|---|
US20140355765A1 US20140355765A1 (en) | 2014-12-04 |
US9271102B2 true US9271102B2 (en) | 2016-02-23 |
Family
ID=51985120
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/457,588 Active US9271102B2 (en) | 2012-08-16 | 2014-08-12 | Multi-dimensional parametric audio system and method |
Country Status (1)
Country | Link |
---|---|
US (1) | US9271102B2 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9820073B1 (en) | 2017-05-10 | 2017-11-14 | Tls Corp. | Extracting a common signal from multiple audio signals |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9271102B2 (en) | Multi-dimensional parametric audio system and method | |
US20140050325A1 (en) | Multi-dimensional parametric audio system and method | |
US10021507B2 (en) | Arrangement and method for reproducing audio data of an acoustic scene | |
KR100608025B1 (en) | Method and apparatus for simulating virtual sound for two-channel headphones | |
US9154896B2 (en) | Audio spatialization and environment simulation | |
US8358091B2 (en) | Apparatus and method for generating a number of loudspeaker signals for a loudspeaker array which defines a reproduction space | |
US9769589B2 (en) | Method of improving externalization of virtual surround sound | |
CA3101903C (en) | Method and apparatus for rendering acoustic signal, and computer-readable recording medium | |
KR100636252B1 (en) | Method and apparatus for spatial stereo sound | |
US11516616B2 (en) | System for and method of generating an audio image | |
KR100677629B1 (en) | Method and apparatus for simulating 2-channel virtualized sound for multi-channel sounds | |
WO2012042905A1 (en) | Sound reproduction device and sound reproduction method | |
US8867749B2 (en) | Acoustic spatial projector | |
CN105308988A (en) | Audio decoder configured to convert audio input channels for headphone listening | |
US9467792B2 (en) | Method for processing of sound signals | |
JP5757945B2 (en) | Loudspeaker system for reproducing multi-channel sound with improved sound image | |
CN108737930B (en) | Audible prompts in a vehicle navigation system | |
JP2018515032A (en) | Acoustic system | |
KR102357293B1 (en) | Stereophonic sound reproduction method and apparatus | |
JP6434165B2 (en) | Apparatus and method for processing stereo signals for in-car reproduction, achieving individual three-dimensional sound with front loudspeakers | |
US10440495B2 (en) | Virtual localization of sound | |
WO2015023685A1 (en) | Multi-dimensional parametric audio system and method | |
JP2006148936A (en) | Apparatus and method to generate virtual 3d sound using asymmetry and recording medium storing program to perform the method | |
KR20080098307A (en) | Apparatus and method for surround soundfield reproductioin for reproducing reflection | |
US20230011591A1 (en) | System and method for virtual sound effect with invisible loudspeaker(s) |
Legal Events
Date | Code | Title | Description
---|---|---|---
| AS | Assignment | Owner name: TURTLE BEACH CORPORATION, CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:KULAVIK, RICHARD JOSEPH;NORRIS, ELWOOD GRANT;KAPPUS, BRIAN ALAN;REEL/FRAME:033586/0005. Effective date: 20140812 |
| AS | Assignment | Owner name: CRYSTAL FINANCIAL LLC, AS AGENT, MASSACHUSETTS. Free format text: SECURITY INTEREST;ASSIGNOR:TURTLE BEACH CORPORATION;REEL/FRAME:036159/0952. Effective date: 20150722 |
| AS | Assignment | Owner name: BANK OF AMERICA, N.A., AS AGENT, CALIFORNIA. Free format text: SECURITY INTEREST;ASSIGNORS:TURTLE BEACH CORPORATION;VOYETRA TURTLE BEACH, INC.;REEL/FRAME:036189/0326. Effective date: 20150722 |
| STCF | Information on status: patent grant | Free format text: PATENTED CASE |
| AS | Assignment | Owner name: CRYSTAL FINANCIAL LLC, AS AGENT, MASSACHUSETTS. Free format text: SECURITY INTEREST;ASSIGNOR:TURTLE BEACH CORPORATION;REEL/FRAME:045573/0722. Effective date: 20180305 |
| AS | Assignment | Owner name: BANK OF AMERICA, N.A., AS AGENT, CALIFORNIA. Free format text: SECURITY INTEREST;ASSIGNORS:TURTLE BEACH CORPORATION;VOYETRA TURTLE BEACH, INC.;REEL/FRAME:045776/0648. Effective date: 20180305 |
| AS | Assignment | Owner name: TURTLE BEACH CORPORATION, CALIFORNIA. Free format text: TERMINATION AND RELEASE OF INTELLECTUAL PROPERTY SECURITY AGREEMENTS;ASSIGNOR:CRYSTAL FINANCIAL LLC;REEL/FRAME:048965/0001. Effective date: 20181217 |
| AS | Assignment | Owner name: TURTLE BEACH CORPORATION, CALIFORNIA. Free format text: TERMINATION AND RELEASE OF INTELLECTUAL PROPERTY SECURITY AGREEMENTS;ASSIGNOR:CRYSTAL FINANCIAL LLC;REEL/FRAME:047954/0007. Effective date: 20181217 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2551); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 4 |
| MAFP | Maintenance fee payment | Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YR, SMALL ENTITY (ORIGINAL EVENT CODE: M2552); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY. Year of fee payment: 8 |
| AS | Assignment | Owner name: BLUE TORCH FINANCE LLC, AS THE COLLATERAL AGENT, NEW YORK. Free format text: SECURITY INTEREST;ASSIGNORS:VOYETRA TURTLE BEACH, INC.;TURTLE BEACH CORPORATION;PERFORMANCE DESIGNED PRODUCTS LLC;REEL/FRAME:066797/0517. Effective date: 20240313 |