US12061835B2 - Binaural rendering for headphones using metadata processing


Info

Publication number
US12061835B2
US12061835B2
Authority
US
United States
Prior art keywords
audio
metadata
rendering
audio object
content
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
US18/305,618
Other versions
US20230385013A1 (en)
Inventor
Nicolas R. Tsingos
Rhonda Wilson
Sunil Bharitkar
C. Phillip Brown
Alan J. Seefeldt
Remi Audfray
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dolby Laboratories Licensing Corp
Original Assignee
Dolby Laboratories Licensing Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Dolby Laboratories Licensing Corp filed Critical Dolby Laboratories Licensing Corp
Priority to US18/305,618 priority Critical patent/US12061835B2/en
Assigned to DOLBY LABORATORIES LICENSING CORPORATION reassignment DOLBY LABORATORIES LICENSING CORPORATION ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: BHARITKAR, SUNIL, SEEFELDT, ALAN J, TSINGOS, NICOLAS R., AUDFRAY, Remi, BROWN, C. PHILLIP, WILSON, RHONDA
Publication of US20230385013A1 publication Critical patent/US20230385013A1/en
Application granted
Publication of US12061835B2 publication Critical patent/US12061835B2/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/162 Interface to dedicated audio devices, e.g. audio drivers, interface to CODECs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/16 Sound input; Sound output
    • G06F 3/165 Management of the audio stream, e.g. setting of volume, audio stream path
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04R LOUDSPEAKERS, MICROPHONES, GRAMOPHONE PICK-UPS OR LIKE ACOUSTIC ELECTROMECHANICAL TRANSDUCERS; DEAF-AID SETS; PUBLIC ADDRESS SYSTEMS
    • H04R 5/00 Stereophonic arrangements
    • H04R 5/04 Circuit arrangements, e.g. for selective connection of amplifier inputs/outputs to loudspeakers, for loudspeaker detection, or for adaptation of settings to personal preferences or hearing impairments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 1/00 Two-channel systems
    • H04S 1/007 Two-channel systems in which the audio signals are in digital form
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/302 Electronic adaptation of stereophonic sound system to listener position or orientation
    • H04S 7/303 Tracking of listener position or orientation
    • H04S 7/304 For headphones
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 7/00 Indicating arrangements; Control arrangements, e.g. balance control
    • H04S 7/30 Control circuits for electronic adaptation of the sound field
    • H04S 7/305 Electronic adaptation of stereophonic audio signals to reverberation of the listening space
    • H04S 7/306 For headphones
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048 Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484 Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842 Selection of displayed objects or displayed text elements
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/03 Aspects of down-mixing multi-channel audio to configurations with lower numbers of playback channels, e.g. 7.1 -> 5.1
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2400/00 Details of stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2400/13 Aspects of volume control, not necessarily automatic, in stereophonic sound systems
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04S STEREOPHONIC SYSTEMS
    • H04S 2420/00 Techniques used in stereophonic systems covered by H04S but not provided for in its groups
    • H04S 2420/01 Enhancing the perception of the sound image or of the spatial distribution using head related transfer functions [HRTF's] or equivalents thereof, e.g. interaural time difference [ITD] or interaural level difference [ILD]

Definitions

  • One or more implementations relate generally to audio signal processing, and more specifically to binaural rendering of channel and object-based audio for headphone playback.
  • Virtual rendering of spatial audio over a pair of speakers commonly involves the creation of a stereo binaural signal that represents the desired sound arriving at the listener's left and right ears, synthesized to simulate a particular audio scene in three-dimensional (3D) space that may contain a multitude of sources at different locations.
  • Binaural processing or rendering can be defined as a set of signal processing operations aimed at reproducing the intended 3D location of a sound source over headphones by emulating the natural spatial listening cues of human subjects.
  • Typical core components of a binaural renderer are head-related filtering to reproduce direction-dependent cues, as well as distance-cue processing, which may involve modeling the influence of a real or virtual listening room or environment.
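As a rough illustration of these direction and distance cues, the sketch below positions a mono source using a simple interaural time difference (Woodworth's approximation), a constant-power interaural level difference, and 1/r distance attenuation. A production renderer would instead convolve the source with measured head-related impulse responses; the constants and function name here are illustrative, not taken from the patent.

```python
import math

def render_binaural(mono, azimuth_deg, distance_m=1.0, fs=48000):
    """Crude binaural cue model: ITD, ILD, and distance gain only."""
    # Azimuth convention: 0 = straight ahead, +90 = fully to the right.
    az = math.radians(azimuth_deg)
    # Woodworth's ITD approximation, with an 8.75 cm head radius, c = 343 m/s.
    itd = (0.0875 / 343.0) * (az + math.sin(az))
    lag = int(round(abs(itd) * fs))          # far-ear delay in whole samples
    dist_gain = 1.0 / max(distance_m, 1.0)   # simple 1/r distance attenuation
    # Constant-power interaural level difference.
    g_r = dist_gain * math.sqrt((1.0 + math.sin(az)) / 2.0)
    g_l = dist_gain * math.sqrt((1.0 - math.sin(az)) / 2.0)
    left = [g_l * s for s in mono]
    right = [g_r * s for s in mono]
    if itd >= 0:   # source on the right (or center): left ear is the far ear
        left = [0.0] * lag + left
        right = right + [0.0] * lag
    else:
        right = [0.0] * lag + right
        left = left + [0.0] * lag
    return left, right
```

For a source at +90°, the model delivers the signal to the right ear at full level while the left-ear signal is attenuated and delayed by roughly 0.66 ms.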
  • For a channel-based audio presentation, a binaural renderer processes each of the five or seven main channels of a 5.1 or 7.1 surround mix as one of five or seven virtual sound sources positioned in 2D space around the listener.
  • Binaural rendering is also commonly found in games or gaming audio hardware, in which case the processing can be applied to individual audio objects in the game based on their individual 3D position.
  • Conventionally, binaural rendering is a form of blind post-processing applied to multichannel or object-based audio content.
  • Some of the processing involved in binaural rendering can have undesirable and negative effects on the timbre of the content, such as smoothing of transients or excessive reverberation added to dialog or some effects and music elements.
  • Embodiments are directed to object-based content, such as content produced for the Dolby® Atmos™ system.
  • Present systems do not feature this capability, nor do they allow such metadata to be transported as part of an additional headphone-specific payload in the codecs.
  • Embodiments include a binaural renderer running on the playback device that combines authoring metadata with real-time, locally generated metadata to provide the best possible experience to the end user when listening to channel- and object-based audio through headphones. Furthermore, for channel-based content it is generally required that the artistic intent be retained by incorporating audio segmentation analysis.
  • Embodiments are described for systems and methods of virtual rendering object-based audio content and improved equalization in headphone-based playback systems.
  • Embodiments include a method for rendering audio for playback through headphones comprising receiving digital audio content, receiving binaural rendering metadata generated by an authoring tool, processing the received digital audio content, receiving playback metadata generated by a playback device, and combining the binaural rendering metadata and playback metadata to optimize playback of the digital audio content through the headphones.
  • The digital audio content may comprise channel-based audio and object-based audio including positional information for reproducing an intended location of a corresponding sound source in three-dimensional space relative to a listener.
  • The method further comprises separating the digital audio content into one or more components based on content type, wherein the content type is selected from the group consisting of: dialog, music, audio effects, transient signals, and ambient signals.
  • The binaural rendering metadata controls a plurality of channel and object characteristics including: position, size, gain adjustment, and content dependent settings or processing presets; the playback metadata controls a plurality of listener-specific characteristics including head position, head orientation, head size, listening room noise levels, listening room properties, and playback device or screen position relative to the listener.
  • The method may further include receiving one or more user input commands modifying the binaural rendering metadata, the user input commands controlling one or more characteristics including: elevation emphasis, where elevated objects and channels could receive a gain boost; preferred 1D (one-dimensional) sound radius or 3D scaling factors for object or channel positioning; and processing mode enablement (e.g., to toggle between traditional stereo and full processing of content).
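As a sketch of how such user commands could modify per-object rendering metadata, the function below applies an elevation gain boost, a 3D scaling factor, and a stereo-only toggle. The dictionary fields and function name are hypothetical, not the patent's metadata syntax.

```python
def apply_user_commands(objects, elevation_boost_db=0.0,
                        radius_scale=1.0, stereo_only=False):
    """Apply illustrative user overrides to per-object rendering metadata.

    `objects` is a list of dicts with 'pos' (x, y, z) and 'gain_db'.
    """
    out = []
    for obj in objects:
        o = dict(obj)                    # leave the input metadata untouched
        x, y, z = o['pos']
        if stereo_only:
            o['pos'] = (x, y, 0.0)       # collapse to the horizontal plane
        else:
            o['pos'] = (x * radius_scale, y * radius_scale, z * radius_scale)
            if z > 0:                    # elevated objects get the gain boost
                o['gain_db'] = o['gain_db'] + elevation_boost_db
        out.append(o)
    return out
```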
  • The playback metadata may be generated in response to sensor data provided by an enabled headset housing a plurality of sensors, the enabled headset comprising part of the playback device.
  • The method may further comprise separating the input audio into separate sub-signals, e.g., by content type, or unmixing the input audio (channel-based and object-based) into constituent direct content and diffuse content, wherein the diffuse content comprises reverberated or reflected sound elements, and performing binaural rendering on the separate sub-signals independently.
  • Embodiments are also directed to a method for rendering audio for playback through headphones by receiving content dependent metadata dictating how content elements are rendered through the headphones, receiving sensor data from at least one of a playback device coupled to the headphones and an enabled headset including the headphones, and modifying the content dependent metadata with the sensor data to optimize the rendered audio with respect to one or more playback and user characteristics.
  • The content dependent metadata may be generated by an authoring tool operated by a content creator, and dictates the rendering of an audio signal containing audio channels and audio objects.
  • The content dependent metadata controls a plurality of channel and object characteristics selected from the group consisting of: position, size, gain adjustment, elevation emphasis, stereo/full toggling, 3D scaling factors, content dependent settings, and other spatial and timbre properties of the rendered sound field.
  • The method may further comprise formatting the sensor data into a metadata format compatible with the content dependent metadata to produce playback metadata.
  • The playback metadata controls a plurality of listener-specific characteristics selected from the group consisting of: head position, head orientation, head size, listening room noise levels, listening room properties, and sound source device position.
  • The metadata format comprises a container including one or more payload packets conforming to a defined syntax and encoding digital audio definitions for corresponding audio content elements.
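A minimal way to picture such a container is an id/length/value packing of payload packets, sketched below. The actual payload syntax is defined by the codec; this layout is purely illustrative.

```python
import struct

def pack_container(payloads):
    """Pack (payload_id, payload_bytes) pairs into one container blob.

    Layout per packet: 1-byte id, 2-byte big-endian length, then the bytes.
    """
    out = b''
    for pid, data in payloads:
        out += struct.pack('>BH', pid, len(data)) + data
    return out

def unpack_container(blob):
    """Recover the (payload_id, payload_bytes) pairs from a container blob."""
    payloads, i = [], 0
    while i < len(blob):
        pid, length = struct.unpack_from('>BH', blob, i)
        i += 3
        payloads.append((pid, blob[i:i + length]))
        i += length
    return payloads
```

A decoder that does not recognize a payload id can still skip the packet using its length field, which is what makes this kind of container extensible.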
  • The method further comprises encoding the combined playback metadata and content dependent metadata with source audio content into a bitstream for processing in a rendering system, and decoding the encoded bitstream to extract one or more parameters derived from the content dependent metadata and the playback metadata to generate a control signal modifying the source audio content for playback through the headphones.
  • The method may further comprise performing one or more post-processing functions on the source audio content prior to playback through headphones, wherein the post-processing functions comprise at least one of: downmixing from a plurality of surround sound channels to one of a binaural mix or a stereo mix, level management, equalization, timbre correction, and noise cancellation.
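For the downmixing step, a common approach mixes the center and surround channels into left and right with -3 dB gains. The coefficients below are typical ITU-style defaults shown as a sketch, not values mandated by the patent, and the LFE channel is simply dropped here.

```python
import math

def downmix_51_to_stereo(ch, c_gain=1.0 / math.sqrt(2), s_gain=1.0 / math.sqrt(2)):
    """Downmix a 5.1 frame dict {'L','R','C','LFE','Ls','Rs'} of equal-length
    sample lists to a stereo pair using -3 dB center/surround coefficients."""
    n = len(ch['L'])
    left = [ch['L'][i] + c_gain * ch['C'][i] + s_gain * ch['Ls'][i]
            for i in range(n)]
    right = [ch['R'][i] + c_gain * ch['C'][i] + s_gain * ch['Rs'][i]
             for i in range(n)]
    return left, right
```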
  • Embodiments are further directed to systems and articles of manufacture that perform or embody processing commands that perform or implement the above-described method acts.
  • FIG. 1 illustrates an overall system that incorporates embodiments of a content creation, rendering and playback system, under some embodiments.
  • FIG. 2 A is a block diagram of an authoring tool used in an object-based headphone rendering system, under an embodiment.
  • FIG. 2 B is a block diagram of an authoring tool used in an object-based headphone rendering system, under an alternative embodiment.
  • FIG. 3 A is a block diagram of a rendering component used in an object-based headphone rendering system, under an embodiment.
  • FIG. 3 B is a block diagram of a rendering component used in an object-based headphone rendering system, under an alternative embodiment.
  • FIG. 4 is a block diagram that provides an overview of the dual-ended binaural rendering system, under an embodiment.
  • FIG. 5 illustrates an authoring tool GUI that may be used with embodiments of a headphone rendering system, under an embodiment.
  • FIG. 6 illustrates an enabled headphone that comprises one or more sensors that sense playback conditions for encoding as metadata used in a headphone rendering system, under an embodiment.
  • FIG. 7 illustrates the connection between a headphone and device including a headphone sensor processor, under an embodiment.
  • FIG. 8 is a block diagram illustrating the different metadata components that may be used in a headphone rendering system, under an embodiment.
  • FIG. 9 illustrates functional components of a binaural rendering component for headphone processing, under an embodiment.
  • FIG. 10 illustrates a binaural rendering system for rendering audio objects in a headphone rendering system, under an embodiment.
  • FIG. 11 illustrates a more detailed representation of the binaural rendering system of FIG. 10, under an embodiment.
  • FIG. 12 is a system diagram showing the different tools used in an HRTF modeling system used in a headphone rendering system, under an embodiment.
  • FIG. 13 illustrates a data structure that enables delivery of metadata for a headphone rendering system, under an embodiment.
  • FIG. 14 illustrates an example case of three impulse response measurements for each ear, in an embodiment of a headphone equalization process.
  • FIG. 15 A illustrates a circuit for calculating the free-field sound transmission, under an embodiment.
  • FIG. 15 B illustrates a circuit for calculating the headphone sound transmission, under an embodiment.
  • Embodiments are directed to an audio content production and playback system that optimizes the rendering and playback of object and/or channel-based audio over headphones.
  • FIG. 1 illustrates an overall system that incorporates embodiments of a content creation, rendering and playback system, under some embodiments.
  • An authoring tool 102 is used by a creator to generate audio content for playback through one or more devices 104 for a user to listen to through headphones 116 or 118.
  • The device 104 is generally a portable audio or music player, small computer, or mobile telecommunication device that runs applications allowing for the playback of audio content.
  • Such a device may be a mobile phone or audio (e.g., MP3) player 106, a tablet computer (e.g., Apple iPad or similar device) 108, a music console 110, a notebook computer 111, or any similar audio playback device.
  • The audio may comprise music, dialog, effects, or any digital audio that may be desired to be listened to over headphones, and such audio may be streamed wirelessly from a content source, played back locally from storage media (e.g., disk, flash drive, etc.), or generated locally.
  • The term headphone usually refers specifically to a close-coupled playback device worn by the user directly over his or her ears, or to in-ear listening devices; it may also refer generally to at least some of the processing performed to render signals intended for playback on headphones, as an alternative to the terms “headphone processing” or “headphone rendering.”
  • The audio processed by the system may comprise channel-based audio, object-based audio, or object- and channel-based audio (e.g., hybrid or adaptive audio).
  • The audio comprises or is associated with metadata that dictates how the audio is rendered for playback on specific endpoint devices and listening environments.
  • Channel-based audio generally refers to an audio signal plus metadata in which the position is coded as a channel identifier, where the audio is formatted for playback through a pre-defined set of speaker zones with associated nominal surround-sound locations, e.g., 5.1, 7.1, and so on; object-based audio means one or more audio channels with a parametric source description, such as apparent source position (e.g., 3D coordinates), apparent source width, etc.
  • The term adaptive audio may be used to mean channel-based and/or object-based audio signals plus metadata that renders the audio signals based on the playback environment, using an audio stream plus metadata in which the position is coded as a 3D position in space.
  • The listening environment may be any open, partially enclosed, or fully enclosed area, such as a room, but embodiments described herein are generally directed to playback through headphones or other close-proximity endpoint devices.
  • Audio objects can be considered as groups of sound elements that may be perceived to emanate from a particular physical location or locations in the environment, and such objects can be static or dynamic.
  • The audio objects are controlled by metadata, which, among other things, details the position of the sound at a given point in time; upon playback they are rendered according to the positional metadata.
  • Adaptive audio content may also include channel-based content (e.g., 'beds'), where beds are effectively channel-based sub-mixes or stems. These can be delivered for final playback (rendering) and can be created in different channel-based configurations such as 5.1 or 7.1.
  • The headphone utilized by the user may be a legacy or passive headphone 118 that only includes non-powered transducers that simply recreate the audio signal, or it may be an enabled headphone 116 that includes sensors and other components (powered or non-powered) that provide certain operational parameters back to the renderer for further processing and optimization of the audio content.
  • Headphones 116 or 118 may be embodied in any appropriate close-ear device, such as open or closed headphones, over-ear or in-ear headphones, earbuds, earpads, noise-cancelling, isolation, or other types of headphone device.
  • Such headphones may be wired or wireless with regard to their connection to the sound source or device 104.
  • The audio content from authoring tool 102 includes stereo or channel-based audio (e.g., 5.1 or 7.1 surround sound) in addition to object-based audio.
  • A renderer 112 receives the audio content from the authoring tool and provides certain functions that optimize the audio content for playback through device 104 and headphones 116 or 118.
  • The renderer 112 includes a pre-processing stage 113, a binaural rendering stage 114, and a post-processing stage 115.
  • The pre-processing stage 113 generally performs certain segmentation operations on the input audio, such as segmenting the audio based on its content type, among other functions.
  • The binaural rendering stage 114 generally combines and processes the metadata associated with the channel and object components of the audio and generates a binaural stereo output, or a multi-channel output comprising binaural stereo plus additional low-frequency outputs.
  • The post-processing component 115 generally performs downmixing, equalization, gain/loudness/dynamic range control, and other functions prior to transmission of the audio signal to the device 104.
  • Although the renderer will likely generate two-channel signals in most cases, it could be configured to provide more than two channels of input to specific enabled headphones, for instance to deliver separate bass channels (similar to the LFE 0.1 channel in traditional surround sound).
  • The enabled headphone may have specific sets of drivers to reproduce bass components separately from the mid- to higher-frequency sound.
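Splitting a dedicated bass feed from the mid/high feed can be pictured as a simple crossover. The first-order filter and the 120 Hz corner frequency below are illustrative choices, not values specified by the system; the two outputs sum back to the original signal by construction.

```python
import math

def split_bass(signal, fs=48000, fc=120.0):
    """Split a signal into a low-frequency feed and its complement, so an
    enabled headphone could drive a dedicated bass transducer separately.

    Uses a one-pole low-pass; the mid/high feed is the residual, which
    guarantees the two feeds reconstruct the input exactly when summed.
    """
    alpha = math.exp(-2.0 * math.pi * fc / fs)   # one-pole smoothing factor
    lp, bass, rest = 0.0, [], []
    for s in signal:
        lp = (1.0 - alpha) * s + alpha * lp      # low-pass state update
        bass.append(lp)
        rest.append(s - lp)                      # complementary mid/high feed
    return bass, rest
```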
  • The components of FIG. 1 generally represent the main functional blocks of the audio generation, rendering, and playback systems, and certain functions may be incorporated as part of one or more other components.
  • For example, the renderer 112 may be incorporated in part or in whole in the device 104.
  • In this case, the audio player or tablet (or other device) may include a renderer component integrated within the device.
  • Similarly, the enabled headphone 116 may include at least some functions associated with the playback device and/or renderer.
  • A fully integrated headphone may include an integrated playback device (e.g., a built-in content decoder/MP3 player) as well as an integrated rendering component.
  • One or more components of the renderer 112, such as the pre-processing component 113, may also be implemented at least in part in the authoring tool, or as part of a separate pre-processing component.
  • FIG. 2 A is a block diagram of an authoring tool used in an object-based headphone rendering system, under an embodiment.
  • Input audio 202 from an audio source (e.g., live source, recording, etc.) is provided to a digital audio workstation (DAW).
  • The input audio is typically in digital form; if analog audio is used, an A/D (analog-to-digital) conversion step (not shown) is required.
  • This audio typically comprises object- and channel-based content, such as may be used in an adaptive audio system (e.g., Dolby Atmos), and often includes several different types of content.
  • The input audio may be segmented through an (optional) audio segmentation pre-process 204 that separates (or segments) the audio based on its content type, so that different types of audio may be rendered differently.
  • For example, dialog may be rendered differently than transient signals or ambient signals.
  • The DAW 204 may be implemented as a workstation for editing and processing the segmented or unsegmented digital audio 202, and may include a mixing console, control surface, audio converter, data storage, and other appropriate elements.
  • The DAW is a processing platform that runs digital audio software providing comprehensive editing functionality as well as an interface for one or more plug-in programs, such as a panner plug-in, along with other functions such as equalizers, synthesizers, effects, and so on.
  • The panner plug-in shown in DAW 204 performs a panning function configured to distribute each object signal to specific speaker pairs or locations in 2D/3D space in a manner that conveys the desired position of each respective object signal to the listener.
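The core of such a panning function is a gain distribution across speaker positions. The constant-power sketch below illustrates the principle for a single speaker pair; a real panner distributes each object across many pairs or locations in 2D/3D space, and the function name and pan-law choice are illustrative.

```python
import math

def pan_to_pair(sample, pan):
    """Constant-power pan of one object sample to a two-speaker pair.

    `pan` runs from -1 (fully left) to +1 (fully right); total radiated
    power (l**2 + r**2) stays constant as the object moves.
    """
    theta = (pan + 1.0) * math.pi / 4.0      # map [-1, 1] onto [0, pi/2]
    return sample * math.cos(theta), sample * math.sin(theta)
```

At center pan, each speaker receives the sample scaled by cos(45°) ≈ 0.707, which keeps the perceived loudness steady across the pan range.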
  • The processed audio from DAW 204 is input to a binaural rendering component 206.
  • This component includes an audio processing function that produces binaural audio output 210 as well as binaural rendering metadata 208 and spatial media type metadata 212.
  • The audio 210 and metadata components 208 and 212 form a coded audio bitstream with a binaural metadata payload 214.
  • The audio component 210 comprises channel and object-based audio that is passed to the bitstream 214 with the metadata components 208 and 212; it should be noted, however, that the audio component 210 may be standard multi-channel audio, binaurally rendered audio, or a combination of these two audio types.
  • The binaural rendering component 206 also includes a binaural metadata input function that directly produces a headphone output 216 for direct connection to the headphones.
  • In this embodiment, the metadata for binaural rendering is generated at mixing time within the authoring tool 102a.
  • Alternatively, the metadata may be generated at encoding time, as shown with reference to FIG. 2 B.
  • A mixer 203 uses an application or tool to create audio data and the binaural and spatial metadata. The mixer 203 provides inputs to the DAW 204, or alternatively may provide inputs directly to the binaural rendering process 206.
  • The mixer receives the headphone audio output 216 so that the mixer may monitor the effect of the audio and metadata input. This effectively constitutes a feedback loop in which the mixer auditions the headphone-rendered audio output to determine if any input changes are needed.
  • The mixer 203 may be a person operating equipment, such as a mixing console or computer, or it may be an automated process that is remotely controlled or pre-programmed.
  • FIG. 2 B is a block diagram of an authoring tool used in an object-based headphone rendering system, under an alternative embodiment.
  • In this embodiment, the metadata for binaural rendering is generated at encoding time, and the encoder runs a content classifier and metadata generator to generate additional metadata from legacy channel-based content.
  • Legacy multichannel content 220, which includes only channel-based audio and no audio objects, is input to an encoding tool and rendering/headphone emulation component 226.
  • The object-based content 222 is separately input to this component as well.
  • The channel-based legacy content 220 may first be input to an optional audio segmentation pre-processor 224 for separation of different content types for individual rendering.
  • The binaural rendering component 226 includes a headphone emulation function that produces binaural audio output 230 as well as binaural rendering metadata 228 and spatial media type metadata 232.
  • The audio 230 and metadata components 228 and 232 form a coded audio bitstream with a binaural metadata payload 236.
  • The audio component 230 usually comprises channel and object-based audio that is passed to the bitstream 236 with the metadata components 228 and 232; it should be noted, however, that the audio component 230 may be standard multi-channel audio, binaurally rendered audio, or a combination of these two audio types.
  • The binaural rendering component 226 also includes a binaural metadata input function that directly produces a headphone output 234 for direct connection to the headphones. As shown in FIG. 2 B, an optional mixer (person or process) 223 may be included to monitor the headphone output 234 and to input and modify the audio data and metadata inputs that may be provided directly to the rendering process 226.
  • Audio is generally classified into one of a number of defined content types, such as dialog, music, ambience, special effects, and so on.
  • An object may change content type throughout its duration, but at any specific point in time it is generally only one type of content.
  • The content type is expressed as a probability that the object is a particular type of content at any point in time.
  • For example, a constant dialog object would be expressed as a one-hundred-percent-probability dialog object, while an object that transforms from dialog to music may be expressed as fifty percent dialog/fifty percent music.
  • Processing objects that have different content types could be performed by averaging their respective probabilities for each content type, selecting the content type probabilities for the most dominant object within a group of objects, or a single object over time, or some other logical combination of content type measures.
  • The content type may also be expressed as an n-dimensional vector, where n is the total number of different content types (e.g., four, in the case of dialog/music/ambience/effects).
  • The content type metadata may be embodied as a combined content type metadata definition, where a combination of content types reflects the probability distributions that are combined (e.g., a vector of probabilities of music, speech, and so on).
  • Audio content can be separated into different sub-signals corresponding to the different content types. This is accomplished, for example, by sending some percentage of the original signal to each sub-signal (either on a wide-band basis or on a per-frequency-sub-band basis), in a proportion driven by the computed media type probabilities.
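The probability-driven separation, and the averaging rule for combining content-type vectors mentioned above, can be sketched as follows. The dictionary representation and the wide-band-only split are illustrative simplifications; a per-sub-band version would apply the same proportional gains within each frequency band.

```python
def split_by_content_type(signal, probs):
    """Split a wide-band signal into per-content-type sub-signals in
    proportion to classifier probabilities, e.g. {'dialog': 0.5, 'music': 0.5}.

    Probabilities are assumed to sum to 1, so the sub-signals sum back to
    the original signal.
    """
    return {ctype: [p * s for s in signal] for ctype, p in probs.items()}

def combine_content_types(vectors):
    """Combine several objects' content-type vectors by averaging each
    type's probability (one of the combination rules described above)."""
    n = len(vectors)
    return {k: sum(v[k] for v in vectors) / n for k in vectors[0]}
```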
  • FIG. 3 A is a block diagram of a rendering component 112 a used in an object-based headphone rendering system, under an embodiment.
  • FIG. 3 A illustrates the pre-processing 113 , binaural rendering 114 , and post-processing 115 sub-components of renderer 112 in greater detail.
  • The metadata and audio are input into processing or pre-processing components in the form of a coded audio bitstream 301.
  • The metadata 302 is input to a metadata processing component 306, and the audio 304 is input to an optional audio segmentation pre-processor 308.
  • Audio segmentation may be performed by the authoring tool through pre-processors 202 or 224; if such audio segmentation is not performed by the authoring tool, the renderer may perform this task through pre-processor 308.
  • The processed metadata and segmented audio are then input to a binaural rendering component 310. This component performs certain headphone-specific rendering functions, such as 3D positioning, distance control, head size processing, and so on.
  • The binaurally rendered audio is then input to an audio post-processor 314, which applies certain audio operations, such as level management, equalization, noise compensation or cancellation, and so on.
  • The post-processed audio is then output 312 for playback through headphones 116 or 118.
  • The microphone and sensor data 316 is input back to at least one of the metadata processing component 306, the binaural rendering component 310, or the audio post-processing component 314.
  • Head tracking could be replaced by a simpler, pseudo-randomly generated head 'jitter' that mimics continuously changing small head movements. This allows any relevant environmental or operational data at the point of playback to be used by the rendering system to further modify the audio to counteract or enhance certain playback conditions.
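Such a pseudo-random jitter track could be generated with a smoothed random walk, as in the sketch below. The magnitude and smoothing constants, and the yaw-only simplification, are illustrative choices rather than values from the specification.

```python
import random

def head_jitter(n_frames, max_deg=2.0, smooth=0.98, seed=1):
    """Generate a smooth pseudo-random yaw 'jitter' track (in degrees) that
    mimics small, continuously changing head movements.

    Each frame blends the previous yaw with a fresh random target, so the
    track drifts smoothly and stays bounded by max_deg.
    """
    rng = random.Random(seed)
    yaw, out = 0.0, []
    for _ in range(n_frames):
        yaw = smooth * yaw + (1.0 - smooth) * rng.uniform(-max_deg, max_deg)
        out.append(yaw)
    return out
```

The renderer would feed each frame's yaw value into the same rotation path that real head-tracking data would use.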
  • FIG. 3 B is a block diagram of a rendering component used in an object-based headphone rendering system, under this alternative embodiment.
  • coded audio bitstream 321 from the authoring tool is provided in its constituent parts of metadata 322 input to metadata processing component 326 , and audio 324 to binaural rendering component 330 .
  • the audio is pre-segmented by an audio pre-segmentation process 202 or 224 in the appropriate authoring tool.
  • the binaural rendering component 330 performs certain headphone specific rendering functions, such as 3D positioning, distance control, head size processing, and so on.
  • the binaural rendered audio is then input to audio post-processor 334 , which applies certain audio operations, such as level management, equalization, noise compensation or cancellation, and so on.
  • the post-processed audio is then output 332 for playback through headphones 116 or 118 .
  • the microphone and sensor data 336 is input back to at least one of the metadata processing component 326 , the binaural rendering component 330 or the audio post-processing component 334 .
  • authoring tool 102 represents a workstation or computer application that allows a content creator (author) to select or create audio content for playback and define certain characteristics for each of the channels and/or objects that make up the audio content.
  • the authoring tool may include a mixer type console interface or a graphical user interface (GUI) representation of a mixing console.
  • FIG. 5 illustrates an authoring tool GUI that may be used with embodiments of a headphone rendering system, under an embodiment.
  • In GUI display 500 , a number of different characteristics can be set by the author, such as gain levels, low frequency characteristics, equalization, panning, object position and density, delays, fades, and so on.
  • user input is facilitated by the use of virtual sliders for the author to specify setting values, though other virtualized or direct input means are also possible, such as direct text entry, potentiometer settings, rotary dials, and so on.
  • At least some of the parameter settings entered by the user are encoded as metadata that is associated with the relevant channels or audio objects for transport with the audio content.
  • the metadata may be packaged as part of an additional specific headphone payload in the codec (coder/decoder) circuits in the audio system.
  • real-time metadata that encodes certain operational and environmental conditions (e.g., head tracking, head-size sensing, room sensing, ambient conditions, noise levels, etc.) can be provided live to the binaural renderer.
  • the binaural renderer combines the authored metadata content and the real-time locally generated metadata to provide an optimized listening experience for the user.
  • the object controls provided by the authoring tools and user input interfaces allow the user to control certain important headphone-specific parameters, such as binaural and stereo-bypass dynamic rendering modes, LFE (low-frequency element) gain and object gains, media intelligence and content-dependent controls.
  • rendering mode could be selected on a content-type basis or object basis between stereo (Lo/Ro), matrixed stereo (Lt/Rt), rendering using a combination of interaural time delays and stereo amplitude or intensity panning, or full binaural rendering (i.e., a combination of interaural time delays and levels as well as frequency-dependent spectral cues).
  • a frequency cross over point can be specified to revert to stereo processing below a given frequency.
  • Low frequency gains can also be specified to attenuate low frequency components or LFE content. Low frequency content could also be transported separately to enabled headphones, as described in greater detail below.
  • Metadata can be specified on a per-content type or per-channel/object basis such as room model generally described by a direct/reverberant gain and a frequency dependent reverberation time and interaural target cross-correlation. It could also include other more detailed modeling of the room (e.g., early reflections positions, gains and late reverberation gain). It could also include directly specified filters modeling a particular room response.
  • Other metadata includes warp-to-screen flags (that control how objects are remapped to fit screen aspect ratio and viewing angle as a function of distance).
  • a listener-relative flag (i.e., whether or not to apply headtracking information)
  • preferred scaling, which specifies a default size/aspect ratio of the ‘virtual room’ for rendering the content, used to scale the object positions as well as remap to screen (as a function of device screen size and distance to device)
  • distance model exponent that controls the distance attenuation law (e.g., 1/(1+r^α))
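The distance attenuation law driven by this metadata exponent can be written as a one-line gain function (a sketch; the function name is illustrative):

```python
def distance_gain(r, alpha=1.0):
    """Distance attenuation law g(r) = 1 / (1 + r**alpha), where alpha is
    the 'distance model exponent' carried in the rendering metadata."""
    return 1.0 / (1.0 + r ** alpha)
```

A larger exponent makes sources fall off more steeply with distance; at r = 0 the gain is always unity.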
  • different types of content may be processed differently based on the intent of the author and the optimum rendering configuration. Separation of content based on type or other salient characteristic can be achieved a priori during authoring, e.g. by manually keeping dialog separated in their own set of tracks or objects, or a posteriori live prior to rendering in the receiving device.
  • Additional media intelligence tools can be used during authoring to classify content according to different characteristics and generate additional channels or objects that may carry different sets of rendering metadata.
  • media classifiers could be trained for the content creation process to develop a model to identify different stem mix proportions.
  • An associated source separation technique could be employed to extract the approximate stems using weighting functions derived from the media classifier.
  • binaural parameters that would be encoded as metadata may be applied during authoring.
  • a mirrored process is applied in the end-user device whereby using the decoded metadata parameters would create a substantially similar experience as during content creation.
  • extensions to existing studio authoring tools include binaural monitoring and metadata recording.
  • Typical metadata captured at authoring time include: channel/object position/size information for each channel and audio object; channel/object gain adjustment; content-dependent metadata (which can vary based on content type); bypass flags to indicate settings, such as that stereo/left/right rendering should be used instead of binaural; crossover points and levels indicating that bass frequencies below the crossover point must be bypassed and/or attenuated; and room model information to describe a direct/reverberant gain and a frequency-dependent reverberation time or other characteristics, such as early reflections and late reverberation gain.
  • Other content dependent metadata could provide warp to screen functionality that remaps images to fit screen aspect ratio or change the viewing angle as a function of distance.
  • Head tracking information can be applied to provide a listener relative experience.
  • Metadata could also be used that implements a distance model exponent that controls the distance attenuation law (e.g., 1/(1+r^α)). These represent only certain characteristics that may be encoded by the metadata, and other characteristics may also be encoded.
  • FIG. 4 is a block diagram that provides an overview of the dual-ended binaural rendering system, under an embodiment.
  • system 400 provides content-dependent metadata and rendering settings that affect how different types of audio content are to be rendered.
  • the original audio content may comprise different audio elements, such as dialog, music, effects, ambient sounds, transients, and so on. Each of these elements may be optimally rendered in different ways, instead of limiting them to be rendered all in only one way.
  • audio input 401 comprises a multi-channel signal, object-based channel or hybrid audio of channel plus objects.
  • the audio is input to an encoder 402 that adds or modifies metadata associated with the audio objects and channels.
  • the audio is input to a headphone monitoring component 410 that applies user adjustable parametric tools to control headphone processing, equalization, downmix, and other characteristics appropriate for headphone playback.
  • the user-optimized parameter set (M) is then embedded as metadata or additional metadata by the encoder 402 to form a bitstream that is transmitted to decoder 404 .
  • the decoder 404 decodes the metadata and the parameter set M of the object and channel-based audio for controlling the headphone processing and downmix component 406 , which produces headphone optimized and downmixed (e.g., 5.1 to stereo) audio output 408 to the headphones.
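The 5.1-to-stereo downmix mentioned above can be sketched with conventional ITU-style coefficients (0.707, i.e. about −3 dB, for center and surrounds); the patent does not specify the coefficients, so these are illustrative defaults.

```python
import numpy as np

def downmix_51_to_stereo(L, R, C, LFE, Ls, Rs, lfe_gain=0.0):
    """5.1 -> stereo downmix with conventional -3 dB coefficients for the
    center and surround channels. lfe_gain = 0 discards the LFE channel,
    as is common for headphone playback."""
    g = 0.707
    Lo = np.asarray(L) + g * np.asarray(C) + g * np.asarray(Ls) + lfe_gain * np.asarray(LFE)
    Ro = np.asarray(R) + g * np.asarray(C) + g * np.asarray(Rs) + lfe_gain * np.asarray(LFE)
    return Lo, Ro
```

In the system of FIG. 4, the parameter set M would steer choices like `lfe_gain` and the surround coefficients rather than leaving them fixed.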
  • certain metadata may be provided by a headphone monitoring component 410 that provides specific user adjustable parametric tools to control headphone-specific playback.
  • a headphone monitoring component 410 may be configured to provide a user some degree of control over headphone rendering for legacy headphones 118 that passively playback transmitted audio content.
  • the endpoint device may be an enabled headphone 116 that includes sensors and/or some degree of processing capability to generate metadata or signal data that can be encoded as compatible metadata to further modify the authored metadata to optimize the audio content for rendering over headphones.
  • rendering is performed live and can account for locally generated sensor array data which can be generated either by a headset or an actual mobile device 104 to which headsets are attached, and such hardware-generated metadata can be further combined with the metadata created by the content creator at authoring time to enhance the binaural rendering experience.
  • low frequency content may be transported separately to enabled headphones allowing more than stereo input (typically 3 or 4 audio inputs), or encoded and modulated into the higher frequencies of the main stereo waveforms carried to a headset with only stereo input. This would allow further low frequency processing to occur in the headphones (e.g. routing to specific drivers optimized for low frequencies).
  • Such headphones may include low frequency specific drivers and/or filter plus crossover and amplification circuitry to optimize playback of low frequency signals.
  • a link from the headphones to the headphone processing component is provided on the playback side to enable manual identification of the headphones for automatic headphone preset loading or other configuration of the headphones.
  • a link may be implemented as a wireless or wired link from the headphones to headphone process 406 in FIG. 4 , for example.
  • the identification may be used to configure the target headphones or to send specific content or specifically rendered content to a specific set of headphones if multiple target headphones are being used.
  • the headphone identifier may be embodied in any appropriate alphanumeric or binary code that is processed by the rendering process as either part of the metadata or a separate data processing operation.
  • FIG. 6 illustrates an enabled headphone that comprises one or more sensors that sense playback conditions for encoding as metadata used in a headphone rendering system, under an embodiment.
  • the various sensors may be arranged in a sensor array that can be used to provide live metadata to the renderer at render time.
  • the sensors include a range sensor (such as infrared IR or time-of-flight TOF camera) 602 , tension/headsize sensor 604 , gyroscopic sensor 606 , external microphone (or pair) 610 , ambient noise cancellation processor 608 , internal microphone (or pair) 612 , among other appropriate sensors.
  • the sensor array can comprise both audio sensors (i.e., microphones) as well as data sensors (e.g., orientation, size, tension/stress, and range sensors).
  • orientation data can be used to ‘lock’ or rotate the spatial audio object according to the listener's head motion
  • tension sensors or external microphones can be used to infer the size of the listener's head (e.g., by monitoring audio cross correlation at two external microphones located on the earcups) and adjust relevant binaural rendering parameters (e.g., interaural time delays, shoulder reflection timing, etc.).
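The cross-correlation approach to head-size inference can be sketched as follows: find the lag that maximizes the correlation between the two earcup microphone signals, convert it to an interaural time delay, and use the acoustic path difference as a crude head-width proxy. Function names and the spherical-head simplification are illustrative.

```python
import numpy as np

def estimate_itd_seconds(mic_left, mic_right, fs):
    """Estimate the interaural time delay between two earcup microphones
    from the peak of their cross-correlation."""
    corr = np.correlate(np.asarray(mic_left, float),
                        np.asarray(mic_right, float), mode="full")
    lag = int(np.argmax(corr)) - (len(mic_right) - 1)
    return lag / float(fs)

def head_width_estimate(itd_seconds, c=343.0):
    """Very crude head-width proxy: acoustic path difference = ITD * c."""
    return abs(itd_seconds) * c
```

The resulting estimate would then drive binaural parameters such as the interaural time delays and shoulder-reflection timing mentioned above.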
  • Range sensors 602 can be used to evaluate distance to the display in case of a mobile A/V playback and correct the location of on-screen objects to account for the distance-dependent viewing angle (i.e. render objects wider as the screen is brought closer to the listener) or adjust global gain and room model to convey appropriate distance rendering.
  • Such a sensor function is useful if the audio content is part of A/V content that is played back on devices that may range from small mobile phones (e.g., 2-4′′ screen size) to tablets (e.g., 7-10′′ screen size) to laptop computers (e.g., 15-17′′ screen size).
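The distance-dependent viewing-angle correction can be sketched geometrically: the screen subtends a larger angle as it is brought closer, and on-screen object azimuths are scaled accordingly. The reference angle below is a hypothetical normalization, not a value from the patent.

```python
import math

def viewing_angle_rad(screen_width_m, distance_m):
    """Horizontal angle subtended by the screen at the listener's eyes."""
    return 2.0 * math.atan2(screen_width_m / 2.0, distance_m)

def scale_onscreen_azimuth(az_norm, screen_width_m, distance_m,
                           ref_angle_rad=math.radians(30.0)):
    """Remap a normalized on-screen azimuth (-1..1) so rendered objects
    become wider as the screen is brought closer to the listener."""
    return az_norm * viewing_angle_rad(screen_width_m, distance_m) / ref_angle_rad
```

A phone held at arm's length and a laptop across a desk can subtend similar angles, which is why the range sensor, rather than screen size alone, drives the correction.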
  • sensors can also be used to automatically detect and set the routing of the left and right audio outputs to the correct transducers, not requiring a specific a priori orientation or explicit “Left/Right” markings on the headphones.
  • the audio or A/V content transmitted to the headphones 116 or 118 may be provided through a handheld or portable device 104 .
  • the device 104 itself may include one or more sensors.
  • certain gyro sensors and accelerometers may be provided to track object movement and position.
  • the device 104 to which the headset is connected can also provide additional sensor data such as orientation, head size, camera, etc., as device metadata.
  • FIG. 7 illustrates the connection between a headphone and device 104 including a headphone sensor processor 702 , under an embodiment.
  • headphone 600 transmits certain sensor, audio and microphone data 701 over a wired or wireless link to headphone sensor processor 702 .
  • the processed data from processor 702 may comprise analog audio with metadata 704 or spatial audio output 706 .
  • each of the connections comprises a bi-directional link between the headphone, processor, and outputs.
  • user controls can also be provided to complement or generate appropriate metadata if not available through hardware sensor arrays.
  • Example user controls can include: elevation emphasis, binaural on/off switch, preferred sound radius or size, and other similar characteristics.
  • Such user controls may be provided through hardware or software interface elements associated with the headphone processor component, playback device, and/or headphones.
  • FIG. 8 is a block diagram illustrating the different metadata components that may be used in a headphone rendering system, under an embodiment.
  • the metadata processed by the headphone processor 806 comprises authored metadata, such as that produced by authoring tool 102 and mixing console 500 , and hardware generated metadata 804 .
  • the hardware generated metadata 804 may include user input metadata, device-side metadata provided by or generated from data sent from device 808 , and/or headphone-side metadata provided by or generated from data sent from headphone 810 .
  • the authored 802 and/or hardware-generated 804 metadata is processed in a binaural rendering component 114 of renderer 112 .
  • the metadata provides control over specific audio channels and/or objects to optimize playback over headphones 116 or 118 .
  • FIG. 9 illustrates functional components of a binaural rendering component for headphone processing, under an embodiment.
  • decoder 902 outputs the multi-channel signal or the channel plus object tracks along with the decoded parameter set, M, for controlling the headphone processing performed by headphone processor 904 .
  • the headphone processor 904 also receives certain spatial parameter updates 906 from camera-based or sensor-based tracking device 910 .
  • Tracking device 910 is a face-tracking or head-tracking device that measures certain angular and positional parameters (r, ⁇ , ⁇ ) associated with the user's head.
  • the spatial parameters may correspond to distance and certain orientation angles, such as yaw, pitch, and roll.
  • An original set of spatial parameters, x, may be updated as the data from tracking device 910 is processed.
  • These spatial parameter updates, Y, are then passed to the headphone processor 904 for further modification of the parameter set M.
  • the processed audio data is then transmitted to a post-processing stage 908 that performs certain audio processing such as timbre-correction, filtering, downmixing, and other relevant processes.
  • the audio is then equalized by equalizer 912 and transmitted to the headphones.
  • the equalizer 912 may perform equalization with or without using a pressure-division-ratio (PDR) transform, as described in further detail in the description that follows.
  • FIG. 10 illustrates a binaural rendering system for rendering audio objects in a headphone rendering system, under an embodiment.
  • FIG. 10 illustrates some of the signal components as they are processed through a binaural headphone processor.
  • object audio components are input to an unmixer 1002 that separates direct and diffuse components (e.g., direct from reverb path) of the audio.
  • the direct component is input to a downmix component 1006 that downmixes surround channels (e.g., 5.1 surround) to stereo with phase shift information.
  • the direct component is also input to a direct content binaural renderer 1008 . Both two-channel components are then input to a dynamic timbre equalizer 1012 .
  • the object position and user control signals are input to a virtualizer steerer component 1004 .
  • This generates a scaled object position that is input to the binaural renderer 1008 along with the direct component.
  • the diffuse component of the audio is input to a separate binaural renderer 1010 , and is combined with the rendered direct content by an adder circuit prior to output as two-channel output audio.
  • FIG. 11 illustrates a more detailed representation of the binaural rendering system of FIG. 10 , under an embodiment.
  • the multi-channel and object-based audio is input to unmixer 1102 for separation into direct and diffuse components.
  • the direct content is processed by direct binaural renderer 1118
  • the diffuse content is processed by diffuse binaural renderer 1120 .
  • After downmixing 1116 and timbre equalization 1124 of the direct content the diffuse and direct audio components are then combined through an adder circuit for post-processing, such as by headphone equalizer 1122 , and other possible circuits.
  • certain user input and feedback data are used to modify the binaural rendering of the diffuse content in diffuse binaural renderer 1120 .
  • playback environment sensor 1106 provides data regarding listening room properties and noise estimation (ambient sound levels), head/face tracking sensor 1108 provides head position, orientation, and size data, device tracking sensor 1110 provides device position data, and user input 1112 provides playback radius data.
  • This data may be provided by sensors located in the headphone 116 and/or device 104 .
  • the various sensor data and user input data is combined with content metadata, which provides object position and room parameter information in a virtualizer steerer component 1104 .
  • This component also receives direct and diffuse energy information from the unmixer 1102 .
  • the virtualizer steerer 1104 outputs data including object position, head position/orientation/size, room parameters, and other relevant information to the diffuse content binaural renderer 1120 . In this manner, the diffuse content of the input audio is adjusted to accommodate sensor and user input data.
  • While optimal performance of the virtualizer steerer is achieved when sensor data, user input data, and content metadata are received, it is possible to achieve beneficial performance of the virtualizer steerer even in the absence of one or more of these inputs.
  • For legacy content (e.g., encoded bitstreams which do not contain binaural rendering metadata) or conventional headphones (e.g., headphones which do not include various sensors, microphones, etc.), a beneficial result may still be obtained by providing the direct energy and diffuse energy outputs of the unmixer 1102 to the virtualizer steerer 1104 to generate control information for the diffuse content binaural renderer 1120 , even in the absence of one or more other inputs to the virtualizer steerer.
  • rendering system 1100 of FIG. 11 allows the binaural headphone renderer to efficiently provide individualization based on interaural time difference (ITD) and interaural level difference (ILD) and sensing of head size.
  • ILD and ITD are important cues for azimuth, which is the angle of an audio signal relative to the head when produced in the horizontal plane.
  • ITD is defined as the difference in arrival time of a sound between two ears, and the ILD effect uses differences in sound level entering the ears to provide localization cues. It is generally accepted that ITDs are used to localize low frequency sound and ILDs are used to localize high frequency sounds, while both are used for content that contains both high and low frequencies.
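A standard way to model the ITD cue described above is Woodworth's spherical-head approximation; it is a classical model rather than one the patent specifies, and the default head radius is an average adult value that a sensed head size would replace.

```python
import math

def woodworth_itd(azimuth_rad, head_radius_m=0.0875, c=343.0):
    """Woodworth's spherical-head approximation of the interaural time
    delay (seconds) for a source azimuth in [0, pi/2]:
    ITD = (a / c) * (theta + sin(theta))."""
    return (head_radius_m / c) * (azimuth_rad + math.sin(azimuth_rad))
```

At 90 degrees azimuth this gives roughly 0.65 ms for an average head, which is the order of magnitude the renderer's individualization adjusts around.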
  • Rendering system 1100 also allows accommodation for source distance control and room model. It further allows for direct versus diffuse/reverb (dry/wet) content extraction and processing, optimization of room reflections, and timbral matching.
  • the metadata-based headphone processing system 100 may include certain HRTF modeling mechanisms. The foundation of such a system generally builds upon the structural model of the head and torso. This approach allows algorithms to be built upon the core model in a modular approach.
  • FIG. 12 is a system diagram showing the different tools used in an HRTF modeling system used in a headphone rendering system, under an embodiment.
  • filter stage 1204 may comprise a snowman filter model that consists of a spherical head on top of a spherical body and accounts for the contributions of the torso as well as the head to the HRTF.
  • Modeling stage 1204 computes the pinna and torso models and the left and right (l, r) components are post-processed 1206 for final output 1208 .
  • the audio content processed by the headphone playback system comprises channels, objects and associated metadata that provides the spatial and processing cues necessary to optimize rendering of the audio through headphones.
  • metadata can be generated as authored metadata from authoring tools as well as hardware generated metadata from one or more endpoint devices.
  • FIG. 13 illustrates a data structure that enables delivery of metadata for a headphone rendering system, under an embodiment.
  • the metadata structure of FIG. 13 is configured to supplement the metadata delivered in other portions of a bitstream that may be packaged in accordance with a known channel-based audio format, such as the Dolby Digital AC-3 or Enhanced AC-3 bit stream syntax.
  • the data structure consists of a container 1300 that contains one or more data payloads 1304 .
  • Each payload is identified in the container using a unique payload identifier value to provide an unambiguous indication of the type of data present in the payload.
  • the order of payloads within the container is undefined. Payloads can be stored in any order, and a parser must be able to parse the entire container to extract relevant payloads, and ignore payloads that are either not relevant or are unsupported. Protection data 1306 follows the final payload in the container that can be used by a decoding device to verify that the container and the payloads within the container are error-free.
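The parse-everything, skip-unknown behavior can be sketched as below. The payload representation, handler dispatch, and use of CRC32 as the protection check are illustrative assumptions; the patent does not specify a concrete protection algorithm here.

```python
import zlib

def parse_container(payloads, handlers, protection_crc=None):
    """Walk every payload in a container (order is undefined), dispatch
    the payload IDs we support, and silently skip unsupported ones.
    protection_crc stands in for the container's protection data."""
    if protection_crc is not None:
        raw = b"".join(data for _, data in payloads)
        if zlib.crc32(raw) & 0xFFFFFFFF != protection_crc:
            raise ValueError("container failed protection check")
    decoded = {}
    for payload_id, data in payloads:
        handler = handlers.get(payload_id)
        if handler is not None:   # unknown/unsupported payload IDs are ignored
            decoded[payload_id] = handler(data)
    return decoded
```

Because ordering is undefined, a conforming parser must never assume a payload's position; only its identifier determines how it is handled.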
  • a preliminary portion 1302 containing sync, version, and key-ID information precedes the first payload in the container.
  • the data structure supports extensibility through the use of versioning and identifiers for specific payload types.
  • the metadata payloads may be used to describe the nature or configuration of the audio program being delivered in an AC-3 or Enhanced AC-3 (or other type) bit stream, or may be used to control audio processing algorithms that are designed to further process the output of the decoding process.
  • Containers may be defined using different programming structures, based on implementation preferences.
  • the table below illustrates example syntax of a container, under an embodiment.
  • A number of fields within the container structure and payload data are encoded using a method known as variable-bits. This method enables efficient coding of small field values, with extensibility to express arbitrarily large field values.
  • In variable-bits coding, the field consists of one or more groups of n bits, with each group followed by a 1-bit read-more field. At a minimum, coding of n bits requires n+1 bits to be transmitted. All fields coded using variable_bits are interpreted as unsigned integers.
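The grouping scheme can be sketched as an encoder/decoder pair. The group-plus-read-more layout follows the description above; the exact value-accumulation rule is fixed by the codec's bitstream syntax, so the offset used here (which makes every value's coding unique and the ranges contiguous) should be read as one consistent interpretation, not the normative definition.

```python
def decode_variable_bits(bits, n):
    """Read a variable_bits-coded unsigned integer from a bit list:
    groups of n bits, each followed by a 1-bit read-more flag.
    Returns (value, bits_consumed)."""
    pos, value = 0, 0
    while True:
        group = 0
        for _ in range(n):
            group = (group << 1) | bits[pos]; pos += 1
        value += group
        read_more = bits[pos]; pos += 1
        if not read_more:
            return value, pos
        value = (value + 1) << n   # offset keeps codings unique and contiguous

def encode_variable_bits(value, n):
    """Inverse of decode_variable_bits: emit groups of n bits, each
    followed by a read-more flag (0 on the final group)."""
    groups = []
    while True:
        groups.append(value & ((1 << n) - 1))
        value >>= n
        if value == 0:
            break
        value -= 1
    bits = []
    for i, g in enumerate(reversed(groups)):
        bits += [(g >> (n - 1 - j)) & 1 for j in range(n)]
        bits.append(0 if i == len(groups) - 1 else 1)
    return bits
```

With n = 4, values 0–15 cost exactly n+1 = 5 bits, matching the minimum stated above, and larger values grow by n+1 bits per additional group.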
  • Various other different coding aspects may be implemented according to practices and methods known to those of ordinary skill in the art.
  • FIG. 13 illustrates an example metadata structure, format, and program content. It should be noted that this is intended to represent one example embodiment of a metadata representation, and other metadata definitions and content are also possible.
  • certain post-processing functions 115 may be performed by the renderer 112 .
  • One such post-processing function comprises headphone equalization, as shown in element 912 of FIG. 9 .
  • equalization may be performed by obtaining blocked-ear canal impulse response measurements for different headphone placements for each ear.
  • FIG. 14 illustrates an example case of three impulse response measurements for each ear, in an embodiment of a headphone equalization process.
  • the equalization post-process computes the Fast Fourier Transform (FFT) of each response and performs RMS (root-mean-square) averaging of the derived responses.
  • the responses may be variable, octave smoothed, ERB smoothed, etc.
  • the process then computes the inversion of the averaged response.
  • the process determines the time-domain filter.
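The equalization steps above can be sketched end to end as follows; the regularization floor (to avoid boosting deep nulls when inverting) and the linear-phase construction are illustrative choices, not mandated by the text.

```python
import numpy as np

def headphone_eq_filter(impulse_responses, n_fft=1024, floor=1e-2):
    """Equalization filter from several blocked-ear-canal measurements:
    FFT each placement's response, RMS-average the magnitudes, invert the
    average, and return a causal linear-phase time-domain filter."""
    mags = np.array([np.abs(np.fft.rfft(ir, n_fft)) for ir in impulse_responses])
    rms_avg = np.sqrt(np.mean(mags ** 2, axis=0))   # RMS average across placements
    inv = 1.0 / np.maximum(rms_avg, floor)          # regularized inversion
    h = np.fft.irfft(inv, n_fft)                    # zero-phase kernel
    return np.roll(h, n_fft // 2)                   # center the peak -> causal linear phase
```

Octave or ERB smoothing of `rms_avg` would slot in just before the inversion step.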
  • the post-process may also include a closed-to-open transform function.
  • This pressure-division-ratio (PDR) method involves designing a transform to match the acoustical impedance between eardrum and free-field for closed-back headphones, with modifications in terms of how the measurements are obtained for free-field sound transmission as a function of the direction of arrival of the first-arriving sound. This indirectly enables matching the eardrum pressure signals between closed-back headphones and free-field equivalent conditions without requiring complicated eardrum measurements.
  • FIG. 15 A illustrates a circuit for calculating the free-field sound transmission, under an embodiment.
  • Circuit 1500 is based on a free-field acoustical impedance model.
  • P 1 ( ⁇ ) is the Thevenin pressure measured at the entrance of the blocked ear canal with a loudspeaker at ⁇ degrees about the median plane (e.g., about 30 degrees to the left and front of the listener) involving extraction of direct sound from the measured impulse response.
  • Measurement P 1 ( ⁇ ) can be done at the entrance of the ear canal or at a certain distance inside (x mm) inside the ear canal (or at the eardrum) from the opening for the same loudspeaker at the same placement for measuring P 1 ( ⁇ ) involving extraction of direct sound from the measured impulse response.
  • P 2 ( ⁇ ) P 1 ( ⁇ ) Z eardrum ( ⁇ ) Z eardrum ( ⁇ ) + Z radiation ( ⁇ )
  • FIG. 15 B illustrates a circuit for calculating the headphone sound transmission, under an embodiment.
  • Circuit 1510 is based on a headphone acoustical impedance analog model.
  • P4(ω) is measured at the entrance of the blocked ear canal with the headphone in place (an RMS-averaged, steady-state measurement)
  • measure P 5 ( ⁇ ) is made at the entrance to the ear canal or at a distance inside the ear canal (or at the eardrum) from the opening for the same headphone placement used for measuring P 4 ( ⁇ ).
  • P 5 ( ⁇ ) P 4 ( ⁇ ) Z eardrum ( ⁇ ) Z eardrum ( ⁇ ) + Z headphone ( ⁇ )
  • the pressure-division-ratio (PDR) can then be calculated as the ratio of the free-field transmission to the headphone transmission: PDR = (P2(θ)/P1(θ)) / (P5(ω)/P4(ω))
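Combining the two transmission ratios P2/P1 and P5/P4 defined above, the PDR can be computed per frequency bin. The ratio form is inferred from those definitions (a sketch; array names are illustrative).

```python
import numpy as np

def pressure_division_ratio(P1, P2, P4, P5):
    """PDR per frequency bin: the free-field transmission ratio P2/P1
    divided by the headphone transmission ratio P5/P4, i.e.
    (P2 * P4) / (P1 * P5)."""
    P1, P2, P4, P5 = map(np.asarray, (P1, P2, P4, P5))
    return (P2 / P1) * (P4 / P5)
```

When the headphone's transmission matches free-field transmission at a frequency, the PDR is unity there and no correction is applied.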
  • Portions of the adaptive audio system may include one or more networks that comprise any desired number of individual machines, including one or more routers (not shown) that serve to buffer and route the data transmitted among the computers.
  • a network may be built on various different network protocols, and may be the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), or any combination thereof.
  • the network comprises the Internet
  • one or more machines may be configured to access the Internet through web browser programs.
  • One or more of the components, blocks, processes or other functional components may be implemented through a computer program that controls execution of a processor-based computing device of the system. It should also be noted that the various functions disclosed herein may be described using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics.
  • Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, physical (non-transitory), non-volatile storage media in various forms, such as optical, magnetic or semiconductor storage media.
  • the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.


Abstract

Embodiments are described for a method of rendering audio for playback through headphones comprising receiving digital audio content, receiving binaural rendering metadata generated by an authoring tool processing the received digital audio content, receiving playback metadata generated by a playback device, and combining the binaural rendering metadata and playback metadata to optimize playback of the digital audio content through the headphones.

Description

CROSS REFERENCE TO RELATED APPLICATIONS
This application is a continuation of U.S. patent application Ser. No. 17/685,681, filed on Mar. 3, 2022, which is a continuation of U.S. patent application Ser. No. 17/098,268, filed on Nov. 13, 2020 (now U.S. Pat. No. 11,269,586, issued Mar. 8, 2022), which is a continuation of U.S. patent application Ser. No. 16/673,849, filed on Nov. 4, 2019 (now U.S. Pat. No. 10,838,684, issued Nov. 17, 2020), which is a continuation of U.S. patent application Ser. No. 16/352,607, filed on Mar. 13, 2019 (now U.S. Pat. No. 10,503,461, issued Dec. 10, 2019), which is a continuation of U.S. patent application Ser. No. 15/934,849, filed on Mar. 23, 2018 (now U.S. Pat. No. 10,255,027, issued Apr. 9, 2019), which is a continuation of U.S. patent application Ser. No. 15/031,953, filed on Apr. 25, 2016 (now U.S. Pat. No. 9,933,989, issued Apr. 3, 2018), which is the U.S. national stage entry of International Patent Application No. PCT/US2014/062705 filed on Oct. 28, 2014, which claims priority to U.S. Provisional Patent Application No. 61/898,365, filed on Oct. 31, 2013, each of which is hereby incorporated by reference in its entirety.
FIELD OF THE INVENTION
One or more implementations relate generally to audio signal processing, and more specifically to binaural rendering of channel and object-based audio for headphone playback.
BACKGROUND
Virtual rendering of spatial audio over a pair of speakers commonly involves the creation of a stereo binaural signal that represents the desired sound arriving at the listener's left and right ears and is synthesized to simulate a particular audio scene in three-dimensional (3D) space, possibly containing a multitude of sources at different locations. For playback through headphones rather than speakers, binaural processing or rendering can be defined as a set of signal processing operations aimed at reproducing the intended 3D location of a sound source over headphones by emulating the natural spatial listening cues of human subjects. Typical core components of a binaural renderer are head-related filtering to reproduce direction-dependent cues as well as distance cue processing, which may involve modeling the influence of a real or virtual listening room or environment. One example of a present binaural renderer processes each of the five or seven channels of a 5.1 or 7.1 channel-based surround presentation as one of five or seven virtual sound sources in 2D space around the listener. Binaural rendering is also commonly found in games or gaming audio hardware, in which case the processing can be applied to individual audio objects in the game based on their individual 3D position.
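By way of illustration only, the interaural time and level cues mentioned above can be sketched as follows. This is not the renderer of any described embodiment: real binaural renderers use measured head-related transfer functions (HRTFs), while this sketch applies only a Woodworth-model time delay and a crude level difference, with all parameter values assumed.

```python
import numpy as np

def simple_binaural_pan(mono, azimuth_deg, fs=48000, head_radius=0.0875, c=343.0):
    # Woodworth ITD approximation (seconds) for a spherical head
    theta = np.radians(azimuth_deg)
    itd = (head_radius / c) * (abs(theta) + abs(np.sin(theta)))
    delay = int(round(itd * fs))                    # whole-sample delay for simplicity
    # crude ILD: up to ~6 dB attenuation at the far ear for a 90-degree source
    far_gain = 10 ** (-(abs(azimuth_deg) / 90.0) * 6.0 / 20.0)
    near = np.asarray(mono, dtype=float)
    far = np.concatenate([np.zeros(delay), near])[: len(near)] * far_gain
    # positive azimuth = source to the listener's right, so the right ear is near
    return (far, near) if azimuth_deg >= 0 else (near, far)    # (left, right)
```

A frequency-dependent spectral filter per ear, which this sketch omits, is what distinguishes full HRTF-based rendering from simple delay-and-gain panning.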
Traditionally, binaural rendering is a form of blind post-processing applied to multichannel or object-based audio content. Some of the processing involved in binaural rendering can have undesirable and negative effects on the timbre of the content, such as smoothing of transients or excessive reverberation added to dialog or some effects and music elements. With the growing importance of headphone listening and the additional flexibility brought by object-based content (such as the Dolby® Atmos™ system), there is greater opportunity and need to have the mixers create and encode specific binaural rendering metadata at content creation time, for instance instructing the renderer to process parts of the content with different algorithms or with different settings. Present systems do not feature this capability, nor do they allow such metadata to be transported as part of an additional specific headphone payload in the codecs.
Current systems are also not optimized at the playback end of the pipeline, insofar as content is not configured to be received on a device with additional metadata that can be provided live to the binaural renderer. While real-time head tracking has been previously implemented and shown to improve binaural rendering, this generally prevents other features, such as automated continuous head-size sensing and room sensing, and other customization features that improve the quality of the binaural rendering, from being effectively and efficiently implemented in headphone-based playback systems.
What is needed, therefore, is a binaural renderer running on the playback device that combines authoring metadata with real-time locally generated metadata to provide the best possible experience to the end user when listening to channel and object-based audio through headphones. Furthermore, for channel-based content it is generally required that the artistic intent be retained by incorporating audio segmentation analysis.
The subject matter discussed in the background section should not be assumed to be prior art merely as a result of its mention in the background section. Similarly, a problem mentioned in the background section or associated with the subject matter of the background section should not be assumed to have been previously recognized in the prior art. The subject matter in the background section merely represents different approaches, which in and of themselves may also be inventions.
BRIEF SUMMARY OF EMBODIMENTS
Embodiments are described for systems and methods of virtual rendering object-based audio content and improved equalization in headphone-based playback systems. Embodiments include a method for rendering audio for playback through headphones comprising receiving digital audio content, receiving binaural rendering metadata generated by an authoring tool processing the received digital audio content, receiving playback metadata generated by a playback device, and combining the binaural rendering metadata and playback metadata to optimize playback of the digital audio content through the headphones. The digital audio content may comprise channel-based audio and object-based audio including positional information for reproducing an intended location of a corresponding sound source in three-dimensional space relative to a listener. The method further comprises separating the digital audio content into one or more components based on content type, wherein the content type is selected from the group consisting of: dialog, music, audio effects, transient signals, and ambient signals. The binaural rendering metadata controls a plurality of channel and object characteristics including: position, size, gain adjustment, and content-dependent settings or processing presets; and the playback metadata controls a plurality of listener-specific characteristics including head position, head orientation, head size, listening room noise levels, listening room properties, and playback device or screen position relative to the listener.
The method may further include receiving one or more user input commands modifying the binaural rendering metadata, the user input commands controlling one or more characteristics including: elevation emphasis where elevated objects and channels could receive a gain boost, preferred 1D (one-dimensional) sound radius or 3D scaling factors for object or channel positioning, and processing mode enablement (e.g., to toggle between traditional stereo or full processing of content). The playback metadata may be generated in response to sensor data provided by an enabled headset housing a plurality of sensors, the enabled headset comprising part of the playback device. The method may further comprise separating the input audio into separate sub-signals, e.g. by content type or unmixing the input audio (channel-based and object-based) into constituent direct content and diffuse content, wherein the diffuse content comprises reverberated or reflected sound elements, and performing binaural rendering on the separate sub-signals independently.
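The combining of authored binaural metadata with live playback metadata described above can be illustrated with a minimal sketch. All field names (`head_yaw_deg`, `room_noise_db`, `azimuth_deg`, `gain_db`) and the 3 dB noise trim are assumptions for illustration, not values from the embodiments.

```python
def combine_rendering_metadata(authoring_md, playback_md):
    yaw = playback_md.get("head_yaw_deg", 0.0)
    # crude noise compensation: apply a fixed boost in a loud listening room
    noise_trim_db = 3.0 if playback_md.get("room_noise_db", 0.0) > 60.0 else 0.0
    combined = []
    for obj in authoring_md:
        merged = dict(obj)
        # counter-rotate so the scene stays fixed in the world, not the head
        merged["azimuth_deg"] = ((obj["azimuth_deg"] - yaw + 180.0) % 360.0) - 180.0
        merged["gain_db"] = obj.get("gain_db", 0.0) + noise_trim_db
        combined.append(merged)
    return combined
```

The key point is that the authored metadata is never discarded; the playback metadata modifies it per object just before rendering.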
Embodiments are also directed to a method for rendering audio for playback through headphones by receiving content dependent metadata dictating how content elements are rendered through the headphones, receiving sensor data from at least one of a playback device coupled to the headphones and an enabled headset including the headphones, and modifying the content dependent metadata with the sensor data to optimize the rendered audio with respect to one or more playback and user characteristics. The content dependent metadata may be generated by an authoring tool operated by a content creator, and wherein the content dependent metadata dictates the rendering of an audio signal containing audio channels and audio objects. The content dependent metadata controls a plurality of channel and object characteristics selected from the group consisting of: position, size, gain adjustment, elevation emphasis, stereo/full toggling, 3D scaling factors, content dependent settings, and other spatial and timbre properties of the rendered sound-field. The method may further comprise formatting the sensor data into a metadata format compatible with the content dependent metadata to produce playback metadata. The playback metadata controls a plurality of listener specific characteristics selected from the group consisting of: head position, head orientation, head size, listening room noise levels, listening room properties, and sound source device position. In an embodiment, the metadata format comprises a container including one or more payload packets conforming to a defined syntax and encoding digital audio definitions for corresponding audio content elements. 
The method further comprises encoding the combined playback metadata and the content dependent metadata with source audio content into a bitstream for processing in a rendering system; and decoding the encoded bitstream to extract one or more parameters derived from the content dependent metadata and the playback metadata to generate a control signal modifying the source audio content for playback through the headphones.
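The container-of-payload-packets idea can be sketched as a toy serialization; the actual codec syntax is a defined binary format, not JSON, so this sketch only illustrates the structure (length-prefixed packets in a container) under assumed conventions.

```python
import json
import struct

def pack_headphone_payload(packets):
    """Toy container: each payload packet is a 4-byte big-endian length
    followed by a JSON body. Illustrative only; real codec payloads use
    a defined binary syntax."""
    blob = b""
    for pkt in packets:
        body = json.dumps(pkt).encode("utf-8")
        blob += struct.pack(">I", len(body)) + body
    return blob

def unpack_headphone_payload(blob):
    """Walk the container and decode each length-prefixed packet."""
    packets, off = [], 0
    while off < len(blob):
        (n,) = struct.unpack_from(">I", blob, off)
        off += 4
        packets.append(json.loads(blob[off:off + n].decode("utf-8")))
        off += n
    return packets
```

The length prefix lets a decoder skip packet types it does not understand, which is why container formats for metadata payloads are typically built this way.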
The method may further comprise performing one or more post-processing functions on the source audio content prior to playback through headphones; wherein the post-processing functions comprise at least one of: downmixing from a plurality of surround sound channels to one of a binaural mix or a stereo mix, level management, equalization, timbre correction, and noise cancellation.
Embodiments are further directed to systems and articles of manufacture that perform or embody processing commands that perform or implement the above-described method acts.
INCORPORATION BY REFERENCE
Each publication, patent, and/or patent application mentioned in this specification is herein incorporated by reference in its entirety to the same extent as if each individual publication and/or patent application was specifically and individually indicated to be incorporated by reference.
BRIEF DESCRIPTION OF THE DRAWINGS
In the following drawings like reference numbers are used to refer to like elements. Although the following figures depict various examples, the one or more implementations are not limited to the examples depicted in the figures.
FIG. 1 illustrates an overall system that incorporates embodiments of a content creation, rendering and playback system, under some embodiments.
FIG. 2A is a block diagram of an authoring tool used in an object-based headphone rendering system, under an embodiment.
FIG. 2B is a block diagram of an authoring tool used in an object-based headphone rendering system, under an alternative embodiment.
FIG. 3A is a block diagram of a rendering component used in an object-based headphone rendering system, under an embodiment.
FIG. 3B is a block diagram of a rendering component used in an object-based headphone rendering system, under an alternative embodiment.
FIG. 4 is a block diagram that provides an overview of the dual-ended binaural rendering system, under an embodiment.
FIG. 5 illustrates an authoring tool GUI that may be used with embodiments of a headphone rendering system, under an embodiment.
FIG. 6 illustrates an enabled headphone that comprises one or more sensors that sense playback conditions for encoding as metadata used in a headphone rendering system, under an embodiment.
FIG. 7 illustrates the connection between a headphone and device including a headphone sensor processor, under an embodiment.
FIG. 8 is a block diagram illustrating the different metadata components that may be used in a headphone rendering system, under an embodiment.
FIG. 9 illustrates functional components of a binaural rendering component for headphone processing, under an embodiment.
FIG. 10 illustrates a binaural rendering system for rendering audio objects in a headphone rendering system, under an embodiment.
FIG. 11 illustrates a more detailed representation of the binaural rendering system of FIG. 10, under an embodiment.
FIG. 12 is a system diagram showing the different tools used in an HRTF modeling system used in a headphone rendering system, under an embodiment.
FIG. 13 illustrates a data structure that enables delivery of metadata for a headphone rendering system, under an embodiment.
FIG. 14 illustrates an example case of three impulse response measurements for each ear, in an embodiment of a headphone equalization process.
FIG. 15A illustrates a circuit for calculating the free-field sound transmission, under an embodiment.
FIG. 15B illustrates a circuit for calculating the headphone sound transmission, under an embodiment.
DETAILED DESCRIPTION
Systems and methods are described for virtual rendering of object-based content over headphones, and a metadata delivery and processing system for such virtual rendering, though applications are not so limited. Aspects of the one or more embodiments described herein may be implemented in an audio or audio-visual system that processes source audio information in a mixing, rendering and playback system that includes one or more computers or processing devices executing software instructions. Any of the described embodiments may be used alone or together with one another in any combination. Although various embodiments may have been motivated by various deficiencies with the prior art, which may be discussed or alluded to in one or more places in the specification, the embodiments do not necessarily address any of these deficiencies. In other words, different embodiments may address different deficiencies that may be discussed in the specification. Some embodiments may only partially address some deficiencies or just one deficiency that may be discussed in the specification, and some embodiments may not address any of these deficiencies.
Embodiments are directed to an audio content production and playback system that optimizes the rendering and playback of object and/or channel-based audio over headphones. FIG. 1 illustrates an overall system that incorporates embodiments of a content creation, rendering and playback system, under some embodiments. As shown in system 100, an authoring tool 102 is used by a creator to generate audio content for playback through one or more devices 104 for a user to listen to through headphones 116 or 118. The device 104 is generally a portable audio or music player or small computer or mobile telecommunication device that runs applications that allow for the playback of audio content. Such a device may be a mobile phone or audio (e.g., MP3) player 106, a tablet computer (e.g., Apple iPad or similar device) 108, music console 110, a notebook computer 111, or any similar audio playback device. The audio may comprise music, dialog, effects, or any digital audio that may be desired to be listened to over headphones, and such audio may be streamed wirelessly from a content source, played back locally from storage media (e.g., disk, flash drive, etc.), or generated locally. In the following description, the term “headphone” usually refers specifically to a close-coupled playback device worn by the user directly over his or her ears or in-ear listening devices; it may also refer generally to at least some of the processing performed to render signals intended for playback on headphones as an alternative to the terms “headphone processing” or “headphone rendering.”
In an embodiment, the audio processed by the system may comprise channel-based audio, object-based audio or object and channel-based audio (e.g., hybrid or adaptive audio). The audio comprises or is associated with metadata that dictates how the audio is rendered for playback on specific endpoint devices and listening environments. Channel-based audio generally refers to an audio signal plus metadata in which the position is coded as a channel identifier, where the audio is formatted for playback through a pre-defined set of speaker zones with associated nominal surround-sound locations, e.g., 5.1, 7.1, and so on; and object-based means one or more audio channels with a parametric source description, such as apparent source position (e.g., 3D coordinates), apparent source width, etc. The term “adaptive audio” may be used to mean channel-based and/or object-based audio signals plus metadata that renders the audio signals based on the playback environment using an audio stream plus metadata in which the position is coded as a 3D position in space. In general, the listening environment may be any open, partially enclosed, or fully enclosed area, such as a room, but embodiments described herein are generally directed to playback through headphones or other close proximity endpoint devices. Audio objects can be considered as groups of sound elements that may be perceived to emanate from a particular physical location or locations in the environment, and such objects can be static or dynamic. The audio objects are controlled by metadata, which among other things, details the position of the sound at a given point in time, and upon playback they are rendered according to the positional metadata. In a hybrid audio system, channel-based content (e.g., ‘beds’) may be processed in addition to audio objects, where beds are effectively channel-based sub-mixes or stems. 
These can be delivered for final playback (rendering) and can be created in different channel-based configurations such as 5.1, 7.1.
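The notion of an audio object carrying time-varying positional metadata can be sketched with a minimal data structure. The field names and the piecewise-constant lookup are illustrative assumptions; a real renderer would interpolate positions between breakpoints.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class AudioObject:
    name: str
    samples: List[float]
    # (time_sec, (x, y, z)) breakpoints; positions apply from their timestamp on
    position_track: List[Tuple[float, Tuple[float, float, float]]] = field(default_factory=list)

    def position_at(self, t: float):
        """Piecewise-constant position lookup (stand-in for interpolation)."""
        pos = self.position_track[0][1]
        for t_i, p in self.position_track:
            if t_i <= t:
                pos = p
        return pos
```

On playback, the renderer queries each object's position at the current time and renders it there, which is what "rendered according to the positional metadata" means in practice.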
As shown in FIG. 1, the headphone utilized by the user may be a legacy or passive headphone 118 that only includes non-powered transducers that simply recreate the audio signal, or it may be an enabled headphone 116 that includes sensors and other components (powered or non-powered) that provide certain operational parameters back to the renderer for further processing and optimization of the audio content. Headphones 116 or 118 may be embodied in any appropriate close-ear device, such as open or closed headphones, over-ear or in-ear headphones, earbuds, earpads, noise-cancelling, isolation, or other type of headphone device. Such headphones may be wired or wireless with regard to their connection to the sound source or device 104.
In an embodiment, the audio content from authoring tool 102 includes stereo or channel based audio (e.g., 5.1 or 7.1 surround sound) in addition to object-based audio. For the embodiment of FIG. 1 , a renderer 112 receives the audio content from the authoring tool and provides certain functions that optimize the audio content for playback through device 104 and headphones 116 or 118. In an embodiment, the renderer 112 includes a pre-processing stage 113, a binaural rendering stage 114, and a post-processing stage 115. The pre-processing stage 113 generally performs certain segmentation operations on the input audio, such as segmenting the audio based on its content type, among other functions; the binaural rendering stage 114 generally combines and processes the metadata associated with the channel and object components of the audio and generates a binaural stereo or multi-channel audio output with binaural stereo and additional low frequency outputs; and the post-processing component 115 generally performs downmixing, equalization, gain/loudness/dynamic range control, and other functions prior to transmission of the audio signal to the device 104. It should be noted that while the renderer will likely generate two-channel signals in most cases, it could be configured to provide more than two channels of input to specific enabled headphones, for instance to deliver separate bass channels (similar to LFE 0.1 channel in traditional surround sound). The enabled headphone may have specific sets of drivers to reproduce bass components separately from the mid to higher frequency sound.
It should be noted that the components of FIG. 1 generally represent the main functional blocks of the audio generation, rendering, and playback systems, and that certain functions may be incorporated as part of one or more other components. For example, one or more portions of the renderer 112 may be incorporated in part or in whole in the device 104. In this case, the audio player or tablet (or other device) may include a renderer component integrated within the device. Similarly, the enabled headphone 116 may include at least some functions associated with the playback device and/or renderer. In such a case, a fully integrated headphone may include an integrated playback device (e.g., a built-in content decoder, such as an MP3 player) as well as an integrated rendering component. Additionally, one or more components of the renderer 112, such as the pre-processing component 113, may be implemented at least in part in the authoring tool, or as part of a separate pre-processing component.
FIG. 2A is a block diagram of an authoring tool used in an object-based headphone rendering system, under an embodiment. As shown in FIG. 2A, input audio 201 from an audio source (e.g., live source, recording, etc.) is input to a digital audio workstation (DAW) 204 for processing by a sound engineer. The input audio 201 is typically in digital form, and if analog audio is used, an A/D (analog-to-digital) conversion step (not shown) is required. This audio typically comprises object and channel-based content, such as may be used in an adaptive audio system (e.g., Dolby Atmos), and often includes several different types of content. The input audio may be segmented through an (optional) audio segmentation pre-process 202 that separates (or segments) the audio based on its content type so that different types of audio may be rendered differently. For example, dialog may be rendered differently than transient signals or ambient signals. The DAW 204 may be implemented as a workstation for editing and processing the segmented or unsegmented digital audio, and may include a mixing console, control surface, audio converter, data storage and other appropriate elements. In an embodiment, the DAW is a processing platform that runs digital audio software that provides comprehensive editing functionality as well as an interface for one or more plug-in programs, such as a panner plug-in, among other functions, such as equalizers, synthesizers, effects, and so on. The panner plug-in shown in DAW 204 performs a panning function configured to distribute each object signal to specific speaker pairs or locations in 2D/3D space in a manner that conveys the desired position of each respective object signal to the listener.
In authoring tool 102a, the processed audio from DAW 204 is input to a binaural rendering component 206. This component includes an audio processing function that produces binaural audio output 210 as well as binaural rendering metadata 208 and spatial media type metadata 212. The audio 210 and metadata components 208 and 212 form a coded audio bitstream with binaural metadata payload 214. In general, the audio component 210 comprises channel and object-based audio that is passed to the bitstream 214 with the metadata components 208 and 212; however, it should be noted that the audio component 210 may be standard multi-channel audio, binaurally rendered audio, or a combination of these two audio types. The binaural rendering component 206 also includes a binaural metadata input function that directly produces a headphone output 216 for direct connection to the headphones. For the embodiment of FIG. 2A, the metadata for binaural rendering is generated at mixing time within the authoring tool 102a. In an alternative embodiment, the metadata may be generated at encoding time, as shown with reference to FIG. 2B. As shown in FIG. 2A, a mixer 203 uses an application or tool to create audio data and the binaural and spatial metadata. The mixer 203 provides inputs to the DAW 204. Alternatively, it may also provide inputs directly to the binaural rendering process 206. In an embodiment, the mixer receives the headphone audio output 216 so that the mixer may monitor the effect of the audio and metadata input. This effectively constitutes a feedback loop in which the mixer hears the headphone-rendered audio output through headphones to determine whether any input changes are needed. The mixer 203 may be a person operating equipment, such as a mixing console or computer, or it may be an automated process that is remotely controlled or pre-programmed.
FIG. 2B is a block diagram of an authoring tool used in an object-based headphone rendering system, under an alternative embodiment. In this embodiment, the metadata for binaural rendering is generated at encoding time, and the encoder runs a content classifier and metadata generator to generate additional metadata from legacy channel-based content. For the authoring tool 102b of FIG. 2B, legacy multichannel content 220, which includes only channel-based audio and no audio objects, is input to an encoding tool and rendering headphone emulation component 226. The object-based content 222 is separately input to this component as well. The channel-based legacy content 220 may first be input to an optional audio segmentation pre-processor 224 for separation of different content types for individual rendering. In authoring tool 102b, the binaural rendering component 226 includes a headphone emulation function that produces binaural audio output 230 as well as binaural rendering metadata 228 and spatial media type metadata 232. The audio 230 and metadata components 228 and 232 form a coded audio bitstream with binaural metadata payload 236. As stated above, the audio component 230 usually comprises channel and object-based audio that is passed to the bitstream 236 with the metadata components 228 and 232; however, it should be noted that the audio component 230 may be standard multi-channel audio, binaurally rendered audio, or a combination of these two audio types. When legacy content is input, the output coded audio bitstream could contain explicitly separated sub-component audio data or metadata implicitly describing content type, allowing the receiving endpoint to perform segmentation and process each sub-component appropriately. The binaural rendering component 226 also includes a binaural metadata input function that directly produces a headphone output 234 for direct connection to the headphones. As shown in FIG. 2B, an optional mixer (person or process) 223 may be included to monitor the headphone output 234 and to input and modify audio data and metadata inputs that may be provided directly to the rendering process 226.
With regard to content type and the operation of the content classifier, audio is generally classified into one of a number of defined content types, such as dialog, music, ambience, special effects, and so on. An object may change content type throughout its duration, but at any specific point in time it is generally only one type of content. In an embodiment, the content type is expressed as a probability that the object is a particular type of content at any point in time. Thus, for example, a constant dialog object would be expressed as a one-hundred percent probability dialog object, while an object that transforms from dialog to music may be expressed as fifty percent dialog/fifty percent music. Processing objects that have different content types could be performed by averaging their respective probabilities for each content type, selecting the content type probabilities for the most dominant object within a group of objects, or a single object over time, or some other logical combination of content type measures. The content type may also be expressed as an n-dimensional vector (where n is the total number of different content types, e.g., four, in the case of dialog/music/ambience/effects). The content type metadata may be embodied as a combined content type metadata definition, where a combination of content types reflects the probability distributions that are combined (e.g., a vector of probabilities of music, speech, and so on).
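The averaging strategy described above can be sketched as a weighted combination of per-object probability vectors. The content-type names and the uniform default weights are illustrative assumptions.

```python
import numpy as np

CONTENT_TYPES = ("dialog", "music", "ambience", "effects")

def combine_content_probs(object_probs, weights=None):
    """Combine per-object content-type probability vectors by (weighted)
    averaging, one of the combination strategies described above."""
    p = np.asarray(object_probs, dtype=float)          # shape (n_objects, n_types)
    w = np.ones(len(p)) if weights is None else np.asarray(weights, dtype=float)
    avg = (w[:, None] * p).sum(axis=0) / w.sum()
    return dict(zip(CONTENT_TYPES, avg))
```

A "most dominant object" strategy would instead select the row with the largest weight rather than averaging.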
With regard to classification of audio, in an embodiment, the process operates on a per time-frame basis to analyze the signal, identify features of the signal, and compare the identified features to features of known classes in order to determine how well the features of the object match the features of a particular class. Based on how well the features match a particular class, the classifier can identify a probability of an object belonging to a particular class. For example, if at time t=T the features of an object match very well with dialog features, then the object would be classified as dialog with a high probability. If, at time t=T+N, the features of an object match very well with music features, the object would be classified as music with a high probability. Finally, if at time t=T+2N the features of an object do not match particularly well with either dialog or music, the object might be classified as 50% music and 50% dialog. Thus, in an embodiment, based on the content type probabilities, audio content can be separated into different sub-signals corresponding to the different content types. This is accomplished, for example, by sending some percentage of the original signal to each sub-signal (either on a wide-band basis or on a per frequency sub-band basis), in a proportion driven by the computed media type probabilities.
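The wide-band version of this separation amounts to scaling the frame by each content-type probability. A minimal sketch, assuming the probabilities for a frame sum to one:

```python
import numpy as np

def split_by_media_type(frame, probs):
    """Route a proportion of the input frame to one sub-signal per content
    type, driven by the classifier's probabilities for that frame."""
    frame = np.asarray(frame, dtype=float)
    return {ctype: p * frame for ctype, p in probs.items()}
```

Because the proportions sum to one, the sub-signals sum back to the original frame, so downstream per-type rendering introduces no net gain change. A per-sub-band version would apply a different probability vector in each frequency band.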
With reference to FIG. 1, the output from authoring tool 102 is input to renderer 112 for rendering as audio output for playback through headphones or other endpoint devices. FIG. 3A is a block diagram of a rendering component 112a used in an object-based headphone rendering system, under an embodiment. FIG. 3A illustrates the pre-processing 113, binaural rendering 114, and post-processing 115 sub-components of renderer 112 in greater detail. From the authoring tool 102, the metadata and audio are input into processing or pre-processing components in the form of a coded audio bitstream 301. The metadata 302 is input to a metadata processing component 306, and the audio 304 is input to an optional audio segmentation pre-processor 308. As shown with reference to FIGS. 2A and 2B, audio segmentation may be performed by the authoring tool through pre-processors 202 or 224. If such audio segmentation is not performed by the authoring tool, the renderer may perform this task through pre-processor 308. The processed metadata and segmented audio are then input to a binaural rendering component 310. This component performs certain headphone-specific rendering functions, such as 3D positioning, distance control, head size processing, and so on. The binaural rendered audio is then input to audio post-processor 314, which applies certain audio operations, such as level management, equalization, noise compensation or cancellation, and so on. The post-processed audio is then output 312 for playback through headphones 116 or 118. For an embodiment in which the headphones or playback device 104 are fitted with sensors and/or microphones for feedback to the renderer, the microphone and sensor data 316 are input back to at least one of the metadata processing component 306, the binaural rendering component 310, or the audio post-processing component 314.
This allows any relevant environmental or operational data at the point of playback to be used by the rendering system to further modify the audio to counteract or enhance certain playback conditions. For standard headphones that are not fitted with sensors, head tracking could be replaced by a simpler pseudo-randomly generated head 'jitter' that mimics continuously changing small head movements.
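The head-jitter idea can be sketched as a smoothed, bounded random walk of small yaw offsets. All parameter values (step size, bound, smoothing length) are illustrative assumptions; the description does not specify them.

```python
import numpy as np

def head_jitter(n_frames, max_deg=2.0, seed=None):
    """Pseudo-random head 'jitter' for non-sensing headphones: a bounded
    random walk of small yaw offsets (degrees), lightly smoothed to avoid
    audible discontinuities between rendering frames."""
    rng = np.random.default_rng(seed)
    yaw = np.cumsum(rng.standard_normal(n_frames) * 0.1)   # slow random walk
    yaw = np.clip(yaw, -max_deg, max_deg)                  # keep movements small
    kernel = np.ones(5) / 5.0                              # short moving-average smoother
    return np.convolve(yaw, kernel, mode="same")
```

Each per-frame offset would then be fed to the renderer in place of a measured head orientation, mimicking the continuous micro-movements of a real listener.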
As mentioned above, segmentation of the audio may be performed by the authoring tool or the renderer. For the embodiment in which the audio is pre-segmented, the renderer processes this audio directly. FIG. 3B is a block diagram of a rendering component used in an object-based headphone rendering system, under this alternative embodiment. As shown for renderer 112b, coded audio bitstream 321 from the authoring tool is provided in its constituent parts of metadata 322, input to metadata processing component 326, and audio 324, input to binaural rendering component 330. For the embodiment of FIG. 3B, the audio is pre-segmented by an audio pre-segmentation process 202 or 224 in the appropriate authoring tool. The binaural rendering component 330 performs certain headphone-specific rendering functions, such as 3D positioning, distance control, head size processing, and so on. The binaural rendered audio is then input to audio post-processor 334, which applies certain audio operations, such as level management, equalization, noise compensation or cancellation, and so on. The post-processed audio is then output 332 for playback through headphones 116 or 118. For an embodiment in which the headphones or playback device 104 are fitted with sensors and/or microphones for feedback to the renderer, the microphone and sensor data 336 are input back to at least one of the metadata processing component 326, the binaural rendering component 330, or the audio post-processing component 334. The authoring and rendering systems of FIGS. 2A, 2B, 3A and 3B allow content authors to create and encode specific binaural rendering metadata at content creation time using authoring tool 102. This metadata can be used to instruct the renderer to process parts of the audio content with different algorithms or with different settings.
In an embodiment, authoring tool 102 represents a workstation or computer application that allows a content creator (author) to select or create audio content for playback and define certain characteristics for each of the channels and/or objects that make up the audio content. The authoring tool may include a mixer type console interface or a graphical user interface (GUI) representation of a mixing console. FIG. 5 illustrates an authoring tool GUI that may be used with embodiments of a headphone rendering system, under an embodiment. As can be seen in GUI display 500, a number of different characteristics are allowed to be set by the author such as gain levels, low frequency characteristics, equalization, panning, object position and density, delays, fades, and so on. For the embodiment shown, user input is facilitated by the use of virtual sliders for the author to specify setting values, though other virtualized or direct input means are also possible, such as direct text entry, potentiometer settings, rotary dials, and so on. At least some of the parameter settings entered by the user are encoded as metadata that is associated with the relevant channels or audio objects for transport with the audio content. In an embodiment, the metadata may be packaged as part of an additional specific headphone payload in the codec (coder/decoder) circuits in the audio system. Using enabled devices, real-time metadata that encodes certain operational and environmental conditions (e.g., head tracking, head-size sensing, room sensing, ambient conditions, noise levels, etc.) can be provided live to the binaural renderer. The binaural renderer combines the authored metadata content and the real-time locally generated metadata to provide an optimized listening experience for the user. 
In general, the object controls provided by the authoring tools and user input interfaces allow the user to control certain important headphone-specific parameters, such as binaural and stereo-bypass dynamic rendering modes, LFE (low-frequency element) gain and object gains, and media intelligence and content-dependent controls. More specifically, the rendering mode could be selected on a content-type or per-object basis between stereo (Lo/Ro), matrixed stereo (Lt/Rt), a combination of interaural time delays and stereo amplitude or intensity panning, or full binaural rendering (i.e., a combination of interaural time delays and levels as well as frequency-dependent spectral cues). In addition, a frequency crossover point can be specified to revert to stereo processing below a given frequency. Low frequency gains can also be specified to attenuate low frequency components or LFE content. Low frequency content could also be transported separately to enabled headphones, as described in greater detail below. Other metadata can be specified on a per-content-type or per-channel/object basis, such as a room model generally described by a direct/reverberant gain, a frequency-dependent reverberation time, and an interaural target cross-correlation. It could also include other more detailed modeling of the room (e.g., early reflection positions, gains, and late reverberation gain), or directly specified filters modeling a particular room response. Other metadata includes warp-to-screen flags that control how objects are remapped to fit the screen aspect ratio and viewing angle as a function of distance.
Finally, a listener relative flag (i.e., whether or not to apply headtracking information), a preferred scaling (specifying a default size/aspect ratio of the 'virtual room' used to scale object positions and remap to the screen as a function of device screen size and distance to the device), as well as a distance model exponent that controls the distance attenuation law (e.g., 1/(1+r^α)) are also possible. It is also possible to signal parameter groups or 'presets' that can be applied to different channels/objects or depending on content-type.
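As a concrete reading of the distance model exponent, a minimal sketch of the attenuation law (illustrative only; `alpha` stands in for the metadata-supplied exponent):

```python
def distance_gain(r, alpha=1.0):
    """Distance attenuation law g(r) = 1 / (1 + r**alpha): unity gain at the
    listener (r = 0), falling off with range at a rate set by alpha."""
    return 1.0 / (1.0 + r ** alpha)
```

A larger alpha makes distant objects fade faster; an alpha near zero makes the gain nearly distance-independent.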
As shown with respect to the pre-segmentation components of the authoring tool and/or renderer, different types of content (e.g., dialog, music, effects, etc.) may be processed differently based on the intent of the author and the optimum rendering configuration. Separation of content based on type or other salient characteristic can be achieved a priori during authoring, e.g. by manually keeping dialog separated in their own set of tracks or objects, or a posteriori live prior to rendering in the receiving device. Additional media intelligence tools can be used during authoring to classify content according to different characteristics and generate additional channels or objects that may carry different sets of rendering metadata. For example, having knowledge of the stems (music, dialog, Foley, effects, etc.) and an associated surround (e.g., 5.1) mix, media classifiers could be trained for the content creation process to develop a model to identify different stem mix proportions. An associated source separation technique could be employed to extract the approximate stems using weighting functions derived from the media classifier. From the extracted stems, binaural parameters that would be encoded as metadata may be applied during authoring. In an embodiment, a mirrored process is applied in the end-user device whereby using the decoded metadata parameters would create a substantially similar experience as during content creation.
In an embodiment, extensions to existing studio authoring tools include binaural monitoring and metadata recording. Typical metadata captured at authoring time include: channel/object position/size information for each channel and audio object; channel/object gain adjustment; content-dependent metadata (which can vary based on content type); bypass flags indicating that settings such as stereo/left/right rendering should be used instead of binaural; crossover points and levels indicating that bass frequencies below the crossover point must be bypassed and/or attenuated; and room model information describing a direct/reverberant gain and a frequency-dependent reverberation time, or other characteristics such as early reflections and late reverberation gain. Other content-dependent metadata could provide warp-to-screen functionality that remaps images to fit the screen aspect ratio or changes the viewing angle as a function of distance. Head tracking information can be applied to provide a listener-relative experience. Metadata could also be used that implements a distance model exponent controlling the distance attenuation law (e.g., 1/(1+r^α)). These represent only certain characteristics that may be encoded by the metadata, and other characteristics may also be encoded.
FIG. 4 is a block diagram that provides an overview of the dual-ended binaural rendering system, under an embodiment. In an embodiment, system 400 provides content-dependent metadata and rendering settings that affect how different types of audio content are to be rendered. For example, the original audio content may comprise different audio elements, such as dialog, music, effects, ambient sounds, transients, and so on. Each of these elements may be optimally rendered in different ways, instead of limiting them all to be rendered in only one way. For the embodiment of system 400, audio input 401 comprises a multi-channel signal, object-based audio, or hybrid audio of channels plus objects. The audio is input to an encoder 402 that adds or modifies metadata associated with the audio objects and channels. As shown in system 400, the audio is input to a headphone monitoring component 410 that applies user-adjustable parametric tools to control headphone processing, equalization, downmix, and other characteristics appropriate for headphone playback. The user-optimized parameter set (M) is then embedded as metadata or additional metadata by the encoder 402 to form a bitstream that is transmitted to decoder 404. The decoder 404 decodes the metadata and the parameter set M of the object- and channel-based audio for controlling the headphone processing and downmix component 406, which produces headphone-optimized and downmixed (e.g., 5.1 to stereo) audio output 408 to the headphones. Although certain content-dependent processing has been implemented in present systems and post-processing chains, it has generally not been applied to binaural rendering such as that illustrated in system 400 of FIG. 4.
As shown in FIG. 4 , certain metadata may be provided by a headphone monitoring component 410 that provides specific user adjustable parametric tools to control headphone-specific playback. Such a component may be configured to provide a user some degree of control over headphone rendering for legacy headphones 118 that passively playback transmitted audio content. Alternatively, the endpoint device may be an enabled headphone 116 that includes sensors and/or some degree of processing capability to generate metadata or signal data that can be encoded as compatible metadata to further modify the authored metadata to optimize the audio content for rendering over headphones. Thus, at the receiving end of the content, rendering is performed live and can account for locally generated sensor array data which can be generated either by a headset or an actual mobile device 104 to which headsets are attached, and such hardware-generated metadata can be further combined with the metadata created by the content creator at authoring time to enhance the binaural rendering experience.
As stated above, in some embodiments, low frequency content may be transported separately to enabled headphones allowing more than stereo input (typically 3 or 4 audio inputs), or encoded and modulated into the higher frequencies of the main stereo waveforms carried to a headset with only stereo input. This would allow further low frequency processing to occur in the headphones (e.g. routing to specific drivers optimized for low frequencies). Such headphones may include low frequency specific drivers and/or filter plus crossover and amplification circuitry to optimize playback of low frequency signals.
In an embodiment, a link from the headphones to the headphone processing component is provided on the playback side to enable manual identification of the headphones for automatic headphone preset loading or other configuration of the headphones. Such a link may be implemented as a wireless or wired link from the headphones to headphone process 406 in FIG. 4 , for example. The identification may be used to configure the target headphones or to send specific content or specifically rendered content to a specific set of headphones if multiple target headphones are being used. The headphone identifier may be embodied in any appropriate alphanumeric or binary code that is processed by the rendering process as either part of the metadata or a separate data processing operation.
FIG. 6 illustrates an enabled headphone that comprises one or more sensors that sense playback conditions for encoding as metadata used in a headphone rendering system, under an embodiment. The various sensors may be arranged in a sensor array that can be used to provide live metadata to the renderer at render time. For the example headphone 600 of FIG. 6 , the sensors include a range sensor (such as infrared IR or time-of-flight TOF camera) 602, tension/headsize sensor 604, gyroscopic sensor 606, external microphone (or pair) 610, ambient noise cancellation processor 608, internal microphone (or pair) 612, among other appropriate sensors. As shown in FIG. 6 , the sensor array can comprise both audio sensors (i.e., microphones) as well as data sensors (e.g., orientation, size, tension/stress, and range sensors). Specifically for use with headphones, orientation data can be used to ‘lock’ or rotate the spatial audio object according to the listener's head motion, tension sensors or external microphones can be used to infer the size of the listener's head (e.g., by monitoring audio cross correlation at two external microphones located on the earcups) and adjust relevant binaural rendering parameters (e.g., interaural time delays, shoulder reflection timing, etc.). Range sensors 602 can be used to evaluate distance to the display in case of a mobile A/V playback and correct the location of on-screen objects to account for the distance-dependent viewing angle (i.e. render objects wider as the screen is brought closer to the listener) or adjust global gain and room model to convey appropriate distance rendering. Such a sensor function is useful if the audio content is part of A/V content that is played back on devices that may range from small mobile phones (e.g., 2-4″ screen size) to tablets (e.g., 7-10″ screen size) to laptop computers (e.g., 15-17″ screen size). 
In addition, sensors can also be used to automatically detect and set the routing of the left and right audio outputs to the correct transducers, without requiring a specific a priori orientation or explicit "Left/Right" markings on the headphones.
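The head-size inference mentioned above (monitoring cross-correlation at two external earcup microphones) can be sketched as follows; the brute-force correlation and the simple width formula are illustrative assumptions, not the disclosed implementation:

```python
def estimate_itd(left, right, fs):
    """Estimate the delay (seconds) between two external microphone signals
    by locating the peak of their cross-correlation."""
    n = len(left)
    best_lag, best_val = 0, float("-inf")
    for lag in range(-n + 1, n):
        acc = 0.0
        for i in range(n):
            j = i + lag
            if 0 <= j < n:
                acc += left[i] * right[j]
        if acc > best_val:
            best_val, best_lag = acc, lag
    return best_lag / fs

def head_width_from_itd(itd, c=343.0):
    """For sound arriving from the side, the extra path length is roughly the
    ear spacing, so |delay| * speed-of-sound approximates head width (m)."""
    return abs(itd) * c
```

The inferred width could then drive binaural rendering parameters such as interaural time delays or shoulder reflection timing.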
As shown in FIG. 1 , the audio or A/V content transmitted to the headphones 116 or 118 may be provided through a handheld or portable device 104. In an embodiment, the device 104 itself may include one or more sensors. For example, if the device is a handheld game console or game controller, certain gyro sensors and accelerometers may be provided to track object movement and position. For this embodiment, the device 104 to which the headset is connected can also provide additional sensor data such as orientation, head size, camera, etc., as device metadata.
For this embodiment, certain headphone-to-device communication means are implemented. For example, the headset can be connected to the device either through a wired or wireless digital link or an analog audio link (microphone input), in which case the metadata will be frequency modulated and added to the analog microphone input. FIG. 7 illustrates the connection between a headphone and device 104 including a headphone sensor processor 702, under an embodiment. As shown in system 700, headphone 600 transmits certain sensor, audio and microphone data 701 over a wired or wireless link to headphone sensor processor 702. The processed data from processor 702 may comprise analog audio with metadata 704 or spatial audio output 706. As shown in FIG. 7 , each of the connections comprises a bi-directional link between the headphone, processor, and outputs. This allows sensor and microphone data to be transmitted to and from the headphones and device for creation or modification of appropriate metadata. In addition to hardware generated metadata, user controls can also be provided to complement or generate appropriate metadata if not available through hardware sensor arrays. Example user controls can include: elevation emphasis, binaural on/off switch, preferred sound radius or size, and other similar characteristics. Such user controls may be provided through hardware or software interface elements associated with the headphone processor component, playback device, and/or headphones.
FIG. 8 is a block diagram illustrating the different metadata components that may be used in a headphone rendering system, under an embodiment. As shown in diagram 800, the metadata processed by the headphone processor 806 comprises authored metadata, such as that produced by authoring tool 102 and mixing console 500, and hardware generated metadata 804. The hardware generated metadata 804 may include user input metadata, device-side metadata provided by or generated from data sent from device 808, and/or headphone-side metadata provided by or generated from data sent from headphone 810.
In an embodiment, the authored 802 and/or hardware-generated 804 metadata is processed in a binaural rendering component 114 of renderer 112. The metadata provides control over specific audio channels and/or objects to optimize playback over headphones 116 or 118. FIG. 9 illustrates functional components of a binaural rendering component for headphone processing, under an embodiment. As shown in system 900, decoder 902 outputs the multi-channel signal or the channel plus object tracks along with the decoded parameter set, M, for controlling the headphone processing performed by headphone processor 904. The headphone processor 904 also receives certain spatial parameter updates 906 from camera-based or sensor-based tracking device 910. Tracking device 910 is a face-tracking or head-tracking device that measures certain angular and positional parameters (r, θ, ϕ) associated with the user's head. The spatial parameters may correspond to distance and certain orientation angles, such as yaw, pitch, and roll. An original set of spatial parameters, x, may be updated as the data from tracking device 910 is processed. These spatial parameter updates, Y, are then passed to the headphone processor 904 for further modification of the parameter set M. The processed audio data is then transmitted to a post-processing stage 908 that performs certain audio processing such as timbre correction, filtering, downmixing, and other relevant processes. The audio is then equalized by equalizer 912 and transmitted to the headphones. In an embodiment, the equalizer 912 may perform equalization with or without using a pressure-division-ratio (PDR) transform, as described in further detail in the description that follows.
FIG. 10 illustrates a binaural rendering system for rendering audio objects in a headphone rendering system, under an embodiment. FIG. 10 illustrates some of the signal components as they are processed through a binaural headphone processor. As shown in diagram 1000, object audio components are input to an unmixer 1002 that separates direct and diffuse components (e.g., direct from reverb path) of the audio. The direct component is input to a downmix component 1006 that downmixes surround channels (e.g., 5.1 surround) to stereo with phase shift information. The direct component is also input to a direct content binaural renderer 1008. Both two-channel components are then input to a dynamic timbre equalizer 1012. For the object based audio input, the object position and user control signals are input to a virtualizer steerer component 1004. This generates a scaled object position that is input to the binaural renderer 1008 along with the direct component. The diffuse component of the audio is input to a separate binaural renderer 1010, and is combined with the rendered direct content by an adder circuit prior to output as two-channel output audio.
FIG. 11 illustrates a more detailed representation of the binaural rendering system of FIG. 10 , under an embodiment. As shown in diagram 1100 of FIG. 11 , the multi-channel and object-based audio is input to unmixer 1102 for separation into direct and diffuse components. The direct content is processed by direct binaural renderer 1118, and the diffuse content is processed by diffuse binaural renderer 1120. After downmixing 1116 and timbre equalization 1124 of the direct content the diffuse and direct audio components are then combined through an adder circuit for post-processing, such as by headphone equalizer 1122, and other possible circuits. As shown in FIG. 11 , certain user input and feedback data are used to modify the binaural rendering of the diffuse content in diffuse binaural renderer 1120. For the embodiment of system 1100, playback environment sensor 1106 provides data regarding listening room properties and noise estimation (ambient sound levels), head/face tracking sensor 1108 provides head position, orientation, and size data, device tracking sensor 1110 provides device position data, and user input 1112 provides playback radius data. This data may be provided by sensors located in the headphone 116 and/or device 104. The various sensor data and user input data is combined with content metadata, which provides object position and room parameter information in a virtualizer steerer component 1104. This component also receives direct and diffuse energy information from the unmixer 1102. The virtualizer steerer 1104 outputs data including object position, head position/orientation/size, room parameters, and other relevant information to the diffuse content binaural renderer 1120. In this manner, the diffuse content of the input audio is adjusted to accommodate sensor and user input data.
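A highly simplified stand-in for the unmixer's direct/diffuse separation is shown below; the least-squares channel fit is an illustrative assumption and not the algorithm of FIGS. 10 and 11:

```python
def unmix_direct_diffuse(left, right):
    """Split a stereo pair into a crude 'direct' part (the component of the
    right channel predictable from the left) and a 'diffuse' residual."""
    num = sum(l * r for l, r in zip(left, right))
    den = sum(l * l for l in left) or 1.0
    g = num / den  # least-squares scalar fit of right onto left
    direct = [g * l for l in left]
    diffuse = [r - d for r, d in zip(right, direct)]
    return direct, diffuse
```

Perfectly correlated channels yield an all-direct result; decorrelated reverberant content falls into the diffuse residual.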
While optimal performance of the virtualizer steerer is achieved when sensor data, user input data, and content metadata are received, it is possible to achieve beneficial performance of the virtualizer steerer even in the absence of one or more of these inputs. For example, when processing legacy content (e.g., encoded bitstreams which do not contain binaural rendering metadata) for playback over conventional headphones (e.g., headphones which do not include various sensors, microphones, etc.), a beneficial result may still be obtained by providing the direct energy and diffuse energy outputs of the unmixer 1102 to the virtualizer steerer 1104 to generate control information for the diffuse content binaural renderer 1120, even in the absence of one or more other inputs to the virtualizer steerer.
In an embodiment, rendering system 1100 of FIG. 11 allows the binaural headphone renderer to efficiently provide individualization based on interaural time difference (ITD) and interaural level difference (ILD) and sensing of head size. ILD and ITD are important cues for azimuth, which is the angle of an audio signal relative to the head when produced in the horizontal plane. ITD is defined as the difference in arrival time of a sound between two ears, and the ILD effect uses differences in sound level entering the ears to provide localization cues. It is generally accepted that ITDs are used to localize low frequency sound and ILDs are used to localize high frequency sounds, while both are used for content that contains both high and low frequencies.
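A standard way to model the ITD cue described above is the classic spherical-head (Woodworth) approximation, shown here as an illustrative sketch; the default head radius is a typical textbook value, not a parameter from this disclosure:

```python
import math

def woodworth_itd(azimuth_deg, head_radius=0.0875, c=343.0):
    """Spherical-head approximation of the interaural time difference (s)
    for a distant source at the given horizontal-plane azimuth."""
    theta = math.radians(azimuth_deg)
    return (head_radius / c) * (theta + math.sin(theta))
```

For a source at 90 degrees this yields roughly 0.65 ms, consistent with the low-frequency localization role of ITDs noted above.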
Rendering system 1100 also allows accommodation for source distance control and room model. It further allows for direct versus diffuse/reverb (dry/wet) content extraction and processing, optimization of room reflections, and timbral matching.
HRTF Model
In spatial audio reproduction, certain sound source cues are virtualized. For example, sounds intended to be heard from behind the listeners may be generated by speakers physically located behind them, and as such, all of the listeners perceive these sounds as coming from behind. With virtual spatial rendering over headphones, on the other hand, perception of audio from behind is controlled by head related transfer functions (HRTF) that are used to generate the binaural signal. In an embodiment, the metadata-based headphone processing system 100 may include certain HRTF modeling mechanisms. The foundation of such a system generally builds upon the structural model of the head and torso. This approach allows algorithms to be built upon the core model in a modular approach. In this algorithm, the modular algorithms are referred to as ‘tools.’ In addition to providing ITD and ILD cues, the model approach provides a point of reference with respect to the position of the ears on the head, and more broadly to the tools that are built upon the model. The system could be tuned or modified according to anthropometric features of the user. Other benefits of the modular approach allow for accentuating certain features in order to amplify specific spatial cues. For instance, certain cues could be exaggerated beyond what an acoustic binaural filter would impart to an individual. FIG. 12 is a system diagram showing the different tools used in an HRTF modeling system used in a headphone rendering system, under an embodiment. As shown in FIG. 12 , certain inputs including azimuth, elevation, fs, and range are input to modeling stage 1204, after at least some input components are filtered 1202. In an embodiment, filter stage 1202 may comprise a snowman filter model that consists of a spherical head on top of a spherical body and accounts for the contributions of the torso as well as the head to the HRTF. 
Modeling stage 1204 computes the pinna and torso models and the left and right (l, r) components are post-processed 1206 for final output 1208.
Metadata Structure
As described above, the audio content processed by the headphone playback system comprises channels, objects and associated metadata that provides the spatial and processing cues necessary to optimize rendering of the audio through headphones. Such metadata can be generated as authored metadata from authoring tools as well as hardware generated metadata from one or more endpoint devices. FIG. 13 illustrates a data structure that enables delivery of metadata for a headphone rendering system, under an embodiment. In an embodiment, the metadata structure of FIG. 13 is configured to supplement the metadata delivered in other portions of a bitstream that may be packaged in accordance with a known channel-based audio format, such as Dolby digital AC-3 or Enhanced AC-3 bit stream syntax. As shown in FIG. 13 , the data structure consists of a container 1300 that contains one or more data payloads 1304. Each payload is identified in the container using a unique payload identifier value to provide an unambiguous indication of the type of data present in the payload. The order of payloads within the container is undefined. Payloads can be stored in any order, and a parser must be able to parse the entire container to extract relevant payloads, and ignore payloads that are either not relevant or are unsupported. Protection data 1306 follows the final payload in the container that can be used by a decoding device to verify that the container and the payloads within the container are error-free. A preliminary portion 1302 containing sync, version, and key-ID information precedes the first payload in the container.
The data structure supports extensibility through the use of versioning and identifiers for specific payload types. The metadata payloads may be used to describe the nature or configuration of the audio program being delivered in an AC-3 or Enhanced AC-3 (or other type) bit stream, or may be used to control audio processing algorithms that are designed to further process the output of the decoding process.
Containers may be defined using different programming structures, based on implementation preferences. The table below illustrates example syntax of a container, under an embodiment.
container ( )
{
 Version...................................................................................................2
 if (version == 3)
 {
  version += variable_bits(2).........................................variable_bits (2)
 }
 key_id......................................................................................................3
 if (key_id == 7)
 {
  key_id += variable_bits(3)...........................................variable_bits (3)
 }
 payload_id...............................................................................................5
 while (payload_id != 0x0)
 {
  if (payload_id == 0x1F)
  {
   payload_id += variable_bits(5).................................variable bits (5)
  }
  payload_config( )
  payload_size..................................................................variable bits (8)
  for (i = 0; i < payload_size; i++)
  {
   payload_bytes...................................................................................8
  }
 }
 protection( )
}
An example of possible syntax of the variable bits for the example container syntax provided above is shown in the following table:
Syntax
variable_bits (n_bits)
{
 value = 0;
 do
 {
  value += read...................................................................................n_bits
  read_more................................................................................................1
  if (read_more)
  {
   value <<= n_bits;
   value += (1 << n_bits);
  }
 }
 while (read_more);
 return value;
}
An example of possible syntax of the payload configuration for the example container syntax provided above is shown in the following table:
Syntax                       No. of bits
payload_config( )
{
 Smploffste...................................................................................................1
 if (smploffste) {smploffst}........................................................................11
 Reserved......................................................................................................1
 Duratione.....................................................................................................1
 if (duratione) {duration}...................................................variable_bits (11)
 Groupide......................................................................................................1
 if (groupide) {groupid}.......................................................variable_bits (2)
 Codecdatae..................................................................................................1
 if (codecdatae) {reserved}..........................................................................8
 discard_unknown_payload..........................................................................1
 if (discard_unknown_payload == 0)
 {
  if (smploffste == 0)
  {
   payload_frame_aligned.......................................................................1
   if (payload_frame_aligned)
   {
    create_duplicate...............................................................................1
    remove_duplicate............................................................................1
   }
  }
  if (smploffste == 1 || payload_frame_aligned == 1)
  {
   Priority................................................................................................5
   proc_allowed......................................................................................2
  }
 }
}
The above syntax definitions are provided as example implementations, and are not meant to be limiting, as many other different program structures may be used. In an embodiment, a number of fields within the container structure and payload data are encoded using a method known as variable_bits. This method enables efficient coding of small field values with extensibility to express arbitrarily large field values. When variable_bits coding is used, the field consists of one or more groups of n bits, with each group followed by a 1-bit read_more field. At a minimum, coding of n bits requires n+1 bits to be transmitted. All fields coded using variable_bits are interpreted as unsigned integers. Various other coding aspects may be implemented according to practices and methods known to those of ordinary skill in the art. The above tables and FIG. 13 illustrate an example metadata structure, format, and program content. It should be noted that these are intended to represent one example embodiment of a metadata representation, and other metadata definitions and content are also possible.
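As an illustrative decoder for the variable_bits method described above (the MSB-first bit order of the reader is an assumption; the bitstream's actual packing is defined by the codec):

```python
class BitReader:
    """Minimal MSB-first bit reader over a bytes object."""
    def __init__(self, data):
        self.data, self.pos = data, 0

    def read(self, n):
        value = 0
        for _ in range(n):
            byte = self.data[self.pos // 8]
            value = (value << 1) | ((byte >> (7 - self.pos % 8)) & 1)
            self.pos += 1
        return value

def variable_bits(reader, n_bits):
    """Decode one variable_bits field per the syntax table: groups of n_bits,
    each followed by a 1-bit read_more flag."""
    value = 0
    while True:
        value += reader.read(n_bits)
        if not reader.read(1):  # read_more == 0 ends the field
            return value
        value <<= n_bits
        value += 1 << n_bits
```

For example, with n_bits = 2 the three bits 110 decode to the value 3, using the minimum n+1 = 3 transmitted bits.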
Headphone EQ and Correction
As illustrated in FIG. 1 , certain post-processing functions 115 may be performed by the renderer 112. One such post-processing function comprises headphone equalization, as shown in element 912 of FIG. 9 . In an embodiment, equalization may be performed by obtaining blocked-ear-canal impulse response measurements for different headphone placements for each ear. FIG. 14 illustrates an example case of three impulse response measurements for each ear, in an embodiment of a headphone equalization process. The equalization post-process computes the Fast Fourier Transform (FFT) of each response and performs RMS (root-mean-square) averaging of the derived responses. The responses may be variably smoothed (e.g., octave smoothed, ERB smoothed, etc.). The process then computes the inversion, |F(ω)|, of the RMS average with constraints on the limits (+/−x dB) of the inversion magnitude response at mid- and high-frequencies. The process then determines the time-domain filter.
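The averaging and constrained inversion steps can be sketched directly on magnitude responses (one per headphone placement); the bin layout and default limit value here are illustrative assumptions:

```python
import math

def average_and_invert(mag_responses, limit_db=12.0):
    """RMS-average several magnitude responses and invert the average,
    constraining the resulting boost/cut to +/- limit_db per bin."""
    n_bins = len(mag_responses[0])
    eq = []
    for k in range(n_bins):
        rms = math.sqrt(sum(m[k] ** 2 for m in mag_responses) / len(mag_responses))
        inv_db = -20.0 * math.log10(max(rms, 1e-12))
        inv_db = max(-limit_db, min(limit_db, inv_db))  # constrain the inversion
        eq.append(10.0 ** (inv_db / 20.0))
    return eq
```

The constrained magnitude EQ would then be converted to the time-domain filter mentioned above (e.g., via a minimum-phase or linear-phase design).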
The post-process may also include a closed-to-open transform function. This pressure-division-ratio (PDR) method involves designing a transform to match the acoustical impedance between the eardrum and the free field for closed-back headphones, with modifications in terms of how the measurements are obtained for free-field sound transmission as a function of the direction of arrival of the first-arriving sound. This indirectly enables matching the eardrum pressure signals between closed-back headphones and free-field equivalent conditions without requiring complicated eardrum measurements.
FIG. 15A illustrates a circuit for calculating the free-field sound transmission, under an embodiment. Circuit 1500 is based on a free-field acoustical impedance model. In this model, P1(ω) is the Thevenin pressure measured at the entrance of the blocked ear canal with a loudspeaker at θ degrees about the median plane (e.g., about 30 degrees to the left and front of the listener), involving extraction of the direct sound from the measured impulse response. Measurement P2(ω) can be made at the entrance of the ear canal, or at a certain distance (x mm) inside the ear canal (or at the eardrum) from the opening, for the same loudspeaker at the same placement used for measuring P1(ω), involving extraction of the direct sound from the measured impulse response.
For this model, the ratio of P2(ω)/P1(ω) is calculated as follows:
P2(ω)/P1(ω) = Zeardrum(ω)/(Zeardrum(ω) + Zradiation(ω))
FIG. 15B illustrates a circuit for calculating the headphone sound transmission, under an embodiment. Circuit 1510 is based on a headphone acoustical impedance analog model. In this model, P4(ω) is the (RMS-averaged) steady-state pressure measured at the entrance of the blocked ear canal with the headphone in place, and measurement P5(ω) is made at the entrance to the ear canal, or at a distance inside the ear canal (or at the eardrum) from the opening, for the same headphone placement used for measuring P4(ω).
For this model, the ratio of P5(ω)/P4(ω) is calculated as follows:
P5(ω)/P4(ω) = Zeardrum(ω)/(Zeardrum(ω) + Zheadphone(ω))
The pressure-division-ratio (PDR) can then be calculated using the following formula:
PDR(ω) = [P2(ω)/P1(ω)] ÷ [P5(ω)/P4(ω)]
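The two impedance-divider ratios and their quotient can be combined in a few lines. The following is an illustrative numerical sketch only (the function name is invented, and passing per-frequency complex impedance arrays is an assumption about how the quantities would be represented); note that algebraically the PDR reduces to (Zeardrum + Zheadphone)/(Zeardrum + Zradiation).

```python
import numpy as np

def pressure_division_ratio(Z_eardrum, Z_radiation, Z_headphone):
    """Sketch of the PDR formula above: the free-field divider
    P2/P1 = Ze/(Ze + Zrad) divided by the headphone divider
    P5/P4 = Ze/(Ze + Zhp). Arguments are complex impedance spectra
    sampled on a common frequency grid."""
    Z_eardrum = np.asarray(Z_eardrum, dtype=complex)
    free_field = Z_eardrum / (Z_eardrum + Z_radiation)  # P2(w)/P1(w)
    headphone = Z_eardrum / (Z_eardrum + Z_headphone)   # P5(w)/P4(w)
    return free_field / headphone                        # PDR(w)
```

For example, with Zeardrum = Zradiation the free-field divider is 1/2, so the PDR directly exposes how much the headphone's source impedance deviates from the free-field radiation impedance at each frequency.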
Aspects of the methods and systems described herein may be implemented in an appropriate computer-based sound processing network environment for processing digital or digitized audio files. Portions of the adaptive audio system may include one or more networks that comprise any desired number of individual machines, including one or more routers (not shown) that serve to buffer and route the data transmitted among the computers. Such a network may be built on various different network protocols, and may be the Internet, a Wide Area Network (WAN), a Local Area Network (LAN), or any combination thereof. In an embodiment in which the network comprises the Internet, one or more machines may be configured to access the Internet through web browser programs.
One or more of the components, blocks, processes or other functional components may be implemented through a computer program that controls execution of a processor-based computing device of the system. It should also be noted that the various functions disclosed herein may be described using any number of combinations of hardware, firmware, and/or as data and/or instructions embodied in various machine-readable or computer-readable media, in terms of their behavioral, register transfer, logic component, and/or other characteristics. Computer-readable media in which such formatted data and/or instructions may be embodied include, but are not limited to, physical (non-transitory), non-volatile storage media in various forms, such as optical, magnetic or semiconductor storage media.
Unless the context clearly requires otherwise, throughout the description and the claims, the words “comprise,” “comprising,” and the like are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense; that is to say, in a sense of “including, but not limited to.” Words using the singular or plural number also include the plural or singular number respectively. Additionally, the words “herein,” “hereunder,” “above,” “below,” and words of similar import refer to this application as a whole and not to any particular portions of this application. When the word “or” is used in reference to a list of two or more items, that word covers all of the following interpretations of the word: any of the items in the list, all of the items in the list and any combination of the items in the list.
While one or more implementations have been described by way of example and in terms of the specific embodiments, it is to be understood that one or more implementations are not limited to the disclosed embodiments. To the contrary, it is intended to cover various modifications and similar arrangements as would be apparent to those skilled in the art. Therefore, the scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and similar arrangements.

Claims (3)

What is claimed is:
1. A method, performed by an audio signal processing device, for generating a binaural rendering of digital audio content for playback through headphones, the method comprising:
receiving an encoded signal comprising the digital audio content and rendering metadata, wherein the digital audio content comprises a plurality of audio object signals;
receiving playback control metadata comprising local setup information;
decoding the encoded signal to obtain the plurality of audio object signals; and
generating the binaural rendering of the digital audio content in response to the plurality of audio object signals, the rendering metadata, and the playback control metadata;
wherein the rendering metadata indicates, for each audio object signal, position, gain, and whether to apply head tracking information to the audio object signal;
wherein the local setup information comprises listener specific characteristics including head orientation information indicating yaw, pitch, and roll angles;
wherein, when the rendering metadata indicates, for an audio object signal, not to apply head tracking information to the audio object signal, generating the binaural rendering comprises, for the audio object signal, ignoring the head orientation information; and
wherein, when the rendering metadata indicates, for an audio object signal, to apply head tracking information to the audio object signal, generating the binaural rendering comprises, for the audio object signal, applying the head orientation information to the audio object signal.
2. An audio signal processing device for generating a binaural rendering of digital audio content for playback through headphones, the audio signal processing device comprising one or more processors to:
receive an encoded signal comprising the digital audio content and rendering metadata, wherein the digital audio content comprises a plurality of audio object signals;
receive playback control metadata comprising local setup information;
decode the encoded signal to obtain the plurality of audio object signals; and
generate the binaural rendering of the digital audio content in response to the plurality of audio object signals, the rendering metadata, and the playback control metadata;
wherein the rendering metadata indicates, for each audio object signal, position, gain, and whether to apply head tracking information to the audio object signal;
wherein the local setup information comprises listener specific characteristics including head orientation information indicating yaw, pitch, and roll angles;
wherein, when the rendering metadata indicates, for an audio object signal, not to apply head tracking information to the audio object signal, generating the binaural rendering comprises, for the audio object signal, ignoring the head orientation information; and
wherein, when the rendering metadata indicates, for an audio object signal, to apply head tracking information to the audio object signal, generating the binaural rendering comprises, for the audio object signal, applying the head orientation information to the audio object signal.
3. A non-transitory computer readable storage medium comprising a sequence of instructions, which, when executed by an audio signal processing device, cause the audio signal processing device to perform a method for generating a binaural rendering of digital audio content for playback through headphones, the method comprising:
receiving an encoded signal comprising the digital audio content and rendering metadata, wherein the digital audio content comprises a plurality of audio object signals;
receiving playback control metadata comprising local setup information;
decoding the encoded signal to obtain the plurality of audio object signals; and
generating the binaural rendering of the digital audio content in response to the plurality of audio object signals, the rendering metadata, and the playback control metadata;
wherein the rendering metadata indicates, for each audio object signal, position, gain, and whether to apply head tracking information to the audio object signal;
wherein the local setup information comprises listener specific characteristics including head orientation information indicating yaw, pitch, and roll angles;
wherein, when the rendering metadata indicates, for an audio object signal, not to apply head tracking information to the audio object signal, generating the binaural rendering comprises, for the audio object signal, ignoring the head orientation information; and
wherein, when the rendering metadata indicates, for an audio object signal, to apply head tracking information to the audio object signal, generating the binaural rendering comprises, for the audio object signal, applying the head orientation information to the audio object signal.
US18/305,618 2013-10-31 2023-04-24 Binaural rendering for headphones using metadata processing Active US12061835B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US18/305,618 US12061835B2 (en) 2013-10-31 2023-04-24 Binaural rendering for headphones using metadata processing

Applications Claiming Priority (9)

Application Number Priority Date Filing Date Title
US201361898365P 2013-10-31 2013-10-31
PCT/US2014/062705 WO2015066062A1 (en) 2013-10-31 2014-10-28 Binaural rendering for headphones using metadata processing
US201615031953A 2016-04-25 2016-04-25
US15/934,849 US10255027B2 (en) 2013-10-31 2018-03-23 Binaural rendering for headphones using metadata processing
US16/352,607 US10503461B2 (en) 2013-10-31 2019-03-13 Binaural rendering for headphones using metadata processing
US16/673,849 US10838684B2 (en) 2013-10-31 2019-11-04 Binaural rendering for headphones using metadata processing
US17/098,268 US11269586B2 (en) 2013-10-31 2020-11-13 Binaural rendering for headphones using metadata processing
US17/685,681 US11681490B2 (en) 2013-10-31 2022-03-03 Binaural rendering for headphones using metadata processing
US18/305,618 US12061835B2 (en) 2013-10-31 2023-04-24 Binaural rendering for headphones using metadata processing

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US17/685,681 Continuation US11681490B2 (en) 2013-10-31 2022-03-03 Binaural rendering for headphones using metadata processing

Publications (2)

Publication Number Publication Date
US20230385013A1 US20230385013A1 (en) 2023-11-30
US12061835B2 true US12061835B2 (en) 2024-08-13

Family

ID=51868366

Family Applications (7)

Application Number Title Priority Date Filing Date
US15/031,953 Active US9933989B2 (en) 2013-10-31 2014-10-28 Binaural rendering for headphones using metadata processing
US15/934,849 Active US10255027B2 (en) 2013-10-31 2018-03-23 Binaural rendering for headphones using metadata processing
US16/352,607 Active US10503461B2 (en) 2013-10-31 2019-03-13 Binaural rendering for headphones using metadata processing
US16/673,849 Active US10838684B2 (en) 2013-10-31 2019-11-04 Binaural rendering for headphones using metadata processing
US17/098,268 Active US11269586B2 (en) 2013-10-31 2020-11-13 Binaural rendering for headphones using metadata processing
US17/685,681 Active US11681490B2 (en) 2013-10-31 2022-03-03 Binaural rendering for headphones using metadata processing
US18/305,618 Active US12061835B2 (en) 2013-10-31 2023-04-24 Binaural rendering for headphones using metadata processing

Family Applications Before (6)

Application Number Title Priority Date Filing Date
US15/031,953 Active US9933989B2 (en) 2013-10-31 2014-10-28 Binaural rendering for headphones using metadata processing
US15/934,849 Active US10255027B2 (en) 2013-10-31 2018-03-23 Binaural rendering for headphones using metadata processing
US16/352,607 Active US10503461B2 (en) 2013-10-31 2019-03-13 Binaural rendering for headphones using metadata processing
US16/673,849 Active US10838684B2 (en) 2013-10-31 2019-11-04 Binaural rendering for headphones using metadata processing
US17/098,268 Active US11269586B2 (en) 2013-10-31 2020-11-13 Binaural rendering for headphones using metadata processing
US17/685,681 Active US11681490B2 (en) 2013-10-31 2022-03-03 Binaural rendering for headphones using metadata processing

Country Status (5)

Country Link
US (7) US9933989B2 (en)
EP (4) EP4421618A2 (en)
CN (6) CN108712711B (en)
ES (1) ES2755349T3 (en)
WO (1) WO2015066062A1 (en)

Families Citing this family (82)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3161828A4 (en) * 2014-05-27 2017-08-09 Chase, Stephen Video headphones, systems, helmets, methods and video content files
US10349197B2 (en) * 2014-08-13 2019-07-09 Samsung Electronics Co., Ltd. Method and device for generating and playing back audio signal
US10225814B2 (en) * 2015-04-05 2019-03-05 Qualcomm Incorporated Conference audio management
CA3149389A1 (en) * 2015-06-17 2016-12-22 Sony Corporation Transmitting device, transmitting method, receiving device, and receiving method
GB2543276A (en) * 2015-10-12 2017-04-19 Nokia Technologies Oy Distributed audio capture and mixing
GB2543275A (en) * 2015-10-12 2017-04-19 Nokia Technologies Oy Distributed audio capture and mixing
US9934790B2 (en) * 2015-07-31 2018-04-03 Apple Inc. Encoded audio metadata-based equalization
US9837086B2 (en) * 2015-07-31 2017-12-05 Apple Inc. Encoded audio extended metadata-based dynamic range control
US10341770B2 (en) 2015-09-30 2019-07-02 Apple Inc. Encoded audio metadata-based loudness equalization and dynamic equalization during DRC
EP3378239B1 (en) * 2015-11-17 2020-02-19 Dolby Laboratories Licensing Corporation Parametric binaural output system and method
US9749766B2 (en) * 2015-12-27 2017-08-29 Philip Scott Lyren Switching binaural sound
WO2017143067A1 (en) 2016-02-19 2017-08-24 Dolby Laboratories Licensing Corporation Sound capture for mobile devices
US11722821B2 (en) 2016-02-19 2023-08-08 Dolby Laboratories Licensing Corporation Sound capture for mobile devices
US9986363B2 (en) 2016-03-03 2018-05-29 Mach 1, Corp. Applications and format for immersive spatial sound
US10325610B2 (en) * 2016-03-30 2019-06-18 Microsoft Technology Licensing, Llc Adaptive audio rendering
GB201609089D0 (en) * 2016-05-24 2016-07-06 Smyth Stephen M F Improving the sound quality of virtualisation
GB2550877A (en) * 2016-05-26 2017-12-06 Univ Surrey Object-based audio rendering
CN105933826A (en) * 2016-06-07 2016-09-07 惠州Tcl移动通信有限公司 Method, system and earphone for automatically setting sound field
US10074012B2 (en) 2016-06-17 2018-09-11 Dolby Laboratories Licensing Corporation Sound and video object tracking
US10932082B2 (en) 2016-06-21 2021-02-23 Dolby Laboratories Licensing Corporation Headtracking for pre-rendered binaural audio
WO2018017394A1 (en) * 2016-07-20 2018-01-25 Dolby Laboratories Licensing Corporation Audio object clustering based on renderer-aware perceptual difference
EP3488623B1 (en) 2016-07-20 2020-12-02 Dolby Laboratories Licensing Corporation Audio object clustering based on renderer-aware perceptual difference
GB2552794B (en) * 2016-08-08 2019-12-04 Powerchord Group Ltd A method of authorising an audio download
JP2019533404A (en) * 2016-09-23 2019-11-14 ガウディオ・ラボ・インコーポレイテッド Binaural audio signal processing method and apparatus
US10187740B2 (en) * 2016-09-23 2019-01-22 Apple Inc. Producing headphone driver signals in a digital audio signal processing binaural rendering environment
GB2554447A (en) * 2016-09-28 2018-04-04 Nokia Technologies Oy Gain control in spatial audio systems
US10492016B2 (en) * 2016-09-29 2019-11-26 Lg Electronics Inc. Method for outputting audio signal using user position information in audio decoder and apparatus for outputting audio signal using same
US9980078B2 (en) * 2016-10-14 2018-05-22 Nokia Technologies Oy Audio object modification in free-viewpoint rendering
US20180115854A1 (en) * 2016-10-26 2018-04-26 Htc Corporation Sound-reproducing method and sound-reproducing system
WO2018079254A1 (en) * 2016-10-28 2018-05-03 Panasonic Intellectual Property Corporation Of America Binaural rendering apparatus and method for playing back of multiple audio sources
CN106412751B (en) * 2016-11-14 2019-08-20 惠州Tcl移动通信有限公司 A kind of earphone taken one's bearings and its implementation
EP3547718A4 (en) 2016-11-25 2019-11-13 Sony Corporation Reproducing device, reproducing method, information processing device, information processing method, and program
CN106713645B (en) * 2016-12-28 2019-11-15 努比亚技术有限公司 A kind of method and mobile terminal of the broadcasting of control loudspeaker
US11096004B2 (en) 2017-01-23 2021-08-17 Nokia Technologies Oy Spatial audio rendering point extension
US10531219B2 (en) 2017-03-20 2020-01-07 Nokia Technologies Oy Smooth rendering of overlapping audio-object interactions
US10321258B2 (en) * 2017-04-19 2019-06-11 Microsoft Technology Licensing, Llc Emulating spatial perception using virtual echolocation
US11074036B2 (en) 2017-05-05 2021-07-27 Nokia Technologies Oy Metadata-free audio-object interactions
US10165386B2 (en) 2017-05-16 2018-12-25 Nokia Technologies Oy VR audio superzoom
US11303689B2 (en) 2017-06-06 2022-04-12 Nokia Technologies Oy Method and apparatus for updating streamed content
US20180357038A1 (en) * 2017-06-09 2018-12-13 Qualcomm Incorporated Audio metadata modification at rendering device
GB2563635A (en) 2017-06-21 2018-12-26 Nokia Technologies Oy Recording and rendering audio signals
US11089425B2 (en) * 2017-06-27 2021-08-10 Lg Electronics Inc. Audio playback method and audio playback apparatus in six degrees of freedom environment
US10617842B2 (en) 2017-07-31 2020-04-14 Starkey Laboratories, Inc. Ear-worn electronic device for conducting and monitoring mental exercises
US11272308B2 (en) * 2017-09-29 2022-03-08 Apple Inc. File format for spatial audio
US11395087B2 (en) * 2017-09-29 2022-07-19 Nokia Technologies Oy Level-based audio-object interactions
FR3075443A1 (en) * 2017-12-19 2019-06-21 Orange PROCESSING A MONOPHONIC SIGNAL IN A 3D AUDIO DECODER RESTITUTING A BINAURAL CONTENT
USD861724S1 (en) * 2017-12-21 2019-10-01 Toontrack Music Ab Computer screen with a graphical user interface
GB201800920D0 (en) * 2018-01-19 2018-03-07 Nokia Technologies Oy Associated spatial audio playback
KR102527336B1 (en) * 2018-03-16 2023-05-03 한국전자통신연구원 Method and apparatus for reproducing audio signal according to movenemt of user in virtual space
US10542368B2 (en) 2018-03-27 2020-01-21 Nokia Technologies Oy Audio content modification for playback audio
BR112020021608A2 (en) * 2018-04-24 2021-01-26 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. apparatus and method for rendering an audio signal for reproduction for a user
EP4093057A1 (en) 2018-04-27 2022-11-23 Dolby Laboratories Licensing Corp. Blind detection of binauralized stereo content
US11929091B2 (en) 2018-04-27 2024-03-12 Dolby Laboratories Licensing Corporation Blind detection of binauralized stereo content
KR102036010B1 (en) * 2018-05-15 2019-10-25 박승민 Method for emotional calling using binaural sound and apparatus thereof
US10390170B1 (en) * 2018-05-18 2019-08-20 Nokia Technologies Oy Methods and apparatuses for implementing a head tracking headset
GB2593117A (en) * 2018-07-24 2021-09-22 Nokia Technologies Oy Apparatus, methods and computer programs for controlling band limited audio objects
DE102019107302A1 (en) * 2018-08-16 2020-02-20 Rheinisch-Westfälische Technische Hochschule (Rwth) Aachen Process for creating and playing back a binaural recording
TWM579049U (en) * 2018-11-23 2019-06-11 建菱科技股份有限公司 Stero sound source-positioning device externally coupled at earphone by tracking user's head
US11304021B2 (en) * 2018-11-29 2022-04-12 Sony Interactive Entertainment Inc. Deferred audio rendering
US11112389B1 (en) * 2019-01-30 2021-09-07 Facebook Technologies, Llc Room acoustic characterization using sensors
US11056127B2 (en) * 2019-04-30 2021-07-06 At&T Intellectual Property I, L.P. Method for embedding and executing audio semantics
WO2020231883A1 (en) * 2019-05-15 2020-11-19 Ocelot Laboratories Llc Separating and rendering voice and ambience signals
CN114391262B (en) 2019-07-30 2023-10-03 杜比实验室特许公司 Dynamic processing across devices with different playback capabilities
US11968268B2 (en) 2019-07-30 2024-04-23 Dolby Laboratories Licensing Corporation Coordination of audio devices
GB2588801A (en) * 2019-11-08 2021-05-12 Nokia Technologies Oy Determination of sound source direction
DE102019135690B4 (en) * 2019-12-23 2022-11-17 Sennheiser Electronic Gmbh & Co. Kg Method and device for audio signal processing for binaural virtualization
JP2023513746A (en) 2020-02-14 2023-04-03 マジック リープ, インコーポレイテッド Multi-application audio rendering
JP2023515886A (en) 2020-03-02 2023-04-14 マジック リープ, インコーポレイテッド Immersive audio platform
US11381209B2 (en) 2020-03-12 2022-07-05 Gaudio Lab, Inc. Audio signal processing method and apparatus for controlling loudness level and dynamic range
WO2022010454A1 (en) * 2020-07-06 2022-01-13 Hewlett-Packard Development Company, L.P. Binaural down-mixing of audio signals
CN111918176A (en) * 2020-07-31 2020-11-10 北京全景声信息科技有限公司 Audio processing method, device, wireless earphone and storage medium
WO2022093162A1 (en) * 2020-10-26 2022-05-05 Hewlett-Packard Development Company, L.P. Calculation of left and right binaural signals for output
US20240056760A1 (en) * 2020-12-17 2024-02-15 Dolby Laboratories Licensing Corporation Binaural signal post-processing
EP4030783A1 (en) * 2021-01-19 2022-07-20 Nokia Technologies Oy Indication of responsibility for audio playback
JP2022144499A (en) 2021-03-19 2022-10-03 ヤマハ株式会社 Sound field support method and sound field support device
US11388513B1 (en) * 2021-03-24 2022-07-12 Iyo Inc. Ear-mountable listening device with orientation discovery for rotational correction of microphone array outputs
GB2605970B (en) 2021-04-19 2023-08-30 Waves Audio Ltd Content based spatial remixing
GB2609667A (en) * 2021-08-13 2023-02-15 British Broadcasting Corp Audio rendering
JP2024531564A (en) * 2021-09-09 2024-08-29 ドルビー ラボラトリーズ ライセンシング コーポレイション Headphones rendering metadata that preserves spatial coding
WO2023215405A2 (en) * 2022-05-05 2023-11-09 Dolby Laboratories Licensing Corporation Customized binaural rendering of audio content
WO2024036113A1 (en) * 2022-08-09 2024-02-15 Dolby Laboratories Licensing Corporation Spatial enhancement for user-generated content
WO2024044113A2 (en) * 2022-08-24 2024-02-29 Dolby Laboratories Licensing Corporation Rendering audio captured with multiple devices

Citations (34)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040260682A1 (en) 2003-06-19 2004-12-23 Microsoft Corporation System and method for identifying content and managing information corresponding to objects in a signal
CN1720764A (en) 2002-12-06 2006-01-11 皇家飞利浦电子股份有限公司 Personalized surround sound headphone system
US20070160218A1 (en) 2006-01-09 2007-07-12 Nokia Corporation Decoding of binaural audio signals
WO2007078254A2 (en) 2006-01-05 2007-07-12 Telefonaktiebolaget Lm Ericsson (Publ) Personalized decoding of multi-channel surround sound
WO2007080212A1 (en) 2006-01-09 2007-07-19 Nokia Corporation Controlling the decoding of binaural audio signals
US20080008342A1 (en) 2006-07-07 2008-01-10 Harris Corporation Method and apparatus for creating a multi-dimensional communication space for use in a binaural audio system
US20080031462A1 (en) 2006-08-07 2008-02-07 Creative Technology Ltd Spatial audio enhancement processing method and apparatus
US20080192941A1 (en) 2006-12-07 2008-08-14 Lg Electronics, Inc. Method and an Apparatus for Decoding an Audio Signal
US20090190766A1 (en) 1996-11-07 2009-07-30 Srs Labs, Inc. Multi-channel audio enhancement system for use in recording playback and methods for providing same
US20090198356A1 (en) 2008-02-04 2009-08-06 Creative Technology Ltd Primary-Ambient Decomposition of Stereo Audio Signals Using a Complex Similarity Index
US20090222118A1 (en) 2008-01-23 2009-09-03 Lg Electronics Inc. Method and an apparatus for processing an audio signal
WO2009111798A2 (en) 2008-03-07 2009-09-11 Sennheiser Electronic Gmbh & Co. Kg Methods and devices for reproducing surround audio signals
CN101569093A (en) 2006-12-21 2009-10-28 摩托罗拉公司 Dynamically learning a user's response via user-preferred audio settings in response to different noise environments
US20100014692A1 (en) 2008-07-17 2010-01-21 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating audio output signals using object based metadata
US20100076772A1 (en) 2007-02-14 2010-03-25 Lg Electronics Inc. Methods and Apparatuses for Encoding and Decoding Object-Based Audio Signals
US20100191537A1 (en) 2007-06-26 2010-07-29 Koninklijke Philips Electronics N.V. Binaural object-oriented audio decoder
CN101794208A (en) 2009-01-30 2010-08-04 苹果公司 The audio user interface that is used for the electronic equipment of displayless
US20100284549A1 (en) 2008-01-01 2010-11-11 Hyen-O Oh method and an apparatus for processing an audio signal
WO2011086060A1 (en) 2010-01-15 2011-07-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for extracting a direct/ambience signal from a downmix signal and spatial parametric information
US20120057715A1 (en) 2010-09-08 2012-03-08 Johnston James D Spatial audio encoding and reproduction
US20120099733A1 (en) 2010-10-20 2012-04-26 Srs Labs, Inc. Audio adjustment system
CN102549655A (en) 2009-08-14 2012-07-04 Srs实验室有限公司 System for adaptively streaming audio objects
WO2012125855A1 (en) 2011-03-16 2012-09-20 Dts, Inc. Encoding and reproduction of three dimensional audio soundtracks
US8325929B2 (en) 2008-10-07 2012-12-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Binaural rendering of a multi-channel audio signal
CN102855464A (en) 2011-05-30 2013-01-02 索尼公司 Information processing apparatus, metadata setting method, and program
WO2013006338A2 (en) 2011-07-01 2013-01-10 Dolby Laboratories Licensing Corporation System and method for adaptive audio signal generation, coding and rendering
US8363865B1 (en) 2004-05-24 2013-01-29 Heather Bottum Multiple channel sound system using multi-speaker arrays
US20130041648A1 (en) 2008-10-27 2013-02-14 Sony Computer Entertainment Inc. Sound localization for user in motion
CN102945276A (en) 2011-11-09 2013-02-27 微软公司 Generation and update based on event playback experience
US20130094667A1 (en) 2011-10-14 2013-04-18 Nicholas A.J. Millington Systems, methods, apparatus, and articles of manufacture to control audio playback devices
US8509454B2 (en) 2007-11-01 2013-08-13 Nokia Corporation Focusing on a portion of an audio scene for an audio signal
RS1332U (en) 2013-04-24 2013-08-30 Tomislav Stanojević Total surround sound system with floor loudspeakers
CN103329571A (en) 2011-01-04 2013-09-25 Dts有限责任公司 Immersive audio rendering system
US20140270184A1 (en) 2012-05-31 2014-09-18 Dts, Inc. Audio depth dynamic range enhancement

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2001041451A1 (en) * 1999-11-29 2001-06-07 Sony Corporation Video/audio signal processing method and video/audio signal processing apparatus
US20030223602A1 (en) * 2002-06-04 2003-12-04 Elbit Systems Ltd. Method and system for audio imaging
US7890089B1 (en) * 2007-05-03 2011-02-15 Iwao Fujisaki Communication device
KR20130122516A (en) * 2010-04-26 2013-11-07 캠브리지 메카트로닉스 리미티드 Loudspeakers with position tracking
JP5856295B2 (en) * 2011-07-01 2016-02-09 ドルビー ラボラトリーズ ライセンシング コーポレイション Synchronization and switchover methods and systems for adaptive audio systems
EP2637427A1 (en) * 2012-03-06 2013-09-11 Thomson Licensing Method and apparatus for playback of a higher-order ambisonics audio signal

Patent Citations (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090190766A1 (en) 1996-11-07 2009-07-30 Srs Labs, Inc. Multi-channel audio enhancement system for use in recording playback and methods for providing same
CN1720764A (en) 2002-12-06 2006-01-11 皇家飞利浦电子股份有限公司 Personalized surround sound headphone system
US20040260682A1 (en) 2003-06-19 2004-12-23 Microsoft Corporation System and method for identifying content and managing information corresponding to objects in a signal
US8363865B1 (en) 2004-05-24 2013-01-29 Heather Bottum Multiple channel sound system using multi-speaker arrays
WO2007078254A2 (en) 2006-01-05 2007-07-12 Telefonaktiebolaget Lm Ericsson (Publ) Personalized decoding of multi-channel surround sound
US20070160218A1 (en) 2006-01-09 2007-07-12 Nokia Corporation Decoding of binaural audio signals
WO2007080212A1 (en) 2006-01-09 2007-07-19 Nokia Corporation Controlling the decoding of binaural audio signals
US20080008342A1 (en) 2006-07-07 2008-01-10 Harris Corporation Method and apparatus for creating a multi-dimensional communication space for use in a binaural audio system
CN101491116A (en) 2006-07-07 2009-07-22 贺利实公司 Method and apparatus for creating a multi-dimensional communication space for use in a binaural audio system
US20080031462A1 (en) 2006-08-07 2008-02-07 Creative Technology Ltd Spatial audio enhancement processing method and apparatus
US20080192941A1 (en) 2006-12-07 2008-08-14 Lg Electronics, Inc. Method and an Apparatus for Decoding an Audio Signal
CN101569093A (en) 2006-12-21 2009-10-28 摩托罗拉公司 Dynamically learning a user's response via user-preferred audio settings in response to different noise environments
US20100076772A1 (en) 2007-02-14 2010-03-25 Lg Electronics Inc. Methods and Apparatuses for Encoding and Decoding Object-Based Audio Signals
US20100191537A1 (en) 2007-06-26 2010-07-29 Koninklijke Philips Electronics N.V. Binaural object-oriented audio decoder
US8509454B2 (en) 2007-11-01 2013-08-13 Nokia Corporation Focusing on a portion of an audio scene for an audio signal
US20100284549A1 (en) 2008-01-01 2010-11-11 Hyen-O Oh method and an apparatus for processing an audio signal
US20090222118A1 (en) 2008-01-23 2009-09-03 Lg Electronics Inc. Method and an apparatus for processing an audio signal
US20090198356A1 (en) 2008-02-04 2009-08-06 Creative Technology Ltd Primary-Ambient Decomposition of Stereo Audio Signals Using a Complex Similarity Index
WO2009111798A2 (en) 2008-03-07 2009-09-11 Sennheiser Electronic Gmbh & Co. Kg Methods and devices for reproducing surround audio signals
US20100014692A1 (en) 2008-07-17 2010-01-21 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating audio output signals using object based metadata
US8315396B2 (en) 2008-07-17 2012-11-20 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for generating audio output signals using object based metadata
CN103354630A (en) 2008-07-17 2013-10-16 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating audio output signals using object based metadata
US8325929B2 (en) 2008-10-07 2012-12-04 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Binaural rendering of a multi-channel audio signal
US20130041648A1 (en) 2008-10-27 2013-02-14 Sony Computer Entertainment Inc. Sound localization for user in motion
CN101794208A (en) 2009-01-30 2010-08-04 Apple Inc. Audio user interface for displayless electronic device
US8396575B2 (en) 2009-08-14 2013-03-12 Dts Llc Object-oriented audio streaming system
CN102549655A (en) 2009-08-14 2012-07-04 SRS Labs, Inc. System for adaptively streaming audio objects
WO2011086060A1 (en) 2010-01-15 2011-07-21 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for extracting a direct/ambience signal from a downmix signal and spatial parametric information
CN103270508A (en) 2010-09-08 2013-08-28 DTS (BVI) Limited Spatial audio encoding and reproduction of diffuse sound
US20120082319A1 (en) 2010-09-08 2012-04-05 Jean-Marc Jot Spatial audio encoding and reproduction of diffuse sound
US20120057715A1 (en) 2010-09-08 2012-03-08 Johnston James D Spatial audio encoding and reproduction
US20120099733A1 (en) 2010-10-20 2012-04-26 Srs Labs, Inc. Audio adjustment system
CN103329571A (en) 2011-01-04 2013-09-25 Dts有限责任公司 Immersive audio rendering system
WO2012125855A1 (en) 2011-03-16 2012-09-20 Dts, Inc. Encoding and reproduction of three dimensional audio soundtracks
CN102855464A (en) 2011-05-30 2013-01-02 Sony Corporation Information processing apparatus, metadata setting method, and program
WO2013006338A2 (en) 2011-07-01 2013-01-10 Dolby Laboratories Licensing Corporation System and method for adaptive audio signal generation, coding and rendering
CN103218198A (en) 2011-08-12 2013-07-24 Sony Computer Entertainment Inc. Sound localization for user in motion
US20130094667A1 (en) 2011-10-14 2013-04-18 Nicholas A.J. Millington Systems, methods, apparatus, and articles of manufacture to control audio playback devices
CN102945276A (en) 2011-11-09 2013-02-27 Microsoft Corporation Generation and updating of event-based playback experiences
US20140270184A1 (en) 2012-05-31 2014-09-18 Dts, Inc. Audio depth dynamic range enhancement
RS1332U (en) 2013-04-24 2013-08-30 Tomislav Stanojević Total surround sound system with floor loudspeakers

Non-Patent Citations (17)

* Cited by examiner, † Cited by third party
Title
"DTS Headphone:X" Oct. 3, 2013, https://web.archive.org/web/20131004062647/http://www.dts.com/professionals/sound-technologies/headphonex.aspx, 4 pages.
Breebaart, J. et al "Multi-Channel goes Mobile: MPEG Surround Binaural Rendering", AES International Conference, Audio for Mobile and handheld devices, Sep. 2, 2006, pp. 1-13.
Faller, C. et al "Binaural Reproduction of Stereo Signals Using Upmixing and Diffuse Rendering" AES Convention Paper 8541, presented at the 131st Convention, Oct. 20-23, 2011, New York, USA, pp. 1-8.
Harma, A. et al "Techniques and Applications of Wearable Augmented Reality Audio" AES Convention paper 5768, presented at the 114th Convention, Mar. 22-25, 2003, Amsterdam, The Netherlands, pp. 1-20.
Laitinen, M.V. et al "Binaural Reproduction for Directional Audio Coding" IEEE Workshop on Applications of Signal Processing to Audio and Acoustics, Oct. 18, 2009, pp. 337-340.
Merimaa, Juha "Modification of HRTF Filters to Reduce Timbral Effects in Binaural Synthesis" AES Convention Paper 7912, presented at the 127th Convention, Oct. 9-12, 2009, New York, USA, pp. 1-14.
Stanojevic, Tomislav "3-D Sound in Future HDTV Projection Systems," 132nd SMPTE Technical Conference, Jacob K. Javits Convention Center, New York City, New York, Oct. 13-17, 1990, 20 pages.
Stanojevic, Tomislav "Surround Sound for a New Generation of Theaters," Sound and Video Contractor, Dec. 20, 1995, 7 pages.
Stanojevic, Tomislav "Virtual Sound Sources in the Total Surround Sound System," SMPTE Conf. Proc., 1995, pp. 405-421.
Stanojevic, Tomislav et al. "Designing of TSS Halls," 13th International Congress on Acoustics, Yugoslavia, 1989, pp. 326-331.
Stanojevic, Tomislav et al. "Some Technical Possibilities of Using the Total Surround Sound Concept in the Motion Picture Technology," 133rd SMPTE Technical Conference and Equipment Exhibit, Los Angeles Convention Center, Los Angeles, California, Oct. 26-29, 1991, 3 pages.
Stanojevic, Tomislav et al. "The Total Surround Sound (TSS) Processor," SMPTE Journal, Nov. 1994, pp. 734-740.
Stanojevic, Tomislav et al. "The Total Surround Sound System (TSS System)", 86th AES Convention, Hamburg, Germany, Mar. 7-10, 1989, 21 pages.
Stanojevic, Tomislav et al. "TSS Processor" 135th SMPTE Technical Conference, Los Angeles Convention Center, Los Angeles, California, Society of Motion Picture and Television Engineers, Oct. 29-Nov. 2, 1993, 22 pages.
Stanojevic, Tomislav et al. "TSS System and Live Performance Sound" 88th AES Convention, Montreux, Switzerland, Mar. 13-16, 1990, 27 pages.
Thompson, J. et al "Direct-Diffuse Decomposition of Multichannel Signals Using a System of Pairwise Correlations" AES Convention Paper 8807 presented at the 133rd Convention, Oct. 26-29, 2012, San Francisco, CA, USA; pp. 1-15.
Vaananen, R. et al "Advanced AudioBIFS: Virtual Acoustics modeling in MPEG-4 Scene Description" IEEE Transactions on Multimedia, vol. 6, issue 5, pp. 661-675, Oct. 2004.

Also Published As

Publication number Publication date
CN108712711A (en) 2018-10-26
US20200065055A1 (en) 2020-02-27
US20190205085A1 (en) 2019-07-04
US20180210695A1 (en) 2018-07-26
CN108712711B (en) 2021-06-15
CN105684467B (en) 2018-09-11
EP3672285B1 (en) 2024-07-17
ES2755349T3 (en) 2020-04-22
EP3063955A1 (en) 2016-09-07
US20220269471A1 (en) 2022-08-25
CN113630711A (en) 2021-11-09
US20160266865A1 (en) 2016-09-15
US9933989B2 (en) 2018-04-03
US10503461B2 (en) 2019-12-10
CN109068263B (en) 2021-08-24
CN105684467A (en) 2016-06-15
CN109040946A (en) 2018-12-18
WO2015066062A1 (en) 2015-05-07
CN109068263A (en) 2018-12-21
CN117376809A (en) 2024-01-09
US20230385013A1 (en) 2023-11-30
US10838684B2 (en) 2020-11-17
CN113630711B (en) 2023-12-01
CN109040946B (en) 2021-09-14
US11269586B2 (en) 2022-03-08
EP4421617A2 (en) 2024-08-28
US20210132894A1 (en) 2021-05-06
EP4421618A2 (en) 2024-08-28
EP3672285A1 (en) 2020-06-24
US10255027B2 (en) 2019-04-09
EP3063955B1 (en) 2019-10-16
US11681490B2 (en) 2023-06-20

Similar Documents

Publication Publication Date Title
US12061835B2 (en) Binaural rendering for headphones using metadata processing
US10341799B2 (en) Impedance matching filters and equalization for headphone surround rendering
EP3114859A1 (en) Structural modeling of the head related impulse response
EP2805326A1 (en) Spatial audio rendering and encoding
US20160044432A1 (en) Audio signal processing apparatus
US20240056760A1 (en) Binaural signal post-processing
CA3142575A1 (en) Stereo headphone psychoacoustic sound localization system and method for reconstructing stereo psychoacoustic sound signals using same

Legal Events

Date Code Title Description
FEPP Fee payment procedure

Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY

AS Assignment

Owner name: DOLBY LABORATORIES LICENSING CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:TSINGOS, NICOLAS R.;BHARITKAR, SUNIL;WILSON, RHONDA;AND OTHERS;SIGNING DATES FROM 20141105 TO 20150331;REEL/FRAME:065429/0830

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: NOTICE OF ALLOWANCE MAILED -- APPLICATION RECEIVED IN OFFICE OF PUBLICATIONS

STPP Information on status: patent application and granting procedure in general

Free format text: PUBLICATIONS -- ISSUE FEE PAYMENT VERIFIED

STCF Information on status: patent grant

Free format text: PATENTED CASE