EP3249646A1 - Measurement, verification and correction of time alignment of multiple audio channels and associated metadata - Google Patents

Measurement, verification and correction of time alignment of multiple audio channels and associated metadata

Info

Publication number
EP3249646A1
Authority
EP
European Patent Office
Prior art keywords
audio
block
channels
value
audio data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP17172852.0A
Other languages
German (de)
English (en)
Other versions
EP3249646B1 (fr)
Inventor
Kent Bennet TERRY
Scott Gregory NORCROSS
Jeffrey Riedmiller
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dolby Laboratories Licensing Corp
Original Assignee
Dolby Laboratories Licensing Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dolby Laboratories Licensing Corp filed Critical Dolby Laboratories Licensing Corp
Publication of EP3249646A1
Application granted
Publication of EP3249646B1
Legal status: Active (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L21/00 Speech or voice signal processing techniques to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L21/0316 Speech enhancement, e.g. noise reduction or echo cancellation by changing the amplitude
    • G10L21/0324 Details of processing therefor
    • G10L21/034 Automatic adjustment

Definitions

  • This disclosure relates to audio data processing.
  • this disclosure relates to the synchronization of audio data.
  • audio data time alignment issues (which are also referred to herein as synchronization issues) may become more complex and challenging. Such audio data time alignment issues may be particularly challenging in the context of transmitting and receiving data between media processing nodes of a broadcast network. Improved methods and devices would be desirable.
  • a method of processing audio data may involve receiving a block of audio data and receiving metadata associated with the block of audio data.
  • the block may include N pulse code modulated (PCM) audio channels.
  • the block may include audio samples for each of the N channels.
  • the method may involve receiving a first set of values corresponding to reference audio samples.
  • the method may involve determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
  • the metadata may include position data.
  • the first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • the value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • the subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • the first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • the first set of values may include a first block metric for at least one channel.
  • the first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • Such methods may involve determining a second block metric for at least one channel of the block of audio data.
  • the second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric.
  • the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • the above-described methods may be performed at a measurement point. Some such implementations may involve determining, at a reference point and during a reference time before the block of audio data was received, the first set of values corresponding to the reference audio samples.
  • the reference time may be a time during which the metadata was synchronized with reference audio data.
  • Some such implementations may involve associating the first set of values with the metadata and transmitting the first set of values, at least one block of the reference audio data and the metadata from the reference point to the measurement point.
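  • As an illustration of these reference-point operations, the following Python sketch bundles a first set of values with a block of reference audio and its metadata for transmission to a measurement point. The function name, the dictionary-style message and the choice of the first sample per channel as the reference values are assumptions for illustration, not the claimed implementation.

        import numpy as np

        def build_reference_message(audio_block, metadata):
            # audio_block: shape (N_channels, k_samples) of PCM audio that is
            # synchronized with `metadata` at the reference point.
            # The "first set of values" here is the first sample of each channel;
            # MSB subsets, audio metrics or block metrics are other possibilities
            # described in this disclosure.
            first_set_of_values = audio_block[:, 0].tolist()
            return {
                "audio_block": audio_block,      # at least one block of reference audio
                "metadata": metadata,            # e.g., position data for audio objects
                "reference_values": first_set_of_values,
            }

        # Hypothetical usage: a 2-channel, 4-sample block with made-up metadata.
        block = np.array([[10, -3, 7, 2], [5, 5, -8, 1]])
        message = build_reference_message(block, {"position": (0.0, 1.0, 0.0)})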
  • Non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc.
  • the software may include instructions for controlling one or more devices for receiving a block of audio data and receiving metadata associated with the block of audio data.
  • the block may include N pulse code modulated (PCM) audio channels.
  • the block may include audio samples for each of the N channels.
  • the software may include instructions for receiving a first set of values corresponding to reference audio samples.
  • the software may include instructions for determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
  • the metadata may include position data.
  • the first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • the value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • the subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • the first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • the first set of values may include a first block metric for at least one channel.
  • the first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • the software may include instructions for determining a second block metric for at least one channel of the block of audio data.
  • the second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric.
  • the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • the control system may include at least one of a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
  • the interface system may include a network interface.
  • the apparatus may include a memory system.
  • the interface system may include an interface between the control system and at least a portion of (e.g., at least one memory device of) the memory system.
  • the control system may be capable of receiving, via the interface system, a block of audio data and metadata associated with the block of audio data.
  • the block may include N pulse code modulated (PCM) audio channels.
  • the block may include audio samples for each of the N channels.
  • the control system may be capable of receiving, via the interface system, a first set of values corresponding to reference audio samples.
  • the control system may be capable of determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
  • the metadata may include position data.
  • the first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • the value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • the subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • the first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • the first set of values may include a first block metric for at least one channel.
  • the first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • the control system may be capable of determining a second block metric for at least one channel of the block of audio data.
  • the second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric.
  • the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • audio object may refer to a stream of audio data signals and associated metadata.
  • the metadata may indicate one or more of the position of the audio object, the apparent size of the audio object, rendering constraints as well as content type (e.g. dialog, effects), etc.
  • the metadata may include other types of data, such as gain data, trajectory data, etc.
  • Some audio objects may be static, whereas others may move.
  • Audio object details may be authored or rendered according to the associated metadata which, among other things, may indicate the position of the audio object in a two-dimensional space or a three-dimensional space at a given point in time.
  • the audio objects may be rendered according to their position metadata and possibly other metadata, such as size metadata, according to the reproduction speaker layout of the reproduction environment.
  • audio data that includes associated metadata may be in the form of pulse code modulated (PCM) audio data.
  • in PCM audio data, the amplitude of an analog audio signal is sampled regularly at uniform intervals. Each sample may be quantized to the nearest value within a range of digital steps.
  • Linear pulse-code modulation (LPCM) is a specific type of PCM in which the quantization levels are linearly uniform. With other types of PCM audio data, quantization levels may vary as a function of amplitude.
  • the dynamic range of an analog signal may be modified for digitizing to produce PCM audio data. Examples include PCM audio data produced according to the G.711 standard of the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T), such as PCM audio data produced according to the A-law algorithm or the μ-law algorithm.
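  • To make the sampling and quantization described above concrete, here is a minimal sketch of producing linearly quantized (LPCM) samples from a floating-point signal. The function name, bit depth and tone parameters are illustrative assumptions, not part of the patent.

        import numpy as np

        def quantize_lpcm(x, bits=16):
            # Linear PCM: map samples in [-1.0, 1.0] to the nearest value on a
            # uniform grid of 2**bits signed integer steps.
            full_scale = 2 ** (bits - 1)
            x = np.clip(np.asarray(x, dtype=float), -1.0, 1.0)
            return np.clip(np.round(x * full_scale), -full_scale, full_scale - 1).astype(np.int32)

        # Hypothetical example: one short block of a 1 kHz tone sampled at 48 kHz.
        t = np.arange(48) / 48000.0
        pcm_block = quantize_lpcm(0.5 * np.sin(2.0 * np.pi * 1000.0 * t))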
  • the audio data may be segmented into blocks of PCM audio data, including audio samples for each of the blocks.
  • Some use cases contemplated by the inventors may involve transmitting and receiving multiple channels of PCM audio data in professional production workflows, for example, between media processing nodes.
  • Such media processing nodes may, in some implementations, be part of a broadcast network.
  • Such audio data may be encoded in any form during transmission, but for the purpose of describing many of the methods disclosed herein, it will be assumed that the audio data is represented in PCM form.
  • an "audio program” is considered to be a set of one or more audio signals that are intended to be reproduced simultaneously as part of a single presentation.
  • Time alignment of audio channels that are part of an audio program is known to be important in the production and presentation of the audio program.
  • an audio program may include metadata that is associated with the audio signals, including metadata that may affect the reproduction of the audio signals. For at least some types of metadata, time alignment is likewise known to be important in the production and presentation of the audio program.
  • if an audio program includes a segment during which a bird is intended to be flying overhead, it would thwart the intention of the content creator and would be disconcerting to the listener(s) if the reproduced sounds instead indicated that a lawnmower were flying overhead.
  • This disclosure describes methods for measuring, verifying, and correcting time alignment of multiple audio channels and metadata that are part of an audio program.
  • Figure 1 shows an example of audio channels and associated metadata.
  • the audio data includes N channels of PCM audio data, which may be any type of PCM audio data disclosed herein or otherwise known by those of ordinary skill in the art.
  • the audio data is segmented into blocks, each of which includes k samples. The block boundaries are indicated by vertical lines in Figure 1 .
  • M represents a particular block index.
  • metadata associated with the N channels of audio data is grouped together and likewise segmented into blocks, such that each block of metadata is associated with a corresponding block of k audio samples.
  • the metadata may apply to audio data outside the range of a given block.
  • the metadata is sent on a block basis and, for the purposes of this discussion, the metadata will be described as "associated" with the block of audio data with which it is transmitted.
  • at a reference point (also referred to herein as a reference node), the metadata may be synchronized with the audio data.
  • Samples of the synchronized audio data may sometimes be referred to herein as "reference audio samples.”
  • the audio channels and metadata may be transmitted in some manner between nodes of a network.
  • the time alignment between the audio channels and metadata and/or the time alignment between the audio channels themselves may be altered.
  • data corresponding with the time alignment at the reference point may be determined and may be transmitted with the audio channels and metadata.
  • the data corresponding with the time alignment at the reference point may be based, at least in part, on reference audio samples. Accordingly, the data corresponding with the time alignment at the reference point may sometimes be referred to herein as "values corresponding to reference audio samples.”
  • various examples of determining values corresponding to reference audio samples are disclosed herein.
  • at a measurement point (also referred to herein as a measurement node), the audio data, metadata and the values corresponding to reference audio samples may be received. Such data may sometimes be received directly from a reference node. In some examples, there may be multiple nodes between the reference node and the measurement node. At the measurement node, the time alignment may be measured, verified, and/or corrected if required. In some implementations, the measurement node may determine whether audio data is synchronized with corresponding metadata based, at least in part, on received values corresponding to reference audio samples. Various examples of using the values corresponding to reference audio samples at a measurement node are disclosed herein.
  • FIG. 2 is a block diagram that provides examples of components of an apparatus capable of implementing various methods described herein.
  • the apparatus 200 may, for example, be (or may be a portion of) an audio data processing system.
  • the apparatus 200 may be an instance of, or a portion of, a media processing node.
  • the media processing node may, in some examples, be a node of a broadcast network.
  • the apparatus 200 may be a server.
  • the apparatus 200 may be implemented in a component of a device, such as a line card of a server. Accordingly, in some implementations the apparatus 200 may be capable of performing the functions of a measurement node as disclosed herein. In some examples, the apparatus 200 may be capable of performing the functions of a reference node as disclosed herein.
  • the apparatus 200 includes an interface system 205 and a control system 210.
  • the control system 210 may be capable of implementing, at least in part, the methods disclosed herein.
  • the control system 210 may, for example, include a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, and/or discrete hardware components.
  • the apparatus 200 includes a memory system 215.
  • the memory system 215 may include one or more suitable types of non-transitory storage media, such as flash memory, a hard drive, etc.
  • the interface system 205 may include a network interface, an interface between the control system and the memory system and/or an external device interface (such as a universal serial bus (USB) interface).
  • the control system 210 may include at least some memory, which may be regarded as a portion of the memory system.
  • the memory system 215 may be capable of providing at least some control system functionality.
  • the control system 210 is capable of receiving audio data and other information via the interface system 205.
  • the control system 210 may include (or may implement) an audio processing apparatus such as those described herein.
  • the control system 210 may be capable of performing at least some of the methods described herein according to software, which may be stored on one or more non-transitory media.
  • the non-transitory media may include memory associated with the control system 210, such as random access memory (RAM) and/or read-only memory (ROM).
  • the non-transitory media may include memory of the memory system 215.
  • the control system 210 may be capable of sending and receiving data, including but not limited to software program code, via the interface system 205.
  • the control system 210 may be capable of requesting a software program from another device, such as a server, that is accessible on a network via the interface system 205.
  • the received software program may be executed by the control system 210 as it is received, and/or stored in a storage device for later execution.
  • the control system 210 may be implemented in more than one device.
  • some of the functionality described herein may be provided in a first device, such as a media processing node, and other functionality may be provided by a second device, such as a server, in response to a request from the first device.
  • Figure 3 is a flow diagram that shows example blocks of a method according to some disclosed implementations.
  • the blocks of method 300 provide an example of measurement node functionality. However, some associated methods disclosed herein may be performed at a reference point.
  • the blocks of Figure 3 (and those of other flow diagrams provided herein) may, for example, be performed by the control system 210 of Figure 2 or by a similar apparatus. Accordingly, some blocks of Figure 3 are described below with reference to one or more elements of Figure 2 .
  • the method outlined in Figure 3 may include more or fewer blocks than indicated.
  • the blocks of methods disclosed herein are not necessarily performed in the order indicated.
  • block 305 involves receiving a block of audio data.
  • the block includes N PCM audio channels, including audio samples for each of the N channels.
  • block 310 involves receiving metadata associated with the block of audio data that is received in block 305.
  • the metadata may in some examples be associated with one or more other blocks of audio data.
  • the metadata received in block 310 may, for example, indicate the position of an audio object, the apparent size of an audio object, rendering constraints, content type (e.g. dialog, effects), etc.
  • the metadata may include other types of data, such as gain data, trajectory data, etc.
  • block 315 involves receiving a first set of values corresponding to reference audio samples.
  • the first set of values corresponding to reference audio samples was obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • the first set of values corresponding to reference audio samples may have been determined at a reference point.
  • the first set of values corresponding to reference audio samples may have been determined during a reference time before the block of audio data was received in block 305.
  • the reference time may have been a time during which the metadata was synchronized with reference audio data.
  • the reference point may have been capable of associating the first set of values with the metadata that was received in block 310.
  • the reference point may have been capable of transmitting the first set of values, at least one block of the reference audio data and the metadata from the reference point to the measurement point.
  • the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • the value corresponding to the at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • the subset may include the B most significant bits of the at least one sample.
  • block 315 may involve receiving a value corresponding to at least one sample from each of the N channels. Blocks 305, 310 and 315 may, in some examples, involve receiving the audio data, metadata and first set of values corresponding to reference audio samples via an interface system, such as the interface system 205 of Figure 2 .
  • block 320 involves determining a second set of values corresponding to audio samples from the block of audio data.
  • determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels.
  • determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels.
  • the first set of values and the second set of values are determined in the same manner or substantially the same manner.
  • determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining a value corresponding to the same sample number and/or determining the same type of "audio metric.”
  • examples of audio metrics are provided below.
  • block 325 involves making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples.
  • block 330 involves determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
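  • A minimal Python sketch of blocks 305-330 follows. The function names and the choice of the first sample of each channel as the compared value are illustrative assumptions; any of the value types described herein (MSB subsets, audio metric locations, block metrics) could be substituted.

        import numpy as np

        def second_set_of_values(audio_block):
            # audio_block: shape (N_channels, k_samples) of PCM samples.
            # Derive the values in the same manner as the reference point did,
            # here simply the first sample of each channel (block 320).
            return np.asarray(audio_block)[:, 0]

        def is_synchronized(audio_block, metadata, first_set_of_values):
            # Blocks 325 and 330: compare the derived values with the received
            # reference values and decide whether the block and metadata are
            # still time aligned. `metadata` travels with the block, but the
            # comparison itself uses only the audio-derived values.
            measured = second_set_of_values(audio_block)
            return bool(np.array_equal(measured, np.asarray(first_set_of_values)))

        # Hypothetical usage with a 2-channel, 8-sample block that has not drifted.
        rng = np.random.default_rng(0)
        block = rng.integers(-2**15, 2**15, size=(2, 8))
        reference_values = block[:, 0].copy()
        print(is_synchronized(block, {"position": (0.0, 1.0, 0.0)}, reference_values))  # True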
  • Figure 4 provides examples of methods that may be performed at a reference point and at a measurement point. Some aspects of these methods are examples of method 300 of Figure 3 . Accordingly, in some portions of the following discussion of Figure 4 , the corresponding blocks of Figure 3 will be referenced.
  • in the example shown in Figure 4, the first sample value of every block is recorded for each audio channel at the reference point.
  • for block M, the corresponding sample values are stored as the set M1.
  • the set of sample values M1 is an example of the "first set of values corresponding to reference audio samples" referred to elsewhere herein. Accordingly, the set of sample values M1 may be transmitted with the audio data of block M and with the associated metadata.
  • the second sample of every block, the third sample of every block or some other sample of every block may be used to determine the first set of values corresponding to reference audio samples.
  • more than one sample per channel may be used to determine the first set of values corresponding to reference audio samples.
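  • The following sketch shows one way the set M1 described above might be computed for every block at the reference point; the array layout and function name are assumptions for illustration.

        import numpy as np

        def first_sample_sets(channels, k):
            # channels: shape (N, total_samples) of synchronized PCM audio.
            # Segment each channel into blocks of k samples and record the first
            # sample of every block for every channel. Column M of the result is
            # the set M1 for block M.
            n, total = channels.shape
            n_blocks = total // k
            blocks = channels[:, :n_blocks * k].reshape(n, n_blocks, k)
            return blocks[:, :, 0]

        # Hypothetical example: 3 channels segmented into blocks of k = 4 samples.
        audio = np.arange(36).reshape(3, 12)
        m1_per_block = first_sample_sets(audio, k=4)   # shape (3, 3)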
  • the value corresponding to a sample may or may not correspond to all of the bits of the sample, depending on the particular implementation. Some implementations may involve determining a value corresponding to only a subset of the total number of bits of a sample. Some such implementations may involve determining a value corresponding to only some number B of most significant bits (MSBs) of a sample, wherein B is one or more bits. Such implementations are potentially advantageous because they may reduce the number of bits required for transmission of the first set of values corresponding to reference audio samples.
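  • As a sketch of the MSB reduction described above (illustrative only; the word length, function name and example values are assumptions), the B most significant bits of a signed 16-bit sample could be extracted as follows.

        def msb_subset(sample, b, sample_bits=16):
            # Keep only the b most significant bits of a signed PCM sample,
            # viewed as a two's-complement word of sample_bits bits. This reduces
            # the number of bits needed to transmit each reference value.
            unsigned = sample & ((1 << sample_bits) - 1)
            return unsigned >> (sample_bits - b)

        # Hypothetical example: the 4 MSBs of the 16-bit sample -20000.
        print(msb_subset(-20000, b=4))   # 11, i.e. binary 1011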
  • the number of bits required for transmission of the first set of values corresponding to reference audio samples may be reduced by sending reference audio samples for only a subset of audio channels.
  • Such examples may also provide the potential advantage of simplifying the operations performed at a measurement point, e.g., the operations corresponding with blocks 320, 325 and 330 of Figure 3 .
  • the values corresponding to reference audio samples are not necessarily determined for each one of the N channels. However, some implementations involve determining a value corresponding to at least one sample from at least one of the N channels to determine the first set of values corresponding to reference audio samples.
  • two or more audio channels may be treated as a group in transmission and known to be synchronized to one another at the measurement point. In this case sending the first audio sample for a single audio channel of the group is sufficient information for synchronizing all channels in the group.
  • the set of reference samples may be losslessly compressed prior to transmission by an appropriate method and decompressed after receipt by another device.
  • at the measurement point, a second set of values corresponding to audio samples from a received block of audio data is determined in this example.
  • the second set of values is determined according to the first audio sample value of each block and each audio channel.
  • M'1 represents an example of the second set of values.
  • other implementations may involve determining the second set of values in a different manner, e.g., as described above with reference to determining the first set of values at the reference point.
  • the process for determining the second set of values at the measurement point should generally be the same, or substantially the same, as the process for determining the first set of values at the reference point.
  • determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining a value corresponding to the same sample number, etc.
  • if the first set of values does not equal the second set of values (in this example, if M1 ≠ M'1), a further analysis may be undertaken in an attempt to determine the time offset of the audio channels.
  • in some cases, all audio channels may be offset equally.
  • in that case, a search for a set of samples that corresponds to the reference point samples should be sufficient for identifying the offset. For example, if all audio channels have been delayed by 10 samples, then the set of sample values based on the 11th audio sample at the measurement point should equal the reference set of sample values (in other words, M1 should equal M'11).
  • each audio channel may have a different offset.
  • each channel would need to be searched independently to find a sample value that matches a value in the first set of values.
  • the offset for a particular channel could be determined according to the offset between the sample number in that channel and the sample number corresponding to the matching value in the first set of values.
  • the above-described search methods may be appropriate if the audio channels contain non-stationary audio signals of sufficient level in order to uniquely identify matching samples.
  • a static periodic audio signal (for example, a test tone) may not allow matching samples to be uniquely identified.
  • these conditions can be identified, however, and used to flag an unreliable signal for offset estimation.
  • an all-zero condition could be flagged by sending a special all-zero code for each audio channel with all zero samples.
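  • The per-channel search described above might look like the following sketch (the function name and return convention are assumptions). It matches a single reference value per channel, which, as noted, is only reliable for non-stationary signals of sufficient level; an all-zero channel is flagged rather than estimated.

        import numpy as np

        def estimate_channel_offset(channel_samples, reference_value):
            # channel_samples: PCM samples of one channel at the measurement
            # point, starting at the expected block boundary. Returns the index
            # of the first sample that matches the reference value (the offset),
            # or None if the channel is all zero or no match is found.
            channel_samples = np.asarray(channel_samples)
            if not channel_samples.any():
                return None                 # all-zero: unreliable, flag instead
            matches = np.flatnonzero(channel_samples == reference_value)
            return int(matches[0]) if matches.size else None

        # Hypothetical example: the channel was delayed by 10 samples, so the
        # reference value (originally the block's first sample) is at index 10.
        original = np.array([7, 3, -5, 9, 2, 8, 1, 4, 6, 12, 11, 5])
        delayed = np.concatenate([np.zeros(10, dtype=int), original])
        print(estimate_channel_offset(delayed, reference_value=7))   # 10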
  • Figure 5 provides alternative examples of methods that may be performed at a reference point and at a measurement point.
  • the methods indicated in Figure 5 may be applicable for instances in which the metadata maintains bit accuracy between the reference point and the measurement point but the audio channels may not. In such instances, searching for bit-exact samples will not identify correctly matching samples in all cases.
  • Some examples involve the identification of one or more sample locations corresponding to what may be referred to herein as an "audio metric.”
  • audio metric locations within a block of audio data may include the location of a peak sample value for the block or a location of a first zero crossing for the block.
  • determining the second set of values may involve determining an audio metric location for at least one of the N channels.
  • determining the first set of values corresponding to reference audio samples at the reference point involves determining an audio metric location for each of the N channels.
  • the audio metric locations correspond to the locations of peak sample values for each of the N channels.
  • the result is the set of sample values Mp shown in Figure 5. The set of sample values Mp is an example of the first set of values corresponding to reference audio samples that may be transmitted with the audio data of block M and with associated metadata.
  • the measurement point performs a corresponding process: here, determining the second set of values corresponding to audio samples from the block of audio data involves determining an audio metric location for each of the N channels.
  • the audio metric locations correspond to the locations of peak sample values for each of the N channels. The result is the set of sample values M'p shown in Figure 5.
  • the second set of values may be compared to the first set of values. If the two sets of values are equal, or approximately equal within a given threshold (in other words, if Mp ≈ M'p), then it can be assumed that the audio channels at the measurement point are in the same time alignment (within a given tolerance) as the audio channels at the reference point.
  • the threshold of allowable deviation is application- and metric-dependent. For a metric of a peak sample location, some applications may consider a deviation of ±1 ms (e.g., 48 samples with 48 kHz sampled PCM audio) reasonable. If not, a search as described above with reference to Figure 4 may be undertaken, in an attempt to locate offsets between the matching sample locations at the reference and measurement points to determine the offset for each audio channel.
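  • A sketch of comparing peak-sample locations within such a tolerance is shown below (the function names and the exact tolerance handling are assumptions).

        import numpy as np

        def peak_location(channel_block):
            # Index of the peak absolute sample value within one block of one channel.
            return int(np.argmax(np.abs(np.asarray(channel_block))))

        def peak_locations_match(reference_locations, measured_locations, tolerance_samples=48):
            # 48 samples corresponds to roughly +/- 1 ms at 48 kHz, as in the
            # example above; the acceptable tolerance is application dependent.
            ref = np.asarray(reference_locations)
            meas = np.asarray(measured_locations)
            return bool(np.all(np.abs(ref - meas) <= tolerance_samples))

        # Hypothetical example: the second channel's peak location drifted by 30 samples.
        print(peak_locations_match([100, 250], [100, 280]))   # True (within 48 samples)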
  • the first set of values corresponding to reference audio samples determined at the reference point and the second set of values corresponding to audio samples determined at the measurement point may include what will be referred to herein as a "block metric" for at least one channel.
  • the first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • determining the second set of values may involve determining a second block metric for at least one channel of a block of audio data received by the measurement point.
  • the second block metric may be based on two or more samples of at least one channel of the audio data.
  • determining the first set of values and determining the second set of values may involve determining first and second block metrics that are based on all audio samples in a block (e.g., the entire set of k samples shown in Figure 5 ).
  • the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values, a frequency-weighted RMS value and/or a loudness metric such as ITU-R BS.1770.
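  • A block metric of the RMS kind could be computed and compared as in the sketch below (the names and the 1% tolerance are illustrative assumptions); a frequency-weighted RMS or a BS.1770-style loudness measure could be substituted.

        import numpy as np

        def rms_block_metric(audio_block):
            # audio_block: shape (N_channels, k_samples). Returns one RMS value
            # per channel, computed over all k samples of the block.
            block = np.asarray(audio_block, dtype=float)
            return np.sqrt(np.mean(block ** 2, axis=1))

        def block_metrics_match(reference_metrics, measured_metrics, rel_tolerance=0.01):
            # The audio need not be bit exact between nodes, so compare the
            # metrics within a relative tolerance rather than exactly.
            return bool(np.allclose(measured_metrics, reference_metrics, rtol=rel_tolerance))

        # Hypothetical example block: 2 channels, 4 samples each.
        block = np.array([[0.0, 1.0, 0.0, -1.0], [0.5, 0.5, 0.5, 0.5]])
        print(rms_block_metric(block))   # approximately [0.707, 0.5]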
  • the offsets determined for a single block of any given audio channel may not be entirely reliable
  • performing the method for each block of a continuous series of blocks may substantially increase the reliability of the methods.
  • Evaluating more than one type of value corresponding to audio samples can also increase reliability. For example, evaluating both a block metric and the locations of audio metrics may increase the reliability of the method described with reference to Figure 5 .
  • the block metric can be derived at the measurement point (which may require audio samples from blocks before or after the block being analyzed, depending on the offset) and compared to the block metric from the reference point.
  • synchronization may be measured for every audio block.
  • synchronization may be measured only for certain audio blocks, in order to reduce computational workload or the amount of data transmitted.
  • the first set of values corresponding to reference audio samples may only be sent every few blocks (e.g. every 10th block) from the reference point.
  • the synchronization may be checked every few blocks even if the information is sent from the reference point for every block.
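  • A trivial sketch of such an every-Nth-block policy follows (the interval of 10 is only the example value given above, and the function name is an assumption).

        def should_check_sync(block_index, check_interval=10):
            # Verify synchronization only for every check_interval-th block
            # (e.g., every 10th block) to reduce computational workload or the
            # amount of transmitted reference data.
            return block_index % check_interval == 0

        print([m for m in range(25) if should_check_sync(m)])   # [0, 10, 20]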

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Mathematical Physics (AREA)
  • Quality & Reliability (AREA)
  • Stereophonic System (AREA)
EP17172852.0A 2016-05-24 2017-05-24 Measurement and verification of time alignment of multiple audio channels and associated metadata Active EP3249646B1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16170954 2016-05-24
US201662341474P 2016-05-25 2016-05-25

Publications (2)

Publication Number Publication Date
EP3249646A1 (fr) 2017-11-29
EP3249646B1 (fr) 2019-04-17

Family

ID=56081282

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17172852.0A Active EP3249646B1 (fr) Measurement and verification of time alignment of multiple audio channels and associated metadata

Country Status (1)

Country Link
EP (1) EP3249646B1 (fr)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080013614A1 (en) * 2005-03-30 2008-01-17 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Device and method for generating a data stream and for generating a multi-channel representation
US20080114477A1 (en) * 2006-11-09 2008-05-15 David Wu Method and system for asynchronous pipeline architecture for multiple independent dual/stereo channel pcm processing
US20140156288A1 (en) * 2008-02-14 2014-06-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for synchronizing multichannel extension data with an audio signal and for processing the audio signal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080013614A1 (en) * 2005-03-30 2008-01-17 Fraunhofer-Gesellschaft Zur Forderung Der Angewandten Forschung E.V. Device and method for generating a data stream and for generating a multi-channel representation
US20080114477A1 (en) * 2006-11-09 2008-05-15 David Wu Method and system for asynchronous pipeline architecture for multiple independent dual/stereo channel pcm processing
US20140156288A1 (en) * 2008-02-14 2014-06-05 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Apparatus and method for synchronizing multichannel extension data with an audio signal and for processing the audio signal

Also Published As

Publication number Publication date
EP3249646B1 (fr) 2019-04-17

Similar Documents

Publication Publication Date Title
JP7215534B2 (ja) Decoding device and method, and program
CN105868397B (zh) Song determination method and apparatus
US9947338B1 (en) Echo latency estimation
BR122017012321A2 (pt) Audio encoder and decoder with program information or substream structure metadata
KR101658316B1 (ko) Synchronized audio playback method and apparatus
EP3048609A1 (fr) Encoding device and method, decoding device and method, and program
US9712934B2 (en) System and method for calibration and reproduction of audio signals based on auditory feedback
US8625027B2 (en) System and method for verification of media content synchronization
KR102614021B1 (ko) Audio content recognition method and apparatus
CN108091352B (zh) Audio file processing method and apparatus, storage medium, and terminal device
US11151981B2 (en) Audio quality of speech in sound systems
CN107481738B (zh) Real-time audio comparison method and apparatus
JP2014176033A (ja) Communication system, communication method, and program
US10445056B1 (en) System for deliverables versioning in audio mastering
US9401150B1 (en) Systems and methods to detect lost audio frames from a continuous audio signal
AU2019394097A8 (en) Apparatus, method and computer program for encoding, decoding, scene processing and other procedures related to DirAC based spatial audio coding using diffuse compensation
EP3249646B1 (fr) Measurement and verification of time alignment of multiple audio channels and associated metadata
US10015612B2 (en) Measurement, verification and correction of time alignment of multiple audio channels and associated metadata
CN104935975B (zh) Filler clip playback method and apparatus
CN110933483A (zh) EPG-assisted calibration system and method based on intelligent recognition
WO2020114369A1 (fr) Wireless communication delay test method and apparatus, computer device, and storage medium
US9813725B1 (en) System, method, and computer program for encoding and decoding a unique signature in a video file
JP2015046758A (ja) Information processing device, information processing method, and program
CN112086106A (zh) Test scenario alignment method, apparatus, medium, and device
US11330370B2 (en) Loudness control methods and devices

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180529

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20181105

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602017003296

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1122432

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190515

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20190417

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190717

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190817

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190717

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190718

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1122432

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190817

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602017003296

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20190531

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190524

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

26N No opposition filed

Effective date: 20200120

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190524

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190531

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200531

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO

Effective date: 20170524

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20190417

P01 Opt-out of the competence of the unified patent court (upc) registered

Effective date: 20230513

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20230420

Year of fee payment: 7

Ref country code: DE

Payment date: 20230419

Year of fee payment: 7

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20230420

Year of fee payment: 7