EP3249646B1 - Measurement and verification of time alignment of multiple audio channels and associated metadata - Google Patents


Info

Publication number
EP3249646B1
EP3249646B1 (application EP17172852.0A)
Authority
EP
European Patent Office
Prior art keywords
audio
block
value
channels
audio samples
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
EP17172852.0A
Other languages
German (de)
French (fr)
Other versions
EP3249646A1 (en)
Inventor
Kent Bennet TERRY
Scott Gregory NORCROSS
Jeffrey Riedmiller
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dolby Laboratories Licensing Corp
Original Assignee
Dolby Laboratories Licensing Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dolby Laboratories Licensing Corp filed Critical Dolby Laboratories Licensing Corp
Publication of EP3249646A1 publication Critical patent/EP3249646A1/en
Application granted granted Critical
Publication of EP3249646B1 publication Critical patent/EP3249646B1/en

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 19/00 Speech or audio signals analysis-synthesis techniques for redundancy reduction, e.g. in vocoders; Coding or decoding of speech or audio signals, using source filter models or psychoacoustic analysis
    • G10L 19/008 Multichannel audio signal coding or decoding using interchannel correlation to reduce redundancy, e.g. joint-stereo, intensity-coding or matrixing
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS OR SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING; SPEECH OR AUDIO CODING OR DECODING
    • G10L 21/00 Processing of the speech or voice signal to produce another audible or non-audible signal, e.g. visual or tactile, in order to modify its quality or its intelligibility
    • G10L 21/02 Speech enhancement, e.g. noise reduction or echo cancellation
    • G10L 21/0316 Speech enhancement, e.g. noise reduction or echo cancellation, by changing the amplitude
    • G10L 21/0324 Details of processing therefor
    • G10L 21/034 Automatic adjustment

Definitions

  • This disclosure relates to audio data processing.
  • In particular, this disclosure relates to the synchronization of audio data.
  • Audio data time alignment issues (which are also referred to herein as synchronization issues) may become more complex and challenging. Such audio data time alignment issues may be particularly challenging in the context of transmitting and receiving data between media processing nodes of a broadcast network. Improved methods and devices would be desirable.
  • Document US 2008/0013614 A1 discloses an audio data processor for providing time synchronization of a data stream with multi-channel additional data and a data stream with data on at least one base channel.
  • A method of processing audio data may involve receiving a block of audio data and receiving metadata associated with the block of audio data.
  • The block may include N pulse code modulated (PCM) audio channels.
  • The block may include audio samples for each of the N channels.
  • The method may involve receiving a first set of values corresponding to reference audio samples.
  • The method may involve determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
  • The metadata may include position data.
  • The first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • The first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • The value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • The subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • The first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • Determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
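The two example audio metrics named above (the location of the peak sample value in a block, and the location of the first zero crossing) can be sketched as follows. This is an illustrative Python sketch, not the patented implementation; the function names and the sign convention at zero are assumptions.

```python
def peak_location(samples):
    """Index of the maximum-magnitude sample in a block (one possible audio metric)."""
    return max(range(len(samples)), key=lambda i: abs(samples[i]))

def first_zero_crossing(samples):
    """Index of the first sign change in a block, or None if there is none.

    Here a sample of exactly 0 is treated as non-negative; other conventions exist.
    """
    for i in range(1, len(samples)):
        if (samples[i - 1] < 0) != (samples[i] < 0):
            return i
    return None

block = [3, 5, -2, -7, 4]
print(peak_location(block))        # 3 (the sample -7 has the largest magnitude)
print(first_zero_crossing(block))  # 2 (sign changes between samples 5 and -2)
```

Because both metrics are locations within a block, they can be compared between a reference block and a received block without transmitting the samples themselves.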
  • The first set of values may include a first block metric for at least one channel.
  • The first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • Such methods may involve determining a second block metric for at least one channel of the block of audio data.
  • The second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric.
  • The first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
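As a minimal sketch of the block-metric variant, an RMS value could be computed per channel at the reference point, transmitted with the metadata, and recomputed at the measurement point for comparison. The function name and the use of a floating-point tolerance are assumptions for illustration.

```python
import math

def rms_block_metric(samples):
    """Root mean square of the sample values in a block (one possible block metric)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A reference channel block and a received channel block with identical content.
reference_block = [0.5, -0.5, 0.5, -0.5]
received_block = [0.5, -0.5, 0.5, -0.5]

# Matching metrics (within a tolerance) are consistent with the block being synchronized.
print(math.isclose(rms_block_metric(reference_block),
                   rms_block_metric(received_block)))  # True
```

A frequency-weighted RMS or a loudness metric would follow the same pattern, with a filtering step applied to the samples before the sum of squares.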
  • the above-described methods may be performed at a measurement point. Some such implementations may involve determining, at a reference point and during a reference time before the block of audio data was received, the first set of values corresponding to the reference audio samples.
  • the reference time may be a time during which the metadata was synchronized with reference audio data.
  • Some such implementations may involve associating the first set of values with the metadata and transmitting the first set of values, at least one block of the reference audio data and the metadata from the reference point to the measurement point.
  • Non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc.
  • The software may include instructions for controlling one or more devices for receiving a block of audio data and receiving metadata associated with the block of audio data.
  • The block may include N pulse code modulated (PCM) audio channels.
  • The block may include audio samples for each of the N channels.
  • The software may include instructions for receiving a first set of values corresponding to reference audio samples.
  • The software may include instructions for determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
  • The metadata may include position data.
  • The first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • The first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • The value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • The subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • The first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • Determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • The first set of values may include a first block metric for at least one channel.
  • The first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • The software may include instructions for determining a second block metric for at least one channel of the block of audio data.
  • The second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric.
  • The first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • The control system may include at least one of a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
  • The interface system may include a network interface.
  • The apparatus may include a memory system.
  • The interface system may include an interface between the control system and at least a portion of (e.g., at least one memory device of) the memory system.
  • The control system may be capable of receiving, via the interface system, a block of audio data and metadata associated with the block of audio data.
  • The block may include N pulse code modulated (PCM) audio channels.
  • The block may include audio samples for each of the N channels.
  • The control system may be capable of receiving, via the interface system, a first set of values corresponding to reference audio samples.
  • The control system may be capable of determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
  • The metadata may include position data.
  • The first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • The first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • The value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • The subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • The first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • Determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • The first set of values may include a first block metric for at least one channel.
  • The first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • The control system may be capable of determining a second block metric for at least one channel of the block of audio data.
  • The second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric.
  • The first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • The term "audio object" may refer to a stream of audio data signals and associated metadata.
  • The metadata may indicate one or more of the position of the audio object, the apparent size of the audio object, rendering constraints as well as content type (e.g. dialog, effects), etc.
  • The metadata may include other types of data, such as gain data, trajectory data, etc.
  • Some audio objects may be static, whereas others may move.
  • Audio object details may be authored or rendered according to the associated metadata which, among other things, may indicate the position of the audio object in a two-dimensional space or a three-dimensional space at a given point in time.
  • The audio objects may be rendered according to their position metadata and possibly other metadata, such as size metadata, according to the reproduction speaker layout of the reproduction environment.
  • Audio data that includes associated metadata may be in the form of pulse code modulated (PCM) audio data.
  • In PCM audio data, the amplitude of an analog audio signal is sampled regularly at uniform intervals. Each sample may be quantized to the nearest value within a range of digital steps.
  • Linear pulse-code modulation (LPCM) is a specific type of PCM in which the quantization levels are linearly uniform. With other types of PCM audio data, quantization levels may vary as a function of amplitude.
  • The dynamic range of an analog signal may be modified for digitizing to produce PCM audio data. Examples include PCM audio data produced according to the G.711 standard of the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T), such as PCM audio data produced according to the A-law algorithm or the μ-law algorithm.
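For illustration, the continuous μ-law companding curve, which the G.711 μ-law codec approximates piecewise-linearly before 8-bit quantization, can be sketched as follows. The function names are assumptions; this is the textbook curve, not the exact G.711 encoding tables.

```python
import math

MU = 255  # companding parameter of the G.711 mu-law variant

def mu_law_compress(x):
    """Continuous mu-law companding of a sample x in [-1.0, 1.0].

    The actual G.711 codec uses a piecewise-linear approximation of this
    curve followed by 8-bit quantization.
    """
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse of mu_law_compress (expansion back to linear amplitude)."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)

x = 0.25
y = mu_law_compress(x)
print(round(mu_law_expand(y), 6))  # 0.25 (the curve round-trips, up to float error)
```

The companding step compresses large amplitudes and expands small ones, so quantization noise is distributed more evenly across the signal's dynamic range.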
  • The audio data may be segmented into blocks of PCM audio data, including audio samples for each of the blocks.
  • Some use cases contemplated by the inventors may involve transmitting and receiving multiple channels of PCM audio data in professional production workflows, for example, between media processing nodes.
  • Such media processing nodes may, in some implementations, be part of a broadcast network.
  • Such audio data may be encoded in any form during transmission, but for the purpose of describing many of the methods disclosed herein, it will be assumed that the audio data is represented in PCM form.
  • An "audio program" is considered to be a set of one or more audio signals that are intended to be reproduced simultaneously as part of a single presentation.
  • Time alignment of audio channels that are part of an audio program is known to be important in the production and presentation of the audio program.
  • An audio program may include metadata that is associated with the audio signals, including metadata that may affect the reproduction of the audio signals. For at least some types of metadata, time alignment is likewise known to be important in the production and presentation of the audio program.
  • If, for example, an audio program includes a segment during which a bird is intended to be flying overhead, it would thwart the intention of the content creator and would be disconcerting to the listener(s) if instead the reproduced sounds indicated that a lawnmower were flying overhead.
  • This disclosure describes methods for measuring, verifying, and correcting time alignment of multiple audio channels and metadata that are part of an audio program.
  • Figure 1 shows an example of audio channels and associated metadata.
  • The audio data includes N channels of PCM audio data, which may be any type of PCM audio data disclosed herein or otherwise known by those of ordinary skill in the art.
  • The audio data is segmented into blocks, each of which includes k samples. The block boundaries are indicated by vertical lines in Figure 1.
  • M represents a particular block index.
  • Metadata associated with the N channels of audio data is grouped together and likewise segmented into blocks, such that each block of metadata is associated with a corresponding block of k audio samples.
  • The metadata may apply to audio data outside the range of a given block.
  • The metadata is sent on a block basis and, for the purposes of this discussion, the metadata will be described as "associated" with the block of audio data with which it is transmitted.
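The Figure 1 arrangement, N channels cut into blocks of k samples with one metadata entry per block, can be sketched as follows. The function and data names are illustrative assumptions, not part of the patent.

```python
def segment_into_blocks(channels, metadata_per_block, k):
    """Pair each block of k samples (across all channels) with its metadata.

    channels: list of N equal-length sample lists.
    metadata_per_block: one metadata entry per block of k samples.
    """
    num_blocks = len(channels[0]) // k
    blocks = []
    for m in range(num_blocks):  # m is the block index
        block = [ch[m * k:(m + 1) * k] for ch in channels]
        blocks.append((block, metadata_per_block[m]))
    return blocks

# Two channels, eight samples each, block size k = 4 -> two blocks.
channels = [[1, 2, 3, 4, 5, 6, 7, 8], [8, 7, 6, 5, 4, 3, 2, 1]]
metadata = [{"position": (0, 0)}, {"position": (1, 0)}]
blocks = segment_into_blocks(channels, metadata, k=4)
print(len(blocks))  # 2
```

Each tuple couples one block of audio with the metadata transmitted alongside it, which is the pairing whose alignment the disclosed methods verify.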
  • At a reference point (also referred to herein as a reference node), the audio channels and metadata are known to be synchronized.
  • Samples of the synchronized audio data may sometimes be referred to herein as "reference audio samples."
  • The audio channels and metadata may be transmitted in some manner between nodes of a network.
  • During transmission, the time alignment between the audio channels and metadata and/or the time alignment between the audio channels themselves may be altered.
  • Data corresponding with the time alignment at the reference point may be determined and may be transmitted with the audio channels and metadata.
  • The data corresponding with the time alignment at the reference point may be based, at least in part, on reference audio samples. Accordingly, the data corresponding with the time alignment at the reference point may sometimes be referred to herein as "values corresponding to reference audio samples."
  • Various examples of determining values corresponding to reference audio samples are disclosed herein.
  • At a measurement node, the audio data, metadata and the values corresponding to reference audio samples may be received. Such data may sometimes be received directly from a reference node. In some examples, there may be multiple nodes between the reference node and the measurement node. At the measurement node, the time alignment may be measured, verified, and/or corrected if required. In some implementations, the measurement node may determine whether audio data is synchronized with corresponding metadata based, at least in part, on received values corresponding to reference audio samples. Various examples of using the values corresponding to reference audio samples at a measurement node are disclosed herein.
  • FIG. 2 is a block diagram that provides examples of components of an apparatus capable of implementing various methods described herein.
  • The apparatus 200 may, for example, be (or may be a portion of) an audio data processing system.
  • The apparatus 200 may be an instance of, or a portion of, a media processing node.
  • The media processing node may, in some examples, be a node of a broadcast network.
  • The apparatus 200 may be a server.
  • The apparatus 200 may be implemented in a component of a device, such as a line card of a server. Accordingly, in some implementations the apparatus 200 may be capable of performing the functions of a measurement node as disclosed herein. In some examples, the apparatus 200 may be capable of performing the functions of a reference node as disclosed herein.
  • The apparatus 200 includes an interface system 205 and a control system 210.
  • The control system 210 may be capable of implementing, at least in part, the methods disclosed herein.
  • The control system 210 may, for example, include a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, and/or discrete hardware components.
  • The apparatus 200 includes a memory system 215.
  • The memory system 215 may include one or more suitable types of non-transitory storage media, such as flash memory, a hard drive, etc.
  • The interface system 205 may include a network interface, an interface between the control system and the memory system and/or an external device interface (such as a universal serial bus (USB) interface).
  • The control system 210 may include at least some memory, which may be regarded as a portion of the memory system.
  • The memory system 215 may be capable of providing at least some control system functionality.
  • The control system 210 is capable of receiving audio data and other information via the interface system 205.
  • The control system 210 may include (or may implement) an audio processing apparatus such as those described herein.
  • The control system 210 may be capable of performing at least some of the methods described herein according to software, which may be stored on one or more non-transitory media.
  • The non-transitory media may include memory associated with the control system 210, such as random access memory (RAM) and/or read-only memory (ROM).
  • The non-transitory media may include memory of the memory system 215.
  • The control system 210 may be capable of sending and receiving data, including but not limited to software program code, via the interface system 205.
  • The control system 210 may be capable of requesting a software program from another device, such as a server, that is accessible on a network via the interface system 205.
  • The received software program may be executed by the control system 210 as it is received, and/or stored in a storage device for later execution.
  • The control system 210 may be implemented in more than one device.
  • Some of the functionality described herein may be provided in a first device, such as a media processing node, and other functionality may be provided by a second device, such as a server, in response to a request from the first device.
  • Figure 3 is a flow diagram that shows example blocks of a method according to some disclosed implementations.
  • The blocks of method 300 provide an example of measurement node functionality. However, some associated methods disclosed herein may be performed at a reference point.
  • The blocks of Figure 3 (and those of other flow diagrams provided herein) may, for example, be performed by the control system 210 of Figure 2 or by a similar apparatus. Accordingly, some blocks of Figure 3 are described below with reference to one or more elements of Figure 2.
  • The method outlined in Figure 3 may include more or fewer blocks than indicated.
  • The blocks of methods disclosed herein are not necessarily performed in the order indicated.
  • Block 305 involves receiving a block of audio data.
  • The block includes N PCM audio channels, including audio samples for each of the N channels.
  • Block 310 involves receiving metadata associated with the block of audio data that is received in block 305.
  • The metadata may in some examples be associated with one or more other blocks of audio data.
  • The metadata received in block 310 may, for example, indicate the position of an audio object, the apparent size of an audio object, rendering constraints, content type (e.g. dialog, effects), etc.
  • The metadata may include other types of data, such as gain data, trajectory data, etc.
  • Block 315 involves receiving a first set of values corresponding to reference audio samples.
  • The first set of values corresponding to reference audio samples was obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • The first set of values corresponding to reference audio samples may have been determined at a reference point.
  • The first set of values corresponding to reference audio samples may have been determined during a reference time before the block of audio data was received in block 305.
  • The reference time may have been a time during which the metadata was synchronized with reference audio data.
  • The reference point may have been capable of associating the first set of values with the metadata that was received in block 310.
  • The reference point may have been capable of transmitting the first set of values, at least one block of the reference audio data and the metadata from the reference point to the measurement point.
  • The first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels.
  • The value corresponding to the at least one sample may correspond to a subset of a total number of bits of the at least one sample.
  • The subset may include the B most significant bits of the at least one sample.
  • Block 315 may involve receiving a value corresponding to at least one sample from each of the N channels. Blocks 305, 310 and 315 may, in some examples, involve receiving the audio data, metadata and first set of values corresponding to reference audio samples via an interface system, such as the interface system 205 of Figure 2.
  • Block 320 involves determining a second set of values corresponding to audio samples from the block of audio data.
  • Determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels.
  • Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels.
  • The first set of values and the second set of values are determined in the same manner or substantially the same manner.
  • Determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining a value corresponding to the same sample number and/or determining the same type of "audio metric."
  • Examples of audio metrics are provided below.
  • Block 325 involves making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples.
  • Block 330 involves determining, based on the comparison, whether the block of audio data is synchronized with the metadata.
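Using the first-sample variant that the Figure 4 discussion works through, blocks 320 through 330 can be sketched as follows. All names and data here are illustrative assumptions, and the comparison is shown for one block only.

```python
def first_sample_values(block):
    """One value per channel: the first sample of the block (one way to form a value set)."""
    return [channel[0] for channel in block]

def is_synchronized(block, reference_values):
    """Blocks 320-330: derive values from the received block and compare them
    with the first set of values received alongside the metadata."""
    return first_sample_values(block) == reference_values

# Blocks 305-315: receive a block of N = 2 channels, its metadata, and the
# reference value set (all illustrative data).
received_block = [[10, 11, 12], [20, 21, 22]]
metadata = {"position": (0.5, 0.5)}
reference_values = [10, 20]
print(is_synchronized(received_block, reference_values))  # True
```

A mismatch at this comparison would trigger the offset analysis described later for Figure 4.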
  • Figure 4 provides examples of methods that may be performed at a reference point and at a measurement point. Some aspects of these methods are examples of method 300 of Figure 3. Accordingly, in some portions of the following discussion of Figure 4, the corresponding blocks of Figure 3 will be referenced.
  • At the reference point in this example, the first sample value of every block is recorded for each audio channel.
  • For block M, the corresponding sample values are stored as the set M1.
  • The set of sample values M1 is an example of the "first set of values corresponding to reference audio samples" referred to elsewhere herein. Accordingly, the set of sample values M1 is an example of the first set of values corresponding to reference audio samples that may be transmitted with the audio data of block M and with associated metadata.
  • Alternatively, the second sample of every block, the third sample of every block or some other sample of every block may be used to determine the first set of values corresponding to reference audio samples.
  • In some implementations, more than one sample per channel may be used to determine the first set of values corresponding to reference audio samples.
  • The value corresponding to a sample may or may not correspond to all of the bits of the sample, depending on the particular implementation. Some implementations may involve determining a value corresponding to only a subset of the total number of bits of a sample. Some such implementations may involve determining a value corresponding to only some number B of most significant bits (MSBs) of a sample, wherein B is one or more bits. Such implementations are potentially advantageous because they may reduce the number of bits required for transmission of the first set of values corresponding to reference audio samples.
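A minimal sketch of keeping only the B most significant bits of a sample follows. It assumes non-negative integer sample values; signed PCM samples would first need a two's-complement mapping, and the function name is an assumption.

```python
def msb_subset(sample, total_bits, b):
    """Keep only the b most significant bits of a non-negative integer sample.

    Transmitting these truncated values instead of full samples reduces the
    number of bits needed for the reference value set.
    """
    return sample >> (total_bits - b)

# A 16-bit sample, keeping its 4 most significant bits.
print(msb_subset(0b1011_0010_1100_0001, 16, 4))  # 11 (0b1011)
```

The trade-off is that truncated values are less distinctive, so a match between truncated reference and measurement values is slightly weaker evidence of alignment than a full-sample match.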
  • The number of bits required for transmission of the first set of values corresponding to reference audio samples may be reduced by sending reference audio samples for only a subset of audio channels.
  • Such examples may also provide the potential advantage of simplifying the operations performed at a measurement point, e.g., the operations corresponding with blocks 320, 325 and 330 of Figure 3.
  • The values corresponding to reference audio samples are not necessarily determined for each one of the N channels. However, some implementations involve determining a value corresponding to at least one sample from at least one of the N channels to determine the first set of values corresponding to reference audio samples.
  • For example, two or more audio channels may be treated as a group in transmission and known to be synchronized to one another at the measurement point. In this case, sending the first audio sample for a single audio channel of the group is sufficient information for synchronizing all channels in the group.
  • The set of reference samples may be losslessly compressed prior to transmission by an appropriate method and decompressed after receipt by another device.
  • a second set of values corresponding to audio samples from a received block of audio data are determined in this example.
  • the second set of values are determined according to the first audio sample value of each block and each audio channel.
  • M' 1 represents an example of the second set of values.
  • other implementations may involve determining the second set of values in a different manner, e.g., as described above with reference to determining the first set of values at the reference point.
  • the process for determining the second set of values at the measurement point should generally be the same as, or substantially the same as, the process for determining the first set of values at the reference point.
  • determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining a value corresponding to the same sample number, etc.
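The common procedure applied at both points may be sketched as follows, using the first-sample-per-channel example of Figure 4. The function and variable names are illustrative and not part of the disclosure:

```python
from typing import List, Sequence

def sample_values(block: Sequence[Sequence[int]], sample_index: int = 0) -> List[int]:
    """Collect the value of one sample position from each channel of a block.

    `block` is a list of N channels, each a list of k PCM sample values.
    Applying the same `sample_index` (and the same bit depth) at the
    reference point and the measurement point yields comparable sets
    (M 1 and M' 1 in the text).
    """
    return [channel[sample_index] for channel in block]

# At the reference point and at the measurement point:
# m1  = sample_values(reference_block)   # first set of values
# m1p = sample_values(received_block)    # second set of values
# synchronized = (m1 == m1p)
```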
  • the first set of values does not equal the second set of values (in this example, if M 1 ≠ M' 1 )
  • a further analysis may be undertaken in an attempt to determine the time offset of the audio channels.
  • all audio channels may be offset equally.
  • a search for a set of samples that corresponds to the reference point samples should be sufficient for identifying the offset. For example, if all audio channels have been delayed by 10 samples, then the set of sample values based on the 11th audio sample at the measurement point should equal the reference set of sample values (in other words, M 1 should equal M' 11 ).
  • each audio channel may have a different offset.
  • each channel would need to be searched independently to find a sample value that matches a value in the first set of values.
  • the offset for a particular channel could be determined according to the offset between the sample number in that channel and the sample number corresponding to the matching value in the first set of values.
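The per-channel search described above may be sketched as follows. A bit-exact match is assumed, as in the Figure 4 example; the function name and the bound on the search range are illustrative assumptions:

```python
from typing import Optional, Sequence

def find_channel_offset(channel: Sequence[int], reference_value: int,
                        max_offset: int = 64) -> Optional[int]:
    """Search one channel for the sample matching its reference value.

    Returns the offset (in samples) of the first matching sample within
    `max_offset`, or None if no match is found within that range.
    """
    for offset, value in enumerate(channel[:max_offset + 1]):
        if value == reference_value:
            return offset
    return None

# If a channel has been delayed by 10 samples, its reference value
# first appears at index 10, so the function returns 10.
```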
  • the above-described search methods may be appropriate if the audio channels contain non-stationary audio signals of sufficient level in order to uniquely identify matching samples.
  • a static periodic audio signal (for example, a test tone) may prevent matching samples from being uniquely identified.
  • these conditions can be identified, however, to flag an unreliable signal for offset estimation.
  • an all-zero condition could be flagged by sending a special all-zero code for each audio channel with all zero samples.
  • Figure 5 provides alternative examples of methods that may be performed at a reference point and at a measurement point.
  • the methods indicated in Figure 5 may be applicable for instances in which the metadata maintains bit accuracy between the reference point and the measurement point but the audio channels may not. In such instances, searching for bit-exact samples will not identify correctly matching samples in all cases.
  • Some examples involve the identification of one or more sample locations corresponding to what may be referred to herein as an "audio metric."
  • audio metric locations within a block of audio data may include the location of a peak sample value for the block or a location of a first zero crossing for the block.
  • determining the second set of values may involve determining an audio metric location for at least one of the N channels.
  • determining the first set of values corresponding to reference audio samples at the reference point involves determining an audio metric location for each of the N channels.
  • the audio metric locations correspond to the locations of peak sample values for each of the N channels.
  • the set of sample values M p is an example of the first set of values corresponding to reference audio samples that may be transmitted with the audio data of block M and with associated metadata.
  • the measurement point performs a corresponding process: here, determining the second set of values corresponding to audio samples from the block of audio data involves determining an audio metric location for each of the N channels.
  • the audio metric locations correspond to the locations of peak sample values for each of the N channels. The result is the set of sample values M' p shown in Figure 5 .
  • the second set of values may be compared to the first set of values. If the two sets of values are equal, or approximately equal within a given threshold (in other words, if M p ≈ M' p ) then it can be assumed that the audio channels at the measurement point are in the same time alignment (within a given tolerance) as the audio channels at the reference point.
  • the threshold of allowable deviation is application- and metric-dependent. For a metric of a peak sample location, some applications may consider a deviation of +/- 1 msec (e.g., 48 samples with 48 kHz sampled PCM audio) reasonable. If the sets of values do not match within the threshold, a search as described above with reference to Figure 4 may be undertaken, in an attempt to locate offsets between the matching sample locations at the reference and measurement points to determine the offset for each audio channel.
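The peak-location audio metric and the tolerance comparison described above may be sketched as follows. The function names are illustrative; the default tolerance of 48 samples corresponds to +/- 1 msec at 48 kHz, as suggested in the text:

```python
from typing import List, Sequence

def peak_locations(block: Sequence[Sequence[int]]) -> List[int]:
    """Locate the largest-magnitude sample in each channel of a block."""
    return [max(range(len(ch)), key=lambda i: abs(ch[i])) for ch in block]

def locations_match(ref: Sequence[int], measured: Sequence[int],
                    tolerance: int = 48) -> bool:
    """Compare audio-metric locations within an application-dependent tolerance."""
    return all(abs(r - m) <= tolerance for r, m in zip(ref, measured))
```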
  • the first set of values corresponding to reference audio samples determined at the reference point and the second set of values corresponding to audio samples determined at the measurement point may include what will be referred to herein as a "block metric" for at least one channel.
  • the first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data.
  • determining the second set of values may involve determining a second block metric for at least one channel of a block of audio data received by the measurement point.
  • the second block metric may be based on two or more samples of at least one channel of the audio data.
  • determining the first set of values and determining the second set of values may involve determining first and second block metrics that are based on all audio samples in a block (e.g., the entire set of k samples shown in Figure 5 ).
  • the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values, a frequency-weighted RMS value and/or a loudness metric such as ITU-R BS.1770.
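One possible block metric, the RMS of the samples in a block, may be sketched as follows. This is an illustrative sketch; a frequency-weighted RMS value or a loudness measure such as ITU-R BS.1770 could be substituted:

```python
import math
from typing import Sequence

def rms_block_metric(channel: Sequence[float]) -> float:
    """Root mean square of the samples in one channel of a block.

    Because the metric is computed over two or more samples, it can be
    compared between the reference point and the measurement point even
    when individual samples are not bit-exact.
    """
    if not channel:
        raise ValueError("empty block")
    return math.sqrt(sum(s * s for s in channel) / len(channel))
```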
  • the offsets determined for a single block of any given audio channel may not be entirely reliable
  • performing the method for each block of a continuous series of blocks may substantially increase the reliability of the methods.
  • Evaluating more than one type of value corresponding to audio samples can also increase reliability. For example, evaluating both a block metric and the locations of audio metrics may increase the reliability of the method described with reference to Figure 5 .
  • the block metric can be derived at the measurement point (which may require audio samples from blocks before or after the block being analyzed, depending on the offset) and compared to the block metric from the reference point.
  • synchronization may be measured for every audio block.
  • synchronization may be measured only for certain audio blocks, in order to reduce computational workload or the amount of data transmitted.
  • the first set of values corresponding to reference audio samples may only be sent every few blocks (e.g., every 10th block) from the reference point.
  • the synchronization may be checked every few blocks even if the information is sent from the reference point for every block.

Description

    TECHNICAL FIELD
  • This disclosure relates to audio data processing. In particular, this disclosure relates to the synchronization of audio data.
  • BACKGROUND
  • As the number of channels increases and the loudspeaker layout transitions from a planar two-dimensional (2D) array to a three-dimensional (3D) array including height speakers, the tasks of authoring and rendering sounds are becoming increasingly complex. In some instances, the increased complexity has involved a commensurate increase in the amount of audio data that needs to be stored and/or streamed. In some examples, audio data time alignment issues (which are also referred to herein as synchronization issues) may become more complex and challenging. Such audio data time alignment issues may be particularly challenging in the context of transmitting and receiving data between media processing nodes of a broadcast network. Improved methods and devices would be desirable.
  • In the prior art, document US 2008/0013614 A1 discloses an audio data processor for providing time synchronization of a data stream with multi-channel additional data and a data stream with data on at least one base channel.
  • SUMMARY
  • The object of the present invention is achieved by the independent claims. Specific embodiments are defined in the dependent claims.
  • As described in detail herein, in some implementations a method of processing audio data may involve receiving a block of audio data and receiving metadata associated with the block of audio data. The block may include N pulse code modulated (PCM) audio channels. The block may include audio samples for each of the N channels. The method may involve receiving a first set of values corresponding to reference audio samples. In some examples, the method may involve determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata. In some examples, the metadata may include position data.
  • The first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data. In some examples, the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels. In some implementations, the value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample. For example, the subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • In some examples, the first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • According to some examples, determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • According to some implementations, the first set of values may include a first block metric for at least one channel. The first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data. Such methods may involve determining a second block metric for at least one channel of the block of audio data. The second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric. In some examples, the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • According to some implementations, the above-described methods may be performed at a measurement point. Some such implementations may involve determining, at a reference point and during a reference time before the block of audio data was received, the first set of values corresponding to the reference audio samples. The reference time may be a time during which the metadata was synchronized with reference audio data. Some such implementations may involve associating the first set of values with the metadata and transmitting the first set of values, at least one block of the reference audio data and the metadata from the reference point to the measurement point.
  • Some or all of the methods described herein may be performed by one or more devices according to instructions (e.g., software) stored on one or more non-transitory media. Such non-transitory media may include memory devices such as those described herein, including but not limited to random access memory (RAM) devices, read-only memory (ROM) devices, etc. For example, the software may include instructions for controlling one or more devices for receiving a block of audio data and receiving metadata associated with the block of audio data. The block may include N pulse code modulated (PCM) audio channels. The block may include audio samples for each of the N channels. The software may include instructions for receiving a first set of values corresponding to reference audio samples. In some examples, the software may include instructions for determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata. In some examples, the metadata may include position data.
  • The first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data. In some examples, the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels. In some implementations, the value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample. For example, the subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • In some examples, the first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • According to some examples, determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • According to some implementations, the first set of values may include a first block metric for at least one channel. The first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data. The software may include instructions for determining a second block metric for at least one channel of the block of audio data. The second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric. In some examples, the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • At least some aspects of this disclosure may be implemented in an apparatus that includes an interface system and a control system. The control system may include at least one of a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components. The interface system may include a network interface. In some implementations, the apparatus may include a memory system. The interface system may include an interface between the control system and at least a portion of (e.g., at least one memory device of) the memory system.
  • The control system may be capable of receiving, via the interface system, a block of audio data and metadata associated with the block of audio data. The block may include N pulse code modulated (PCM) audio channels. The block may include audio samples for each of the N channels. The control system may be capable of receiving, via the interface system, a first set of values corresponding to reference audio samples. In some examples, the control system may be capable of determining a second set of values corresponding to audio samples from the block of audio data, making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples, and determining, based on the comparison, whether the block of audio data is synchronized with the metadata. In some examples, the metadata may include position data.
  • The first set of values corresponding to reference audio samples may have been obtained at a reference time at which the metadata was synchronized with corresponding audio data. In some examples, the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels. In some implementations, the value corresponding to at least one sample may correspond to a subset of a total number of bits of the at least one sample. For example, the subset may include a number, which may be referred to herein as B, of most significant bits of at least one sample.
  • In some examples, the first set of values and the second set of values may be determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining the value corresponding to a same sample number and/or determining the same audio metric.
  • According to some examples, determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. Determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some implementations, determining the second set of values may involve determining an audio metric for at least one of the N channels. A location of an audio metric may, for example, be a location of a peak sample value for the block or a location of a first zero crossing for the block.
  • According to some implementations, the first set of values may include a first block metric for at least one channel. The first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data. The control system may be capable of determining a second block metric for at least one channel of the block of audio data. The second block metric may be based on two or more samples of at least one channel. Determining whether the block of audio data is synchronized with the metadata may be based, at least in part, on a comparison of the first block metric with the second block metric. In some examples, the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values in a block, a frequency-weighted RMS value and/or a loudness metric.
  • Details of one or more implementations of the subject matter described in this specification are set forth in the accompanying drawings and the description below. Other features, aspects, and advantages will become apparent from the description, the drawings, and the claims. Note that the relative dimensions of the following figures may not be drawn to scale.
  • BRIEF DESCRIPTION OF THE DRAWINGS
    • Figure 1 shows an example of audio channels and associated metadata.
    • Figure 2 is a block diagram that provides examples of components of an apparatus capable of implementing various methods described herein.
    • Figure 3 is a flow diagram that shows example blocks of a method according to some disclosed implementations.
    • Figure 4 provides examples of methods that may be performed at a reference point and at a measurement point.
    • Figure 5 provides alternative examples of methods that may be performed at a reference point and at a measurement point.
  • Like reference numbers and designations in the various drawings indicate like elements.
  • DESCRIPTION OF EXAMPLE EMBODIMENTS
  • The following description is directed to certain implementations for the purposes of describing some innovative aspects of this disclosure, as well as examples of contexts in which these innovative aspects may be implemented. However, the teachings herein can be applied in various different ways. Accordingly, the teachings of this disclosure are not intended to be limited to the implementations shown in the figures and/or described herein, but instead have wide applicability.
  • As used herein, the term "audio object" may refer to a stream of audio data signals and associated metadata. The metadata may indicate one or more of the position of the audio object, the apparent size of the audio object, rendering constraints as well as content type (e.g. dialog, effects), etc. Depending on the implementation, the metadata may include other types of data, such as gain data, trajectory data, etc. Some audio objects may be static, whereas others may move. Audio object details may be authored or rendered according to the associated metadata which, among other things, may indicate the position of the audio object in a two-dimensional space or a three-dimensional space at a given point in time. When audio objects are monitored or played back in a reproduction environment, the audio objects may be rendered according to their position metadata and possibly other metadata, such as size metadata, according to the reproduction speaker layout of the reproduction environment.
  • In some instances audio data that includes associated metadata may be in the form of pulse code modulated (PCM) audio data. To produce PCM audio data, the amplitude of an analog audio signal is sampled regularly at uniform intervals. Each sample may be quantized to the nearest value within a range of digital steps. Linear pulse-code modulation (LPCM) is a specific type of PCM in which the quantization levels are linearly uniform. With other types of PCM audio data, quantization levels may vary as a function of amplitude. In some examples, the dynamic range of an analog signal may be modified for digitizing to produce PCM audio data. Examples include PCM audio data produced according to the G.711 standard of the International Telecommunication Union's Telecommunication Standardization Sector (ITU-T), such as PCM audio data produced according to the A-law algorithm or the µ-law algorithm.
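As an illustrative sketch of the linear PCM quantization described above (uniform steps, rounding to the nearest level), the following may help; the function name, the symmetric code range and the default bit depth are assumptions for illustration only:

```python
def quantize_lpcm(amplitude: float, bits: int = 16) -> int:
    """Quantize a normalized amplitude (-1.0..1.0) to a linear PCM code.

    Uses uniform quantization steps over a symmetric range for
    simplicity. A-law or mu-law companding (as in ITU-T G.711) would
    map the amplitude through a nonlinear curve before quantizing.
    """
    amplitude = max(-1.0, min(1.0, amplitude))  # clip out-of-range input
    max_code = (1 << (bits - 1)) - 1            # e.g., 32767 for 16 bits
    return round(amplitude * max_code)
```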
  • For example, the audio data may be segmented into blocks of PCM audio data, including audio samples for each of the blocks. Some use cases contemplated by the inventors may involve transmitting and receiving multiple channels of PCM audio data in professional production workflows, for example, between media processing nodes. Such media processing nodes may, in some implementations, be part of a broadcast network. Such audio data may be encoded in any form during transmission, but for the purpose of describing many of the methods disclosed herein, it will be assumed that the audio data is represented in PCM form.
  • In the context of this disclosure, an "audio program" is considered to be a set of one or more audio signals that are intended to be reproduced simultaneously as part of a single presentation. Time alignment of audio channels that are part of an audio program is known to be important in the production and presentation of the audio program. As noted elsewhere herein, an audio program may include metadata that is associated with the audio signals, including metadata that may affect the reproduction of the audio signals. For at least some types of metadata, time alignment is likewise known to be important in the production and presentation of the audio program. For example, if an audio program includes a segment during which a bird is intended to be flying overhead, it would thwart the intention of the content creator and would be disconcerting to the listener(s) if instead the reproduced sounds indicated that a lawnmower were flying overhead. This disclosure describes methods for measuring, verifying, and correcting time alignment of multiple audio channels and metadata that are part of an audio program.
  • Figure 1 shows an example of audio channels and associated metadata. In this example, the audio data includes N channels of PCM audio data, which may be any type of PCM audio data disclosed herein or otherwise known by those of ordinary skill in the art. Here, the audio data is segmented into blocks, each of which includes k samples. The block boundaries are indicated by vertical lines in Figure 1. In the example shown in Figure 1, M represents a particular block index.
  • In this example, metadata associated with the N channels of audio data is grouped together and likewise segmented in blocks, such that each block of metadata is associated with each block of k audio samples. In some instances, the metadata may apply to audio data outside the range of a given block. However, in this example the metadata is sent on a block basis and, for the purposes of this discussion, the metadata will be described as "associated" with the block of audio data with which it is transmitted.
  • Various methods disclosed herein involve a reference point (also referred to herein as a reference node) at which audio channels and metadata are known to be synchronized. Samples of the synchronized audio data may sometimes be referred to herein as "reference audio samples." The audio channels and metadata may be transmitted in some manner between nodes of a network. In some instances, the time alignment between the audio channels and metadata and/or the time alignment between the audio channels themselves may be altered. In some methods disclosed herein, data corresponding with the time alignment at the reference point may be determined and may be transmitted with the audio channels and metadata. The data corresponding with the time alignment at the reference point may be based, at least in part, on reference audio samples. Accordingly, the data corresponding with the time alignment at the reference point may sometimes be referred to herein as "values corresponding to reference audio samples." Various examples of values corresponding to reference audio samples are disclosed herein.
  • At a measurement point of the network (also referred to herein as a measurement node), the audio data, metadata and the values corresponding to reference audio samples may be received. Such data may sometimes be received directly from a reference node. In some examples, there may be multiple nodes between the reference node and the measurement node. At the measurement node, the time alignment may be measured, verified, and/or corrected if required. In some implementations, the measurement node may determine whether audio data is synchronized with corresponding metadata based, at least in part, on received values corresponding to reference audio samples. Various examples of using the values corresponding to reference audio samples at a measurement node are disclosed herein.
  • Figure 2 is a block diagram that provides examples of components of an apparatus capable of implementing various methods described herein. The apparatus 200 may, for example, be (or may be a portion of) an audio data processing system. In some implementations, the apparatus 200 may be an instance of, or a portion of, a media processing node. The media processing node may, in some examples, be a node of a broadcast network. According to some implementations, the apparatus 200 may be a server. In some examples, the apparatus 200 may be implemented in a component of a device, such as a line card of a server. Accordingly, in some implementations the apparatus 200 may be capable of performing the functions of a measurement node as disclosed herein. In some examples, the apparatus 200 may be capable of performing the functions of a reference node as disclosed herein.
  • In this example, the apparatus 200 includes an interface system 205 and a control system 210. The control system 210 may be capable of implementing, at least in part, the methods disclosed herein. The control system 210 may, for example, include a general purpose single- or multi-chip processor, a digital signal processor (DSP), an application specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, and/or discrete hardware components.
  • In this implementation, the apparatus 200 includes a memory system 215. The memory system 215 may include one or more suitable types of non-transitory storage media, such as flash memory, a hard drive, etc. The interface system 205 may include a network interface, an interface between the control system and the memory system and/or an external device interface (such as a universal serial bus (USB) interface). Although the memory system 215 is depicted as a separate element in Figure 2, the control system 210 may include at least some memory, which may be regarded as a portion of the memory system. Similarly, in some implementations the memory system 215 may be capable of providing at least some control system functionality.
  • In this example, the control system 210 is capable of receiving audio data and other information via the interface system 205. In some implementations, the control system 210 may include (or may implement) an audio processing apparatus such as those described herein.
  • In some implementations, the control system 210 may be capable of performing at least some of the methods described herein according to software, which may be stored on one or more non-transitory media. The non-transitory media may include memory associated with the control system 210, such as random access memory (RAM) and/or read-only memory (ROM). In some examples, the non-transitory media may include memory of the memory system 215. In some implementations, the control system 210 may be capable of sending and receiving data, including but not limited to software program code, via the interface system 205. For example, the control system 210 may be capable of requesting a software program from another device, such as a server, that is accessible on a network via the interface system 205. The received software program may be executed by the control system 210 as it is received, and/or stored in a storage device for later execution. According to some examples, the control system 210 may be implemented in more than one device. For example, some of the functionality described herein may be provided in a first device, such as a media processing node, and other functionality may be provided by a second device, such as a server, in response to a request from the first device.
  • Figure 3 is a flow diagram that shows example blocks of a method according to some disclosed implementations. The blocks of method 300 provide an example of measurement node functionality. However, some associated methods disclosed herein may be performed at a reference point. The blocks of Figure 3 (and those of other flow diagrams provided herein) may, for example, be performed by the control system 210 of Figure 2 or by a similar apparatus. Accordingly, some blocks of Figure 3 are described below with reference to one or more elements of Figure 2. As with other methods disclosed herein, the method outlined in Figure 3 may include more or fewer blocks than indicated. Moreover, the blocks of methods disclosed herein are not necessarily performed in the order indicated.
  • Here, block 305 involves receiving a block of audio data. In this example, the block includes N PCM audio channels, including audio samples for each of the N channels.
  • In this example, block 310 involves receiving metadata associated with the block of audio data that is received in block 305. As noted elsewhere herein, the metadata may in some examples be associated with one or more other blocks of audio data. The metadata received in block 310 may, for example, indicate the position of an audio object, the apparent size of an audio object, rendering constraints, content type (e.g. dialog, effects), etc. Depending on the implementation, the metadata may include other types of data, such as gain data, trajectory data, etc.
  • According to this implementation, block 315 involves receiving a first set of values corresponding to reference audio samples. In some such implementations, the first set of values corresponding to reference audio samples was obtained at a reference time at which the metadata was synchronized with corresponding audio data.
  • Accordingly, the first set of values corresponding to reference audio samples may have been determined at a reference point. The first set of values corresponding to reference audio samples may have been determined during a reference time before the block of audio data was received in block 305. The reference time may have been a time during which the metadata was synchronized with reference audio data. The reference point may have been capable of associating the first set of values with the metadata that was received in block 310. The reference point may have been capable of transmitting the first set of values, at least one block of the reference audio data and the metadata from the reference point to the measurement point.
  • In some examples, the first set of values corresponding to reference audio samples may include a value corresponding to at least one sample from at least one of the N channels. In some implementations, the value corresponding to the at least one sample may correspond to a subset of a total number of bits of the at least one sample. In some such implementations, the subset may include the B most significant bits of the at least one sample. According to some examples, block 315 may involve receiving a value corresponding to at least one sample from each of the N channels. Blocks 305, 310 and 315 may, in some examples, involve receiving the audio data, metadata and first set of values corresponding to reference audio samples via an interface system, such as the interface system 205 of Figure 2.
  • In this example, block 320 involves determining a second set of values corresponding to audio samples from the block of audio data. In some implementations, determining the second set of values may involve determining a value corresponding to the same sample number in at least one of the N channels. In some such implementations, determining the second set of values may involve determining a value corresponding to the first sample of the block in at least one of the N channels. In some examples, the first set of values and the second set of values are determined in the same manner or substantially the same manner. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining a value corresponding to the same sample number and/or determining the same type of "audio metric." Some examples of audio metrics are provided below.
  • In this example, block 325 involves making a comparison of the second set of values corresponding to audio samples and the first set of values corresponding to reference audio samples. According to this example, block 330 involves determining, based on the comparison, whether the block of audio data is synchronized with the metadata. Some examples are described below.
  • Figure 4 provides examples of methods that may be performed at a reference point and at a measurement point. Some aspects of these methods are examples of method 300 of Figure 3. Accordingly, in some portions of the following discussion of Figure 4, the corresponding blocks of Figure 3 will be referenced.
  • An underlying assumption of the example shown in Figure 4 is that the individual audio channels and the metadata are in time alignment at the reference point. Therefore, the audio samples at the reference point are examples of the "reference audio samples" referred to elsewhere in this disclosure. Moreover, in this example it is presumed that the audio channels and the associated metadata maintain bit accuracy between the reference point and the measurement point. Therefore, according to this example, it is presumed that only the time alignment between the individual audio channels and/or the metadata may potentially be altered.
  • According to this implementation, at the reference point the first sample value of every block is recorded for each audio channel. The corresponding sample values are stored as the set M1. The set of sample values M1 is an example of the "first set of values corresponding to reference audio samples" referred to elsewhere herein. Accordingly, the set of sample values M1 is an example of the first set of values corresponding to reference audio samples that may be transmitted with the audio data of block M and with associated metadata. In alternative examples, the second sample of every block, the third sample of every block or some other sample of every block may be used to determine the first set of values corresponding to reference audio samples. In some alternative examples, more than one sample per channel may be used to determine the first set of values corresponding to reference audio samples.
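  • The recording of per-channel first-sample values described above can be sketched as follows. This is a minimal illustration, not the patent's implementation; the channel contents and function name are invented for the example.

```python
def reference_values(block):
    """block: a list of N channels, each a sequence of PCM sample values.
    Returns the first sample of every channel -- the set M1."""
    return [channel[0] for channel in block]

# Illustrative block of N = 2 channels with k = 4 samples each.
block_m = [
    [1201, -340, 88, 502],   # channel 1 samples
    [-77, 913, 4, -256],     # channel 2 samples
]
m1 = reference_values(block_m)  # [1201, -77]
```

In an alternative example that uses the second or third sample of every block, only the index in `reference_values` would change.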
  • The value corresponding to a sample may or may not correspond to all of the bits of the sample, depending on the particular implementation. Some implementations may involve determining a value corresponding to only a subset of the total number of bits of a sample. Some such implementations may involve determining a value corresponding to only some number B of most significant bits (MSBs) of a sample, wherein B is one or more bits. Such implementations are potentially advantageous because they may reduce the number of bits required for transmission of the first set of values corresponding to reference audio samples.
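  • Taking only the B most significant bits of a sample might be sketched as below. The 16-bit width and B = 4 are illustrative assumptions, as is the two's-complement interpretation of PCM samples.

```python
def msb_value(sample, total_bits=16, b=4):
    """Return the b most significant bits of a two's-complement sample
    that is total_bits wide."""
    unsigned = sample & ((1 << total_bits) - 1)  # view as unsigned
    return unsigned >> (total_bits - b)          # keep only the top b bits
```

For example, with these defaults a full-scale positive sample (0x7FFF) reduces to the 4-bit value 7, and the most negative sample (-32768) to 8, so transmitting the reference set requires 4 bits per sample instead of 16.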
  • In some examples, the number of bits required for transmission of the first set of values corresponding to reference audio samples may be reduced by sending reference audio samples for only a subset of audio channels. Such examples may also provide the potential advantage of simplifying the operations performed at a measurement point, e.g., the operations corresponding with blocks 320, 325 and 330 of Figure 3. Accordingly, in some alternative examples, the values corresponding to reference audio samples are not necessarily determined for each one of the N channels. However, some implementations involve determining a value corresponding to at least one sample from at least one of the N channels to determine the first set of values corresponding to reference audio samples.
  • In some implementations, two or more audio channels may be treated as a group in transmission and known to be synchronized to one another at the measurement point. In this case sending the first audio sample for a single audio channel of the group is sufficient information for synchronizing all channels in the group. In addition, or as an alternative, in some examples the set of reference samples may be losslessly compressed prior to transmission by an appropriate method and decompressed after receipt by another device.
  • At the measurement point, a second set of values corresponding to audio samples from a received block of audio data is determined in this example. This is one example of block 320 of Figure 3. In the example shown in Figure 4, the second set of values is determined according to the first audio sample value of each block and each audio channel. In Figure 4, M'1 represents an example of the second set of values. However, other implementations may involve determining the second set of values in a different manner, e.g., as described above with reference to determining the first set of values at the reference point. The process for determining the second set of values at the measurement point should generally be the same as, or substantially the same as, the process for determining the first set of values at the reference point. For example, determining the first set of values and determining the second set of values may both involve processing the same number of samples per channel, processing the same number of bits per sample, determining a value corresponding to the same sample number, etc.
  • In an example of block 325 of Figure 3, the first set of values (M1) obtained from the reference point may be compared to the second set of values (M'1) that are determined at the measurement point. It may then be determined whether the block of audio data is synchronized with the metadata (block 330 of Figure 3). At least part of this determination may involve determining whether the audio channels are in time alignment with each other. If the first set of values obtained from the reference point equals the second set of values determined at the measurement point (M1 = M'1), in some examples it may be assumed that the audio channels are in the same time alignment as at the reference point.
  • However, if the first set of values does not equal the second set of values (in this example, if M1 ≠ M'1), then a further analysis may be undertaken in an attempt to determine the time offset of the audio channels. In some instances, all audio channels may be offset equally. In this case, a search for a set of samples that corresponds to the reference point samples should be sufficient for identifying the offset. For example, if all audio channels have been delayed by 10 samples, then the set of sample values based on the 11th audio sample at the measurement point should equal the reference set of sample values (in other words, M1 should equal M'11).
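  • The uniform-offset search described above can be sketched as follows. This is an illustrative single-block sketch, not the patent's implementation; the function name and the `max_offset` search bound are invented for the example.

```python
def find_common_offset(m1, channels, max_offset=64):
    """Search for one offset d such that sample d of every received
    channel equals its reference value in m1.
    Returns d, or None if no offset within max_offset matches."""
    for d in range(max_offset + 1):
        if all(d < len(ch) and ch[d] == ref
               for ref, ch in zip(m1, channels)):
            return d
    return None

# All channels delayed by 10 samples: the match should be found at d == 10.
delayed = [[0] * 10 + [5, 1, 9],
           [0] * 10 + [7, 2, 3]]
```

Here `find_common_offset([5, 7], delayed)` returns 10, matching the text's example in which M1 equals M'11.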
  • In some instances, each audio channel may have a different offset. In such cases each channel would need to be searched independently to find a sample value that matches a value in the first set of values. The offset for a particular channel could be determined according to the offset between the sample number in that channel and the sample number corresponding to the matching value in the first set of values.
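  • An independent per-channel search might be sketched as below; again the names and the search bound are illustrative assumptions rather than part of the disclosed method.

```python
def per_channel_offsets(m1, channels, max_offset=64):
    """Search each channel independently for the first sample equal to its
    reference value; returns one offset per channel (None if no match
    is found within max_offset samples)."""
    offsets = []
    for ref, ch in zip(m1, channels):
        found = None
        for d, sample in enumerate(ch[:max_offset + 1]):
            if sample == ref:
                found = d
                break
        offsets.append(found)
    return offsets
```

For instance, `per_channel_offsets([5, 7], [[0, 0, 5], [7, 1]])` yields `[2, 0]`: the first channel is offset by two samples and the second is not offset.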
  • The above-described search methods may be appropriate if the audio channels contain non-stationary audio signals of sufficient level in order to uniquely identify matching samples. During any period during which an audio channel contains no signal (for example, all zeros) or a static periodic audio signal (for example, a test tone) such methods will not be able to determine an accurate offset until a dynamic signal returns. These conditions can be identified, however, to flag an unreliable signal for offset estimation. For example, at the reference point an all-zero condition could be flagged by sending a special all-zero code for each audio channel with all zero samples. Even with dynamic audio signals it may be possible to match the wrong sample within a given block. However, in this case measuring alignment over a series of successive audio blocks is likely to identify the correct offset.
  • Figure 5 provides alternative examples of methods that may be performed at a reference point and at a measurement point. The methods indicated in Figure 5 may be applicable for instances in which the metadata maintains bit accuracy between the reference point and the measurement point but the audio channels may not. In such instances, searching for bit-exact samples will not identify correctly matching samples in all cases.
  • Some examples involve the identification of one or more sample locations corresponding to what may be referred to herein as an "audio metric." Examples of audio metric locations within a block of audio data may include the location of a peak sample value for the block or a location of a first zero crossing for the block. In some such methods, determining the second set of values may involve determining an audio metric location for at least one of the N channels.
  • However, in the example shown in Figure 5, determining the first set of values corresponding to reference audio samples at the reference point involves determining an audio metric location for each of the N channels. In this example, the audio metric locations correspond to the locations of peak sample values for each of the N channels. Accordingly, the set of sample values Mp is an example of the first set of values corresponding to reference audio samples that may be transmitted with the audio data of block M and with associated metadata.
  • In this example, the measurement point performs a corresponding process: here, determining the second set of values corresponding to audio samples from the block of audio data involves determining an audio metric location for each of the N channels. In this example, the audio metric locations correspond to the locations of peak sample values for each of the N channels. The result is the set of sample values M'p shown in Figure 5.
  • At the measurement point, the second set of values may be compared to the first set of values. If the two sets of values are equal, or approximately equal within a given threshold (in other words, if Mp ≈ M'p), then it can be assumed that the audio channels at the measurement point are in the same time alignment (within a given tolerance) as the audio channels at the reference point. The threshold of allowable deviation is application- and metric-dependent. For a metric of a peak sample location, some applications may consider a deviation of +/- 1 msec (e.g., 48 samples with 48 kHz sampled PCM audio) reasonable. If not, a search as described above with reference to Figure 4 may be undertaken, in an attempt to locate the matching sample locations at the reference and measurement points and thereby determine the offset for each audio channel.
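  • The peak-location metric and its tolerance comparison can be sketched as follows. This is an illustrative sketch; peak magnitude (absolute value) is assumed as the notion of "peak", and the 48-sample default tolerance reflects the 1 msec / 48 kHz example in the text.

```python
def peak_locations(block):
    """Audio-metric sketch: index of the peak-magnitude sample in each
    channel of a block (the set Mp, or M'p at the measurement point)."""
    return [max(range(len(ch)), key=lambda i: abs(ch[i])) for ch in block]

def aligned_within(mp, mp_prime, tolerance=48):
    """True if every channel's peak location deviates by no more than
    `tolerance` samples (48 samples ~= 1 msec at 48 kHz)."""
    return all(abs(a - b) <= tolerance for a, b in zip(mp, mp_prime))
```

With `peak_locations([[1, -9, 3], [4, 2, -2]])` the peaks fall at indices 1 and 0; `aligned_within` then decides whether Mp ≈ M'p within the chosen tolerance.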
  • Alternatively, or additionally, in some examples the first set of values corresponding to reference audio samples determined at the reference point and the second set of values corresponding to audio samples determined at the measurement point may include what will be referred to herein as a "block metric" for at least one channel. For example, the first block metric may be based on two or more reference audio samples of at least one reference channel of a reference block of audio data. At the measurement point, determining the second set of values may involve determining a second block metric for at least one channel of a block of audio data received by the measurement point. The second block metric may be based on two or more samples of at least one channel of the audio data. In some implementations, determining the first set of values and determining the second set of values may involve determining first and second block metrics that are based on all audio samples in a block (e.g., the entire set of k samples shown in Figure 5).
  • In some implementations, the first block metric and the second block metric may be based, at least in part, on a root mean square (RMS) of sample values, a frequency-weighted RMS value and/or a loudness metric such as ITU-R BS.1770.
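  • An unweighted RMS block metric, the simplest of the options listed above, can be sketched as below. A frequency-weighted RMS or a BS.1770 loudness measure would replace this computation in a weighted variant; the function name is illustrative.

```python
import math

def block_rms(channel):
    """Block-metric sketch: root-mean-square of one channel's samples
    over a block of k samples."""
    return math.sqrt(sum(s * s for s in channel) / len(channel))
```

For example, `block_rms([3, -3, 3, -3])` evaluates to 3.0; comparing this value at the reference and measurement points (Mrms ≈ M'rms) is the further check described below.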
  • With various methods disclosed herein, although the offsets determined for a single block of any given audio channel may not be entirely reliable, performing the method for each block of a continuous series of blocks may substantially increase the reliability of the methods. Evaluating more than one type of value corresponding to audio samples can also increase reliability. For example, evaluating both a block metric and the locations of audio metrics may increase the reliability of the method described with reference to Figure 5. Once an offset has been determined, the block metric can be derived at the measurement point (which may require audio samples from blocks before or after the block being analyzed, depending on the offset) and compared to the block metric from the reference point. If the two are exactly or approximately equal (e.g., Mrms ≈ M'rms), then this is further confirmation that the alignment is correct. Such methods can also give confidence that the audio data at the measurement point has not been substantially modified since transmission from the reference point.
  • As with other methods disclosed herein, the methods described with reference to Figure 5 can work satisfactorily if the audio channels contain dynamic audio signals of sufficient level in order to derive corresponding audio metrics and audio metric locations. For some methods disclosed herein, synchronization may be measured for every audio block. However, in some alternative methods disclosed herein, synchronization may be measured only for certain audio blocks, in order to reduce computational workload or the amount of data transmitted. For example, the first set of values corresponding to reference audio samples may only be sent every few blocks (e.g. every 10th block) from the reference point. At the measurement point, in some examples the synchronization may be checked every few blocks even if the information is sent from the reference point for every block.
  • Various modifications to the implementations described in this disclosure may be readily apparent to those having ordinary skill in the art. The general principles defined herein may be applied to other implementations. The scope of the present invention is defined by the appended claims.

Claims (15)

  1. A method of processing audio data, the method comprising:
    receiving a block of audio data, the block including N pulse code modulated (PCM) audio channels, including audio samples for each of the N channels;
    receiving metadata associated with the block of audio data, the metadata including an expected value of a specific one of the audio samples;
    obtaining an actual value of the specific one of the audio samples;
    determining whether the expected value is substantially the same as the actual value; and
    determining that the block of audio data is synchronized with the metadata if the actual value is determined to be substantially the same as the expected value.
  2. The method of claim 1, wherein:
    each of the audio samples has B1 bits;
    the metadata includes a B2-bit value which represents the expected value of the specific one of the audio samples, B2 < B1; and
    determining whether the expected value is substantially the same as the actual value comprises comparing B2 bits of the actual value with the B2-bit value from the metadata.
  3. The method of claim 2, wherein the B2 bits of the actual value are the B2 most significant bits of the actual value.
  4. The method of any preceding claim, wherein the specific one of the audio samples is the first sample of the block in one of the N channels.
  5. A method of processing audio data, the method comprising:
    receiving a block of audio data, the block including N pulse code modulated (PCM) audio channels, including audio samples for each of the N channels;
    receiving metadata associated with the block of audio data, the metadata identifying one of the audio samples that is expected to have a specific value or property;
    determining which one of the audio samples has the specific value or property;
    determining whether the one of the audio samples that has the specific value or property is the one of the audio samples that is expected to have the specific value or property; and
    determining that the block of audio data is synchronized with the metadata if the one of the audio samples that has the specific value or property is determined to be the one of the audio samples that is expected to have the specific value or property.
  6. The method of claim 5, wherein the specific value or property comprises the property of being the peak value of the audio samples of one of the N channels, and determining which one of the audio samples has the specific value or property comprises identifying the one of the audio samples that has the peak value of the audio samples of the one of the N channels.
  7. The method of claim 5, wherein the specific value or property is the property of being the first zero-crossing audio sample of one of the N channels, and determining which one of the audio samples has the specific value comprises identifying the first zero-crossing audio sample of the one of the N channels.
  8. A method of generating an audio data bitstream, the method comprising:
    obtaining a block of audio data, the block including N pulse code modulated (PCM) audio channels, including audio samples for each of the N channels;
    obtaining metadata associated with the block of audio data;
    obtaining the value of a specific one of the audio samples;
    augmenting the metadata with the value of the specific one of the audio samples or with a value derived therefrom;
    assembling at least the block of audio data and the augmented metadata to form the audio data bitstream.
  9. The method of claim 8, wherein:
    each of the audio samples has B1 bits;
    the augmented metadata includes a B2-bit value derived from the value of the specific one of the audio samples.
  10. The method of claim 9, wherein the B2-bit value is equal to the B2 most significant bits of the specific one of the audio samples.
  11. The method of any one of claims 8 to 10, wherein the specific one of the audio samples is the first sample of the block in one of the N channels.
  12. A method of generating an audio data bitstream, the method comprising:
    obtaining a block of audio data, the block including N pulse code modulated (PCM) audio channels, including audio samples for each of the N channels;
    obtaining metadata associated with the block of audio data;
    determining which one of the audio samples has a predetermined specific value or property;
    augmenting the metadata with data which identifies said one of the audio samples;
    assembling at least the block of audio data and the augmented metadata to form the audio data bitstream.
  13. The method of claim 12, wherein the specific value or property comprises the property of being the peak value of the audio samples of one of the N channels, or wherein the specific value or property is the property of being the first zero-crossing audio sample of one of the N channels.
  14. A data-processing system configured to perform the method of any of claims 1-13.
  15. Computer program product having instructions which, when executed by a computing device or system, cause said computing device or system to perform the method of any of claims 1-13.
EP17172852.0A 2016-05-24 2017-05-24 Measurement and verification of time alignment of multiple audio channels and associated metadata Active EP3249646B1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP16170954 2016-05-24
US201662341474P 2016-05-25 2016-05-25

Publications (2)

Publication Number Publication Date
EP3249646A1 EP3249646A1 (en) 2017-11-29
EP3249646B1 true EP3249646B1 (en) 2019-04-17

Family

ID=56081282

Family Applications (1)

Application Number Title Priority Date Filing Date
EP17172852.0A Active EP3249646B1 (en) 2016-05-24 2017-05-24 Measurement and verification of time alignment of multiple audio channels and associated metadata

Country Status (1)

Country Link
EP (1) EP3249646B1 (en)

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102005014477A1 (en) * 2005-03-30 2006-10-12 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for generating a data stream and generating a multi-channel representation
US8805678B2 (en) * 2006-11-09 2014-08-12 Broadcom Corporation Method and system for asynchronous pipeline architecture for multiple independent dual/stereo channel PCM processing
DE102008009024A1 (en) * 2008-02-14 2009-08-27 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for synchronizing multichannel extension data with an audio signal and for processing the audio signal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
None *

Also Published As

Publication number Publication date
EP3249646A1 (en) 2017-11-29

Similar Documents

Publication Publication Date Title
CN109313907B (en) Combining audio signals and spatial metadata
CN105868397B (en) Song determination method and device
US9875746B2 (en) Encoding device and method, decoding device and method, and program
BR122017012321A2 (en) audio encoder and decoder with substream structure program information or metadata
US9075806B2 (en) Alignment and re-association of metadata for media streams within a computing device
US20200015021A1 (en) Distributed Audio Capture and Mixing Controlling
PH12019501434A1 (en) System and method for blockchain-based data management
CN107533850B (en) Audio content identification method and device
US9712934B2 (en) System and method for calibration and reproduction of audio signals based on auditory feedback
BR112020020404A2 (en) INFORMATION PROCESSING DEVICE AND METHOD, AND, PROGRAM.
US9401150B1 (en) Systems and methods to detect lost audio frames from a continuous audio signal
AU2019394097A8 (en) Apparatus, method and computer program for encoding, decoding, scene processing and other procedures related to DirAC based spatial audio coding using diffuse compensation
CN111726740A (en) Electronic equipment testing method and device
EP3249646B1 (en) Measurement and verification of time alignment of multiple audio channels and associated metadata
US10015612B2 (en) Measurement, verification and correction of time alignment of multiple audio channels and associated metadata
US11557303B2 (en) Frictionless handoff of audio content playing using overlaid ultrasonic codes
US11330370B2 (en) Loudness control methods and devices
WO2020114369A1 (en) Wireless communication delay test method, device, computer device and storage medium
US9165561B2 (en) Apparatus and method for processing voice signal
JP2015046758A (en) Information processor, information processing method, and program
CN112086106A (en) Test scene alignment method, device, medium and equipment
MX2021016056A (en) Methods, apparatus and systems for representation, encoding, and decoding of discrete directivity data.
US20200335111A1 (en) Audio stream dependency information
KR20190033983A (en) Audio device and control method thereof
CN112312270B (en) Audio frequency response and phase testing method and device based on computer sound card

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20180529

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20181105

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602017003296

Country of ref document: DE

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1122432

Country of ref document: AT

Kind code of ref document: T

Effective date: 20190515

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20190417

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective dates by country: NL 20190417; HR 20190417; LT 20190417; NO 20190717; SE 20190417; ES 20190417; PT 20190817; AL 20190417; FI 20190417; BG 20190717; GR 20190718; RS 20190417; LV 20190417; PL 20190417

REG Reference to a national code
Ref country code: AT; Ref legal event code: MK05; Ref document number: 1122432; Country of ref document: AT; Kind code of ref document: T; Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Ref country code: IS; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190817

REG Reference to a national code
Ref country code: DE; Ref legal event code: R097; Ref document number: 602017003296; Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT
Effective dates by country: EE 20190417; DK 20190417; SK 20190417; MC 20190417; RO 20190417; CZ 20190417; AT 20190417

REG Reference to a national code
Ref country code: BE; Ref legal event code: MM; Effective date: 20190531

PLBE No opposition filed within time limit
Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Ref country code: SM; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190417
Ref country code: LU; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Effective date: 20190524
Ref country code: IT; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190417

26N No opposition filed
Effective date: 20200120

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Ref country code: TR; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190417
Ref country code: IE; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Effective date: 20190524
Ref country code: BE; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Effective date: 20190531
Ref country code: SI; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190417

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]
Ref country code: LI; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Effective date: 20200531
Ref country code: CH; Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES; Effective date: 20200531
Ref country code: CY; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190417
Ref country code: HU; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO; Effective date: 20170524
Ref country code: MT; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190417
Ref country code: MK; Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; Effective date: 20190417

P01 Opt-out of the competence of the unified patent court (UPC) registered
Effective date: 20230513

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]
Ref country code: FR; Payment date: 20230420; Year of fee payment: 7
Ref country code: DE; Payment date: 20230419; Year of fee payment: 7
Ref country code: GB; Payment date: 20230420; Year of fee payment: 7