EP2342708B1 - Method for analyzing a digital music audio signal - Google Patents

Method for analyzing a digital music audio signal

Info

Publication number
EP2342708B1
Authority
EP
European Patent Office
Prior art keywords
data
duration
music
algorithm
audio
Prior art date
Legal status
Not-in-force
Application number
EP08875184A
Other languages
English (en)
French (fr)
Other versions
EP2342708A1 (de)
Inventor
Lars FÄRNSTRÖM
Riccardo Leonardi
Nicolas Scaringella
Current Assignee
Museeka SA
Original Assignee
Museeka SA
Priority date
Filing date
Publication date
Application filed by Museeka SA filed Critical Museeka SA
Publication of EP2342708A1
Application granted
Publication of EP2342708B1
Legal status: Not-in-force (current)
Anticipated expiration

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10H ELECTROPHONIC MUSICAL INSTRUMENTS; INSTRUMENTS IN WHICH THE TONES ARE GENERATED BY ELECTROMECHANICAL MEANS OR ELECTRONIC GENERATORS, OR IN WHICH THE TONES ARE SYNTHESISED FROM A DATA STORE
    • G10H 1/00 Details of electrophonic musical instruments
    • G10H 1/36 Accompaniment arrangements
    • G10H 1/38 Chord
    • G10H 1/383 Chord detection and/or recognition, e.g. for correction, or automatic bass generation
    • G10H 2210/00 Aspects or methods of musical processing having intrinsic musical character, i.e. involving musical theory or musical parameters or relying on musical knowledge, as applied in electrophonic musical tools or instruments
    • G10H 2210/031 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal
    • G10H 2210/066 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal, for pitch analysis as part of wider processing for musical purposes, e.g. transcription, musical performance evaluation; Pitch recognition, e.g. in polyphonic sounds; Estimation or use of missing fundamental
    • G10H 2210/081 Musical analysis, i.e. isolation, extraction or identification of musical elements or musical parameters from a raw acoustic signal or from an encoded audio signal, for automatic key or tonality recognition, e.g. using musical rules or a knowledge base
    • G10H 2250/00 Aspects of algorithms or signal processing methods without intrinsic musical character, yet specifically adapted for or used in electrophonic musical processing
    • G10H 2250/311 Neural networks for electrophonic musical instruments or musical processing, e.g. for musical recognition or control, automatic composition or improvisation

Definitions

  • the invention relates to the automatic analysis of a music audio signal, preferably a digital music audio signal.
  • the present invention relates to a music audio representation method and apparatus for analyzing a music audio signal in order to extract a set of characteristics representative of the informative content of the music audio signal, according to the preambles of claims 1 and 17 respectively.
  • Pitch - Perceived fundamental frequency of a sound. A pitch is associated with a single (possibly isolated) sound and is instantaneous (the percept lasts more or less as long as the sound itself, typically 200 to 500 ms in music signals).
  • the pitches over the register of a piano have been associated with their corresponding fundamental frequencies (in Hertz) assuming a standard tuning, i.e. the pitch A3 corresponds to a fundamental frequency of 440 Hz.
  • Octave An interval that corresponds to a doubling of fundamental frequency.
  • Pitch Class - A set of all pitches that are a whole number of octaves apart, e.g. the pitch class C consists of the Cs in all octaves.
  • Chord - In music theory, a chord is two or more different pitches that occur simultaneously; in this description, single pitches may also be referred to as chords (see figures 1a and 1b for a sketch).
  • Chord Root - The note or pitch upon which a chord is perceived or labelled as being built or hierarchically centred (see figures 1a and 1b for a sketch).
  • Chord Family - A chord family is a set of chords that share a number of characteristics including (see figures 1a and 1b for an illustration):
  • PCP - Pitch Class Profiles
  • the PCP/Chroma approach is a general low-level feature extraction method that measures the strength of pitch classes in the audio music signal.
  • the intensity of each of the twelve semitones of the tonal scale is measured.
  • Such an implementation consists of mapping some time/frequency representation to a time/pitch-class representation; in other words, the spectrum peaks (or spectrum bins) are associated with the closest pitch of the chromatic scale.
  • PCP algorithms of this type decrease the quantization level to less than a semitone.
  • a pitched instrument will not only exhibit an energy peak around a single frequency, but will also exhibit significant energy at some (more or less) harmonic frequencies.
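  • By way of illustration only, a minimal Python sketch of such a PCP/Chroma computation, in which each spectrum bin is mapped to the nearest pitch of the chromatic scale (the function name, frequency range and A4 = 440 Hz tuning reference are illustrative assumptions, not taken from the patent):

```python
import numpy as np

def pitch_class_profile(frame, sr, fmin=55.0, fmax=2000.0, a4=440.0):
    """Map the power spectrum of one frame to a 12-bin pitch class profile."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    pcp = np.zeros(12)
    for f, power in zip(freqs, spectrum):
        if fmin <= f <= fmax:
            # MIDI-like pitch number of the nearest chromatic-scale pitch
            midi = int(round(69 + 12 * np.log2(f / a4)))
            pcp[midi % 12] += power           # accumulate energy per pitch class
    return pcp / (pcp.sum() + 1e-12)          # normalise to unit sum

# toy usage: a 440 Hz tone should dominate pitch class A (index 9, with C = 0)
sr = 22050
t = np.arange(4096) / sr
print(np.argmax(pitch_class_profile(np.sin(2 * np.pi * 440 * t), sr)))  # -> 9
```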
  • the template based approach to high-level musical feature extraction is however restricted by the choice of templates.
  • state-of-the-art algorithms use templates for the Major key and for the Minor key (one such template for each of the 12 possible pitch classes).
  • US 2008245215 discloses a signal processing apparatus, including: removal means for removing, from a sound signal in the form of a stereo signal, a center component which is a component of sound positioned at the center between the left and the right; extraction means for extracting, from the sound signal from which the center component is removed, first feature quantity representative of characteristics of sounds of different tones of the 12-tone equal temperament within a predetermined range; and decision means for deciding a chord within the predetermined range using the first feature quantity.
  • US 6057502 discloses a method in which a time fraction, or short duration, of a musical sound wave is first analyzed by FFT processing into frequency components in the form of a frequency spectrum having a number of peak energy levels; a predetermined frequency range (e.g. 63.5-2032 Hz) of the spectrum is cut out for the analysis of chord recognition; the cut-out frequency spectrum is then folded on an octave-span basis to enhance spectrum peaks within a musical octave span; the frequency axis is adjusted by the amount of difference between the reference tone pitch defined by the peak frequency positions of the analyzed spectrum and the reference tone pitch used in the processing system; and a chord is then determined from the locations of those peaks in the established octave spectrum by pattern comparison with the reference frequency component patterns of the respective chord types.
  • An autocorrelation method may preferably be utilized to compute the autocorrelation among the frequency components in the octave profile, on the basic unit of a semitone span, in order to enhance the peaks in the frequency spectrum of the octave profile on a semitone basis.
  • the object of the present invention is to develop a feature extraction algorithm able to compute a musicologically valid description of the pitch content of the audio signal of a music piece.
  • a further object of the present invention is to map spectral observations directly to a chord space without using an intermediate note identification unit.
  • with the present invention it is possible to characterize the content of music pieces with an audio feature extraction method that generates compact descriptions of pieces which may be stored, e.g. in a database, or embedded in audio files, e.g. as ID3 tags.
  • the digital music audio signal 2 can be an extract of an audio signal representing a song, or a complete version of a song.
  • the method 1 comprises the step of:
  • by tonality is encompassed a combination of chord roots and chord families hierarchically organized around a tonal centre, i.e. a combination of chord roots and chord families whose perceived significance is measured relative to a tonal centre.
  • the step a) of the method 1, i.e. the first algorithm 4 is able to extract the first data 5 representing the combination of chord roots and chord families observed in the digital music audio signal 2, that is the first data 5 contains the tonal context of the digital music audio signal 2.
  • the step a) of the method 1, i.e. the first algorithm 4 does not aim explicitly at detecting chord roots and chord families contained in the digital music audio signal 2. On the contrary, it aims at obtaining an abstract, and possibly redundant, representation correlated with the chord roots and chord families observed in the digital music audio signal 2.
  • step b) of the method 1, i.e. the second algorithm 6, is able to elaborate the first data 5 in order to provide second data 7 which represent the tonal centre Tc contained in said first data 5; that is, the second data 7 contain the dominating pitch class of a particular tonal context, upon which all other pitches are hierarchically referenced (see figures 2a and 2b).
  • the method 1 further comprises the step of:
  • the first algorithm 4 comprises the steps of:
  • the first data 5 comprise a plurality of vectors v1, v2, v3, ..., vi, wherein each vector of the plurality of vectors v1, v2, v3, ..., vi is associated to the respective audio segment s-on-1, s-on-2, s-on-3, s-on-i.
  • each vector v1, v2, v3, ..., vi has a dimension equal to the twelve pitches (A to G#) times a predefined number "n" of chord types.
  • the predefined number "n" of chord types can be set equal to five so as to represent, for example, "pitches", "major chords", "minor chords", "diminished chords" and "augmented chords".
  • step a1) of the first algorithm 4 is performed by an onset detection algorithm in order to detect the attacks of musical events of the audio signal 2.
  • each peak p1, p2, p3, ..., pi represents an attack of musical event in the respective audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i.
  • the onset detection algorithm 10 can be implemented as described in [ J.P. Bello, L. Daudet, S. Abdallah, C. Duxbury, M. Davies, M. Sandler, " A tutorial on Onset Detection in Music Signals", in IEEE Transactions on Speech and Audio Processing, 2005 ].
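  • The cited tutorial covers several onset detection functions; purely as an illustration (a spectral-flux variant, with frame size, hop size and threshold chosen arbitrarily, not the specific detector used by the patent), a minimal sketch:

```python
import numpy as np

def spectral_flux_onsets(x, sr, n_fft=1024, hop=512, delta=0.1):
    """Return onset times (s) as peaks of the half-wave rectified spectral flux."""
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window
              for i in range(0, len(x) - n_fft, hop)]
    mags = np.array([np.abs(np.fft.rfft(f)) for f in frames])
    flux = np.maximum(mags[1:] - mags[:-1], 0.0).sum(axis=1)   # positive changes only
    flux /= flux.max() + 1e-12
    onsets = [i for i in range(1, len(flux) - 1)
              if flux[i] > flux[i - 1] and flux[i] > flux[i + 1] and flux[i] > delta]
    return [(i + 1) * hop / sr for i in onsets]                # frame index -> seconds
```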
  • step a2) of the first algorithm 4 divides the audio music signal 2 into the plurality of audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i each audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i having a duration "T".
  • the step a2) of the first algorithm 4 divides the audio music signal 2 into the audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i, and each audio segment s-on-1, s-on-2, s-on-3, ..., s-on-i has its own duration "T".
  • step a3) of the first algorithm 4 applies, advantageously, the frequency analysis to each audio segment s-on-1, s-on-2, s-on-3, ..., s-on-i only during a predetermined sub-duration "t", wherein the sub-duration "t" is less than the duration "T".
  • the audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i are further analysed in frequency only during the sub-duration "t" even if they extend over such sub-duration "t".
  • prefixed sub-duration "t" can be set manually by the user.
  • the prefixed sub-duration "t" is within a range from 250 to 350 msec.
  • duration "T" audio segment s-on-1, s-on-2, s-on-3, ..., s-on-i is longer than the pre-defined duration "t", i.e. more than 250-350 msec, only the data contained in the sub-duration "t" are considered while the rest of the segment is assumed to contain irrelevant data and therefore such remaining data are disregarded.
  • the frequency analysis will be limited to the smallest time interval, i.e. the duration "T".
  • the frequency analysis of each audio segment s-on-1, son-2, s-on-3, ..., s-on-i is performed only using the music samples occurring during the duration T, i.e. the smallest duration.
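  • A minimal sketch of this segmentation rule, assuming onset times in seconds and a mono signal array (the names and the 300 ms default sub-duration are illustrative):

```python
import numpy as np

def segment_for_analysis(x, sr, onset_times, sub_duration=0.3):
    """Cut inter-onset segments and keep only min(T, t) seconds of each for the DFT."""
    onsets = list(onset_times) + [len(x) / sr]        # close the last segment
    segments = []
    for start, end in zip(onsets[:-1], onsets[1:]):
        T = end - start                               # segment's own duration
        keep = min(T, sub_duration)                   # analyse only min(T, t)
        a, b = int(start * sr), int((start + keep) * sr)
        segments.append(x[a:b])
    return segments
```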
  • the frequency analysis, applied during step a3), is performed, in the preferred embodiment, by a D.F.T. (Discrete Fourier Transform).
  • during step a3) a further step can also be performed, in which a function is applied that reduces the uncertainty in the time-frequency representation of the audio signal 2.
  • such a function is an apodization function, e.g. a Hanning window.
  • the length of the Hanning window equals the length "T" of the audio segment s-on-1, s-on-2, s-on-3, s-on-i.
  • the apodization function is applied to each audio segment s-on-1, s-on-2, s-on-3, ..., s-on-i by multiplying it, on a sample-by-sample basis, with the audio data of the corresponding segment, prior to applying the frequency analysis performed by the D.F.T.
  • a further reason for which the apodization function is used is the attenuation of the musical event attacks p1, p2, p3, ..., pi, since they are located around the boundaries of the apodization window. In this way an attenuated version of the musical event attacks p1, p2, p3, ..., pi is created.
  • the power spectrum is computed with the D.F.T. or any of its fast implementations, for example F.F.T. (Fast Fourier Transform).
  • the choice of the sub-duration "t” allows for controlling the frequency resolution of the FFT (i.e. the longer the duration "t", the higher the frequency resolution) and normalizes the frequency resolution so that it remains constant even if the initial duration "T" of audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i is different for each segment.
  • the choice of the sub-duration "t" is such that the length in samples of the resulting segment equals a power of two.
  • the computation network 12 can be implemented, preferably, with a trained machine-learning algorithm.
  • the trained machine-learning algorithm consists of a Multi-layer Perceptron (MLP).
  • MLP Multi-layer Perceptron
  • the task of the Multi-layer Perceptron is to estimate the posterior probabilities of each combination of chord family (i.e. a chord type) and chord root (i.e. a pitch class), given the spectrum segments sp-1, sp-2, sp-3, ..., sp-i.
  • Multi-layer Perceptron is trained in two steps:
  • the trained machine-learning algorithm 12 is trained in two steps: a first supervised training with few hand labelled training data and a subsequent unsupervised training with a larger set of unlabelled training data.
  • the set of hand labelled training data consists of isolated chords saved as MIDI files.
  • the set of chords should cover each considered chord type (Major, Minor, Diminished, Augmented, ...), each pitch class (C, C#, D, ...) and should cover a number of octaves.
  • a large variety of audio training data is created from these MIDI files by using a variety of MIDI instruments. These audio examples together with their pitch class and chord type are used to train the machine-learning algorithm 12, which is set to produce from the ground truth a single output per "pitch class / chord type" pair.
  • the training of the various weights "ω" of the machine-learning algorithm is performed by means of standard stochastic gradient descent. Once such training has been achieved, at the end of this 1st training step, a first preliminary mapping from any input spectral segment sp-1, sp-2, sp-3, ..., sp-i to chord families can be produced.
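  • A toy sketch of such a supervised training step with stochastic gradient descent (a single hidden layer, random stand-in data, and illustrative dimensions; the actual architecture, spectrum size and training set of the patent are not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 2048, 100, 12 * 5      # spectrum bins -> 12 pitch classes x 5 chord types

# illustrative random stand-ins for labelled (spectrum, chord root/family class) pairs
X = rng.random((500, N_IN)).astype(np.float32)
y = rng.integers(0, N_OUT, 500)

W1 = rng.normal(0, 0.01, (N_IN, N_HID)); b1 = np.zeros(N_HID)
W2 = rng.normal(0, 0.01, (N_HID, N_OUT)); b2 = np.zeros(N_OUT)

def forward(x):
    h = np.tanh(x @ W1 + b1)                       # hidden layer
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max())              # softmax over chord root/family pairs
    return h, p / p.sum()

lr = 0.05
for epoch in range(3):                             # stochastic gradient descent
    for i in rng.permutation(len(X)):
        h, p = forward(X[i])
        grad_logits = p.copy(); grad_logits[y[i]] -= 1.0      # d(cross-entropy)/d(logits)
        grad_h = (grad_logits @ W2.T) * (1 - h ** 2)
        W2 -= lr * np.outer(h, grad_logits); b2 -= lr * grad_logits
        W1 -= lr * np.outer(X[i], grad_h);   b1 -= lr * grad_h
```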
  • the training of the trained machine-learning algorithm 12 needs to be refined by using the data from a larger set of music pieces.
  • the machine-learning algorithm 12 is trained in an unsupervised fashion.
  • the initially trained machine-learning algorithm 12 after the 1st step is cascaded with a mirrored version of itself, which uses as initial weights the same weights "ω" of the trained machine-learning network after the 1st step (so as to operate some sort of inversion of the corresponding operator, were it linear).
  • the machine-learning algorithm 12 (were it a linear operator) would achieve a projection of the high-dimensional input data (the spectral segments) into a low-dimensional space corresponding to the chord families. Its mirrored version attempts to go from the low-dimensional chord features back to the initial high dimensional spectral peak representation.
  • the initial setting of the cascaded algorithm thus adopts the transposed set of weights of the trained algorithm.
  • This training approach is reminiscent of the training of auto-encoder networks.
  • the initialisation of the network with a supervised strategy ensures finding an initial set of weights for the network which is consistent with the physical essence of a low level representation in terms of chord families.
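  • A toy sketch of this unsupervised refinement step, simplified to a single-layer encoder cascaded with a mirrored decoder initialised with the transposed weights, trained to reconstruct the input spectra (random stand-ins replace the actual spectra and the weights obtained from the supervised step; the patent's network has hidden layers that are omitted here):

```python
import numpy as np

rng = np.random.default_rng(1)
N_IN, N_OUT = 2048, 60
X_unlabelled = rng.random((1000, N_IN))            # stand-in for unlabelled spectra

# weights of the supervised network (here random stand-ins)
W = rng.normal(0, 0.01, (N_IN, N_OUT)); b = np.zeros(N_OUT)
# mirrored decoder initialised with the transposed weights of the trained network
W_dec = W.T.copy(); b_dec = np.zeros(N_IN)

lr = 0.01
for i in rng.permutation(len(X_unlabelled)):       # unsupervised reconstruction training
    x = X_unlabelled[i]
    code = np.tanh(x @ W + b)                      # chord-family-like bottleneck
    recon = code @ W_dec + b_dec                   # mirrored network back to the spectrum
    err = recon - x                                # reconstruction error
    grad_code = (err @ W_dec.T) * (1 - code ** 2)
    W_dec -= lr * np.outer(code, err); b_dec -= lr * err
    W -= lr * np.outer(x, grad_code);  b -= lr * grad_code
```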
  • the first algorithm 4 may comprise the further step a5) of filtering, after the D.F.T. step a3).
  • Such filtering step a5) also called peak detection 15, is an optional step of the method 1.
  • the filtering step a5) is able to filter the plurality of spectrum segments sp-1, sp-2, sp-3, ..., sp-i generated by the block 11 by a moving average, in order to emphasize the peaks p1', p2', p3', ..., pi' in each of said plurality of spectrum segments sp-1, sp-2, sp-3, ..., sp-i.
  • a moving average 20, typically operating over the power spectrum 21 resulting from step a4), is computed, and the spectral components having power below this moving average are zeroed.
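  • A minimal sketch of this peak-emphasis filter (the moving-average width is an illustrative assumption):

```python
import numpy as np

def emphasize_peaks(power_spectrum, width=31):
    """Zero every spectral component whose power lies below a local moving average."""
    kernel = np.ones(width) / width
    moving_avg = np.convolve(power_spectrum, kernel, mode='same')
    return np.where(power_spectrum > moving_avg, power_spectrum, 0.0)
```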
  • the music audio analysis method 1 comprises, before the computing step a4), a further step of decorrelating, also called whitening 16.
  • the plurality of spectrum segments sp-1', sp-2', sp-3', ..., sp-i' is de-correlated with reference to a predetermined database 19 ( Figure 8 ) of audio segment spectra in order to provide a plurality of decorrelated spectrum segments sp-1 ", sp-2", sp-3", ..., sp-i".
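  • The patent does not detail the decorrelation operator; one plausible reading, sketched below for illustration only, is a PCA-style whitening transform fitted on the database 19 of audio segment spectra:

```python
import numpy as np

def fit_whitener(spectra_db, eps=1e-8):
    """Fit a PCA-whitening transform on a database of audio segment spectra."""
    mean = spectra_db.mean(axis=0)
    cov = np.cov(spectra_db - mean, rowvar=False)
    eigval, eigvec = np.linalg.eigh(cov)
    transform = eigvec / np.sqrt(eigval + eps)      # scale each principal axis to unit variance
    return mean, transform

def whiten(spectrum, mean, transform):
    """De-correlate one spectrum segment with respect to the database statistics."""
    return (spectrum - mean) @ transform
```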
  • the second algorithm 6 of the music audio analysis method 1 comprises the steps of:
  • the first prefixed duration T1 of said first window "w1" is much longer than the sub-duration "t" of each of the plurality of audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i.
  • the second algorithm 6 comprises the further step of:
  • the second window "w2" is shifted by a prefixed duration Ts with respect to said temporal duration T1 of the first window "w".
  • the second prefixed duration T2 can vary in the range between T1-Ts and the first prefixed duration T1.
  • the second prefixed duration T2 is much longer than the sub-duration "t".
  • the prefixed time Ts is considered to be less than the first prefixed duration T1, so that the first group g1 of vectors and the second group g2 of vectors overlap each other.
  • in a tonal context, some chords/pitches have to be more expected than others.
  • while chords typically change with musical bars - or even faster, at the beat level - tonality requires a longer time duration to be perceived.
  • the first prefixed duration T1 is typically set in the range of 25 - 35 sec, more preferably about 30 sec.
  • the prefixed time Ts is typically set in the range of 10 - 20 sec, more preferably about 15 sec.
  • if the prefixed time Ts is equal to the first prefixed duration T1, the first group g1 of vectors is contiguous with the second group of vectors g2.
  • the second algorithm 6 of the music audio analysis method 1 comprises also the further step of:
  • windows w3 and w4 have to overlap or at most be consecutive without gaps, but any subsequent window, e.g. window w4, must not be contained in the previous windows, i.e. w1, w2 and w3.
  • the prefixed duration of the window w2, i.e. the duration T2, could be equal to the prefixed duration T1 of the window w1 or could be greater than the prefixed duration T1, e.g. T2 = 3/2 T1; T2 could also be adjusted locally to its associated window, so as to be tailored to local properties of the underlying audio signal, without however violating the principle of partial overlapping.
  • the durations and positions of windows "w" may be tailored to the overall structure of the music signal, i.e. windows may be set so as to match sections like e.g. verse or chorus of a song.
  • An automatic estimation of the temporal boundaries of these structural sections may be obtained by using a state-of-the-art music summarization algorithm well known to the person skilled in the art.
  • different windows may have different durations and may be contiguous instead of overlapping.
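  • A minimal sketch of the basic fixed-duration grouping described above, assuming the CFP vectors are time-stamped by their segment onsets in seconds, with T1 = 30 s and Ts = 15 s as in the preferred ranges (names are illustrative):

```python
def window_groups(cfp_times, cfp_vectors, t1=30.0, ts=15.0):
    """Group per-segment CFP vectors into overlapping analysis windows (T1 long, shifted by Ts)."""
    end = max(cfp_times) if cfp_times else 0.0
    groups, start = [], 0.0
    while start < end:
        group = [v for t, v in zip(cfp_times, cfp_vectors) if start <= t < start + t1]
        groups.append(group)
        start += ts                                  # Ts < T1, so consecutive windows overlap
    return groups
```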
  • a first way to generate the second data 7 representative of the tonal centre of said first data 5 is to compute a mean vector "m" of said first data 5 and to choose the highest chord root value in such mean vector "m" in order to set the tonal centre.
  • the statistical estimates measured over time such as mean, variance and first order covariance of the vectors contained in the first group g1 and the same statistical estimates for the others groups (i.e. g2, ..., gi) can be used to recover a better description of the local tonal context of each audio segments s-on-1, s-on-2, s-on-3, ..., s-on-i.
  • the dimension D of these descriptors equals 12 x F x 3, where F is the number of considered chord families and 3 is the number of statistical estimates measured over time, i.e. mean, variance and first order covariance.
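  • A minimal sketch of these per-window statistics, assuming each group is an N x (12*F) array of CFP vectors with N > 2 (the lag-1 covariance follows the formula given in the claims):

```python
import numpy as np

def tonal_context_features(group):
    """Describe one window's CFP vectors by mean, variance and lag-1 covariance (D = 12*F*3)."""
    X = np.asarray(group)                            # shape (N, 12*F), assumes N > 2
    mean = X.mean(axis=0)
    var = X.var(axis=0, ddof=1)
    cov1 = ((X[1:] - mean) * (X[:-1] - mean)).sum(axis=0) / (len(X) - 2)
    return np.concatenate([mean, var, cov1])         # dimension 3 * 12 * F
```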
  • the most stable pitches producing the percept of tonality are typically played in synchrony with the metrical grid while less relevant pitches are more likely to be played on unmetrical time positions.
  • the incorporation of metrical information during the tonality estimation is as follows.
  • Each audio segment s-on-1, s-on-2, ..., s-on-i is associated to a particular metrical weight depending on its synchronisation with identified metrical events. For example, it is possible to assign a weight of 1.0 to the audio segment if a musical bar position has been detected at some time position covered by the corresponding audio segment. A lower weight of e.g. 0.5 may be used if a beat position has been detected at some time position covered by the audio segment. Finally, the smallest weight of e.g. 0.25 may be used if no metrical event corresponds to the audio segment.
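  • A minimal sketch of this weighting scheme, assuming lists of detected bar and beat times in seconds (the 1.0 / 0.5 / 0.25 values are the examples given above):

```python
def metrical_weight(segment_start, segment_end, bar_times, beat_times):
    """Weight a segment by its synchrony with detected metrical events (bar > beat > none)."""
    if any(segment_start <= t < segment_end for t in bar_times):
        return 1.0      # a musical bar position falls inside the segment
    if any(segment_start <= t < segment_end for t in beat_times):
        return 0.5      # only a beat position falls inside the segment
    return 0.25         # no metrical event corresponds to the segment
```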
  • the step b5) of the second algorithm 6 of the music audio analysis method 1, i.e. the extraction of data 7 being representative of the evolution of the tonal centre of the music piece given data 8, is implemented as follows.
  • MLP Multi-Layer Perceptron
  • the architecture of the MLP is such that its number of inputs matches the size of the vectors in data 7A.
  • the number of inputs of the MLP corresponds to the number of features describing the tonal context of window "w" (or generic window “wi").
  • the MLP may be built with an arbitrary number of hidden layers and hidden neurons.
  • the number of outputs is however fixed to 12 so that each output corresponds to one of the 12 possible pitches of the chromatic scale.
  • the parameters of the MLP are trained in a supervised fashion with stochastic gradient descent.
  • the training data consists of a large set of feature vectors describing the tonal context of window "w" (or generic window “wi") for a variety of different music pieces.
  • for each of these pieces, a target tonal centre was manually associated by a number of expert musicologists.
  • the corresponding training data, i.e. the pairs of feature vectors / tonal centre targets, can be enlarged by a factor of 12 by considering all 12 possible transpositions of the CFP vectors (refer to the third algorithm 8, described hereinafter, for the transposition of CFPs).
  • the training consists in finding the set of parameters that maximises the output corresponding to the target tonal centre and that minimises the other outputs given the corresponding input data.
  • provided suitable non-linearity functions (e.g. a sigmoid function) and a suitable training cost function (e.g. a cross-entropy cost function) are used, the MLP outputs will estimate tonal centre posterior probabilities, i.e. each output will be bounded between 0 and 1 and they will sum to 1.
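  • A minimal sketch of such an MLP forward pass producing the 12 tonal centre posteriors (one hidden layer with a sigmoid non-linearity and a softmax output; the weights would be trained with stochastic gradient descent and a cross-entropy cost as described above, which is omitted here):

```python
import numpy as np

def tonal_centre_posteriors(features, W1, b1, W2, b2):
    """MLP forward pass: tonal-context features -> 12 tonal centre posterior probabilities."""
    h = 1.0 / (1.0 + np.exp(-(features @ W1 + b1)))     # sigmoid hidden layer
    logits = h @ W2 + b2                                 # W2 has exactly 12 output columns
    p = np.exp(logits - logits.max())
    return p / p.sum()                                   # bounded in (0, 1), sums to 1
```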
  • This dependence between consecutive local estimates is modelled thanks to a transition matrix, which encodes the probability of going from the tonal centre estimate i-1 to the tonal centre estimate i.
  • while the transition probabilities could be learnt from data, they are set manually according to some expert musicological knowledge (see table 2 for an example).
  • the problem of finding data 7, i.e. the optimal sequence of tonal centres over the course of the music piece, can be formulated as follows.
  • let Tc1*, Tc2*, ..., Tcn* be the optimal sequence of tonal centres and let Obs1, Obs2, ..., Obsn be the sequence of feature vectors fed independently into the local tonal centre estimation MLP.
  • the most likely sequence of tonal centres Tc1*, Tc2*,..., Tcn* can be obtained thanks to the Viterbi algorithm.
  • the Viterbi algorithm is a dynamic programming algorithm for finding the most likely sequence of hidden states, in this case the most likely sequence of tonal centres, that results in a sequence of observed events, in this case the local tonal centre estimations of the MLP.
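  • A minimal sketch of this Viterbi decoding over the 12 possible tonal centres, given the per-window MLP posteriors and the 12 x 12 transition matrix (a uniform initial distribution is assumed here for simplicity):

```python
import numpy as np

def viterbi(posteriors, transition, eps=1e-12):
    """Most likely tonal centre sequence given per-window posteriors (n_windows x 12)
    and a 12 x 12 transition probability matrix."""
    n, k = posteriors.shape
    log_obs, log_tr = np.log(posteriors + eps), np.log(transition + eps)
    delta = np.full((n, k), -np.inf)
    psi = np.zeros((n, k), dtype=int)
    delta[0] = log_obs[0]                               # uniform prior over tonal centres
    for t in range(1, n):
        scores = delta[t - 1][:, None] + log_tr         # scores[i, j]: come from i, go to j
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_obs[t]
    path = [int(delta[-1].argmax())]
    for t in range(n - 1, 0, -1):                       # backtrack the optimal path
        path.append(int(psi[t][path[-1]]))
    return path[::-1]
```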
  • the modelling of the tonal context is implemented in practice by the computation of mean/variance/covariance 7A of the CFPs in a generic window "wi" together with the MLP in charge of estimating the probability of each tonal centre Tci.
  • FIGS 7A to 7D illustrate graphically the algorithm 6 once it has been applied on the first data 5.
  • Figure 7B shows a graphical representation of a sequence of D-dimensional vectors representative of the tonal content over a window "wi", i.e. the second data 7, having on the abscissa the vector for a generic window "wi" and on the ordinate the dimension. In particular, Figure 7B shows the longer-term vectors corresponding to the mean/variance/covariance of the shorter-term CFP vectors over the windows "w".
  • Figure 7C shows a graphical representation of a sequence of local tonal centre estimates, i.e. the 12-dimensional outputs of the MLP, having on the abscissa the vector for a generic window "wi" and on the ordinate the pitch class.
  • Figure 7D finally shows a graphical representation of the corresponding optimal sequence of tonal centres obtained by means of the Viterbi algorithm, i.e. the final tonal centre estimate for each window "wi", having on the abscissa the vector for a generic window "wi" and on the ordinate the pitch class.
  • the third algorithm 8 comprises the step c1) of transposing to a reference pitch the first data 5 in function of second data 7 so as to generate the third data 9.
  • the third data 9 are made invariant with respect to the second data 7.
  • each CFP vector of the group g1 (or g2, ..., gi) is made invariant to transposition by transposing the vector values to a reference pitch.
  • the reference pitch can be C.
  • the step c1) of transposing the first data 5 to a reference pitch is a normalization operation that allows any kind of music audio signal to be compared on the basis of tonal considerations.
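  • A minimal sketch of this normalization, with the CFP arranged as a chord-families x 12 pitch-classes array and C (index 0) as the reference pitch, matching the circular-permutation formula given in the claims:

```python
import numpy as np

def transpose_cfp(cfp, tonal_centre, reference=0):
    """Circularly permute a CFP matrix (chord families x 12 pitch classes)
    so that the estimated tonal centre is moved to the reference pitch class (C = 0)."""
    return np.roll(cfp, reference - tonal_centre, axis=1)
```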
  • the apparatus able to perform the method heretofore described comprises:
  • the processor unit 18 is configured to extract the second data 7 representative of the tonal centre of the audio music signal 2.

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Auxiliary Devices For Music (AREA)

Claims (19)

  1. Music audio analysis method for analyzing a digital music audio signal (2) in order to extract a series of chord family profiles (CFP) contained in the digital music audio signal (2), the method comprising the steps of:
    a) applying a first algorithm (4) to the music audio signal (2) in order to extract first data (5) consisting of a plurality of vectors (v1, v2, v3, ..., vi), each describing a chord family profile such that the tonal context of the music audio signal (2) is represented, the first data (5) being retrieved by the first algorithm (4) by means of:
    - dividing the music audio signal (2) into a plurality of audio segments (s-on-1, s-on-2, s-on-3, ..., s-on-i), each having a prefixed first duration (T);
    - determining a time position of a plurality of peaks (p1, p2, p3, ..., pi), each audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i) having one peak (p1, p2, p3, ..., pi), the audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i) being subjected to a frequency analysis during a sub-duration (t) of the duration (T) in order to obtain a plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i);
    - processing the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i) with a computation network (12);
    b) applying a second algorithm (6) to the first data (5) in order to provide second data (7) representative of the evolution of a tonal centre (Tc) contained in the first data (5), the second data (7) being evaluated by the second algorithm (6) over a plurality of analysis windows (w1, w2, ..., wi) of a prefixed second duration (T1, T2, ..., Ti) shifted by a shift time (Ts), the sub-duration (t) being shorter than the second duration (T1, T2, ..., Ti) of the plurality of analysis windows (w1, w2, ..., wi), each of the plurality of analysis windows (wi) containing a group (gi) of the vectors in order to estimate the tonal centre (Tc) contained in the first data (5),
    characterized in that
    the first algorithm (4) comprises the steps of:
    a1) detecting (10) a sequence of note onsets in the music audio signal (2) in order to establish the time position of the plurality of peaks (p1, p2, p3, ..., pi);
    a2) dividing the music audio signal (2) into the plurality of audio segments (s-on-1, s-on-2, s-on-3, ..., s-on-i) having the prefixed duration (T);
    a3) applying the frequency analysis to each audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i) during the predetermined sub-duration (t) in order to obtain the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i);
    a4) processing the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i) by means of the computation network (12) in order to provide the first data (5), each vector of the plurality of vectors (v1, v2, v3, ..., vi) corresponding to the respective audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i), the computation network (12) being implemented with a trained machine-learning algorithm.
  2. Music audio analysis method according to claim 1, wherein the trained machine-learning algorithm (12) is trained in two steps:
    - a first step of supervised training with few hand-labelled training data (13), and
    - a second step of unsupervised training with a larger set (14) of unlabelled training data.
  3. Music audio analysis method according to claim 2, wherein the second step is performed in order to refine a set of weights (ω) of the trained machine-learning algorithm (12) obtained after the first step.
  4. Music audio analysis method according to claim 1, wherein the first algorithm further comprises, after the frequency analysis step a3), the step of:
    a5) filtering the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i) by means of a moving average in order to emphasize the peak (p1', p2', p3', ..., pi') in each of the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i).
  5. Music audio analysis method according to claim 1, wherein the computation step a4) is computed for each of the plurality of segments between two consecutive detected segments.
  6. Music audio analysis method according to claim 1, wherein the frequency analysis is performed only during the sub-duration (t), the sub-duration (t) being in the range of 250-350 ms.
  7. Music audio analysis method according to claim 1, wherein each vector (v1, v2, v3, ..., vi) has a dimension equal to the twelve pitches times a predefined number "n" of chord types.
  8. Music audio analysis method according to any one of the preceding claims 1 to 7, wherein the second algorithm (6) comprises the steps of:
    b1) providing a first analysis window (w1) with a prefixed duration (T1) containing a first group (g1) of the plurality of vectors forming the first data (5);
    b2) processing the first group (g1) of the plurality of vectors contained in the analysis window (w1) in order to estimate a first tonal context (Tc1) representative of the tonal centre contained in the first window (w1);
    b3) providing a second analysis window (w2) with a second prefixed duration (T2), the second analysis window (w2) being a window shifted by the shift time (Ts) with respect to the first analysis window (w1) such that the second analysis window (w2) overlaps the first analysis window (w1), the second analysis window (w2) comprising a second group (g2) of the plurality of vectors;
    b4) processing the second group (g2) of the plurality of vectors contained in the second analysis window (w2) in order to estimate a second tonal context (Tc2) representative of the tonal centre contained in the second analysis window (w2);
    b5) processing the tonal context (Tc1) of the first analysis window (w1) and the tonal context (Tc2) of the second analysis window (w2) using a Viterbi algorithm in order to generate the second data (7), the latter being representative of the evolution of the tonal centre of the first data (5).
  9. Music audio analysis method according to claim 8, wherein the second algorithm further comprises the step of:
    b6) repeating steps b3) to b5) in order to define further analysis windows (wi), each further analysis window (wi) comprising a group (gi) of vectors for estimating the tonal context (Tc) contained in the first data (5).
  10. Music audio analysis method according to claim 1, wherein the first prefixed duration (T1) is set in the range of 25 - 35 sec, preferably about 30 sec.
  11. Music audio analysis method according to claim 1, wherein the prefixed shift time (Ts) is set in the range of 10 - 20 sec,
    preferably about 15 sec, and wherein the second prefixed duration (T2) varies in the range between:
    - the difference between the first prefixed duration (T1) and the prefixed shift time (Ts)
    and
    - the first prefixed duration (T1).
  12. Music audio analysis method according to claim 8, wherein step b5) is implemented by a Multi-Layer Perceptron (MLP).
  13. Music audio analysis method according to claims 8, 9 and 12, wherein the Viterbi algorithm of step b5) takes its decision by considering all the local tonal probabilities output by the Multi-Layer Perceptron (MLP) for each analysis window (wi).
  14. Music audio analysis method according to claim 13, wherein the second data (7) are generated by processing statistical estimates obtained over time, the statistical estimates consisting of the mean, variance and first-order covariance of the vectors contained in the groups (g1, g2, ..., gi), in order to form the processed data (7A) according to the following formulas:

    \mu = \frac{1}{N} \sum_{i=1}^{N} X_i

    \sigma^2 = \frac{1}{N-1} \sum_{i=1}^{N} (X_i - \mu)^2

    \mathrm{cov}_1 = \frac{1}{N-2} \sum_{i=2}^{N} (X_i - \mu)(X_{i-1} - \mu)

    where N is the number of vectors within the group "gi" of the window "wi", μ is the mean, σ² is the variance and cov_1 is the first-order covariance.
  15. Music audio analysis method according to any one of the preceding claims 1 to 14, wherein the method further comprises the step c) of applying a third algorithm (8) to the first data (5) as a function of the second data (7) in order to provide third data (9) which are a normalized version of the first data (5), the first data (5) being made transposition-invariant by normalizing them as a function of the detected evolution of the tonal centre (Tc).
  16. Music audio analysis method according to claim 14, wherein the transposition is implemented by a circular permutation according to the following formula:

    \mathrm{TCFP}_t(i, \operatorname{mod}(j - T_t, 12)) = \mathrm{CFP}_t(i, j)

    where TCFP_t is the transposed CFP vector at time t, i is the chord family index, j is the pitch class and T_t is the pitch class of the tonal centre at time t.
  17. Computer program product comprising a program for analyzing a music audio signal in order to extract at least one set of characteristics representative of the content of the music audio signal, the computer program product, when run on a computer, causing the computer to perform the following steps:
    a) applying a first algorithm (4) to the music audio signal (2) in order to extract first data (5) consisting of a plurality of vectors (v1, v2, v3, ..., vi), each describing a chord family profile such that the tonal context of the music audio signal (2) is represented, the first data (5) being retrieved by the first algorithm (4) by dividing the music audio signal (2) into a plurality of audio segments (s-on-1, s-on-2, s-on-3, ..., s-on-i) having a first duration (T), by determining a time position of a plurality of peaks (p1, p2, p3, ..., pi), each audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i) containing one peak (p1, p2, p3, ..., pi), the audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i) being subjected to a frequency analysis during a sub-duration (t) of the duration (T), and by processing the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i) with a computation network (12);
    b) applying a second algorithm (6) to the first data (5) in order to provide second data (7) representative of the evolution of a tonal centre (Tc) contained in the first data (5), the second data (7) being evaluated by the second algorithm (6) over a plurality of analysis windows (w1, w2, ..., wi) of the second duration (T1, T2, ..., Ti) shifted by a shift time (Ts), the sub-duration (t) being shorter than the second duration (T1, T2, ..., Ti) of the plurality of analysis windows (w1, w2, ..., wi), each further analysis window (wi) containing a group (gi) of the vectors for estimating the tonal context (Tc) contained in the first data (5),
    characterized in that the first algorithm (4) comprises the steps of:
    a1) detecting (10) a sequence of note onsets in the music audio signal (2) in order to establish the time position of the plurality of peaks (p1, p2, p3, ..., pi);
    a2) dividing the music audio signal (2) into the plurality of audio segments (s-on-1, s-on-2, s-on-3, ..., s-on-i) having the prefixed duration (T);
    a3) applying the frequency analysis to each audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i) during the predetermined sub-duration (t) in order to obtain the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i);
    a4) processing the plurality of spectrum segments (sp-1, sp-2, sp-3, ..., sp-i) by means of the computation network (12) in order to provide the first data (5), each vector of the plurality of vectors (v1, v2, v3, ..., vi) corresponding to the respective audio segment (s-on-1, s-on-2, s-on-3, ..., s-on-i), the computation network (12) being implemented with a trained machine-learning algorithm.
  18. Computer program product according to claim 17, further comprising a step c) of applying a third algorithm (8) to the first data (5) as a function of the second data (7) in order to provide third data (9) which are a normalized version of the first data (5), the first data (5) being made transposition-invariant by normalizing them as a function of the detected evolution of the tonal centre (Tc).
  19. Apparatus for analyzing a music audio signal in order to extract at least one set of characteristics representative of the content of the music audio signal (2), the apparatus comprising:
    - an input for receiving a digital music audio signal (2);
    - a processor unit (18) for processing the digital music audio signal (2);
    - and a database (19) in which representatives of similar or different music events are stored,
    characterized in that the processor unit (18) is configured to extract the set of characteristics representative of the content of the digital music audio signal (2) according to the analysis method of any one of the preceding claims 1 to 16.
EP08875184A 2008-10-15 2008-10-15 Method for analyzing a digital music audio signal Not-in-force EP2342708B1 (de)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/EP2008/063911 WO2010043258A1 (en) 2008-10-15 2008-10-15 Method for analyzing a digital music audio signal

Publications (2)

Publication Number Publication Date
EP2342708A1 EP2342708A1 (de) 2011-07-13
EP2342708B1 true EP2342708B1 (de) 2012-07-18

Family

ID=40344486

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08875184A Not-in-force EP2342708B1 (de) 2008-10-15 2008-10-15 Method for analyzing a digital music audio signal

Country Status (7)

Country Link
EP (1) EP2342708B1 (de)
JP (1) JP2012506061A (de)
CN (1) CN102187386A (de)
BR (1) BRPI0823192A2 (de)
CA (1) CA2740638A1 (de)
EA (1) EA201170559A1 (de)
WO (1) WO2010043258A1 (de)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110254688A1 (en) * 2010-04-15 2011-10-20 Samsung Electronics Co., Ltd. User state recognition in a wireless communication system
US9263060B2 (en) 2012-08-21 2016-02-16 Marian Mason Publishing Company, Llc Artificial neural network based system for classification of the emotional content of digital music
US9257954B2 (en) * 2013-09-19 2016-02-09 Microsoft Technology Licensing, Llc Automatic audio harmonization based on pitch distributions
JP6671245B2 (ja) * 2016-06-01 2020-03-25 株式会社Nttドコモ Identification device
CN107135578B (zh) * 2017-06-08 2020-01-10 复旦大学 Intelligent music chord and atmosphere lamp system based on TonaLighting adjustment technology
US11024288B2 (en) * 2018-09-04 2021-06-01 Gracenote, Inc. Methods and apparatus to segment audio and determine audio segment similarities
JP7375302B2 (ja) * 2019-01-11 2023-11-08 ヤマハ株式会社 Acoustic analysis method, acoustic analysis device and program

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1091199A (ja) * 1996-09-18 1998-04-10 Mitsubishi Electric Corp 記録再生装置
US6057502A (en) * 1999-03-30 2000-05-02 Yamaha Corporation Apparatus and method for recognizing musical chords
JP3870727B2 (ja) * 2001-06-20 2007-01-24 ヤマハ株式会社 演奏タイミング抽出方法
JP2006202235A (ja) * 2005-01-24 2006-08-03 Nara Institute Of Science & Technology 経時的現象発生解析装置及び経時的現象発生解析方法
JP2007041234A (ja) * 2005-08-02 2007-02-15 Univ Of Tokyo 音楽音響信号の調推定方法および調推定装置
JP4722738B2 (ja) * 2006-03-14 2011-07-13 三菱電機株式会社 楽曲分析方法及び楽曲分析装置
JP4823804B2 (ja) * 2006-08-09 2011-11-24 株式会社河合楽器製作所 コード名検出装置及びコード名検出用プログラム
JP4214491B2 (ja) * 2006-10-20 2009-01-28 ソニー株式会社 信号処理装置および方法、プログラム、並びに記録媒体
JP4315180B2 (ja) 2006-10-20 2009-08-19 ソニー株式会社 信号処理装置および方法、プログラム、並びに記録媒体

Also Published As

Publication number Publication date
WO2010043258A1 (en) 2010-04-22
CA2740638A1 (en) 2010-04-22
EA201170559A1 (ru) 2012-01-30
EP2342708A1 (de) 2011-07-13
JP2012506061A (ja) 2012-03-08
BRPI0823192A2 (pt) 2018-10-23
CN102187386A (zh) 2011-09-14


Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20110330

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

DAX Request for extension of the european patent (deleted)
GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 567188

Country of ref document: AT

Kind code of ref document: T

Effective date: 20120815

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602008017323

Country of ref document: DE

Effective date: 20120913

REG Reference to a national code

Ref country code: CH

Ref legal event code: NV

Representative's name: R. A. EGLI AND CO. PATENTANWAELTE, CH

REG Reference to a national code

Ref country code: NL

Ref legal event code: VDEP

Effective date: 20120718

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 567188

Country of ref document: AT

Kind code of ref document: T

Effective date: 20120718

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG4D

Effective date: 20120718

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121018

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121118

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121019

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121119

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20121030

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20121031

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

26N No opposition filed

Effective date: 20130419

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

REG Reference to a national code

Ref country code: FR

Ref legal event code: ST

Effective date: 20130628

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121018

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20121015

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602008017323

Country of ref document: DE

Effective date: 20130419

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20121031

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20121029

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20120718

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: CH

Payment date: 20140328

Year of fee payment: 6

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20121015

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20131015

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602008017323

Country of ref document: DE

Effective date: 20140501

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20081015

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20131015

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20140501

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20121030

Year of fee payment: 5

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20141031

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20141031